https://projecteuclid.org/euclid.dmj/1556848995
## Duke Mathematical Journal ### On the proper moduli spaces of smoothable Kähler–Einstein Fano varieties #### Abstract In this paper we investigate the geometry of the orbit space of the closure of the subscheme parameterizing smooth Kähler–Einstein Fano manifolds inside an appropriate Hilbert scheme. In particular, we prove that being K-semistable is a Zariski-open condition, and we establish the uniqueness of the Gromov–Hausdorff limit for a punctured flat family of Kähler–Einstein Fano manifolds. Based on these, we construct a proper scheme parameterizing the S-equivalence classes of $\mathbb{Q}$-Gorenstein smoothable, K-semistable $\mathbb{Q}$-Fano varieties, and we verify various necessary properties to guarantee that it is a good moduli space. #### Article information Source: Duke Math. J., Volume 168, Number 8 (2019), 1387–1459. Dates: Revised 26 November 2018; first available in Project Euclid 3 May 2019. Digital Object Identifier: doi:10.1215/00127094-2018-0069. Mathematical Reviews number (MathSciNet): MR3959862. Zentralblatt MATH identifier: 07080115. #### Citation Li, Chi; Wang, Xiaowei; Xu, Chenyang. On the proper moduli spaces of smoothable Kähler–Einstein Fano varieties. Duke Math. J. 168 (2019), no. 8, 1387–1459. doi:10.1215/00127094-2018-0069.
http://www.ephy.at/fluid-dynamics-mass-continuity/
# Fluid Dynamics - Conservation of Mass, Momentum and Energy

## General Mass Continuity Equation

$$0 = \frac{\partial}{\partial t} \left( \int_{CV} \rho \, dV \right) + \int_{CS} \rho (\overrightarrow{v} \cdot \overrightarrow{n}) \, dA$$

Where $$\frac{\partial}{\partial t} \left( \int_{CV} \rho \, dV \right)$$ is the rate of mass change in the control volume (CV), and $$\int_{CS} \rho (\overrightarrow{v} \cdot \overrightarrow{n}) \, dA$$ is the net rate of mass flow through the control surface (CS). Here $$v$$ is the linear velocity (speed) of the fluid, and $$\overrightarrow{v}$$ indicates the direction of flow.

For incompressible steady linear flow with one inlet and one outlet, conservation of mass reduces to

$$\rho v_1 A_1 = \rho v_2 A_2$$

## General Momentum Continuity Equation

$$\overrightarrow{F_R} = \frac{\partial}{\partial t} \left( \int_{CV} \overrightarrow{v} \rho \, dV \right) + \int_{CS} \overrightarrow{v} \rho (\overrightarrow{v} \cdot \overrightarrow{n}) \, dA$$

Where $$\overrightarrow{F_R}$$ is the resultant force on the system. For incompressible steady linear flow with one inlet and one outlet, conservation of momentum reduces to

$$F_R = \rho A_2 v_2^2 - \rho A_1 v_1^2$$

## General Energy Continuity Equation

$$\frac{\partial}{\partial t} \left( \int_{CV} \rho e \, dV \right) + \int_{CS} \rho e (\overrightarrow{v} \cdot \overrightarrow{n}) \, dA = \dot{Q}_{net} + \dot{W}_{net}$$

The sum of the net rate of energy change in the control volume and the net rate of energy flow through the control surface equals the net rate of energy transferred via heat and work. The specific energy $$e$$ is the sum of the internal, kinetic and potential energies:

$$e = u + \frac{1}{2} v^2 + gz \qquad E = U + \frac{1}{2} mv^2 + mgz$$

Since $$\dot{W}_{net}$$ is the sum of shaft work and stress work, and the stress is non-zero only at the CS, the stress (pressure) work can be moved to the left side of the equation, resulting in

$$\frac{\partial}{\partial t} \left( \int_{CV} \rho e \, dV \right) + \int_{CS} \left( u + \frac{p}{\rho} + \frac{v^2}{2} + gz \right) \rho (\overrightarrow{v} \cdot \overrightarrow{n}) \, dA = \dot{Q}_{net} + \dot{W}_{net,shaft}$$
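The one-inlet/one-outlet mass balance $$\rho v_1 A_1 = \rho v_2 A_2$$ can be illustrated with a small numerical sketch (the nozzle dimensions below are made-up values, not from any worked example on this page):

```python
import math

def exit_velocity(v1, d1, d2):
    """Incompressible steady flow, one inlet and one outlet:
    rho*v1*A1 = rho*v2*A2  =>  v2 = v1 * (A1 / A2)."""
    a1 = math.pi * d1**2 / 4.0   # inlet cross-sectional area
    a2 = math.pi * d2**2 / 4.0   # outlet cross-sectional area
    return v1 * a1 / a2

# Hypothetical nozzle: fluid enters a 0.10 m pipe at 2 m/s and exits a 0.05 m tip.
# Halving the diameter quarters the area, so the exit velocity is 4x the inlet.
v2 = exit_velocity(2.0, 0.10, 0.05)
print(v2)  # 8.0 (m/s)
```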
http://tex.stackexchange.com/questions/73265/where-is-beamer-sty
# Where is beamer.sty? Every time I try to make a `beamer` presentation (using the version that came with my TeX distribution), it asks me for `beamer.sty`, which isn't in the official repo for `beamer`. Arrgh. Has anybody got a solution? - Complementary to the answers: search for `beamer.cls` and you'll find it, which means LaTeX interprets it as a class even though your TeX distro downloads it as a package. – percusse Sep 19 '12 at 21:14 `cd /usr/local/texlive/2012` and then `find -name "*.cls" | grep 'beamer'` – cmhughes Sep 19 '12 at 22:27 `beamer` is a LaTeX class, not a package: `\documentclass{beamer}` `while (true) { it is_not_ a package };` – Herbert Sep 19 '12 at 20:57 @rake there is a distinction between "package" as used by the MiKTeX or TeX Live package managers, where "package" means any collection of files installed as a unit, and "package" as meant by LaTeX's `\usepackage`. To LaTeX, beamer is a class loaded with `\documentclass`, not a package loaded with `\usepackage`. – David Carlisle Sep 19 '12 at 20:58 @DavidCarlisle Yeah, but do you need to `\usepackage{beamer}` to get it to work? After all, it isn't a standard class. – rake Sep 20 '12 at 6:46
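Since `beamer` is loaded as a document class rather than a package, a minimal compilable sketch looks like this (the title and frame contents are placeholders):

```latex
\documentclass{beamer}       % loads beamer.cls, not beamer.sty
\title{A Minimal Beamer Example}
\begin{document}
\begin{frame}
  \titlepage
\end{frame}
\begin{frame}{First slide}
  Hello, beamer.
\end{frame}
\end{document}
```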
https://tex.stackexchange.com/questions/410634/problems-with-hat-and-spacing
# Problems with \hat+ and spacing In an arbitrary document, inside math mode, spacing works fine when addition is used: `x'=x_1 + x_2 + \cdots + x_n` but when `+` changes to `\hat+`, as in `x'=x_1 \hat+ x_2 \hat+ \cdots \hat+ x_n`, the spacing does not work well. • `\mathbin{\hat{+}}` – Manuel Jan 16 '18 at 12:28 You can use `\mathbin{\hat{+}}`, or define a macro: `\newcommand*\hatplus{\mathbin{\hat{+}}}` I thought it was necessary to add `\DOTSB` to give `\dots` the information to decide whether to use centered or lower dots, but it seems that it's intelligent enough to see the `\mathbin`.
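A complete sketch of the fix in context (the sample equation is illustrative): `\hat{+}` on its own is an ordinary atom, so TeX drops the binary-operator spacing; wrapping it in `\mathbin` restores it.

```latex
\documentclass{article}
\usepackage{amsmath}
% \mathbin tells TeX to space \hat{+} like a binary operator (as + is spaced)
\newcommand*\hatplus{\mathbin{\hat{+}}}
\begin{document}
\[ x' = x_1 \hatplus x_2 \hatplus \cdots \hatplus x_n \]
\end{document}
```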
http://math.stackexchange.com/questions/60340/fibonacci-modular-results
# Fibonacci modular results Can anyone give a generalization of the following properties in a single proof? I have checked the results below by trial and error. I am looking for a general proof which will cover all of my results: 1. Every third Fibonacci number is even. 2. 3 divides every 4th Fibonacci number. 3. 5 divides every 5th Fibonacci number. 4. 4 divides every 6th Fibonacci number. 5. 13 divides every 7th Fibonacci number. 6. 7 divides every 8th Fibonacci number. 7. 17 divides every 9th Fibonacci number. 8. 11 divides every 10th Fibonacci number. 9. 6, 9, 12 and 16 divide every 12th Fibonacci number. 10. 29 divides every 14th Fibonacci number. 11. 10 and 61 divide every 15th Fibonacci number. 12. 15 divides every 20th Fibonacci number. - For a start, see the Wikipedia page, Primes and Divisibility section. – Bill Dubuque Aug 28 '11 at 15:41 I have seen it, and I don't think it will be much help in writing a single proof for all the cited results. – Gandhi Aug 29 '11 at 6:14 You can read more about the Pisano period at Wikipedia and in this MO answer. – Martin Sleziak Aug 29 '11 at 9:44 @Gandhi The first sentence at Bill's link, "...Every $k$th number of the sequence is a multiple of $F(k)$" gives you an immediate proof of all your results. – Byron Schmuland Aug 29 '11 at 12:30 @Byron Schmuland: I still can't follow Bill's link. Could you explain how it supports a proof of all my results? – Gandhi Aug 29 '11 at 13:51 Most of the divisibility properties of Fibonacci numbers follow from the fact that they comprise a divisibility sequence, i.e. $\rm\:m\:|\:n\ \Rightarrow\ F_m\:|\:F_n\:.\:$ All of your statements above are special cases of this, e.g. $\rm\:F_{15} = 610\:,\:$ so $\rm\:15\:|\:n\ \Rightarrow\ F_{15}\:|\:F_n\:\Rightarrow\:610\:|\:F_n,\:$ which is precisely your statement 11, that $10$ and $61$ divide every $15$th Fibonacci number.
In fact $\rm\:F_n\:$ is a strong divisibility sequence: $\rm\:(F_m,F_n) = F_{(m,n)},\:$ i.e. $\rm\:gcd(F_m,F_n) = F_{\gcd(m,n)}\:.\:$ This stronger property specializes to the above property when $\rm\:m\:|\:n\:\ (\!\iff \gcd(m,n) = m\:\!).\:$ The proof is not difficult. Here is a straightforward way to proceed. Recall the Fibonacci addition law $\rm\:F_{n+m} = F_{n+1}\:F_m + F_n\:F_{m-1}\:.\:$ After applying the shift $\rm\:n\to n-m\:$ this addition law becomes $\rm\:F_n = F_{n-m+1}\:F_m + F_{n-m}\:F_{m-1}\!\equiv F_{n-m}\:F_{m-1}\pmod{F_m}.\:$ Hence for $\rm\:k=m-1\:$ we may invoke the Theorem below to conclude that $\rm\:f_n = F_n\:$ is a strong divisibility sequence.

Theorem $\ $ Let $\rm\ f_n\:$ be an integer sequence such that $\rm\ f_{\:0} =\: 0,\ f_1 = 1\ $ and such that for all $\rm\:n > m\:$ holds $\rm\ \: f_n\equiv\: f_{\:k}\ f_{n-m}\:\ (mod\ f_m)\ $ with $\rm\:k < n,\ (k,m)\: =\: 1\:.\:$ Then $\rm\ (f_n,f_m)\: =\ f_{\:(n,\:m)}$.

Proof $\ $ By induction on $\rm\:n + m\:$. The theorem is trivially true if $\rm\ n = m\ $ or $\rm\ n = 0\ $ or $\rm\: m = 0.\:$ So assume wlog $\rm\:n > m > 0.\:$ Since $\rm\:k+m < n+m,\:$ by induction $\rm\:(f_{\:k},\:f_m)=\:f_{\:(k,\:m)}=\:f_1 = 1.\:$ Thus $\rm\ (f_n,\:f_m)\: =\: (f_{\:k}\:f_{n-m},\:f_m)\: =\: (f_{n-m},\:f_m)\: =\: f_{\:(n-m,\:m)} =\: f_{\:(n,\:m)}\:$ follows by induction (which applies here since $\rm\:(n-m)+m\: <\: n+m\:\!)$, and by employing well-known gcd laws, namely $\rm\:(a,b) = (a',\:b)\ \ if\ \ a\equiv a'\pmod{b}\ $ and $\rm\:(c\:a,b) = (a,b)\:$ if $\rm\:(c,b) = 1\:.\quad$ QED

You may find it insightful to simultaneously examine other strong divisibility sequences, e.g.
see my post here on $\rm\:f_n = (x^n-1)/(x-1)\:.\:$ In this case $\rm\: \gcd(f_m,f_n)\: =\: f_{\:\gcd(m,n)}\:$ may be interpreted as a $\rm\:q$-analog of the integer Bezout identity, for example $$\rm\displaystyle\ 3\ =\ (15,21)\ \ \leadsto\ \ \frac{x^3-1}{x-1}\ =\ (x^{15} + x^9 + 1)\ \frac{x^{15}-1}{x-1}\ -\ (x^9+x^3)\ \frac{x^{21}-1}{x-1}$$ - I know this, but I am looking for a single proof which will generalize the whole of my post. – Gandhi Aug 29 '11 at 6:08 @Gandhi But it does yield all your statements - see my edit above. – Bill Dubuque Aug 29 '11 at 15:42 Thank you for this divisibility property and the better explanation. – Gandhi Aug 29 '11 at 17:06 Yes! I understand that. Thank you... – Gandhi Aug 29 '11 at 18:05 Maybe @Gandhi is looking for the en.wikipedia.org/wiki/Theory_of_everything – The Chaz 2.0 Aug 29 '11 at 22:54 I guess that the standard way to understand all these divisibility results in one single swoop is to observe that the Fibonacci sequence modulo any number $N$ becomes periodic. For instance, Fibonacci modulo 2 is 0, 1, 1, 0, 1, 1, 0, ..., proving the evenness of $F_n$ for $n=0,3,6,9,\ldots$ Fibonacci modulo $3$ is 0, 1, 1, 2, 0, 2, 2, 1, 0, 1, 1, 2, 0, ..., making it obvious that $3$ divides $F_n$ for $n=0, 4, 8, 12, \ldots$ Try the next ones yourself! NOTE: the same technique can be applied to any linear recursive sequence with constant coefficients. - Good, and I am looking for a better way if possible. Anyhow, thank you so much for the reply. – Gandhi Aug 29 '11 at 6:08 As other posters have already indicated, for every positive integer $N$, there is some $D(N)$ such that every $D(N)$-th Fibonacci number is divisible by $N$. The next logical question to my mind is how to compute $D(N)$. Observe that, if $a$ and $b$ are relatively prime, then $D(ab)=\operatorname{lcm}(D(a), D(b))$ (exercise!). In other words, $D$ is determined by its values for prime powers. I'll talk just about computing $D(p)$ for $p$ a prime.
Recall the formula $$F_n = \frac{1}{\sqrt{5}} \left( \tau^n - (-\tau^{-1})^n \right)$$ where $\tau = (1+\sqrt{5})/2$. Suppose that the prime $p$ is $\pm 1 \bmod 5$. Then there is a square root of $5$ in $\mathbb{Z}/p$. The above formula is still valid in terms of that square root. For example, if $p=11$, then the square roots of $5$ modulo $11$ are $4$ and $7$. We have $(1+4)/2 \equiv 8 \mod 11$ and $(1+7)/2 \equiv 4 \mod 11$ and, sure enough, $F_n = (1/4) \left( 8^n - 4^n \right) \mod 11$. So $p$ divides $F_n$ if and only if $\tau^n = (- \tau^{-1})^n$. In other words, we have to compute the order of $- \tau^2$ in the unit group of $\mathbb{Z}/p$. (In the above example, $- \tau^2 \equiv - 8^2 \equiv -64 \equiv 2 \mod 11$, so the conclusion is that $11$ divides $F_n$ if and only if $2^n \equiv 1 \mod 11$.) By Lagrange's theorem, we see that $D(p)$ will divide $p-1$ for $p \equiv \pm 1 \bmod 5$. I can say more, but this is really an excellent project for a beginning number theorist to play with for themselves. What can you say about primes which are $\pm 2 \bmod 5$? What can you say about prime powers? For $p \equiv \pm 1 \mod 5$, when does $D(p)$ divide $(p-1)/2$? There isn't a complete formula here, but there are lots of great things to observe. - Sir, thank you for your solution. Can you explain when $D(p)$ divides $(p-1)/2$ for $p \equiv \pm 1 \pmod 5$? Once again, thank you. – Gandhi Sep 9 '11 at 1:44 Sir, can you explain a little bit more about your observations? – Gandhi Sep 10 '11 at 4:16 Dear David Speyer! I am waiting for your reply to my recent comment (see just above this comment). – Gandhi Sep 12 '11 at 16:52 Hello David! Please explain the last part of your post. – Gandhi Sep 14 '11 at 1:50 +1 Moving my comment here, because it is closely related to David's answer.
In the case $p>5, p\equiv\pm2\pmod{5}$ an observation we can make immediately is that the "numbers" $\tau$ and $-\tau^{-1}$ are then conjugate under the Frobenius automorphism of the finite field $GF(p^2)$. So $$\tau^p=-\frac{1}{\tau}\in GF(p^2).$$ This implies that both the conjugates are roots of unity of order that is a factor of $2(p+1)$ in $GF(p^2)$. Therefore also the shortest period of the Fibonacci sequence modulo $p$ is a factor of $2(p+1)$. This is much better than the $(p^2-1)$ I promised earlier. – Jyrki Lahtonen Jun 14 '12 at 13:37 The general proof of this is that the Fibonacci numbers arise from the expression $$F_n \sqrt{5} = \left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n$$ Since this is an example of the general $a^n-b^n$, which $a^m-b^m$ divides if $m \mid n$, it follows that there is a unique factor, generally coprime with the rest, for each index. Some of the smaller ones will be $1$. The exception is that if $f_n$ is this unique factor, so that $F_n = \prod_{m \mid n} f_m$, then $f_n$ and $f_{np^x}$ share a common divisor $p$, if $p$ divides either. So for example, $f_8=7$ and $f_{56}=7\cdot 14503$ share a common divisor of $7$. This means that reduction modulo $49$ must evidently work too. So $f_{12} = 6$, which shares a common divisor with both $f_4=3$ and $f_3 = 4$, is unique in connecting to two different primes. Gauss's law of quadratic reciprocity applies to the Fibonacci numbers, but it's a little more complex than for regular bases. Relative to the Fibonacci series, reduce modulo 20, into 'upper' vs 'lower' and 'long' vs 'short'. For this section, 2 is as 7, and 5 as 1, modulo 20. Primes that reduce to 3, 7, 13 and 17 are 'upper' primes, which means that their period divides $p+1$. Primes ending in 1, 9, 11, 19 are 'lower' primes, meaning that their periods divide $p-1$. The primes in 1, 9, 13, 17 are 'short', which means that the period divides the maximum allowed an even number of times.
For 3, 7, 11, 19, the period divides the maximum an odd number of times. This means that all odd-indexed Fibonacci numbers can be expressed as the sum of two squares, such as $233 = 8^2 + 13^2$, or generally $F_{2n+1} = F^2_n + F^2_{n+1}$. So a prime like $107$, which reduces to $7$, would have an indicated period dividing $108$ an odd number of times. Its actual period is $36$. A prime like $109$ has a period dividing $108$ an even number of times, so its period is a divisor of $54$. Its actual period is $27$. A prime like $113$ is indicated to be upper and short, which means that its period divides $114$ an even number of times. It actually has a period of $19$. Artin's constant applies here as well. This means that these rules correctly find some 3/4 of all of the periods exactly. The next prime in this progression, $127$, actually has the indicated period for an upper long: 128. So do $131$ (lower long) and $137$ (upper short, at 69). Likewise $101$ (lower short) and $103$ (upper long) show the maximum periods indicated. No prime under $20\cdot 120^4$ is known for which, if $p$ divides some $F_n$, so does $p^2$. This does not preclude the existence of such a number.
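The quantity the answers call $D(N)$ (the smallest $n\ge1$ with $N \mid F_n$, often called the entry point or rank of apparition) is easy to check numerically. The following Python sketch is our own illustration, not part of the thread; it verifies a few of the question's statements and the periods quoted in the last answer:

```python
def entry_point(m):
    """Smallest n >= 1 with m | F_n -- the D(N) of David's answer."""
    a, b, n = 0, 1, 0
    while True:
        a, b = b, (a + b) % m   # step the Fibonacci recurrence mod m
        n += 1
        if a == 0:              # F_n ≡ 0 (mod m)
            return n

# Each statement "q divides every k-th Fibonacci number" says entry_point(q) divides k:
print(entry_point(2))                    # 3  (every third F_n is even)
print(entry_point(10), entry_point(61))  # 15 15  (statement 11)

# Entry points quoted for 107, 109, 113 in the last answer:
print(entry_point(107), entry_point(109), entry_point(113))  # 36 27 19

# D(p) divides p - 1 for primes p ≡ ±1 (mod 5), per David's observation:
for p in (11, 19, 29, 31, 41):
    assert (p - 1) % entry_point(p) == 0
```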
https://www.esaral.com/q/solve-the-following-62142
# Solve the following Question: Sulphurous acid $\left(\mathrm{H}_{2} \mathrm{SO}_{3}\right)$ has $\mathrm{K}_{\mathrm{a}_1}=1.7 \times 10^{-2}$ and $\mathrm{K}_{\mathrm{a}_2}=6.4 \times 10^{-8}.$ The $\mathrm{pH}$ of $0.588\ \mathrm{M}\ \mathrm{H}_{2} \mathrm{SO}_{3}$ is _______. (Round off to the nearest integer.) Solution: (1) Since $\mathrm{K}_{\mathrm{a}_1} \gg \mathrm{K}_{\mathrm{a}_2}$, the pH of the solution is governed by the first dissociation only: $$\mathrm{H}_{2} \mathrm{SO}_{3}(\mathrm{aq}) \rightleftharpoons \mathrm{H}^{\oplus}(\mathrm{aq})+\mathrm{HSO}_{3}^{-}(\mathrm{aq}), \qquad \mathrm{K}_{\mathrm{a}_1}=1.7 \times 10^{-2}$$ With initial concentration $\mathrm{c}=0.588\ \mathrm{M}$ and $x$ the equilibrium concentration of $\mathrm{H}^{\oplus}$ (so $[\mathrm{H}_{2}\mathrm{SO}_{3}] = \mathrm{c}-x$ at equilibrium): $$\mathrm{K}_{\mathrm{a}_1}=\frac{1.7}{100}=\frac{\left[\mathrm{H}^{\oplus}\right]\left[\mathrm{HSO}_{3}^{-}\right]}{\left[\mathrm{H}_{2} \mathrm{SO}_{3}\right]}=\frac{x^{2}}{0.588-x}$$ $$\Rightarrow\ 1.7 \times 0.588-1.7 x=100 x^{2} \ \Rightarrow\ 100 x^{2}+1.7 x-1=0$$ $$\Rightarrow\ \left[\mathrm{H}^{\oplus}\right]=x=\frac{-1.7+\sqrt{(1.7)^{2}+4 \times 100 \times 1}}{2 \times 100}=0.09186$$ Therefore the $\mathrm{pH}$ of the solution is $$\mathrm{pH}=-\log \left[\mathrm{H}^{\oplus}\right]=-\log (0.09186)=1.036 \simeq 1$$
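The quadratic step above can be checked numerically. This sketch (the function name is ours) treats the acid as monoprotic, since $K_{a_1} \gg K_{a_2}$, and solves the same equilibrium exactly rather than with the $1.7\times0.588\approx1$ rounding used in the worked solution:

```python
import math

def ph_first_dissociation(c, ka):
    """Solve ka = x^2 / (c - x) for x = [H+] (positive root), return (pH, x)."""
    # x^2 + ka*x - ka*c = 0  ->  quadratic formula, positive root only
    x = (-ka + math.sqrt(ka * ka + 4 * ka * c)) / 2
    return -math.log10(x), x

ph, h = ph_first_dissociation(0.588, 1.7e-2)
print(h, ph)   # x ≈ 0.0918 M, pH ≈ 1.04, which rounds off to 1
```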
http://mathhelpforum.com/advanced-math-topics/203992-topology-proofs-bases-print.html
# Topology Proofs, Bases • September 24th 2012, 11:34 AM Kiefer Topology Proofs, Bases 1) B={(a,b)|a,b are rational numbers} is a countable base for the usual topology for the real numbers. 2) For any topologies T1 and T2 for X, if B1 is a base for T1 and B1 is a subset of T1, then T1 is a subset of T2 Def1: Bp is a local base for p in (X,T) iff p is an element of B and B is an element of T for every B an element of Bp; and when p is an element of G and G is an element of T, then there is Bg an element of Bp such that p is an element of Bg, which is a subset of G. Def2: B is a base for the topology T iff B = the union of all Bp s.t. p is an element of X, where each Bp is a local base for p in the topological space (X,T). • September 25th 2012, 04:54 AM johnsomeone Re: Topology Proofs, Bases Quote: Originally Posted by Kiefer 1) B={(a,b)|a,b are rational numbers} is a countable base for the usual topology for the real numbers. You should prove this yourself. It's very easy by the definition of a base and if you know some basic facts about countability (the rationals are countable, and products of countable sets are countable). If you're going to continue in math, this is an important fact & example that you should understand well. Having a countable base comes up often enough that it's given its own name: "A topological space is second countable if it has a countable base." This example, exploiting the fact that the rationals are dense in the reals to produce a countable base (generalized to $\mathbb{R}^n$), is the "stock example" of what/why/how a base is countable. (And the exploitability of countable dense subsets is itself a common/useful enough feature to have its own name, "separable". Furthermore, these two ideas, separable and second countable, are often considered together, and are equivalent properties for nice spaces.)
Quote: Originally Posted by Kiefer 2) For any topologies T1 and T2 for X, if B1 is a base for T1 and B1 is a subset of T1, then T1 is a subset of T2 Are you sure you didn't make a mistake in writing down the problem? Again, if you're going to continue in math, it might be worth investing a little time learning some basic LaTeX. Your entire post was kinda difficult to read, but would've been very easy to read had it been done in LaTeX.
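To make the density argument behind problem 1 concrete, here is a small Python sketch (an illustration added here, not part of the thread): given any real x and any ε > 0, it produces an interval with rational endpoints that contains x and sits inside (x − ε, x + ε) — which is exactly why the set B above is a base.

```python
import math
from fractions import Fraction

def rational_base_element(x, eps):
    """Return rationals a < x < b with (a, b) contained in (x - eps, x + eps).

    Works by choosing a rational grid of spacing 1/n finer than eps/2;
    the rationals are dense, so such an interval always exists.
    """
    n = 1
    while 1.0 / n >= eps / 2:
        n *= 2
    k = math.floor(x * n)
    if k == x * n:                                   # x landed on a grid point
        return Fraction(k - 1, n), Fraction(k + 1, n)
    return Fraction(k, n), Fraction(k + 1, n)

a, b = rational_base_element(2 ** 0.5, 1e-3)
print(a < 2 ** 0.5 < b)   # True: a rational-endpoint interval around sqrt(2)
```

Countability then follows because pairs of rationals form a countable set.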
https://czesciogrodowe.pl/25849/calcium+metal+and+water+balanced+equation+function.html
# calcium metal and water balanced equation function You are asking for the following equation to be written out and balanced: calcium hydroxide plus carbon dioxide yields calcium carbonate plus water. The chemical equation is written in symbol form ### Ca+H2O=Ca(OH)2+H2 Balanced … 2018/12/11· Ca+H₂O=Ca(OH)₂+H₂ Balanced Equation||Calcium+Water=Calcium hydroxide+Hydrogen Balanced Equation||calcium water calcium hydroxide hydrogen symbol equation||Wh ### Determining Calcium Ion Concentration in Water … Calcium and magnesium ions dissolved in water cause water hardness. Ethylenediaminetetraacetic acid (EDTA), shown on the right in its deprotonated form, is commonly used in a titration to determine the concentration of Ca 2+ and Mg 2+ ions in water because both ions form complexes with EDTA. ### 1) When calcium is reacted with cold water, the … When 11.0 g of calcium metal is reacted with water, 5.00 g of calcium hydroxide is produced. Using the following balanced equation, calculate the percent yield for the reaction? Ca(s) + 2 H2O(l) → Ca(OH)2(aq) + H2(g) 84.0% 12.3% 45.5% 24.6% on Bones are ### K+Cl2=KCl Balanced Equation|| Potassium + Chlorine = … 2018/11/17· potassium + chlorine = potassium chloride balanced equation potassium metal and chlorine gas combine to form potassium chloride solid potassium chloride formula ### Solved: Write the neutralization reaction for each of the … Answer to: Write the neutralization reaction for each of the following. Balance the resulting equation. A. Ca(OH)2 + HCl arrow B. H2SO4 + Al(OH)3 ### Calcium carbide, CaC2 reacts with water to form … Question: Calcium carbide, CaC2, reacts with water to form acetylene, C2H2, and calcium hydroxide. a. Balance equation b. If you were designing the procedure for producing ### Class 10 Science Chapter 3 Board Questions of Metal and … Explain why calcium metal after reacting with water starts floating on its surface. Write the chemical equation for the reaction.
Name one more metal that starts floating after some time when immersed in water. [CBSE 2012] ### why does calcium starts floating when it reacts with … Dear student! Actually, when calcium reacts with water, the reaction is less vigorous and it forms calcium hydroxide with release of hydrogen gas, but the heat evolved is not sufficient for the hydrogen gas to catch fire, and so it sticks over the surface of the calcium ### Chemistry of Calcium (Z=20) - Chemistry LibreTexts Calcium metal is fairly reactive and combines with water at room temperature to produce hydrogen gas and calcium hydroxide $Ca(s) + 2H_2O(l) \rightarrow Ca(OH)_2(aq) + H_2(g)$ The product will reveal hydrogen bubbles on the calcium metal's surface. ### Write a balanced equation for each of the reactions you … Write a balanced equation for each of the reactions you observed in Parts 1-7. Start with the reactant formulas shown in the Data Table and refer to the "Types of Chemical Reactions" page for information and examples. Remember to check charge balance in the ### Calcium nitrate dihydrate is dissolved in water. What … Most of the metal hydroxides and oxides are insoluble in water but some of the alkali metal hydroxides, Ba(OH)2 and Sr(OH)2, are soluble in water. Explanation of Solution Calcium nitrate dissolved in water is soluble, because almost all the salts of Na+, K+, NH4+, and salts of nitrate (NO3-), chlorate (ClO3-), perchlorate (ClO4-), and acetate (CH3CO2-) are soluble ### Lakhmir Singh Chemistry Class 10 Solutions For Chapter 1 … Calcium carbonate reacts with hydrochloric acid to produce calcium chloride, water and carbon dioxide gas. (b) Write a balanced chemical equation with state symbols for the following reaction: Sodium hydroxide solution reacts with hydrochloric acid solution to produce sodium chloride solution and water. ### Write the equation for the thermal decomposition of … Write the equation for the thermal decomposition of calcium carbonate. Include state symbols.
CaCO3 (s) ---> CaO (s) + CO2 (g) All metal carbonates decompose to give the metal oxide and carbon dioxide. As the calcium ion has a 2+ charge, the formula of 3 ### How do you write "calcium + nitrogen -> calcium … 2015/12/28· Calcium is a metal, so its formula will simply be "Ca". Nitrogen is a diatomic molecular compound, making it "N2". Since calcium nitride is an ionic compound, by evaluating its constituent ions we can determine its formula. The calcium ion is a 2+ ion, or "Ca^(2+)". ### water treatment – removing hardness (calcium and … Calcium carbonate precipitation takes place with the formation of sodium carbonate that will react with permanent hardness according to reactions (5) and (6) above. Using caustic soda will, therefore, lower water hardness to a level that is equal to twice the reduction ### How can I balance this chemical equation? Sodium … 2014/7/23· This is a double replacement reaction. Here's how to balance double replacement equations > Your unbalanced equation is "Na"_3"PO"_4 + "CaCl"_2 → "Ca"_3("PO"_4)_2 + "NaCl" 1. Start with the most complicated formula, "Ca"_3("PO"_4)_2. Put a 1 in front of it. ### 5.1: Writing and Balancing Chemical Equations … PROBLEM $$\PageIndex{3}$$ Write a balanced molecular equation describing each of the following chemical reactions. Solid calcium carbonate is heated and decomposes to solid calcium oxide and carbon dioxide gas. Gaseous butane, C4H10, reacts with diatomic oxygen gas to yield gaseous carbon dioxide and water vapor. ### Phosphoric Acid And Barium Hydroxide Net Ionic Equation Phosphoric Acid And Barium Hydroxide Net Ionic Equation ### Calcium in diet: MedlinePlus Medical Encyclopedia 2020/8/4· Calcium is one of the most important minerals for the human body. It helps form and maintain healthy teeth and bones. A proper level of calcium in the body over a lifetime can help prevent osteoporosis. Calcium helps your body with: Building strong bones and teeth ### Balance equation.
Calcium oxide + water = calcium … 2013/7/12· I got an answer but actually it wasn't properly balanced; the hydrogen wasn't properly balanced. Source(s): balance equation calcium oxide water calcium hydroxide ### 4.1 Writing and Balancing Chemical Equations molecules and 2 moles of water molecules. Figure 4.3 Regardless of the absolute number of molecules involved, the ratios between numbers of molecules are the same as that given in the chemical equation. Balancing Equations balanced ### Ca + O2 = CaO - Chemical Equation Balancer The balanced equation will appear above. Use uppercase for the first character in the element and lowercase for the second character. Examples: Fe, Au, Co, Br, C, O, N, F. Ionic charges are not yet supported and will be ignored. Replace immutable groups in ### Balance the Chemical Equation for the reaction of … Therefore this is now a balanced equation. Whenever we balance an equation you have to change the number of molecules used, represented by the big numbers before a molecule, e.g. 2HCl. You cannot change the subscript (small) numbers, as this is the number of each element in a molecule, and you would end up making up your own molecule that doesn't make sense! ### Calcium Carbide for Acetylene Production - Rexarc Blog 2019/2/21· Calcium carbide should be kept in air and water tight metal packages, and labelled "Calcium Carbide – Dangerous If Not Dry". Calcium carbide in drums should not exceed 250 kg. It should be stored where water cannot enter. Containers should be regularly
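Several of the snippets above ask whether an equation is balanced. A balanced equation simply has the same atom counts on both sides, which is easy to verify mechanically. This Python sketch (an illustration with hand-entered atom counts, not code from any of the quoted pages) checks Ca + 2 H2O → Ca(OH)2 + H2:

```python
from collections import Counter

def total_atoms(side):
    """Sum atom counts over (coefficient, species) pairs on one side."""
    counts = Counter()
    for coeff, species in side:
        for element, n in species.items():
            counts[element] += coeff * n
    return counts

Ca    = {"Ca": 1}
H2O   = {"H": 2, "O": 1}
CaOH2 = {"Ca": 1, "O": 2, "H": 2}   # Ca(OH)2
H2    = {"H": 2}

lhs = total_atoms([(1, Ca), (2, H2O)])
rhs = total_atoms([(1, CaOH2), (1, H2)])
assert lhs == rhs   # Ca + 2 H2O -> Ca(OH)2 + H2 is balanced
```

The same helper works for any of the other reactions on this page once their atom counts are entered.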
https://byjus.com/question-answer/a-number-becomes-a-perfect-square-when-we-subtract-1-from-it-which-of-the-given-options-cannot-be-the-last-digit/
Question # A number becomes a perfect square when we subtract $1$ from it. Which of the given options cannot be the last digit of that number? A $2$ B $4$ C $5$ D $0$ Solution ## The correct option is B ($4$) Step 1: Explanation of the correct option. Given that a number becomes a perfect square when we subtract $1$ from it. The unit digit of a perfect square can only be $0,1,4,5,6,9$; no square number ends in $2,3,7$ or $8$. Since the number is one more than a perfect square, its unit digit cannot be $3,4,8$ or $9$. Therefore, option (B) is correct. Step 2: Explanation of the incorrect options. Options (A), (C), and (D) do not give the correct answer.
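The unit-digit argument can be verified by brute force over all ten possible last digits. This short Python check is an addition for illustration:

```python
# Last digits of perfect squares, and hence of "a perfect square plus 1".
square_endings = {(d * d) % 10 for d in range(10)}       # {0, 1, 4, 5, 6, 9}
number_endings = {(s + 1) % 10 for s in square_endings}  # {0, 1, 2, 5, 6, 7}

assert 2 in number_endings and 5 in number_endings and 0 in number_endings
assert 4 not in number_endings   # option B: 4 can never be the last digit
```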
https://godiego.co/posts/Jarvis/
# Hack The Box: Jarvis machine write-up

Jarvis was one of the funniest and most interesting machines I've done so far. I learned a lot from it. It starts with a SQL injection that can be exploited to obtain some credentials, which are then used to log in to a phpmyadmin panel. Then, the version used is vulnerable, so we can gain command execution as www-data. After that, it's possible to pivot to the user pepper by injecting commands, via command substitution, into a script that can be run as that user. Finally, in order to escalate to root we just need to enumerate SUID binaries and find out that systemctl is one of those (which is not normal). Let's start! The IP of the machine is 10.10.10.143.

### Enumeration

As always, we start by enumerating open ports to discover the services running in the machine. I fire up nmap:

Result of nmap scan

    # Nmap 7.70 scan initiated Sun Jun 30 18:12:38 2019 as: nmap -p- -sV -sC -oN nmap/initial 10.10.10.143
    Nmap scan report for supersecurehotel.htb (10.10.10.143)
    Host is up (0.047s latency).
    Not shown: 65532 closed ports
    PORT      STATE SERVICE VERSION
    22/tcp    open  ssh     OpenSSH 7.4p1 Debian 10+deb9u6 (protocol 2.0)
    | ssh-hostkey:
    |   2048 03:f3:4e:22:36:3e:3b:81:30:79:ed:49:67:65:16:67 (RSA)
    |   256 25:d8:08:a8:4d:6d:e8:d2:f8:43:4a:2c:20:c8:5a:f6 (ECDSA)
    |_  256 77:d4:ae:1f:b0:be:15:1f:f8:cd:c8:15:3a:c3:69:e1 (ED25519)
    80/tcp    open  http    Apache httpd 2.4.25 ((Debian))
    | http-cookie-flags:
    |   /:
    |     PHPSESSID:
    |_      httponly flag not set
    |_http-server-header: Apache/2.4.25 (Debian)
    |_http-title: Stark Hotel
    64999/tcp open  http    Apache httpd 2.4.25 ((Debian))
    |_http-server-header: Apache/2.4.25 (Debian)
    |_http-title: Site doesn't have a title (text/html).
    Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
    Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
    # Nmap done at Sun Jun 30 18:26:53 2019 -- 1 IP address (1 host up) scanned in 855.12 seconds

We can see different things: SSH on the usual port and two HTTP servers, on ports 80 and 64999. I started by taking a look at port 80 and didn't even need to look at 64999 to root the box, so it was probably a rabbit hole.

#### Port 80 enumeration

We can see that we are going to deal with a hotel booking website. So, I immediately started up DirBuster to run in the background while I tinkered around manually looking at the functionality. DirBuster didn't find many things, only that there was a phpmyadmin panel under /phpmyadmin. The report can be found here.

Website found on port 80

Manually I bumped into this interesting parameter, which was vulnerable to SQL injection. So I fired up sqlmap and it dumped the database and some credentials:

code parameter vulnerable

sqlmap result: user table from mysql db

We can see there is a user and password column:

Obtained credentials

Good! We have the following credentials: DBadmin:imissyou.

#### Gaining a shell

With the creds we can access the phpmyadmin panel, and upon inspecting the version we find out it's vulnerable:

Vulnerable version 4.8.0

Using metasploit: We can see that versions 4.8.0 and 4.8.1 are vulnerable, so we can immediately set the options and exploit!

Setting the options and getting a proper shell with python

#### From www-data to pepper

Once there I started enumerating with the usual sudo -l and found something really juicy: a python script I could run as pepper (the user account on the box).

Output of sudo -l

I checked the contents of the script and found this interesting and dangerous functionality:

    def exec_ping():
        forbidden = ['&', ';', '-', '`', '||', '|']
        command = input('Enter an IP: ')
        for i in forbidden:
            if i in command:
                print('Got you')
                exit()
        os.system('ping ' + command)

Obviously we need to bypass that blacklist. That is really easy with command substitution: $(command).
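For contrast, a blacklist like the one above is the wrong tool for this job. A safer version of the ping helper (a hypothetical rewrite, not code from the box) validates the input as an IP address and never invokes a shell, so `$(...)` and backticks are never interpreted at all:

```python
import ipaddress
import subprocess

def safe_ping(target):
    """Ping a host without ever involving a shell.

    ipaddress.ip_address() raises ValueError for anything that is not a
    literal IP address, and passing argv as a list to subprocess.run means
    there is no shell to expand $(...) or backticks in the first place.
    """
    ip = ipaddress.ip_address(target)   # rejects '$(command)' and friends
    subprocess.run(["ping", "-c", "1", str(ip)], check=False)
```

Allow-list validation plus shell-free execution beats any character blacklist.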
My idea was the following: getting a pair of SSH keys, then via the command injection copy my public key to .ssh/authorized_keys, and then just log in through SSH. Simple!

1. I generate the keys with ssh-keygen.
2. I copy my public key to /tmp/mykey.
3. I copy /tmp/mykey into /home/pepper/.ssh/authorized_keys. We need to add the -p option to the script to call that function and remember to add sudo -u pepper at the beginning to run as that user.
4. I log in through SSH and can read user.txt.

Steps above visualized

#### From pepper to root

This privilege escalation was one of the best I've come across so far. It turned out to be simple yet a great learning curve. I ran the usual enumeration tool, LinEnum.sh, and searched manually for SUID executables to find that /bin/systemctl was one. If you don't know what a SUID executable means, I recommend you read this article. As that is not usual, I thought that was the misconfiguration that would give me root. So I created a service from a template like this:

s.service

    [Unit]
    Description=TEST

    [Service]
    ExecStart=/bin/cat /root/root.txt > /tmp/myHASH

    [Install]
    WantedBy=multi-user.target

However, there was a problem: neither /bin/systemd nor /etc/systemd was writable. After researching I found out that there exists the link option, which basically lets us run it as if it was in the directory. I ran the following:

    systemctl link /tmp/s.service
    systemctl start s

However, no file appeared in the /tmp directory. I checked the status and it turns out there was an error with which I got the flag (lol):

Getting root hash

However, I was bothered, so I tried to make it work by running a bash file instead. So I created a file named script.sh (creative, I know) with the following contents:

    #!/bin/sh -
    /bin/cat /root/root.txt > /tmp/test.txt

At first I forgot the first line, and that's why there is an error on the next image. After adding it everything ran smoothly and I got my test.txt.
Making the service work with a bash script

I was still bothered for not making it work without a bash script, so I came up with the idea of using /bin/sh -c `command` in the ExecStart field of the service. And it worked!

Making the service work without a bash script

Of course, I could have set up a listener on netcat and then executed a reverse shell instead of just using cat or cp. I hope you learned as much as I did! Until next time!
http://www.mathwarehouse.com/exponential-decay/half-life.php
## Exponential Decay of Radioactive and other substances In this first chart, we have a radioactive substance with a half life of 5 years. As you can see, the substance initially has 100% of its atoms, but after its first half life (5 years) only 50% of the radioactive atoms are left. That's what 'half life' means. Literally, half of the substance is gone every five years (the half life of this particular substance). So, in our example, after the second half life is over (that's 10 years, since each half life is 5 years), there will be $$\frac 1 2$$ of $$50\%$$ of the substance left, which, of course, is $$25 \%$$. And the pattern continues: every 5 years another half life reduces the substance by $$\frac 1 2$$, so after the third half life is over (the 15 year mark), there will be $$\frac 1 2$$ of $$25\%$$ of the substance left, which is $$12.5 \%$$. ### General Formula of Half Life As you might be able to tell from Graph 1, half life is a particular case of exponential decay: one in which 'b' is $$\frac 1 2$$. So, generally speaking, half life has all of the properties of exponential decay. ### Specific, Real Examples of Half Life Example 1 Iodine-131 is a radioactive substance and has a pretty short half life of only 8 days. Graph 3, below, represents the graph of its half life. If 30 grams is given to a patient, then how much of the substance is left after 8 days? Since 8 days is 1 half life, we just multiply the starting amount by $$\frac 1 2$$: $$\text{30 grams} \cdot \frac 1 2 = \text{ 15 grams }$$ You can see, on Graph 3, that 1 half life is the point (1,15). How much is left after 16 days? Since 16 days is 2 half lives, we just multiply the last value of 15 by $$\frac 1 2$$ to get 7.5. ### What does half life mean on a graph? Well, if the half life is '3 years' how does that relate to the graph? What if the half life is '4 minutes'? In short, the half life tells us the scale of the graph.
If the half life is '3 years', then each tick mark on the graph represents 3 years. On the other hand, if the half life is '4 minutes', then each tick mark on the graph represents 4 minutes.
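The worked examples all follow the general decay formula $$A(t) = A_0 \cdot \left(\frac 1 2\right)^{t/h}$$ where $$h$$ is the half life. A minimal Python sketch (an addition to the lesson) reproduces both answers:

```python
def remaining(initial, half_life, elapsed):
    """Amount of substance left after `elapsed` time units,
    given the substance's half life in the same units."""
    return initial * 0.5 ** (elapsed / half_life)

# Iodine-131 example from the text: 30 grams, half life of 8 days.
print(remaining(30, 8, 8))    # 15.0 grams after one half life
print(remaining(30, 8, 16))   # 7.5 grams after two half lives
```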
http://en.wikisource.org/wiki/Page:A_Treatise_on_Electricity_and_Magnetism_-_Volume_1.djvu/255
# Page:A Treatise on Electricity and Magnetism - Volume 1.djvu/255 The images within the first sphere form a converging series, the sum of which is $-P\frac{e^{\varpi-u}-1}{e^{\varpi}-1}.$ This therefore is the quantity of electricity on the first or interior sphere. The images outside the second sphere form a diverging series, but the surface-integral of each with respect to the spherical surface is zero. The charge of electricity on the exterior spherical surface is therefore $P\left(\frac{e^{\varpi-u}-1}{e^{\varpi}-1}-1\right)=-P\frac{e^{\varpi}-e^{\varpi-u}}{e^{\varpi}-1}.$ If we substitute for these expressions their values in terms of $OA, OB$, and $OP$, we find charge on $A=-P\frac{OA}{OP}\frac{PB}{AB},$ charge on $B=-P\frac{OB}{OP}\frac{AP}{AB}.$ If we suppose the radii of the spheres to become infinite, the case becomes that of a point placed between two parallel planes $A$ and $B$. In this case these expressions become charge on $A=-P\frac{PB}{AB},$ charge on $B=-P\frac{AP}{AB}.$ Fig. 15 172.] In order to pass from this case to that of any two spheres not intersecting each other, we begin by finding the two common inverse points $O, O'$ through which all circles pass that are orthogonal to both spheres. Then, inverting the system with respect to either of these points, the spheres become concentric, as in the first case. The radius $OAPB$ on which the successive images lie becomes an arc of a circle through $O$ and $O'$, and the ratio of $O'P$ to $OP$ is
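As a quick consistency check (an addition, not part of Maxwell's text): in the parallel-plane limit the two induced charges together account for the whole of $-P$, since the point $P$ lies between the planes and so $AP + PB = AB$:

```latex
-P\frac{PB}{AB} \;-\; P\frac{AP}{AB}
  \;=\; -P\,\frac{AP + PB}{AB}
  \;=\; -P .
```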
https://mathematics-monster.com/lessons/order_of_operations.html
# What Is the Order of Operations ## What Is the Order of Operations? The order of operations tells us what order to perform operations in. A calculation may have several operations, such as adding, subtracting, multiplying, dividing and squaring. ## Why Do We Need the Order of Operations? Imagine we wanted to find the answer to the calculation below: 1 + 2 × 3 This calculation contains two operations: adding and multiplying. There are two orders to doing this calculation, and two answers. Do we add then multiply, or multiply then add? ### Order 1 Add the first two numbers, then multiply the result with the third number. 1 + 2 × 3 = 3 × 3 = 9 ### Order 2 Multiply the last two numbers, then add the result to the first number. 1 + 2 × 3 = 1 + 6 = 7 Which answer is the correct one? It turns out the second order of operations is the correct one. Luckily, there is a simple way to use the correct order. ## BODMAS BODMAS is an acronym for the order of operations, read from top to bottom. Operations on the same level can be performed in any order. • Brackets. Evaluate brackets first. • Order. Evaluate exponents (such as squares and square roots) second. • Division and Multiplication. Evaluate numbers that are divided and multiplied third; these take the same precedence. • Addition and Subtraction. Evaluate numbers that are added and subtracted fourth; these also take the same precedence. ## How to Use the Order of Operations Using the order of operations is easy. ### Question Find 2 + 3² − (8 × 2) ÷ 2. # 1 Brackets. Evaluate expressions within brackets first. In our example, there is one pair of brackets: (8 × 2) = 16. 2 + 3² − (8 × 2) ÷ 2 = 2 + 3² − 16 ÷ 2 # 2 Order. Evaluate numbers with exponents second. In our example, there is one exponent: 3² = 9. 2 + 3² − 16 ÷ 2 = 2 + 9 − 16 ÷ 2 # 3 Division and Multiplication. Evaluate numbers that are divided and multiplied third. In our example, there is one division: 16 ÷ 2 = 8.
2 + 9 − 16 ÷ 2 = 2 + 9 − 8 # 4 Addition and Subtraction. Evaluate numbers that are added and subtracted fourth. In our example, there is one addition and one subtraction. Addition and subtraction take the same precedence, so it does not matter which order we do them in. We will do them left to right. 2 + 9 − 8 = 11 − 8 (as 2 + 9 = 11) 11 − 8 = 3 So 2 + 3² − (8 × 2) ÷ 2 = 3.
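Python happens to follow the same precedence rules as BODMAS, so the worked example can be replayed step by step (an illustration, not part of the lesson; `**` is the exponent and `/` is division):

```python
# Each assert mirrors one step of the worked example above.
assert (8 * 2) == 16                     # Brackets first
assert 3 ** 2 == 9                       # then Order (exponents)
assert 16 / 2 == 8                       # then Division and Multiplication
assert 2 + 9 - 8 == 3                    # then Addition and Subtraction
assert 2 + 3 ** 2 - (8 * 2) / 2 == 3     # the whole expression at once
```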
http://newton.cam.ac.uk/seminar/20190812113012301
# Quantitative results on continuity of the spectral factorisation mapping Presented by: Eugene Shargorodsky King's College London Date: Monday 12th August 2019 - 11:30 to 12:30 Venue: INI Seminar Room 1 Abstract: It is well known that the matrix spectral factorisation mapping is continuous from the Lebesgue space $L^1$ to the Hardy space $H^2$ under the additional assumption of uniform integrability of the logarithms of the spectral densities to be factorised (S. Barclay; G. Janashia, E. Lagvilava, and L. Ephremidze). The talk will report on a joint project with Lasha Epremidze and Ilya Spitkovsky, which aims at obtaining quantitative results characterising this continuity.
http://pizzamanfundraising.com/9bwcq/the-reaction-of-alkali-metals-with-oxygen-produce-98b7de
Alkali metals react quickly with oxygen and are stored under oil to prevent oxygen from reaching the surface of the bare metal. Lithium, sodium and potassium will all burn in air when heated to give the corresponding oxides (see below): lithium produces an oxide, sodium produces a peroxide, and potassium, cesium, and rubidium produce superoxides. The universal indicator changes from green to purple.

The alkali metals lithium, sodium and potassium will all react vigorously with the halogens to form a crystalline halide salt; with chlorine, lithium gives the salt lithium chloride (LiCl). The alkali salts of O₂⁻ are orange-yellow in color and quite stable if they are kept dry. If the acid is relatively dilute, the reaction produces nitrogen monoxide, although this immediately reacts with atmospheric oxygen, forming nitrogen dioxide.

All the alkali metals react directly with oxygen: lithium and sodium form monoxides, Li₂O and Na₂O, and the heavier alkali metals form superoxides, MO₂. Alkali metals react with water, giving hydrogen and the corresponding metal hydroxide. Any alkali metal, on coming in contact with air or oxygen, starts burning, and oxides are formed in the process. In excess of dilute acids or water they release hydrogen peroxide:

Na₂O₂ + 2 HCl → 2 NaCl + H₂O₂

Alkali metals are given the name alkali because the oxides of these metals react with water to form a metal hydroxide that is basic or alkaline. The peroxides and superoxides are potent oxidants. The alkali metals are so called because reaction with water forms alkalies (i.e., strong bases capable of neutralizing acids). The alkali metals are soft metals that are highly reactive with water and oxygen; they rapidly react with oxygen to produce several different ionic oxides. How does sodium react with water? The compound in brackets represents the minor product of combustion. All the Group 1 elements react vigorously with chlorine.
What is the likely identity of the metal? Solid sodium metal reacts with water to produce aqueous sodium hydroxide and hydrogen gas. The general equation for the reaction with oxygen is:

metal + oxygen → metal oxide

Alkali metals also have a silver-like shine and are great conductors of heat and light. Note: the first three metals in the table above produce hydroxides; the rest, if they react, produce oxides. Group 1 metals react with oxygen gas to produce metal oxides, and all the alkali metals react vigorously with cold water.

The Alkali Metals - Group 1 - Reaction with the Halogens. The table below shows the types of compounds formed in reaction with oxygen. Some metals will react with oxygen when they burn.

Reaction with oxygen. Note: the reason why lithium forms a nitride is discussed on the page about reactions of Group 2 elements with air or oxygen. The other alkali metals (Rb, Cs, Fr) form superoxide compounds (in which oxygen takes the form O₂⁻) as the principal combustion products. K, Rb, and Cs. 14) Write the balanced equation for the reaction of potassium with water. Upon exposure to air, alkali metal peroxides absorb CO₂ to give peroxycarbonates. There is a diagonal relationship between lithium and magnesium.

K₂O(s) + H₂O(l) → 2 KOH(aq)

Some alkali metals form peroxides, such as sodium forming Na₂O₂. The only alkali metal to react with atmospheric nitrogen is lithium. In this session, students shall learn about the combustion reactions of metals and non-metals and about the resulting products.

Superoxide forms salts with alkali metals and alkaline earth metals: the salts CsO₂, RbO₂, KO₂, and NaO₂ are prepared by the reaction of O₂ with the respective alkali metal. This also explains why alkali metals burn vigorously when you place them in a jar filled with oxygen.
With excess oxygen, the alkali metals can form peroxides, M₂O₂, or superoxides, MO₂; however, for the first three metals the simple compounds are more common. When this substance is dissolved in water, the solution gives a positive test for hydrogen peroxide, H₂O₂. However, nitrate ions are easily reduced to nitrogen monoxide and nitrogen dioxide. When a metal reacts with oxygen, a metal oxide forms; most metals react with atmospheric oxygen to form metal oxides. You must know how to test for hydrogen gas! Acids and alkali metals … The white powder is the oxide of lithium, sodium and potassium. The metals at the top of the reactivity series are powerful reducing agents, since they are easily oxidized.

Group 1 metals react with oxygen gas to give the metal oxide: 4X + O₂ → 2X₂O. Lithium, sodium and potassium form white oxide powders after reacting with oxygen. For example, the reactions of lithium with the halogens are: lithium + fluorine → lithium fluoride.

How do the alkali metals react with water? 13) Which alkali metals can react with oxygen to form either the peroxide or the superoxide? Sodium superoxide (NaO₂) can be prepared with high oxygen pressures, whereas the superoxides of rubidium, potassium, and cesium can be prepared directly by combustion in air. By contrast, no superoxides have been isolated in pure form in the case of lithium or the alkaline-earth metals, although… Metals reacting with nitric acid, therefore, tend to produce oxides of nitrogen rather than hydrogen gas. Alkali metals get their name from their reaction with water: they react with water to produce an alkali, and with oxygen to produce an alkali metal oxide. Heavier alkali metals … The alkali metals fizz when they react with water. Few reactions are generally formulated for the peroxide salt.
A reaction of an alkali or alkaline earth metal with oxygen may produce a peroxide, an oxide, or a superoxide, depending on the metal. Cr₂O₇²⁻ can act as an oxidizing agent in the solid state as well as in aqueous solution. The alkali metals react with chlorine to form white crystalline salts. All three metals are less dense than water and so they float. The alkaline earth metals react with oxygen in the air to give the corresponding oxide. Reaction with nitrogen? The further down the element lies on the periodic table, the more severe the reaction. If a piece of hot lithium is lowered into a jar of chlorine, a vigorous reaction takes place, forming a white powder that settles on the sides of the jar.

The Alkali Metals - Group 1 - Reaction with Water. How do the alkali metals react with oxygen (burning in air)?

2Na + 2H₂O → 2NaOH + H₂

Two examples of combustion reactions are: iron reacts with oxygen to form iron oxide, 4 Fe + 3 O₂ → 2 Fe₂O₃; and magnesium reacts with oxygen to produce magnesium oxide, just as zinc reacts with oxygen to produce zinc oxide. The halogens are fluorine, chlorine, bromine and iodine. The general reaction of an alkali metal (M) with H₂O(l) is given in the following equation:

$\ce{ 2M(s) + 2H2O(l) \longrightarrow 2M^{+}(aq) + 2OH^{-}(aq) + H2 (g)}$

From this reaction it is apparent that OH⁻ is produced, creating a basic or alkaline environment. The reaction of alkali metals with water is represented by the following equation:

2 M(s or l) + 2 H₂O(l) → 2 MOH(aq) + H₂(g)

where M is the alkali metal.

The following equation shows the formation of the superoxide, where M represents K, Rb, Cs, or Fr:

$M(s) + O_2(g) \rightarrow MO_2(s)$

These compounds tend to be effective oxidizing agents due to the fact that O₂⁻ is one electron short of a … The alkali metals react with the halogens (group 17) to form ionic halides; with the heavier chalcogens (group 16) to produce metal chalcogenides; and with oxygen to form compounds whose stoichiometry depends on the size of the metal atom. In each reaction, hydrogen gas is given off and the metal hydroxide is produced. The production of the hydroxide (alkali) can be tested by adding universal indicator (UI) to the reaction vessel: UI changes from green to purple in the presence of these hydroxides.

The six elements in the alkali metals group are, in order of appearance on the periodic table: lithium, sodium, potassium, rubidium, cesium and francium. The alkali metals lithium, sodium and potassium all react with cold water, forming a soluble alkaline hydroxide and hydrogen gas. This explains why, when you cut an alkali metal, the shiny surface quickly dulls as an oxide layer forms, having reacted with oxygen. The reaction of potassium with water is highly vigorous, and the hydrogen given off is flammable. When burned in air, alkaline earth metals will react with nitrogen (as well as with oxygen) to give the corresponding nitride; this is different from the alkali metals, of which only lithium reacts with N₂. You will find this discussed on the page about electronegativity.

Alkali metals also have a strong reducing property. Any metal that reacts with oxygen will produce a metal oxide. The usual oxide, M₂O, can be formed with alkali metals generally by limiting the supply of oxygen. How do the alkali metals react with the halogens? Lithium's reactions are often rather like those of the Group 2 metals. The oxygen anions are: oxide, O²⁻; peroxide, O₂²⁻; and superoxide, O₂⁻. In a sense these compounds are salts (a metal combined with a non-metal), but they are more commonly called oxides. Oxides of the alkali metals are special since they are base anhydrides and react with water when they dissolve, to form solutions of bases. What are some other reactions of the alkaline earth metals? Alkali metals are so called because when they react with water, they create highly alkaline substances. Upon heating, the reaction with water leads to the release of oxygen. The sodium disappears faster than the lithium. Upon reacting with oxygen, alkali metals form oxides, peroxides, superoxides and suboxides. When water touches alkali metals, the reaction produces hydrogen gas and a strong alkaline solution, also known as a base. These metal hydroxides are strong bases and dissolve very well in water. All the alkali metals react vigorously with halogens to produce salts, the most industrially important of which are NaCl and KCl.

(b) One of the alkali metals reacts with oxygen to form a solid white substance. (c) Write a balanced chemical equation for the reaction of the white substance with water.
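The pattern of principal combustion products described above can be summarized in a small lookup table, and the quoted equations can be sanity-checked by counting atoms. This is an illustrative sketch only; the dictionary and variable names are not from the original text.

```python
# Principal product when each alkali metal burns in excess air,
# as described in the text: Li -> oxide, Na -> peroxide, K/Rb/Cs -> superoxide.
PRINCIPAL_PRODUCT = {
    "Li": "Li2O (oxide)",
    "Na": "Na2O2 (peroxide)",
    "K":  "KO2 (superoxide)",
    "Rb": "RbO2 (superoxide)",
    "Cs": "CsO2 (superoxide)",
}

# Atom-count check that the combustion equations quoted above are balanced:
# each entry maps an equation to its (left-hand, right-hand) atom tallies.
reactions = {
    "4 Li + O2 -> 2 Li2O": ({"Li": 4, "O": 2}, {"Li": 2 * 2, "O": 2 * 1}),
    "2 Na + O2 -> Na2O2":  ({"Na": 2, "O": 2}, {"Na": 2, "O": 2}),
    "K + O2 -> KO2":       ({"K": 1, "O": 2}, {"K": 1, "O": 2}),
}
for equation, (lhs, rhs) in reactions.items():
    assert lhs == rhs, f"unbalanced: {equation}"
```

The same tally approach extends to the hydroxide and peroxide equations quoted earlier in the text.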
Alkali metal: any of the six elements of Group 1 (Ia) of the periodic table: lithium, sodium, potassium, rubidium, cesium, and francium.
http://openstudy.com/updates/5203fb1ae4b010d6e9b32bbe
## anonymous (3 years ago)

Write the explicit formula for the geometric sequence: -5, 20, 80

1. anonymous: Did you mean -80? That's not a geometric sequence unless the 80 is negative.
2. anonymous: yes
3. anonymous: can you help me?
4. anonymous: I have an answer, but I'm not sure how to explain how I got it. LOL
5. anonymous: $a _{n}=-5\times(-4)^{(n-1)}$
6. anonymous: Plug in n=1 and you get -5, plug in n=2 and you get 20, plug in n=3 and you get -80, etc.
7. anonymous: I understand how to do it now, thanks.
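The formula in reply 5 can be checked numerically, as reply 6 suggests. A quick sketch (the function name `a` is just for illustration):

```python
# Explicit formula from the thread: a_n = -5 * (-4)^(n-1),
# i.e. first term -5 and common ratio 20 / -5 = -4
# (so the third term is -80, as the first reply points out).
def a(n):
    return -5 * (-4) ** (n - 1)

print([a(n) for n in (1, 2, 3)])  # -> [-5, 20, -80]
```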
https://gateoverflow.in/1412/gate-cse-2013-question-3
Which one of the following does NOT equal $$\begin{vmatrix} 1 & x & x^{2}\\ 1& y & y^{2}\\ 1 & z & z^{2} \end{vmatrix} \quad ?$$

1. $\begin{vmatrix} 1& x(x+1)& x+1\\ 1& y(y+1) & y+1\\ 1& z(z+1) & z+1 \end{vmatrix}$
2. $\begin{vmatrix} 1& x+1 & x^{2}+1\\ 1& y+1 & y^{2}+1\\ 1& z+1 & z^{2}+1 \end{vmatrix}$
3. $\begin{vmatrix} 0& x-y & x^{2}-y^{2}\\ 0 & y-z & y^{2}-z^{2}\\ 1 & z & z^{2} \end{vmatrix}$
4. $\begin{vmatrix} 2& x+y & x^{2}+y^{2}\\ 2 & y+z & y^{2}+z^{2}\\ 1 & z & z^{2} \end{vmatrix}$

Comments:

One can also substitute particular values for $x, y, z$ and eliminate the options that way.

For option (A), the operations needed are $C_{3} \leftarrow C_{3} + C_{2}$, then $C_{2} \leftarrow C_{2} + C_{1}$, then swapping $C_{2}$ and $C_{3}$. The swap multiplies the determinant by $-1$, so option (A) equals $(-1)\cdot|A|$, whereas the other options have determinant $|A|$.

Why can an exchange operation not be used here? Because of the determinant property: if any two rows (or any two columns) of a determinant are interchanged, the value of the determinant is multiplied by $-1$. An even number of exchanges, on the other hand, would leave the value unchanged.

This is a special type of matrix called a Vandermonde matrix; it is notable because it has some important applications.

Answer: starting from
$\begin{vmatrix} 1 &x &x^{2} \\ 1 & y& y^{2}\\ 1& z& z^{2} \end{vmatrix}$

$(B):$ $C2\rightarrow C2+C1,\ C3\rightarrow C3+C1$ gives
$\begin{vmatrix} 1 &x+1 &x^{2}+1 \\ 1 & y+1& y^{2}+1\\ 1& z+1& z^{2}+1 \end{vmatrix}$

$(C):$ $R1\rightarrow R1-R2,\ R2\rightarrow R2-R3$ gives
$\begin{vmatrix} 0 &x-y &x^{2}-y^{2} \\ 0 & y-z& y^{2}-z^{2}\\ 1& z& z^{2} \end{vmatrix}$

$(D):$ $R1\rightarrow R1+R2,\ R2\rightarrow R2+R3$ gives
$\begin{vmatrix} 2 &x+y &x^{2}+y^{2} \\ 2 & y+z& y^{2}+z^{2}\\ 1& z& z^{2} \end{vmatrix}$

Since adding one row (or column) to another leaves a determinant unchanged, options (B), (C) and (D) all equal the given determinant, and no such value-preserving operations produce option (A). Hence option (A) is the correct choice.

Comment: applying the row operations as originally stated for (C) does not yield that determinant; the second operation must be $R2 \rightarrow R2 - R3$ (not $R3 - R2$, which flips the signs of the entries). For (D), $R2 + R3$ and $R3 + R2$ are the same row, so either phrasing works.

The correct answer is option (A).
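The substitute-values approach suggested in the comments can be carried out symbolically instead of numerically. A sketch using SymPy (not part of the original discussion):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
# The given Vandermonde-type determinant.
V = sp.Matrix([[1, x, x**2], [1, y, y**2], [1, z, z**2]])

options = {
    "A": sp.Matrix([[1, x*(x+1), x+1], [1, y*(y+1), y+1], [1, z*(z+1), z+1]]),
    "B": sp.Matrix([[1, x+1, x**2+1], [1, y+1, y**2+1], [1, z+1, z**2+1]]),
    "C": sp.Matrix([[0, x-y, x**2-y**2], [0, y-z, y**2-z**2], [1, z, z**2]]),
    "D": sp.Matrix([[2, x+y, x**2+y**2], [2, y+z, y**2+z**2], [1, z, z**2]]),
}
for name, M in options.items():
    # expand() reduces the polynomial difference to literal 0 when equal.
    equal = sp.expand(M.det() - V.det()) == 0
    print(name, equal)  # only option A differs (it equals -|V|)
```

Only option A prints `False`, matching the answer above; one extra check confirms it equals the negated determinant.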
http://www.lofoya.com/Solved/1251/in-nuts-and-bolts-factory-one-machine-produces-only-nuts-at-the-rate
# Moderate Time and Work Solved Question | Aptitude Discussion

Q. In a nuts-and-bolts factory, one machine produces only nuts at the rate of 100 nuts per minute and needs to be cleaned for 5 minutes after the production of every 1000 nuts. Another machine produces only bolts at the rate of 75 bolts per minute and needs to be cleaned for 10 minutes after the production of every 1500 bolts. If both machines start production at the same time, what is the minimum duration required for producing 9000 pairs of nuts and bolts?

A. 130 minutes
B. 135 minutes
C. 170 minutes (correct)
D. 180 minutes

Solution: Option (C) is correct.

Machine I (nuts):
Number of nuts produced in one minute = 100, so producing 1000 nuts takes 10 min.
Cleaning time after every 1000 nuts = 5 min, so each full cycle takes 15 min.
Overall time to produce 9000 nuts = 9 × 15 min − 5 min (no cleaning needed after the last batch) = 130 min ----- (1)

Machine II (bolts):
75 bolts are produced per minute, so producing 1500 bolts takes 20 min.
Cleaning time after every 1500 bolts = 10 min, so each full cycle takes 30 min.
Overall time to produce 9000 bolts = 6 × 30 min − 10 min = 170 min ----- (2)

From (1) and (2), a pair needs both a nut and a bolt, so production finishes with the slower machine: minimum time = 170 minutes.

Edit: Thank you, Najaf, for pointing out the typo in the solution; in part (1) the time has been changed from 133 min to 130 min.

## Comments

Anonymous: Shouldn't the minimum time be 130, because machine A produces its 9000 items in 130 min whereas machine B takes 170 min? If I am wrong, correct me. (Pairs require output from both machines, so the slower machine governs.)

Shubh: Why 5 cleaning cycles and not 6?

Najaf: To produce 9,000 nuts: (production time of 1,000 nuts × 9) + cleaning time for 8 cycles
$\Rightarrow (9 \times 10 \text{ mins}) + (8 \times 5 \text{ mins}) = 130\text{ mins}$
To produce 9,000 bolts: (production time of 1,500 bolts × 6) + cleaning time for 5 cycles
$\Rightarrow (6 \times 20 \text{ mins}) + (5 \times 10 \text{ mins}) = 170 \text{ mins}$
So according to my calculation, part A (nut production) was done wrong.

Deepak: Thank you, modified the solution accordingly.
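The batch-plus-cleaning arithmetic in the solution can be sketched as a small function. The function and variable names are illustrative, not from the original:

```python
def production_time(total, rate_per_min, batch_size, clean_min):
    """Minutes to produce `total` items on a machine that must be cleaned
    for `clean_min` minutes after every full batch, with no cleaning
    needed after the final batch."""
    run = total / rate_per_min
    full_batches = total // batch_size
    # The last completed batch needs no cleaning when total is a multiple
    # of the batch size.
    cleanings = full_batches - (1 if total % batch_size == 0 else 0)
    return run + cleanings * clean_min

nuts = production_time(9000, 100, 1000, 5)    # 90 run + 8*5 cleaning = 130
bolts = production_time(9000, 75, 1500, 10)   # 120 run + 5*10 cleaning = 170
print(nuts, bolts, max(nuts, bolts))  # -> 130.0 170.0 170.0
```

This reproduces equations (1) and (2) and makes Shubh's question concrete: only 5 bolt-machine cleanings occur because the sixth batch is the last one.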
https://mathematica.stackexchange.com/questions/185213/automatic-space-after-comma-in-code-cell-style
# Automatic space after comma in Code cell style

The "Input" cell style automatically inserts a space after each comma, with natural exceptions such as when quotation marks are open. The "Code" cell style does not. After looking through all the style definitions and using the Option Inspector, I cannot see how this behaviour is controlled. I would like "Code" cells to behave like "Input" cells with respect to comma handling; how can I achieve this?

Answer: The setting you are looking for is AutoSpacing -> True. You need to set it at the stylesheet level for "Code" cells, or at the Cell level in a notebook (which does not make sense for .m files).
https://proofwiki.org/wiki/Help:Equivalence_Proofs
# Help:Equivalence Proofs

Proofs of the equivalence between two or more statements require special attention.

## Page structure

### Two statements

In the case of only two statements, one can opt either for the same page structure used for multiple statements as described below, or for a simple sentence using Template:Iff. If there is a direct proof of the equivalence, the second option is simplest and cleanest. If the two implications require different proofs, or if there is a second proof which does the two separately, the first option is the way to go, because it is very convenient to refer to the statements by number.

### Multiple statements

The structure of an equivalence proof with multiple statements is as follows:

    == Theorem ==

    Theorem intro

    {{TFAE}}
    :$(1):\quad$ First statement.
    :$(2):\quad$ Second statement.
    :$(3):\quad$ ...

    == Proof ==

followed by the rest of the page. See Help:Page Structure. Use of the TFAE template is encouraged. It produces:

The following are equivalent:

### Long lists

In the case of a large number of equivalent statements, the implications are best proved on individual pages, and the proof of the overall equivalence then consists purely of links to those pages, without any additional elements of proof. This is because:

• A lot of subpages is unwieldy.
• Some of the equivalences may be considerably harder to prove, so they may rely on other equivalences being established already, which makes referencing inside the proof difficult.
• Referring to the statements by numbers is less informative and makes them harder to search for than equivalence proofs whose titles describe the two statements.
## Definition Equivalences

If there are Multiple Definitions for one thing, their equivalence has to be proved on a page with the title:

    Equivalence of Definitions of Concept that is Defined

The structure of such a page is roughly as follows:

    == Theorem ==

    Theorem intro

    {{TFAE|def = Name of Definition Page}}
    === First transcluded definition ===
    === Second transcluded definition ===
    === ... ===

    == Proof ==

Note that:

• the TFAE template adds the page to the Definition Equivalences category. For instructions and additional syntax, see the TFAE template.
• there are no blank lines between the transcluded definitions.

For the proof, the general guidelines for equivalence proofs apply. There are multiple ways to name the proofs of the implications:

• 1 implies 2
• Definition 1 implies Definition 2
• Definition 1 implies Definition 2, with links

It has not been discussed which one of these is preferred and why.
https://socratic.org/questions/how-do-you-find-the-slant-asymptote-of-f-x-x-3-x-4
# How do you find the slant asymptote of f(x)=(x-3)/(x-4)?

Jan 13, 2016

Since the highest exponent in both the numerator and the denominator is the same (it's 1), there is no slant asymptote.

#### Explanation:

You will have a slant asymptote when the greatest exponent in the numerator exceeds the greatest exponent in the denominator by exactly 1. For example, the following function has a slant asymptote:

$f \left(x\right) = \frac{{x}^{2} - 3}{x - 4}$

because the exponent $2$ exceeds $1$ by exactly one unit.

Hope that helped.
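When a slant asymptote does exist, it is the quotient of the polynomial long division of numerator by denominator. A small pure-Python sketch (names are my own) dividing the example f(x) = (x² − 3)/(x − 4):

```python
def poly_divmod(num, den):
    """Polynomial long division on coefficient lists (highest degree first).
    Returns (quotient, remainder) as coefficient lists."""
    num = list(num)
    quot = []
    while len(num) >= len(den):
        factor = num[0] / den[0]
        quot.append(factor)
        # subtract factor * den from the leading part of num
        for i, c in enumerate(den):
            num[i] -= factor * c
        num.pop(0)  # leading coefficient is now zero
    return quot, num

q, r = poly_divmod([1, 0, -3], [1, -4])   # (x^2 - 3) divided by (x - 4)
print(q, r)   # quotient [1.0, 4.0] means x + 4, so the slant asymptote is y = x + 4
```

The remainder term 13/(x − 4) vanishes as x grows, which is why the quotient line y = x + 4 is the asymptote.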
https://docs.w3cub.com/latex/_005cput
# W3cubDocs / LaTeX

#### \put

Synopsis:

    \put(xcoord,ycoord){content}

Place content at the coordinate (xcoord,ycoord). See the discussion of coordinates and \unitlength in picture. The content is processed in LR mode (see Modes), so it cannot contain line breaks.

This example includes the text into the picture:

    \put(4.5,2.5){Apply the \textit{unpoke} move}

The reference point, the location (4.5,2.5), is the lower left of the text, at the bottom left of the 'A'.
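A minimal complete document using \put inside a picture environment (the coordinates, unit length, and marker size are illustrative choices of mine):

```latex
\documentclass{article}
\begin{document}
\setlength{\unitlength}{1cm}   % one picture coordinate unit = 1 cm
\begin{picture}(6,3)
  % the lower left of the text is placed at (4.5,2.5)
  \put(4.5,2.5){Apply the \textit{unpoke} move}
  % a filled dot marking the reference point itself
  \put(4.5,2.5){\circle*{0.08}}
\end{picture}
\end{document}
```

The dot makes it easy to see that the reference point coincides with the bottom-left corner of the placed text.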
https://math.stackexchange.com/questions/495313/probability-mass-function-and-probability-density-function
# Probability mass function and Probability density function

What is the difference between a probability mass function and a probability density function? Why is the value of a continuous probability distribution function not the probability at a particular input point? For example, for a continuous distribution with density f(x) = x^3 (assume it is a pdf on a suitable domain), why is it not true that f(3) is the probability at 3? What is meant by f(3)?

Informally, you can think of $f(3)$ as simply returning the height of the density function, but this value on its own is not a probability. Recall that $Pr(X=3)=0$ here, since in our case $X$ is a continuous random variable; it does not make sense to speak of probabilities at individual values of $X$ because of this. For example, let's say I toss a ball. It lands around the 20-feet mark. But, in fact, when I look closer it was more like 19.75. Looking even closer, it was more like 19.745. Looking closer still, it was more like 19.7445. And so we could continue; it doesn't end! Hence, there is no probability of a specific exact distance for my toss, and we use intervals instead. That is, the probability that I tossed the ball exactly 20 feet is 0 ($Pr(X=20)=0$), but the probability that the distance lies in some interval is not ($Pr(20-\delta < X < 20 +\delta)\neq 0$).

• Really good one... Sep 16, 2013 at 10:58
• Please tell me why we would need f(3) if it is not important. Sep 16, 2013 at 10:59
• @MilanAmrutJoshi You don't 'need' $f(3)$, but you need $f(x)$ so that you can, for example, calculate the probability of being in a neighborhood of 3, such as $\int_{3-\delta}^{3+\delta}f(x)dx$. It might be helpful to use a normal distribution as an example: let $X\sim N(3, 1)$ and $\delta=1.96$. Then $\int_{3-1.96}^{3+1.96}f(x)dx=Pr(3-1.96<X<3+1.96)=Pr(-1.96<X-3<1.96)=0.95$, which should be familiar to you. Choose smaller and smaller values of $\delta$ and see what happens. It's quite well explained in the link, have a look if you haven't already.
Sep 16, 2013 at 11:05
• Ok, I will have a look. Sep 16, 2013 at 11:06

I stumbled upon this question and found my answer in the MIT OCW notes (Readings 5b and 4a, to be precise). I'm quoting the explanation from there.

Probability mass and probability density: these terms are completely analogous to the mass and density you saw in physics and calculus.

Mass as a sum: If masses m1, m2, m3, and m4 are set in a row at positions x1, x2, x3, and x4, then the total mass is m1 + m2 + m3 + m4. We can define a 'mass function' p(x) with p(xj) = mj for j = 1, 2, 3, 4, and p(x) = 0 otherwise. In this notation the total mass is p(x1) + p(x2) + p(x3) + p(x4). The probability mass function behaves in exactly the same way, except it has the dimension of probability instead of mass.

Mass as an integral of density: Suppose you have a rod of length L meters with varying density f(x) kg/m. (Note the units are mass/length.) If the density varies continuously, we must find the total mass of the rod by integration:

total mass $$= \int_{0}^{L} f(x)\, dx$$

This formula comes from dividing the rod into small pieces and 'summing' up the mass of each piece. That is:

total mass ≈ $$\sum_{i=1}^{n} f(x_i)\, \Delta x$$

In the limit as $$\Delta x$$ goes to zero the sum becomes the integral. The probability density function behaves in exactly the same way, except it has units of probability/(unit x) instead of kg/m. Indeed, equation (1) is exactly analogous to the above integral for total mass. While we're on a physics kick, note that for both discrete and continuous random variables, the expected value is simply the center of mass or balance point.

Reference (inserted manually, as required by the MIT OCW terms): Jeremy Orloff and Jonathan Bloom. 18.05 Introduction to Probability and Statistics. Spring 2014. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu/.
License: Creative Commons BY-NC-SA (terms: https://ocw.mit.edu/terms/).
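The shrinking-interval experiment suggested in the first answer can be checked numerically. For $X\sim N(3,1)$, $Pr(3-\delta<X<3+\delta)=\operatorname{erf}(\delta/\sqrt{2})$, which tends to 0 as $\delta\to 0$ even though the density value $f(3)$ stays fixed. A sketch using only the Python standard library:

```python
import math

def prob_near_3(delta):
    """P(3 - delta < X < 3 + delta) for X ~ N(3, 1)."""
    return math.erf(delta / math.sqrt(2))

density_at_3 = 1 / math.sqrt(2 * math.pi)   # f(3), about 0.3989: NOT a probability
for delta in (1.96, 0.5, 0.05, 0.005):
    print(delta, prob_near_3(delta))
# the probability shrinks toward 0 as delta -> 0, while f(3) does not change
```

With delta = 1.96 this recovers the familiar 0.95; with delta = 0.005 the probability is already below 1%.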
https://www.physicsforums.com/threads/positive-or-negative-remainder.896236/
# Positive or negative remainder

Does 23 = 5(−4) − 3 give a remainder of −3 when divided by 5? Is this statement true? Some of my colleagues said that a remainder cannot be a negative number by definition, but I wonder: can −3 be a remainder too?

fresh_42 (Mentor):

Usually we consider entire equivalence classes in such cases: every single element of ##\{\ldots, -13, -8, -3, 2, 7, 12, \ldots\}## belongs to the same remainder of a division by ##5##. We then define all five possible classes

##\{\ldots, -15, -10, -5, 0, 5, 10, \ldots\}##
##\{\ldots, -14, -9, -4, 1, 6, 11, \ldots\}##
##\{\ldots, -13, -8, -3, 2, 7, 12, \ldots\}##
##\{\ldots, -12, -7, -2, 3, 8, 13, \ldots\}##
##\{\ldots, -11, -6, -1, 4, 9, 14, \ldots\}##

as elements of a new set with five elements. This notation is a bit nasty to handle, so we choose one representative out of every set. E.g. ##\{[-15],[-9],[12],[3],[-1]\}## could be chosen, but this is still a bit messy for calculations. So the most convenient representation is ##\{[0],[1],[2],[3],[4]\}##, with the non-negative remainders smaller than ##5##. However, this is only a convention: ##-3## is a remainder too, belonging to the class ##[2]##. So the answer to your question is: the statement is true, as all integers are remainders.

mfb (Mentor):

The remainder is usually required to be between 0 and N−1 inclusive. 23 and −2 (not −3) are in the same equivalence class. This can also be written as 23 = −2 mod 5.
Sorry, it should be −23 = 5(−4) − 3. So, in conclusion, is this statement true?

jbriggs444 (Homework Helper):

"−23 divided by 5 is −4 with a remainder of −3." I would consider that statement true. "−23 divided by 5 is −5 with a remainder of 2." I would also consider that statement to be true. The convention you use for integer division determines which of those statements is conventional and which is unconventional.

In many programming languages, integer division follows a "truncate toward zero" convention. For instance, in Ada, −23/5 = −4. The "rem" operator then gives the remainder, so −23 rem 5 = −3. If one adopts the convention that integer division (by a positive number) truncates toward negative infinity, then one gets a different conventional remainder: −23/5 would be −5 and −23 mod 5 would be +2. The Ada "mod" operator uses this convention.

In mathematics, one typically adopts the line of reasoning given by @fresh_42 in post #2 above. The canonical exemplar in the equivalence class of possible remainders is normally the one in the range from 0 to divisor − 1.
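The two conventions described above can be seen side by side in Python, whose // and % floor toward negative infinity, while math.fmod truncates toward zero like C or Ada's "rem":

```python
import math

# Floored division (Python's native convention): remainder has the divisor's sign
q, r = divmod(-23, 5)
print(q, r)                # -5 2    since -23 = 5*(-5) + 2

# Truncated division (C / Ada "rem" convention): remainder has the dividend's sign
print(math.fmod(-23, 5))   # -3.0    since -23 = 5*(-4) - 3

# Both candidate remainders lie in the same equivalence class mod 5
print((-3) % 5 == 2 % 5)   # True
```

So −3 and 2 are both legitimate remainders of −23 divided by 5; the convention just picks the representative.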
https://www.cp2k.org/exercises:2014_ethz_mmm:md_ala
# CP2K Open Source Molecular Dynamics

# Molecular Dynamics simulation of a small molecule

Concerning temperature control, in these exercises we will use the NOSE-HOOVER chains method. This has been briefly described in the lecture, and is presented in this paper by Glenn Martyna (1992). In this exercise, we will extensively use vmd for visualizing the results of the cp2k simulations.

As always, give the commands:

    module load cp2k/trunk.2.5.13191
    mkdir EX_4.1
    cd EX_4.1

Then, copy the commented files from the wiki: exercise_4.1.zip

You will start from a configuration already computed in a previous lecture, say inp.a.pdb, which is included in the repository of this exercise as well. Use the file inp.nve for the first simulation, which is a constant-energy simulation. As usual, the command is

    bsub cp2k.popt -i inp.nve > out.nve

1. Perform a constant-energy simulation, 100000 time steps, with a time step of 1 fs.
2. Using a different input file, modify the time step and the name of the project. Do it for 0.1, 2, 3, 4 fs.
3. Access the corresponding *.ener files. How is the energy conservation? How do the potential and kinetic energy behave, and how does the temperature? Plot the different energy-conservation curves with gnuplot and discuss them.

Next:

1. Perform a constant-temperature simulation. The system is in contact with a thermostat, and the conserved quantity includes the thermostat degrees of freedom. The first simulation is done at 100 K: inp.100
2. Then, perform a simulation at 300 K, using the restart file from the previous simulation: inp.300.
3. Now you have some outputs to study with vmd. The trajectory files we are going to study are

       nve_md-pos-1.pdb
       md.100-pos-1.pdb
       md.300-pos-1.pdb

"Fire" vmd, for example

    vmd nve_md-pos-1.pdb

From the Extensions menu, you can choose the Tk console. And from there, you can enter

    source "dihedrals.vmd"

which will define the two dihedrals phi and psi.
You can also pick the "RMSD trajectory tool" from the extensions and use it to align the molecule along the trajectory. Remember to replace "protein" with "all" in the selection, and then use "align". You will see that the molecule is now well aligned along the path. Using the "Labels" menu, now plot the graph of the two dihedral angles. Which differences do you notice between the NVE, the 100 K, and the 300 K cases? Can you explain them?
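Step 2 of the NVE exercise (scanning the MD time step) can be scripted. A hedged shell sketch: the real inp.nve from exercise_4.1.zip is a full CP2K input, so here a stand-in template is written just to show the pattern; adjust the sed pattern to the actual file.

```shell
# Stand-in for the real inp.nve (the real file has many more sections)
printf '&MOTION\n  &MD\n    TIMESTEP 1.0\n  &END MD\n&END MOTION\n' > template.nve

# Generate one input per time step
for dt in 0.1 2 3 4; do
  sed "s/TIMESTEP .*/TIMESTEP ${dt}/" template.nve > inp.nve.dt${dt}
  # then submit each run:  bsub cp2k.popt -i inp.nve.dt${dt} > out.nve.dt${dt}
done
ls inp.nve.dt*
```

In the real exercise you would also change the PROJECT name per run so the *.ener output files do not overwrite each other.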
https://math.stackexchange.com/questions/1927472/equality-of-localization-by-consideration-of-primes
Equality of localization by consideration of primes

I am working through a number theory proof and there is one part whose justification I do not fully understand. I believe the reasoning is merely something related to localization.

Suppose $R$ is a Dedekind domain with fraction field $K$. Let $L/K$ be normal with Galois group $G$. Denote by $R'$ the integral closure of $R$ in $L$. Let $\beta$ be a nonzero prime of $R'$, and $p = R \cap \beta$ its contraction. Let $N$ denote the norm of an ideal, defined to be $$N(\mathfrak{a}) = \sum_{a \in \mathfrak{a}} RN(a)$$ I wish to show $N(\beta)$ is a power of the prime $p$. Suppose not, so that $N(\beta) = qm$ with $q \ne p$ prime and $m \subseteq R$. Let $C$ be the complement of $q$ in $R$, so that $R_q = R_C$. Since $q \ne p$, the prime ideal $\beta$ is not a factor of $qR'$. Thus $\beta_C = R'_C$, as the only prime ideals of $R'_C$ are the extensions of the primes appearing in the factorization of $qR'$.

I understand that if $S$ is a multiplicative set and we localize a ring $A$ with respect to $S$ to obtain $S^{-1} A$, then the primes in the localization correspond bijectively to the primes of $A$ that satisfy $p \cap S = \emptyset$. In the case $S = R - q$, these are the primes contained within $q$. Their images then correspond to the primes appearing in the factorization of the image of $q$, namely $qR'$. I understand that since $q \ne p$, the prime $\beta$ is not a factor of $qR'$. I don't see how one puts this together to get the desired equality. I am thinking that one shows the set of prime ideals on each side is the same, and since $\beta \subset R'$, this would show that their localizations are equal.

• Your definition of the norm is not clear: what is $N(a)$ for $a\in \mathfrak a$? – Ferra Sep 16 '16 at 11:01

You just need to note that no prime $q\ne p$ divides $N(\beta)$ because $\Bbb Z$ has unique factorization.
What the proof is doing is just saying that another prime dividing $N(\beta)$ would imply $qR'\supseteq N(\beta)$; but then take norms again and you get that $N(N(\beta)) = q^{[R':R]}$ is a power of the different prime $q$. This contradicts the fact that $(p)\supseteq\beta$.
https://microsoft.github.io/OpticSim.jl/dev/systems/
Optical Systems Assemblies All systems are made up of a LensAssembly which contains all the optical components in the system, excluding any sources (see Emitters) and the detector (see below). OpticSim.LensAssemblyType LensAssembly{T<:Real} Structure which contains the elements of the optical system, these can be CSGTree or Surface objects. In order to prevent type ambiguities bespoke structs are created for each possible number of elements e.g. LensAssembly3. These are parameterized by the types of the elements to prevent ambiguities. Basic surface types such as Rectangle (which can occur in large numbers) are stored independently in Vectors, so type paramters are only needed for CSG objects. Each struct looks like this: struct LensAssemblyN{T,T1,T2,...,TN} <: LensAssembly{T} axis::SVector{3,T} rectangles::Vector{Rectangle{T}} ellipses::Vector{Ellipse{T}} hexagons::Vector{Hexagon{T}} paraxials::Vector{ParaxialLens{T}} E1::T1 E2::T2 ... EN::TN end Where Ti <: Union{Surface{T},CSGTree{T}}. To create a LensAssembly object the following functions can be used: LensAssembly(elements::Vararg{Union{Surface{T},CSGTree{T},LensAssembly{T}}}; axis = SVector(0.0, 0.0, 1.0)) where {T<:Real} source Images The detector image is stored within the system as a HierarchicalImage for memory efficiency. OpticSim.HierarchicalImageType HierarchicalImage{T<:Number} <: AbstractArray{T,2} Image type which dynamically allocated memory for pixels when their value is set, the value of unset pixels is assumed to be zero. This is used for the detector image of AbstractOpticalSystems which can typically be very high resolution, but often have a large proportion of the image blank. source OpticSim.reset!Function reset!(a::HierarchicalImage{T}) Resets the pixels in the image to zero(T). Do this rather than image .= zero(T) because that will cause every pixel to be accessed, and therefore allocated. For large images this can cause huge memory traffic. 
source OpticSim.sum!Function sum!(a::HierarchicalImage{T}, b::HierarchicalImage{T}) Add the contents of b to a in an efficient way. source Systems There are two types of AbstractOpticalSystem which can be used depending on the requirements. OpticSim.CSGOpticalSystemType CSGOpticalSystem{T,D<:Real,S<:Surface{T},L<:LensAssembly{T}} <: AbstractOpticalSystem{T} An optical system containing a lens assembly with all optical elements and a detector surface with associated image. The system can be at a specified temperature and pressure. There are two number types in the type signature. The T type parameter is the numeric type for geometry in the optical system, the D type parameter is the numeric type of the pixels in the detector image. This way you can have Float64 geometry, where high precision is essential, but the pixels in the detector can be Float32 since precision is much less critical for image data, or Complex if doing wave optic simulations. The detector can be any Surface which implements uv, uvtopix and onsurface, typically this is one of Rectangle, Ellipse or SphericalCap. CSGOpticalSystem( assembly::LensAssembly, detector::Surface, detectorpixelsx = 1000, detectorpixelsy = 1000, ::Type{D} = Float32; temperature = OpticSim.GlassCat.TEMP_REF, pressure = OpticSim.GlassCat.PRESSURE_REF ) source OpticSim.AxisymmetricOpticalSystemType AxisymmetricOpticalSystem{T,C<:CSGOpticalSystem{T}} <: AbstractOpticalSystem{T} Optical system which has lens elements and an image detector, created from a DataFrame containing prescription data. These tags are supported for columns: :Radius, :SemiDiameter, :SurfaceType, :Thickness, :Conic, :Parameters, :Reflectance, :Material. These tags are supported for entries in a SurfaceType column: Object, Image, Stop. Assumes the Image row will be the last row in the DataFrame. In practice a CSGOpticalSystem is generated automatically and stored within this system. 
AxisymmetricOpticalSystem{T}( prescription::DataFrame, detectorpixelsx = 1000, detectorpixelsy:: = 1000, ::Type{D} = Float32; temperature = OpticSim.GlassCat.TEMP_REF, pressure = OpticSim.GlassCat.PRESSURE_REF ) source OpticSim.detectorimageFunction detectorimage(system::AbstractOpticalSystem{T}) -> HierarchicalImage{D} Get the detector image of system. D is the datatype of the detector image and is not necessarily the same as the datatype of the system T. source OpticSim.semidiameterFunction semidiameter(system::AxisymmetricOpticalSystem{T}) -> T Get the semidiameter of system, that is the semidiameter of the entrance pupil (i.e. first surface) of the system. source Tracing We can trace an individual OpticalRay through the system (or directly through a LensAssembly), or we can trace using an OpticalRayGenerator to create a large number of rays. OpticSim.traceFunction trace(assembly::LensAssembly{T}, r::OpticalRay{T}, temperature::T = 20.0, pressure::T = 1.0; trackrays = nothing, test = false) Returns the ray as it exits the assembly in the form of a LensTrace object if it hits any element in the assembly, otherwise nothing. Recursive rays are offset by a small amount (RAY_OFFSET) to prevent it from immediately reintersecting the same lens element. trackrays can be passed an empty vector to accumulate the LensTrace objects at each intersection of ray with a surface in the assembly. source trace(system::AbstractOpticalSystem{T}, ray::OpticalRay{T}; trackrays = nothing, test = false) Traces system with ray, if test is enabled then fresnel reflections are disabled and the power distribution will not be correct. Returns either a LensTrace if the ray hits the detector or nothing otherwise. trackrays can be passed an empty vector to accumulate the LensTrace objects at each intersection of ray with a surface in the system. 
source trace(system::AbstractOpticalSystem{T}, raygenerator::OpticalRayGenerator{T}; printprog = true, test = false) Traces system with rays generated by raygenerator on a single thread. Optionally the progress can be printed to the REPL. If test is enabled then fresnel reflections are disabled and the power distribution will not be correct. If outpath is specified then the result will be saved to this path. Returns the detector image of the system. source OpticSim.traceMTFunction traceMT(system::AbstractOpticalSystem{T}, raygenerator::OpticalRayGenerator{T}; printprog = true, test = false) Traces system with rays generated by raygenerator using as many threads as possible. Optionally the progress can be printed to the REPL. If test is enabled then fresnel reflections are disabled and the power distribution will not be correct. If outpath is specified then the result will be saved to this path. Returns the accumulated detector image from all threads. source OpticSim.tracehitsFunction tracehits(system::AbstractOpticalSystem{T}, raygenerator::OpticalRayGenerator{T}; printprog = true, test = false) Traces system with rays generated by raygenerator on a single thread. Optionally the progress can be printed to the REPL. If test is enabled then fresnel reflections are disabled and the power distribution will not be correct. Returns a list of LensTraces which hit the detector. source OpticSim.tracehitsMTFunction tracehitsMT(system::AbstractOpticalSystem{T}, raygenerator::OpticalRayGenerator{T}; printprog = true, test = false) Traces system with rays generated by raygenerator using as many threads as possible. Optionally the progress can be printed to the REPL. If test is enabled then fresnel reflections are disabled and the power distribution will not be correct. Returns a list of LensTraces which hit the detector, accumulated from all threads. 
source

OpticSim.LensTrace (Type)

LensTrace{T<:Real,N}

Contains an intersection point and the ray segment leading to it from within an optical trace. The ray carries the path length, power, wavelength, number of intersections, and source number, all of which are also accessible directly on this type. Has the following accessor methods:

ray(a::LensTrace{T,N}) -> OpticalRay{T,N}
intersection(a::LensTrace{T,N}) -> Intersection{T,N}
power(a::LensTrace{T,N}) -> T
wavelength(a::LensTrace{T,N}) -> T
pathlength(a::LensTrace{T,N}) -> T
point(a::LensTrace{T,N}) -> SVector{N,T}
uv(a::LensTrace{T,N}) -> SVector{2,T}
sourcenum(a::LensTrace{T,N}) -> Int
nhits(a::LensTrace{T,N}) -> Int

source
http://peterseim.ins.uni-bonn.de/teaching/Nonstandard_WS1516/
# S5E2 – Non-standard Finite Element Methods

### Prof. Dr. Daniel Peterseim

#### Assistant: Dr. Mira Schedensack

Requirements: Basic knowledge of partial differential equations and finite element methods.

#### Description:

Non-standard finite element methods such as non-conforming FEMs, mixed FEMs, or discontinuous Galerkin FEMs play an important role in practical applications such as the Stokes equations from fluid dynamics, linear elasticity from solid mechanics, or plate problems from structural mechanics. The seminar discusses various non-standard FEMs and their advantages in applications or implementation. Another focus lies on the error analysis. While the Galerkin orthogonality directly leads to a best-approximation result for conforming FEMs, new techniques are required to overcome the additional consistency error of non-standard FEMs. The recent medius analysis [3,1,2] shows the equivalence of the errors of non-conforming, mixed, discontinuous Galerkin, and conforming FEMs. Further non-standard methods that can be discussed include finite volume methods, least-squares methods, boundary element methods, virtual element methods, or multiscale FEMs. The seminar will be based mainly on selected journal articles. Students interested in the seminar should contact M. Schedensack in advance to register.

#### Literature:

[1] D. Braess. An a posteriori error estimate and a comparison theorem for the nonconforming $P_1$ element. Calcolo, 46(2):149–155, 2009.

[2] C. Carstensen, D. Peterseim, and M. Schedensack. Comparison results of finite element methods for the Poisson model problem. SIAM J. Numer. Anal., 50(6):2803–2823, 2012.

[3] T. Gudi. A new error analysis for discontinuous finite element methods for linear elliptic problems. Math. Comp., 79(272):2169–2189, 2010.

Date: Monday, 14(c.t.)–16, Wegelerstr. 6, SR 6.020

First seminar meeting: Monday, October 19th, 14(c.t.)–16, Wegelerstr. 6, SR 6.020.
https://planetmath.org/GroupTheoreticProofOfWilsonsTheorem
# group theoretic proof of Wilson’s theorem

Here we present a group-theoretic proof of Wilson's theorem: for a prime $p$, $(p-1)!\equiv-1\pmod{p}$. Clearly, it is enough to show that $(p-2)!\equiv 1\pmod{p}$, since $p-1\equiv-1\pmod{p}$.

By the Sylow theorems, the $p$-Sylow subgroups of $S_{p}$, the symmetric group on $p$ elements, have order $p$, and the number $n_{p}$ of Sylow subgroups is congruent to $1$ modulo $p$. Let $P$ be a $p$-Sylow subgroup of $S_{p}$. Note that $P$ is generated by a $p$-cycle. There are $(p-1)!$ cycles of length $p$ in $S_{p}$. Each $p$-Sylow subgroup contains $p-1$ cycles of length $p$, and distinct subgroups of prime order intersect trivially, hence there are $\frac{(p-1)!}{p-1}=(p-2)!$ different $p$-Sylow subgroups in $S_{p}$, i.e. $n_{p}=(p-2)!$. From the congruence $n_{p}\equiv 1\pmod{p}$, it follows that $(p-2)!\equiv 1\pmod{p}$, so $(p-1)!\equiv-1\pmod{p}$.
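The two congruences used in the proof are easy to check numerically; the following Python snippet (a sanity check of my own, not part of the original article) verifies both for the first few odd primes:

```python
from math import factorial

# Verify (p-2)! ≡ 1 (mod p), i.e. n_p ≡ 1 (mod p), and Wilson's theorem
# (p-1)! ≡ -1 (mod p) for the first few odd primes.
for p in [3, 5, 7, 11, 13, 17, 19, 23]:
    assert factorial(p - 2) % p == 1        # (p-2)! ≡ 1 (mod p)
    assert factorial(p - 1) % p == p - 1    # (p-1)! ≡ -1 (mod p)
print("congruences verified for the first few odd primes")
```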
http://mathhelpforum.com/trigonometry/107374-deriving-identities.html
1. ## Deriving identities

Okay, so I have a possibly sort of unique situation! My instructor does not give any identities or formulas for the tests, and while my final is a long way off, I need to do this at least once now that I'm done with this chapter. This is a whopper, but I know that once I can see the relationships between them I'll have an easy time with it.

The sets of identities I need to be able to derive from scratch:
- Cosine, sine, and tangent of a sum or difference (6 identities)
- Double angle identities of cosine (3 identities), sine (1), and tangent (1)
- (I have the cofunction identities, they're easy enough)
- Product to sum identities (4)
- Sum to product identities (4)
- and lastly the half angle identities (5)

In total that's 30 identities, and I have 6 memorized, so 24 identities I have to be able to derive to be able to use on my final exam. This is an online class (tests are proctored) and I am allowed to use my TI-89. (That 'solve(' function is such a time saver.) I have all of the identities written down in my notes (which are in Notepad, so they're ugly; I have dysgraphia and taking written notes is an exercise in futility for me). I've attached the text document with all my identities for this section.

Where do I start? Which ones do I memorize and which ones do I derive?

2. Originally Posted by Wolvenmoon
Where do I start? Which ones do I memorize and which ones do I derive?

Everything can be derived from cos(A + B) = ..... and sin(A + B) = ..... However, derivations take time and may themselves require memorisation ....

3. This may help; the web has plenty of material if only you try browsing. This one is the best: Trigonometric Identities -- from Wolfram MathWorld. Also this: List of trigonometric identities - Wikipedia, the free encyclopedia. The double-, triple-, and half-angle formulae can be shown by using either the sum and difference identities or the multiple-angle formulae. [The double-angle, triple-angle, half-angle, and tangent half-angle formulae quoted here from Wikipedia were images and did not survive extraction.]

4. Okay, so I'm working on memorizing these by working the identities. Right now I'm on the double angle identities.
What I derived are (LaTeX never ceases to humiliate me, so they're in calculator terms):

cos(2A) = cos^2(A) - sin^2(A)
sin(2A) = 2sin(A)cos(A)
tan(2A) = 2tan(A)/(1 - tan^2(A))

There are two others for cosine:

cos(2A) = 2cos^2(A) - 1
cos(2A) = 1 - 2sin^2(A)

This would mean that cos^2(A) - sin^2(A) = 2cos^2(A) - 1. Where I'm stuck is how -sin^2(A) becomes cos^2(A) - 1, and how cos^2(A) becomes -sin^2(A). This is something to do with the Pythagorean identity, but I'm not seeing it.

5. Originally Posted by Wolvenmoon
This is something to do with the Pythagorean identity, but I'm not seeing it.

Remember that $\sin^2(\theta)+\cos^2(\theta)=1$. So you can solve this for $\sin^2(\theta)=1-\cos^2(\theta)$ and substitute. The negative of this is just $\cos^2(\theta)-1$.
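For a calculator-minded reader, a quick numeric spot-check (my own sketch, not from the thread) confirms that the three forms of cos(2A) agree, which is exactly what the Pythagorean substitution guarantees:

```python
import math

# Check that all three forms of cos(2A) agree at several angles.
# They must, since sin^2(A) = 1 - cos^2(A) converts one form into another.
for a in [0.1, 1.0, -2.3, 7.7, 100.0]:
    c, s = math.cos(a), math.sin(a)
    d = math.cos(2 * a)
    assert math.isclose(d, c * c - s * s, abs_tol=1e-9)
    assert math.isclose(d, 2 * c * c - 1, abs_tol=1e-9)  # via sin^2 = 1 - cos^2
    assert math.isclose(d, 1 - 2 * s * s, abs_tol=1e-9)  # via cos^2 = 1 - sin^2
print("all three forms of cos(2A) agree")
```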
http://www.gradesaver.com/the-red-badge-of-courage/q-and-a/what-is-the-red-badge-of-courage-that-henry-wants-so-much-and-how-does-he-get-it-126633
# What is the “red badge of courage” that Henry wants so much, and how does he get it?

What is the red badge of courage that Henry Fleming wants so much, and how does he get the badge?
https://forum.onefourthlabs.com/t/week-5-assignment-encoding-and-decoding-a-file/5245
# Week 5 assignment - Encoding and decoding a file

Hello All,

I am solving the file encoding and decoding using the function 2a + 3. The encoding part appears simple, but while decoding we need to know how many digits to group (since they can be any number of digits) to get back the original character. For this I thought that while encoding I will add a delimiter like “-” (dash) so that I can use it to determine how many digits I need to group. So if there is a word like “is”, I will get the digits for the letter “i”, followed by a “-”, and then the digits for “s”. Is this fine, or is there an alternative way of doing it?

Thanks
Harish

Hi @hari.yajurveda,

It’s very simple: if y = f(a) = 2a + 3 was used to encode the ASCII values, then you can take the inverse of y = f(a), i.e.

y = 2a + 3
a = \frac{y-3}{2}

where a is the original ASCII value and y is the encoded value of the character.

Thank you. Yes, that's what I am doing to decode the file. My question was: when we decode, we have a file full of numbers corresponding to the characters that were in the original file. Now, when I process this file, I need to know how many digits I need to take to get back my character. Let's look at this sentence for example: “I learn”. This will get converted to 149 219205197231223. Now when I am decoding this back and take the set of digits 219205197231223 for processing, how do I know whether I need to take 3 digits or 2 digits at a time for extracting the character (in some cases characters have only a 2-digit ASCII value)? So I thought that if I encode it as 149-219-205-197-231-223, then I can be sure how many digits I need to take to get the character back. For example, I will do (219-3)/2 and then chr() to get the character, as there is a delimiter after that. Does this make sense?

Before you say that your encoding function is enc(a) = 2a+3, you need to clearly define what ‘a’ is. If you define ‘a’ as an ASCII value, you need to clarify which characters you will encode.
If you know that you are going to encode only the alphabetic characters (caps or small) in your string, and retain all other characters as they are, then one thing you can do is check the ASCII (decimal) numbers of all those characters of interest. It turns out that when you encode your alphabetic characters using your 2a+3 formula, you always get a 3-digit number. So to decode (the encoded alphabets), you can assume a constant width of 3 digits and decode them. Remember that this may fail if you have numbers in your input string, since they will be retained as-is when encoding and will create ambiguity when decoding. Hence a good practice would be to clearly define your problem before proceeding to a solution, and to understand the constraints under which your solution works and does not work.

Yes. “a” is defined as the ASCII value of the character (that's given in the assignment). I intend to encode the whole file, with numbers/punctuation converted to the corresponding code. So I will insert a delimiter to know where to start and end for a character while decoding.
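The delimiter scheme discussed above can be sketched in a few lines of Python (the function names `encode`/`decode` are mine, not given by the assignment):

```python
# Sketch of the delimiter approach: each character's ASCII value a is mapped
# to 2a + 3, and the codes are joined with "-" so the decoder knows where
# each character's digits end, regardless of how many digits a code has.

def encode(text: str) -> str:
    return "-".join(str(2 * ord(ch) + 3) for ch in text)

def decode(encoded: str) -> str:
    # Invert y = 2a + 3 as a = (y - 3) / 2 for each delimited token.
    return "".join(chr((int(tok) - 3) // 2) for tok in encoded.split("-"))

enc = encode("I learn")
print(enc)                      # 149-67-219-205-197-231-223
assert decode(enc) == "I learn"
```

Note that with the delimiter the space character also gets a code (67), so the round trip works for every character, including the 2-digit ASCII values the fixed-width scheme struggles with.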
http://serverfault.com/questions/346897/lightweight-file-manager-for-use-in-a-windows-preinstallation-environment
# Lightweight File Manager for use in a Windows Preinstallation Environment [closed]

I need a free (preferably open source) lightweight file manager for Windows that can run in a stripped-down Windows Preinstallation Environment. It doesn't matter if it is text-based or GUI, but it must run in Windows PE. As a side note, someone should create the tag: file-manager

## closed as not constructive by voretaq7♦ Oct 18 '12 at 17:08

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. If this question can be reworded to fit the rules in the help center, please edit the question.

Given what this site is for, file-manager would be a better tag on SuperUser, if it's not there already. – mfinni Jan 5 '12 at 13:36

Why the close votes? This deals with the Windows Preinstallation Environment; I think it's relevant to this site. – unixman83 Jan 5 '12 at 14:23

I agree that this question is appropriate for this site; I was just commenting that the tag would be much more common on SU. – mfinni Jan 5 '12 at 15:37

Product recommendation questions are off-topic per the FAQ. – sysadmin1138 Oct 18 '12 at 17:08

This is really a shopping question -- the best we can give you is a list of things, and Google will always be a more up-to-date list than anything we post... – voretaq7 Oct 18 '12 at 17:09
https://text-id.123dok.com/document/y62v1pgz-1672-exam-70-463-implementing-a-data-warehouse-with-microsoft-sql-server-2012.html
# 1672 Exam 70-463 Implementing a Data Warehouse with Microsoft SQL Server 2012

Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012

Objective | Chapter | Lesson

1. Design and Implement a Data Warehouse

1.1 Design and implement dimensions. Chapter 1, Lessons 1 and 2
1.2 Design and implement fact tables.

Chapter 2; Chapter 1, Lessons 1, 2, and 3; Lesson 3; Chapter 2, Lessons 1, 2, and 3; Chapter 3, Lessons 1 and 3; Chapter 4, Lesson 1; Chapter 9; Chapter 3, Lesson 2; Lesson 1; Chapter 5, Lessons 1, 2, and 3; Chapter 7, Lesson 1; Chapter 10, Lesson 2; Chapter 13, Lesson 2; Chapter 18, Lessons 1, 2, and 3; Chapter 19, Lesson 2; Chapter 20; Chapter 3, Lesson 1; Lesson 1; Chapter 5, Lessons 1, 2, and 3; Chapter 7, Lessons 1 and 3; Chapter 13, Lessons 1 and 2; Chapter 18, Lesson 1; Chapter 20; Chapter 8, Lessons 2 and 3; Lessons 1 and 2; Chapter 12; Chapter 19, Lesson 1; Lesson 1; Chapter 3, Lessons 2 and 3; Chapter 4, Lessons 2 and 3; Chapter 6, Lessons 1 and 3; Chapter 8, Lessons 1, 2, and 3; Chapter 10, Lesson 1; Chapter 12, Lesson 2; Chapter 19; Chapter 6, Lesson 1; Lessons 1 and 2; Chapter 9; Chapter 4, Lessons 1 and 2; Lessons 2 and 3; Chapter 6, Lesson 3; Chapter 8, Lessons 1 and 2; Chapter 10, Lesson 3; Chapter 13; Chapter 7; Chapter 19, Lessons 1, 2, and 3; Lesson 2; Lesson 2

2. Extract and Transform Data

2.1 Define connection managers.
2.2 Design data flow.
2.3 Implement data flow.
2.4 Manage SSIS package execution.
2.5 Implement script tasks in SSIS.
3.1 Design control flow.
3.2 Implement package logic by using SSIS variables and parameters.
3.3 Implement control flow.
3.5 Implement script components in SSIS.

4. Configure and Deploy SSIS Solutions

4.1 Troubleshoot data integration issues. Chapter 10, Lesson 1
4.2 Install and maintain SSIS components.
4.3 Implement auditing, logging, and event handling. Chapter 13; Chapter 11; Chapter 8, Lessons 1, 2, and 3; Lesson 1; Lesson 3
4.4 Deploy SSIS solutions.
Chapter 10; Chapter 11, Lessons 1 and 2; Lessons 1 and 2; Chapter 19; Chapter 12, Lesson 3; Lesson 2; Chapter 14; Chapter 15, Lessons 1, 2, and 3; Lessons 1, 2, and 3; Chapter 16; Chapter 14, Lessons 1, 2, and 3; Lesson 1; Chapter 17, Lessons 1, 2, and 3; Chapter 20, Lessons 1 and 2

4.5 Configure SSIS security settings.

5. Build Data Quality Solutions

5.1 Install and maintain Data Quality Services.
5.2 Implement master data management solutions.
5.3 Create a data quality project to clean data.

Exam Objectives

The exam objectives listed here are current as of this book's publication date. Exam objectives are subject to change at any time without prior notice and at Microsoft's sole discretion. Please visit the Microsoft Learning website for the most current listing of exam objectives: http://www.microsoft.com/learning/en/us/exam.aspx?ID=70-463&locale=en-us.

Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012 Training Kit

Dejan Sarka, Matija Lah, Grega Jerkič

Published with the authorization of Microsoft Corporation by: O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, California 95472

or transmitted in any form or by any means without the written permission of the publisher.

ISBN: 978-0-7356-6609-2

Printed and bound in the United States of America. Microsoft Press books are available through booksellers and distributors worldwide. If you need support related to this book, email Microsoft Press Book Support at mspinput@microsoft.com. Please tell us what you think of this book at http://www.microsoft.com/learning/booksurvey.

Microsoft group of companies. All other marks are property of their respective owners. The example companies, organizations, products, domain names, email addresses, logos, people, places, and events depicted herein are fictitious.
No association with any real company, organization, product, domain name, email address, logo, person, place, or event is intended or should be inferred. This book expresses the authors' views and opinions. The information contained in this book is provided without any express, statutory, or implied warranties. Neither the authors, O'Reilly Media, Inc., Microsoft Corporation, nor their resellers or distributors will be held liable for any damages caused or alleged to be caused either directly or indirectly by this book.

Acquisitions and Developmental Editor: Russell Jones
Production Editor: Holly Bauer
Editorial Production: Online Training Solutions, Inc.
Copyeditor: Kathy Krause, Online Training Solutions, Inc.
Indexer: Ginny Munroe, Judith McConville
Cover Design: Twist Creative • Seattle
Cover Composition: Zyg Group, LLC
Illustrator: Jeanne Craver, Online Training Solutions, Inc.

Contents at a Glance

Introduction

Part I: Designing and Implementing a Data Warehouse
  Chapter 1: Data Warehouse Logical Design
  Chapter 2: Implementing a Data Warehouse

Part II: Developing SSIS Packages
  Chapter 3: Creating SSIS Packages
  Chapter 4: Designing and Implementing Control Flow
  Chapter 5: Designing and Implementing Data Flow

Part III: Enhancing SSIS Packages
  Chapter 6: Enhancing Control Flow
  Chapter 7: Enhancing Data Flow
  Chapter 8: Creating a Robust and Restartable Package
  Chapter 9: Implementing Dynamic Packages
  Chapter 10: Auditing and Logging

Part IV: Managing and Maintaining SSIS Packages
  Chapter 11: Installing SSIS and Deploying Packages
  Chapter 12: Executing and Securing Packages
  Chapter 13: Troubleshooting and Performance Tuning

Part V: Building Data Quality Solutions
  Chapter 14: Installing and Maintaining Data Quality Services
  Chapter 15: Implementing Master Data Services
  Chapter 16: Managing Master Data
  Chapter 17: Creating a Data Quality Project to Clean Data

Part VI: Advanced SSIS and Data Quality Topics
  Chapter 18: SSIS and Data Mining
  Chapter 19: Implementing Custom Code in SSIS Packages
  Chapter 20: Identity Mapping and De-Duplicating

Index

Contents

Introduction
  System Requirements
  Using the Companion CD
  Acknowledgments
  Support & Feedback
  Preparing for the Exam

Part I: Designing and Implementing a Data Warehouse

Chapter 1: Data Warehouse Logical Design
  Before You Begin
  Lesson 1: Introducing Star and Snowflake Schemas
    Reporting Problems with a Normalized Schema
    Star Schema
    Snowflake Schema
    Granularity Level
    Auditing and Lineage
    Lesson Summary
    Lesson Review
  Lesson 2: Designing Dimensions
    Dimension Column Types
    Hierarchies
    Slowly Changing Dimensions
    Lesson Summary
    Lesson Review
  Lesson 3: Designing Fact Tables
    Fact Table Column Types
    Many-to-Many Relationships
    Lesson Summary
    Lesson Review
  Case Scenarios
    Case Scenario 1: A Quick POC Project
    Case Scenario 2: Extending the POC Project
  Suggested Practices
    Check the SCD and Lineage in the AdventureWorksDW2012 Database
  Answers: Lesson 1, Lesson 2, Lesson 3, Case Scenario 1, Case Scenario 2

What do you think of this book? We want to hear from you! Microsoft is interested in hearing your feedback so we can continually improve our books and learning resources for you. To participate in a brief online survey, please visit: www.microsoft.com/learning/booksurvey/

Chapter 2: Implementing a Data Warehouse
  Before You Begin
  Lesson 1: Implementing Dimensions and Fact Tables
    Creating a Data Warehouse Database
    Implementing Dimensions
    Implementing Fact Tables
    Lesson Summary
    Lesson Review
  Lesson 2: Managing the Performance of a Data Warehouse
    Indexing Dimensions and Fact Tables
    Indexed Views
    Data Compression
    Columnstore Indexes and Batch Processing
    Lesson Summary
    Lesson Review
  Lesson 3: Loading and Auditing Loads
    Using Partitions
    Data Lineage
    Lesson Summary
    Lesson Review
  Case Scenarios
    Case Scenario 1: Slow DW Reports
    Case Scenario 2: DW Administration Problems
  Suggested Practices
    Test Different Indexing Methods
    Test Table Partitioning
  Answers: Lesson 1, Lesson 2, Lesson 3, Case Scenario 1, Case Scenario 2

Part II: Developing SSIS Packages

Chapter 3: Creating SSIS Packages
  Before You Begin
  Lesson 1: Using the SQL Server Import and Export Wizard
    Planning a Simple Data Movement
    Lesson Summary
    Lesson Review
  Lesson 2: Developing SSIS Packages in SSDT
    Introducing SSDT
    Lesson Summary
    Lesson Review
  Lesson 3: Introducing Control Flow, Data Flow, and Connection Managers
    Introducing SSIS Development
    Introducing SSIS Project Deployment
    Lesson Summary
    Lesson Review
  Case Scenarios
    Case Scenario 1: Copying Production Data to Development
    Case Scenario 2: Connection Manager Parameterization
  Suggested Practices
    Use the Right Tool
    Account for the Differences Between Development and Production Environments
  Answers: Lesson 1, Lesson 2, Lesson 3, Case Scenario 1, Case Scenario 2

Chapter 4: Designing and Implementing Control Flow
  Before You Begin
  Lesson 1: Connection Managers
    Lesson Summary
    Lesson Review
  Lesson 2: Control Flow Tasks and Containers
    Planning a Complex Data Movement
    Containers
    Lesson Summary
    Lesson Review
  Lesson 3: Precedence Constraints
    Lesson Summary
    Lesson Review
  Case Scenarios
    Case Scenario 1: Creating a Cleanup Process
    Case Scenario 2: Integrating External Processes
  Suggested Practices
    A Complete Data Movement Solution
  Answers: Lesson 1, Lesson 2, Lesson 3, Case Scenario 1, Case Scenario 2

Chapter 5: Designing and Implementing Data Flow
  Before You Begin
  Lesson 1: Defining Data Sources and Destinations
    SSIS Data Types
    Lesson Summary
    Lesson Review
  Lesson 2: Working with Data Flow Transformations
    Selecting Transformations
    Using Transformations
    Lesson Summary
    Lesson Review
  Lesson 3: Determining Appropriate ETL Strategy and Tools
    ETL Strategy
    Lookup Transformations
    Sorting the Data
    Lesson Summary
    Lesson Review
  Case Scenario
    Case Scenario: New Source System
  Suggested Practices
  Answers: Lesson 1, Lesson 2, Lesson 3, Case Scenario

Part III: Enhancing SSIS Packages

Chapter 6: Enhancing Control Flow
  Before You Begin
  Lesson 1: SSIS Variables
    System and User Variables
    Variable Data Types
    Variable Scope
    Property Parameterization
    Lesson Summary
    Lesson Review
  Lesson 2: Connection Managers, Tasks, and Precedence Constraint Expressions
. . . . . . . . . . . . . . . . . . 254 Expressions 255 Property Expressions 259 Precedence Constraint Expressions 259 Lesson Summary 263 Lesson Review 264 Lesson 3: Using a Master Package for Advanced Control Flow . . . . . . . . 265 xii 267 Harmonizing Workflow and Configuration 268 269 The Execute SQL Server Agent Job Task 269 270 contents www.it-ebooks.info Lesson Summary 275 Lesson Review 275 Case Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276 Case Scenario 1: Complete Solutions 276 Case Scenario 2: Data-Driven Execution 277 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 Consider Using a Master Package 277 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278 chapter 7 Lesson 1 278 Lesson 2 279 Lesson 3 279 Case Scenario 1 280 Case Scenario 2 281 enhancing Data flow 283 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283 Lesson 1: Slowly Changing Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 Defining Attribute Types 284 Inferred Dimension Members 285 Using the Slowly Changing Dimension Task 285 Effectively Updating Dimensions 290 Lesson Summary 298 Lesson Review 298 Lesson 2: Preparing a Package for Incremental Load . . . . . . . . . . . . . . . . . 299 Using Dynamic SQL to Read Data 299 Implementing CDC by Using SSIS 304 307 Lesson Summary 316 Lesson Review 316 Lesson 3: Error Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 Using Error Flows 317 Lesson Summary 321 Lesson Review 321 contents www.it-ebooks.info xiii Case Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 322 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . 322 322 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323 chapter 8 Lesson 1 323 Lesson 2 324 Lesson 3 324 Case Scenario 325 creating a robust and restartable package 327 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328 Lesson 1: Package Transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328 Defining Package and Task Transaction Settings 328 Transaction Isolation Levels 331 Manually Handling Transactions 332 Lesson Summary 335 Lesson Review 335 Lesson 2: Checkpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336 Implementing Restartability Checkpoints 336 Lesson Summary 341 Lesson Review 341 Lesson 3: Event Handlers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 Using Event Handlers 342 Lesson Summary 346 Lesson Review 346 Case Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 Case Scenario: Auditing and Notifications in SSIS Packages 347 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 Use Transactions and Event Handlers 348 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349 xiv Lesson 1 349 Lesson 2 349 contents www.it-ebooks.info chapter 9 Lesson 3 350 Case Scenario 351 implementing Dynamic packages 353 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354 Lesson 1: Package-Level and Project-Level Connection Managers and Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
354 Using Project-Level Connection Managers 355 Parameters 356 Build Configurations in SQL Server 2012 Integration Services 358 Property Expressions 361 Lesson Summary 366 Lesson Review 366 Lesson 2: Package Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 Implementing Package Configurations 368 Lesson Summary 377 Lesson Review 377 Case Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378 Case Scenario: Making SSIS Packages Dynamic 378 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378 Use a Parameter to Incrementally Load a Fact Table 378 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379 Lesson 1 379 Lesson 2 379 Case Scenario 380 chapter 10 auditing and Logging 381 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383 Lesson 1: Logging Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383 Log Providers 383 Configuring Logging 386 Lesson Summary 393 Lesson Review 394 contents www.it-ebooks.info xv Lesson 2: Implementing Auditing and Lineage . . . . . . . . . . . . . . . . . . . . . . 394 Auditing Techniques 395 Correlating Audit Data with SSIS Logs 401 Retention 401 Lesson Summary 405 Lesson Review 405 Lesson 3: Preparing Package Templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406 SSIS Package Templates 407 Lesson Summary 410 Lesson Review 410 Case Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411 Case Scenario 1: Implementing SSIS Logging at Multiple Levels of the SSIS Object Hierarchy 411 Case Scenario 2: Implementing SSIS Auditing at Different Levels of the SSIS Object Hierarchy 412 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . 412 Add Auditing to an Update Operation in an Existing 412 Create an SSIS Package Template in Your Own Environment 413 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414 part iv Lesson 1 414 Lesson 2 415 Lesson 3 416 Case Scenario 1 417 Case Scenario 2 417 managing anD maintaining ssis packages chapter 11 installing ssis and Deploying packages 421 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422 Lesson 1: Installing SSIS Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423 Preparing an SSIS Installation xvi 424 Installing SSIS 428 Lesson Summary 436 Lesson Review 436 contents www.it-ebooks.info Lesson 2: Deploying SSIS Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437 SSISDB Catalog 438 SSISDB Objects 440 Project Deployment 442 Lesson Summary 449 Lesson Review 450 Case Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450 Case Scenario 1: Using Strictly Structured Deployments 451 Case Scenario 2: Installing an SSIS Server 451 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451 451 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452 Lesson 1 452 Lesson 2 453 Case Scenario 1 454 Case Scenario 2 454 chapter 12 executing and securing packages 455 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456 Lesson 1: Executing SSIS Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456 On-Demand SSIS Execution 457 Automated SSIS Execution 462 Monitoring SSIS Execution 465 Lesson Summary 479 Lesson Review 479 Lesson 2: Securing SSIS Packages. . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . 480 SSISDB Security 481 Lesson Summary 490 Lesson Review 490 Case Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491 Case Scenario 1: Deploying SSIS Packages to Multiple Environments 491 Case Scenario 2: Remote Executions 491 contents www.it-ebooks.info xvii Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491 Improve the Reusability of an SSIS Solution 492 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493 Lesson 1 493 Lesson 2 494 Case Scenario 1 495 Case Scenario 2 495 chapter 13 troubleshooting and performance tuning 497 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498 Lesson 1: Troubleshooting Package Execution . . . . . . . . . . . . . . . . . . . . . . 498 Design-Time Troubleshooting 498 Production-Time Troubleshooting 506 Lesson Summary 510 Lesson Review 510 Lesson 2: Performance Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511 SSIS Data Flow Engine 512 Data Flow Tuning Options 514 Parallel Execution in SSIS 517 Troubleshooting and Benchmarking Performance 518 Lesson Summary 522 Lesson Review 522 Case Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523 Case Scenario: Tuning an SSIS Package 523 Suggested Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524 Get Familiar with SSISDB Catalog Views 524 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525 Lesson 1 xviii 525 Lesson 2 525 Case Scenario 526 contents www.it-ebooks.info part v buiLDing Data quaLity sOLutiOns chapter 14 installing and maintaining Data quality services 529 Before You Begin. . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 Lesson 1: Data Quality Problems and Roles . . . . . . . . . . . . . . . . . . . . . . . . . 530 Data Quality Dimensions 531 Data Quality Activities and Roles 535 Lesson Summary 539 Lesson Review 539 Lesson 2: Installing Data Quality Services. . . . . . . . . . . . . . . . . . . . . . . . . . . 540 DQS Architecture 540 DQS Installation 542 Lesson Summary 548 Lesson Review 548 Lesson 3: Maintaining and Securing Data Quality Services . . . . . . . . . . . . 549 Performing Administrative Activities with Data Quality Client 549 Performing Administrative Activities with Other Tools 553 Lesson Summary 558 Lesson Review 558 Case Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559 Case Scenario: Data Warehouse Not Used 559 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560 560 Review Data Profiling Tools 560 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561 Lesson 1 561 Lesson 2 561 Lesson 3 562 Case Scenario 563 contents www.it-ebooks.info xix chapter 15 implementing master Data services 565 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565 Lesson 1: Defining Master Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566 What Is Master Data? 567 Master Data Management 569 MDM Challenges 572 Lesson Summary 574 Lesson Review 574 Lesson 2: Installing Master Data Services . . . . . . . . . . . . . . . . . . . . . . . . . . . 575 Master Data Services Architecture 576 MDS Installation 577 Lesson Summary 587 Lesson Review 587 Lesson 3: Creating a Master Data Services Model . . . . . . . . . . . . . . . . . . . 
588 MDS Models and Objects in Models 588 MDS Objects 589 Lesson Summary 599 Lesson Review 600 Case Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .600 Case Scenario 1: Introducing an MDM Solution 600 Case Scenario 2: Extending the POC Project 601 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601 601 Expand the MDS Model 601 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602 xx Lesson 1 602 Lesson 2 603 Lesson 3 603 Case Scenario 1 604 Case Scenario 2 604 contents www.it-ebooks.info chapter 16 managing master Data 605 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605 Lesson 1: Importing and Exporting Master Data . . . . . . . . . . . . . . . . . . . . 606 Creating and Deploying MDS Packages 606 Importing Batches of Data 607 Exporting Data 609 Lesson Summary 615 Lesson Review 616 Lesson 2: Defining Master Data Security . . . . . . . . . . . . . . . . . . . . . . . . . . . 616 Users and Permissions 617 Overlapping Permissions 619 Lesson Summary 624 Lesson Review 624 Lesson 3: Using Master Data Services Add-in for Excel . . . . . . . . . . . . . . . 624 Editing MDS Data in Excel 625 Creating MDS Objects in Excel 627 Lesson Summary 632 Lesson Review 632 Case Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633 Case Scenario: Editing Batches of MDS Data 633 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633 Analyze the Staging Tables 633 Test Security 633 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
634 Lesson 1 634 Lesson 2 635 Lesson 3 635 Case Scenario 636 contents www.it-ebooks.info xxi chapter 17 creating a Data quality project to clean Data 637 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637 Lesson 1: Creating and Maintaining a Knowledge Base . . . . . . . . . . . . . . 638 Building a DQS Knowledge Base 638 Domain Management 639 Lesson Summary 645 Lesson Review 645 Lesson 2: Creating a Data Quality Project . . . . . . . . . . . . . . . . . . . . . . . . . . 646 DQS Projects 646 Data Cleansing 647 Lesson Summary 653 Lesson Review 653 Lesson 3: Profiling Data and Improving Data Quality . . . . . . . . . . . . . . . . 654 Using Queries to Profile Data 654 656 Lesson Summary 659 Lesson Review 660 Case Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660 Case Scenario: Improving Data Quality 660 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661 Create an Additional Knowledge Base and Project 661 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662 Lesson 1 part vi 662 Lesson 2 662 Lesson 3 663 Case Scenario 664 aDvanceD ssis anD Data quaLity tOpics chapter 18 ssis and Data mining 667 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667 Lesson 1: Data Mining Task and Transformation . . . . . . . . . . . . . . . . . . . . . 668 xxii What Is Data Mining? 668 SSAS Data Mining Algorithms 670 contents www.it-ebooks.info Using Data Mining Predictions in SSIS 671 Lesson Summary 679 Lesson Review 679 Lesson 2: Text Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679 Term Extraction 680 Term Lookup 681 Lesson Summary 686 Lesson Review 686 Lesson 3: Preparing Data for Data Mining . . . . . . . . . . . 
. . . . . . . . . . . . . . . 687 Preparing the Data 688 SSIS Sampling 689 Lesson Summary 693 Lesson Review 693 Case Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694 Case Scenario: Preparing Data for Data Mining 694 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694 Test the Row Sampling and Conditional Split Transformations 694 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695 Lesson 1 695 Lesson 2 695 Lesson 3 696 Case Scenario 697 chapter 19 implementing custom code in ssis packages 699 Before You Begin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700 Lesson 1: Script Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700 701 702 Lesson Summary 707 Lesson Review 707 Lesson 2: Script Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707 Configuring the Script Component 708 Coding the Script Component 709 contents www.it-ebooks.info xxiii Lesson Summary 715 Lesson Review 715 Lesson 3: Implementing Custom Components . . . . . . . . . . . . . . . . . . . . . . 716 Planning a Custom Component 717 Developing a Custom Component 718 Design Time and Run Time 719 Design-Time Methods 719 Run-Time Methods 721 Lesson Summary 730 Lesson Review 730 Case Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731 Case Scenario: Data Cleansing 731 Suggested Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731 Create a Web Service Source 731 Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
    Lesson 1 732
    Lesson 2 732
    Lesson 3 733
    Case Scenario 734

Chapter 20: Identity Mapping and De-Duplicating 735
Before You Begin 736
Lesson 1: Understanding the Problem 736
    Identity Mapping and De-Duplicating Problems 736
    Solving the Problems 738
    Lesson Summary 744
    Lesson Review 744
Lesson 2: Using DQS and the DQS Cleansing Transformation 745
    DQS Cleansing Transformation 746
    DQS Matching 746
    Lesson Summary 755
    Lesson Review 755
Lesson 3: Implementing SSIS Fuzzy Transformations 756
    Fuzzy Transformations Algorithm 756
    Versions of Fuzzy Transformations 758
    Lesson Summary 764
    Lesson Review 764
Case Scenario 765
    Case Scenario: Improving Data Quality 765
Suggested Practices 765
    Research More on Matching 765
Answers 766
    Lesson 1 766
    Lesson 2 766
    Lesson 3 767
    Case Scenario 768

Index 769

Introduction

This Training Kit is designed for information technology (IT) professionals who support or plan to support data warehouses, extract-transform-load (ETL) processes, data quality improvements, and master data management. It is designed for IT professionals who also plan to take the Microsoft Certified Technology Specialist (MCTS) exam 70-463. The authors assume that you have a solid, foundation-level understanding of Microsoft SQL Server 2012 and the Transact-SQL language, and that you understand basic relational modeling concepts.

The material covered in this Training Kit and on Exam 70-463 relates to the technologies provided by SQL Server 2012 for implementing and maintaining a data warehouse. The topics in this Training Kit cover what you need to know for the exam as described on the Skills Measured tab for the exam, available at:

http://www.microsoft.com/learning/en/us/exam.aspx?id=70-463

By studying this Training Kit, you will see how to perform the following tasks:
- Design an appropriate data model for a data warehouse
- Optimize the physical design of a data warehouse
- Extract data from different data sources, transform and cleanse the data, and load it in your data warehouse by using SQL Server Integration Services (SSIS)
- Use SQL Server 2012 Master Data Services (MDS) to take control of your master data
- Use SQL Server Data Quality Services (DQS) for data cleansing

Refer to the objective mapping page in the front of this book to see where in the book each exam objective is covered.

System Requirements

The following are the minimum system requirements for the computer you will be using to complete the practice exercises in this book and to run the companion CD.

SQL Server and Other Software Requirements

This section contains the minimum SQL Server and other software requirements you will need:

SQL Server 2012: You need access to a SQL Server 2012 instance with a logon that has permissions to create new databases—preferably one that is a member of the sysadmin role. For the purposes of this Training Kit, you can use almost any edition of on-premises SQL Server (Standard, Enterprise, Business Intelligence, and Developer), both 32-bit and 64-bit editions. If you don't have access to an existing SQL Server instance, you can install a trial copy of SQL Server 2012 that you can use for 180 days.
You can download the trial copy from:

http://www.microsoft.com/sqlserver/en/us/get-sql-server/try-it.aspx

SQL Server 2012 Setup Feature Selection: When you are in the Feature Selection dialog box of the SQL Server 2012 setup program, choose at minimum the following components:
- Database Engine Services
- Documentation Components
- Management Tools - Basic
- Management Tools - Complete
- SQL Server Data Tools

Windows Software Development Kit (SDK) or Microsoft Visual Studio 2010: The Windows SDK provides tools, compilers, headers, libraries, code samples, and a new help system that you can use to create applications that run on Windows. You need the Windows SDK only for Chapter 19, "Implementing Custom Code in SSIS Packages." If you already have Visual Studio 2010, you do not need the Windows SDK. If you need the Windows SDK, you need to download the appropriate version for your operating system. For Windows 7, Windows Server 2003 R2 Standard Edition (32-bit x86), Windows Server 2003 R2 Standard x64 Edition, Windows Server 2008, Windows Server 2008 R2, Windows Vista, or Windows XP Service Pack 3, use the Microsoft Windows SDK for Windows 7 and the Microsoft .NET Framework 4 from:

Hardware and Operating System Requirements

You can find the minimum hardware and operating system requirements for SQL Server 2012 here:

http://msdn.microsoft.com/en-us/library/ms143506(v=sql.110).aspx

Data Requirements

The minimum data requirements for the exercises in this Training Kit are the following:

The AdventureWorks OLTP and DW Databases for SQL Server 2012: Exercises in this book use the AdventureWorks online transactional processing (OLTP) database, which supports standard online transaction processing scenarios for a fictitious bicycle manufacturer, and the AdventureWorks data warehouse (DW) database, which demonstrates how to build a data warehouse. You need to download both databases for SQL Server 2012.
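Once the data (.mdf) files for the two databases are downloaded, the attach step can also be scripted in Transact-SQL instead of being performed through the SSMS user interface. The following is a minimal sketch; the database name and file path are assumptions for illustration, and FOR ATTACH_REBUILD_LOG creates a new log file when none is supplied, which matches a download that contains only the data files:

```sql
-- Hypothetical example: attach a downloaded data file and rebuild its log.
-- Adjust the database name and FILENAME path to match your environment.
CREATE DATABASE AdventureWorks2012
ON (FILENAME = 'C:\TK463\AdventureWorks2012_Data.mdf')
FOR ATTACH_REBUILD_LOG;
```

Repeat the same pattern for the DW database. Running this requires the permissions described above (membership in the sysadmin role is sufficient).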
You can download both databases from:

http://msftdbprodsamples.codeplex.com/releases/view/55330

You can also download the compressed file containing the data (.mdf) files for both databases from O'Reilly's website here:

Using the Companion CD

A companion CD is included with this Training Kit. The companion CD contains the following:

Practice Tests: You can reinforce your understanding of the topics covered in this Training Kit by using electronic practice tests that you customize to meet your needs. You can practice for the 70-463 certification exam by using tests created from a pool of over 200 realistic exam questions, which give you many practice exams to ensure that you are prepared.

An eBook: An electronic version (eBook) of this book is included for when you do not want to carry the printed book with you.

Source Code: A compressed file called TK70463_CodeLabSolutions.zip includes the Training Kit's demo source code and exercise solutions. You can also download the compressed file from O'Reilly's website here:

For convenient access to the source code, create a local folder called c:\tk463\ and extract the compressed archive by using this folder as the destination for the extracted files.

Sample Data: A compressed file called AdventureWorksDataFiles.zip includes the data files for the databases used in the Training Kit's exercises. You can also download the compressed file from O'Reilly's website here:

For convenient access to the sample data, create a local folder called c:\tk463\ and extract the compressed archive by using this folder as the destination for the extracted files. Then use SQL Server Management Studio (SSMS) to attach both databases and create the log files for them.

How to Install the Practice Tests

To install the practice test software from the companion CD to your hard disk, perform the following steps:

1. Insert the companion CD into your CD drive and accept the license agreement.
Note: If the CD menu does not appear — If the CD menu or the license agreement does not appear, autorun might be disabled on your computer. Refer to the Readme.txt file on the CD for alternate installation instructions.

2. Click Practice Tests and follow the instructions on the screen.

How to Use the Practice Tests

To start the practice test software, follow these steps:

1. Click Start | All Programs, and then select Microsoft Press Training Kit Exam Prep. A window appears that shows all the Microsoft Press Training Kit exam prep suites installed on your computer.

2. Double-click the practice test you want to use.

When you start a practice test, you choose whether to take the test in Certification Mode, Study Mode, or Custom Mode:

- Certification Mode: Closely resembles the experience of taking a certification exam. The test has a set number of questions. It is timed, and you cannot pause and restart the timer.
- Study Mode: Creates an untimed test during which you can review the correct answers and the explanations after you answer each question.
- Custom Mode: Gives you full control over the test options so that you can customize them as you like.

In all modes, when you are taking the test, the user interface is basically the same but with different options enabled or disabled depending on the mode.

When you review your answer to an individual practice test question, a "References" section is provided that lists where in the Training Kit you can find the information that relates to that question and provides links to other sources of information. After you click Test Results to score your entire practice test, you can click the Learning Plan tab to see a list of references for every objective.

How to Uninstall the Practice Tests

To uninstall the practice test software for a Training Kit, use the Program And Features option in Windows Control Panel.
Acknowledgments

A book is put together by many more people than the authors whose names are listed on the title page. We'd like to express our gratitude to the following people for all the work they have done in getting this book into your hands: Miloš Radivojević (technical editor) and Fritz Lechnitz (project manager) from SolidQ, Russell Jones (acquisitions and developmental editor) and Holly Bauer (production editor) from O'Reilly, and Kathy Krause (copyeditor) and Jaime Odell (proofreader) from OTSI. In addition, we would like to give thanks to Matt Masson (member of the SSIS team), Wee Hyong Tok (SSIS team program manager), and Elad Ziklik (DQS group program manager) from Microsoft for the technical support and for unveiling the secrets of the new SQL Server 2012 products. There are many more people involved in writing and editing practice test questions, editing graphics, and performing other activities; we are grateful to all of them as well.

Support & Feedback

The following sections provide information on errata, book support, feedback, and contact information.

Errata

We've made every effort to ensure the accuracy of this book and its companion content. Any errors that have been reported since this book was published are listed on our Microsoft Press site at oreilly.com:

If you find an error that is not already listed, you can report it to us through the same page. If you need additional support, email Microsoft Press Book Support at:

mspinput@microsoft.com

Please note that product support for Microsoft software is not offered through the addresses above.

We Want to Hear from You

At Microsoft Press, your satisfaction is our top priority, and your feedback our most valuable asset. Please tell us what you think of this book at:

http://www.microsoft.com/learning/booksurvey

Stay in Touch

Preparing for the Exam

Microsoft certification exams are a great way to build your resume and let the world know about your technical and product knowledge.
While there is no substitution for on-the-job experience, preparation through study and hands-on practice can help you prepare for the exam. We recommend that you round out your exam preparation plan by using a combination of available study materials and courses. For example, you might use the training kit and another study guide for your "at home" preparation, and take a Microsoft Official Curriculum course for the classroom experience. Choose the combination that you think works best for you.

Note that this training kit is based on publicly available information about the exam and the authors' experience. To safeguard the integrity of the exam, authors do not have access to the live exam.

Part I: Designing and Implementing a Data Warehouse

Chapter 1: Data Warehouse Logical Design 3
Chapter 2: Implementing a Data Warehouse 41

Chapter 1: Data Warehouse Logical Design

Exam objectives in this chapter:
- Design and Implement a Data Warehouse
  - Design and implement dimensions.
  - Design and implement fact tables.

Analyzing data from databases that support line-of-business (LOB) applications is usually not an easy task. The normalized relational schema used for an LOB application can consist of thousands of tables. Naming conventions are frequently not enforced. Therefore, it is hard to discover where the data you need for a report is stored. Enterprises frequently have multiple LOB applications, often working against more than one database. For the purposes of analysis, these enterprises need to be able to merge the data from multiple databases. Data quality is a common problem as well. In addition, many LOB applications do not track data over time, though many analyses depend on historical data.

Important: Have you read page xxxii? It contains valuable information regarding the skills you need to pass the exam.

A common solution to these problems is to create a data warehouse (DW).
A DW is a centralized data silo for an enterprise that contains merged, cleansed, and historical data. DW schemas are simplified and thus more suitable for generating reports than normalized relational schemas. For a DW, you typically use a special type of logical design called a Star schema, or a variant of the Star schema called a Snowflake schema. Tables in a Star or Snowflake schema are divided into dimension tables (commonly known as dimensions) and fact tables.

Data in a DW usually comes from LOB databases, but it's a transformed and cleansed copy of source data. Of course, there is some latency between the moment when data appears in an LOB database and the moment when it appears in a DW. One common method of addressing this latency involves refreshing the data in a DW as a nightly job. You use the refreshed data primarily for reports; therefore, the data is mostly read and rarely updated.

Queries often involve reading huge amounts of data and require large scans. To support such queries, it is imperative to use an appropriate physical design for a DW.

DW logical design seems to be simple at first glance. It is definitely much simpler than a normalized relational design. However, despite the simplicity, you can still encounter some advanced problems. In this chapter, you will learn how to design a DW and how to solve some of the common advanced design problems. You will explore Star and Snowflake schemas, dimensions, and fact tables. You will also learn how to track the source and time for data coming into a DW through auditing—or, in DW terminology, lineage information.

Lessons in this chapter:
Lesson 1: Introducing Star and Snowflake Schemas
Lesson 2: Designing Dimensions
Lesson 3: Designing Fact Tables

before you begin

To complete this chapter, you must have:
An understanding of normalized relational schemas.
Experience working with Microsoft SQL Server 2012 Management Studio.
A working knowledge of the Transact-SQL language.
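The Star schema idea described above can be sketched with a toy example. The following is a minimal, hypothetical illustration (table and column names are invented, and in-memory SQLite stands in for SQL Server 2012): a fact table holds measures and foreign keys, and report queries join it to its dimensions and aggregate.

```python
import sqlite3

# Hypothetical minimal Star schema: one fact table referencing two
# dimension tables. Names are illustrative, not from the training kit.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE DimCustomer (
    CustomerKey  INTEGER PRIMARY KEY,
    CustomerName TEXT,
    Country      TEXT
);
CREATE TABLE DimDate (
    DateKey INTEGER PRIMARY KEY,  -- e.g. 20120301 for 2012-03-01
    Year    INTEGER,
    Month   INTEGER
);
CREATE TABLE FactSales (
    CustomerKey INTEGER REFERENCES DimCustomer(CustomerKey),
    DateKey     INTEGER REFERENCES DimDate(DateKey),
    SalesAmount REAL
);
""")

cur.execute("INSERT INTO DimCustomer VALUES (1, 'Contoso', 'USA')")
cur.execute("INSERT INTO DimDate VALUES (20120301, 2012, 3)")
cur.executemany("INSERT INTO FactSales VALUES (?, ?, ?)",
                [(1, 20120301, 100.0), (1, 20120301, 50.0)])

# A typical report query: join the fact table to its dimensions
# and aggregate the measure.
cur.execute("""
    SELECT c.Country, d.Year, SUM(f.SalesAmount)
    FROM FactSales AS f
    JOIN DimCustomer AS c ON c.CustomerKey = f.CustomerKey
    JOIN DimDate     AS d ON d.DateKey     = f.DateKey
    GROUP BY c.Country, d.Year
""")
print(cur.fetchall())  # [('USA', 2012, 150.0)]
```

In a Snowflake variant, a dimension such as DimCustomer would itself be normalized into further lookup tables (for example, a separate Country table); the fact table stays the same.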
# cavendish experiment at home

They rose to their highest prominence as the dukes of Devonshire and Newcastle.

The larger spheres, made of wood, have magnets enclosed, and the smaller spheres, of Styrofoam, have steel ball bearings at their centers.

Consult the printout of the PASCO user manual in the blue "Cavendish Experiment" folder in the filing cabinet.

Well, because the Earth is so much more massive than the equipment or objects of your experiment, it's not surprising that it will be difficult to measure the gravitational attraction of the adjacent object (which is much, much less massive than the Earth) when such is added to the gravitational attraction of the planet your experiment is on.

If you wanted to try it for yourself, you could use a garden shed, with an insulated inner room/box built of plywood and Styrofoam, a vibration-proof table made from stacks of inner tubes and paving slabs, and a telescope + webcam so you can keep it completely sealed.

How did the Cavendish experiment to measure $G$ work? Cavendish performed the experiment in 1797–1798. The gravitational attraction between lead spheres. The Cavendish experiment is routinely included in a short list of the greatest or most elegant experiments ever done.

First find a stable platform and place it in the lecture hall.

The position of the reflected spot is noted, and the large dumbbell is moved to its second position on the other side of the glass; gravitational attraction twists the fiber in the opposite direction.

He did, however, play a key role in its creation.
This is why Cavendish's experiment became the Cavendish experiment. Fear not, the Cavendish experiment is another pseudoscience piece of nonsense that has never been replicated and is… Let the learning—and fun—begin!

Although the balance has feet that can be adjusted to make it level, for best results the platform should be reasonably level as well.

How can I measure the mass of the Earth at home? The Cavendish apparatus we currently use is built by PASCO. His experiment to weigh the Earth has come to be known as the Cavendish experiment. Michell conceived, sometime before 1783, the experiment now known as the Cavendish experiment. Because science is for everyone.

At this point the dumbbell is probably moving quite a bit within the case; as the balance settles down, set up the laser at the appropriate distance and angle for the audience. A large-scale model of the dumbbell and fiber components is a good idea to help explain what's going on. Why was the metal lead used in the Cavendish experiment? A plan view of the spheres and dimensions is given in figure 2. The gravitational attraction between the spheres exerts a torque on the quartz fiber, which twists through a small angle.

Flat-earthers are in a constant effort to discredit the Cavendish experiment. Scientific American provides an assessment of a large number of Cavendish experiments conducted by prestigious laboratories and institutions and explains that, unlike other fundamental forces in physics, gravity cannot be accurately measured. So the gravitational constant can be calculated by.

figure 1. The twin dumbbells of the Cavendish experiment.

Here I will present a very simplified analysis of the experiment, which will provide the reader with a basic idea of the concepts at work.
We have built such a model from wood and brass, with dumbbell arm lengths of 50 cm and the small dumbbell hanging from a copper wire.

~ credit to Thegeocentricgnostic.com: The Cavendish Experiment — Pseudoscience Nonsense. Don't be surprised if at some point an indoctrinated globehead pulls out the Cavendish experiment as proof of gravity and tries to shove it in your face.

He described the density of inflammable air, which formed water on combustion, in a 1766 paper, On Factitious Airs.

What is a simple way to recreate the experiment?

The original experiment was proposed by John Michell (1724–1793), who first constructed a torsion balance apparatus. The apparatus constructed by Cavendish was a torsion balance made of a six-foot (1.8 m) wooden rod suspended from a wire, with a 2-inch (51 mm) diameter, 1.61-pound (0.73 kg) lead sphere attached to each end. The two large b…

A visualisation of the E8 Lie group. The theoretical side of the Cavendish High Energy Physics group has established and maintained an international reputation in Standard Model (SM) and Beyond-Standard-Model (BSM) phenomenology (that is, theory with relevance to current or future experiments).

Isaac Newton (1642–1727) was not the founder of the Cavendish experiment.

Two 12-inch (300 mm), 348-pound (158 kg) lead balls were located near the smaller balls, about 9 inches (230 mm) away, and held in place with a separate suspension system.
1. Available from CENCO 33210C and PASCO SE-9633.

This leaves you with the usual problems of working on a very solid table anchored to a large foundation (concrete mix is dirt cheap!

In this video, physics teacher Andrew Bennett attempts to recreate this experiment.

Up till that time, physics meant theoretical physics and was regarded as the province of the mathematicians.

It helps bring out flavors and subtleties in blends that otherwise may go unnoticed, allowing for endless palate possibilities and a way for pipe smokers to experiment with their own at-home blending.

Then, by a complex derivation, G = 2π²LθRe²/(T²M) was determined.

Cavendish HEP Group involvement.
A Cavendish experiment is rather easy to perform these days, since you can measure tiny movements with capacitive sensors or a simple optical interferometer with very high accuracy. Seems plausible.

Repeat until zeroed. To zero the balance, start by carefully loosening the thumbscrew sticking out of the top of the main shaft. The quartz fiber and smaller dumbbell are enclosed in a metal case with a glass window for protection.

Cavendish experiment definition: measurement of the gravitation constant by a sensitive torsion balance.

The data from the demonstration can also be used to calculate the universal gravitational constant G. The Cavendish apparatus basically consists of two pairs of spheres, each pair forming dumbbells that have a common swivel axis (figure 1).

These easy science experiments are a snap to pull together, using household items you already have on hand.

Henry Cavendish was an unusual man but a brilliant scientist.

Sir Henry Cavendish (1731–1810). The Cavendish Experiment, a.k.a.

Seek to find out the reasons for things.

Remove the front plate of the balance to expose the small dumbbell and the adjustable support arms that immobilize it during transit.

You don't need special equipment or a PhD to get kids excited about science.

Apart from the historical significance of the experiment, it's really neat to see that you can measure such an incredibly weak force using such a simple device.

One dumbbell is suspended from a quartz fiber and is free to rotate by twisting the fiber; the amount of twist is measured by the position of a reflected light spot from a mirror attached to the fiber.
How accurately can I expect to measure the gravitational constant with a club of college students? The Cavendish experiment used a torsion balance device to attract lead balls together, measuring the torque on a wire and equating it to the gravitational force between the balls. I wonder if they did some kind of measurement of $G$ in the shuttle or ISS?

For faster setup, the motions can be dampened by.

The experiment was done in 1799.

Say, I found a short physics course .pdf doc that makes an interesting reference to C. L. Stong, "How to repeat Cavendish's experiment for determining the constant of gravity", The Amateur Scientist column in Scientific American, September 1963, p. 267.

At this maximum deflection, the force between a large sphere and a small sphere is F = GMm/r², where r is the distance between sphere centers.

This experiment uses a very sensitive apparatus that requires patience and finesse to properly set up. Adjust the feet so that the entire apparatus is level, and replace the front plate.

1 Oxford St, Cambridge MA 02138, Science Center B-08A, (617) 495-5824.

Can the gravitational constant be directly measured? Chen and A. Cook, Gravitational Experiments in the Laboratory (Cambridge University Press, 1993).

In practice it's very difficult to do in a lab; alternately, it is a very good way to demonstrate the resonant properties of concrete lab buildings.

In a lecture hall setting, the Cavendish apparatus is too small for the audience to see its workings. Also near the top, the large round knob attached to the elastic belt is used to change the direction of the ribbon (notice that there is a fine and a coarse adjustment knob).
), getting a bunch of lead balls, and finding a torsion fiber that does not suffer from non-linearities and memory effects; but other people have done the hard work for you, see e.g.

Once the torsional force balanced the gravitational force, the rod and spheres came to rest, and Cavendish was able to determine the gravitational force of attraction between the masses.

The apparatus constructed by Cavendish was a torsion balance made of a six-foot (1.8 m) wooden rod horizontally suspended from a wire, with two 2-inch (51 mm) diameter, 1.61-pound (0.73 kg) lead spheres, one attached to each end.

Good luck finding a copy.

Calculation of gravitational constant, with accompanying apparatus model.

The apparatus was originally invented by the Rev.

Two 12-inch (300 mm), 348-pound (158 kg) lead balls were located near the smaller balls, about 9 inches (230 mm) away, and held in place with a separate suspension system. [8] The experiment measured the faint gravitational attraction between the small balls and the larger ones.

A HeNe laser is used to provide the spot reflection.

50 Easy Science Experiments Kids Can Do at Home with Stuff You Already Have.

The PASCO balance currently in use is very sensitive, so to protect against damaging the torsion ribbon during transit the apparatus should be carried.

Crandall, Am. J. Phys. 54, 367 (1983).

The Cavendish experiment was the first to allow a calculation of the gravitational constant (G) by measuring the force of gravity between two masses in a laboratory framework. What is the simplest way to perform a Cavendish experiment?
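To get a feel for how faint the attraction described above is, one can plug the sphere masses and separation quoted in the text (0.73 kg and 158 kg spheres, about 9 inches ≈ 0.23 m apart) into Newton's law F = GmM/r². This is purely illustrative; the center-to-center distance is an approximation taken from the figures above.

```python
# Sketch: order-of-magnitude force between one small and one large
# sphere in Cavendish's setup, using Newton's law F = G*m*M/r^2.
G = 6.674e-11  # m^3 kg^-1 s^-2, modern value of the constant
m, M = 0.73, 158.0  # kg, small and large sphere masses (from the text)
r = 0.23            # m, approximate center-to-center distance

F = G * m * M / r**2
print(f"F = {F:.2e} N")  # on the order of 1e-7 N
```

A force of roughly a tenth of a microNewton is why the torsion fiber, the sealed case, and the vibration-free table matter so much.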
M. H. Shamos, Great Experiments in Physics (Henry Holt & Co., New York, 1959), p. 75, contains Cavendish's original paper.

This means that to be able to prove the law of gravitation you need …

Henry Cavendish used a torsion balance (developed by Charles Coulomb), a long rigid rod suspended at its center by a thin wire, to successfully model the first low-scale model of gravitational interactions in a laboratory.

The Cavendish Experiment: To calculate the force of gravity between two objects, you need the masses of the two objects, the distance between the two objects, and the gravitational constant.

A torsional spring is analogous to the familiar linear mass on a spring, in which Hooke's law is rewritten as τ = −κθ, so that the restoring torque τ exerted by the spring is p…

Henry Cavendish FRS (/ˈkævəndɪʃ/; 10 October 1731 – 24 February 1810) was an English natural philosopher, scientist, and an important experimental and theoretical chemist and physicist. He is noted for his discovery of hydrogen, which he termed "inflammable air".

Puzzling Measurement of "Big G" Gravitational Constant Ignites Debate (Archive): "Gravity, one of the constants of life, not to mention physics, is less than constant when it comes to being measured."

The Cavendish Laboratory has an extraordinary history of discovery and innovation in physics since its opening in 1874 under the direction of James Clerk Maxwell, the University's first Cavendish Professor of Experimental Physics.
Description: Henry Cavendish was the first scientist to measure the gravitational force between two objects in the laboratory using a gravitational torsion balance.

The experiment used to measure G is the Cavendish experiment, named after Henry Cavendish.

The Cavendish experiment is cool, but seems complicated to experimentally perform.

Plan view of double dumbbell layout.

John Michell in 1795 to measure the density of the Earth, and was modified by Henry Cavendish in 1798 to measure G. In 1785 Coulomb used a similar apparatus to measure the electrostatic force between charged pith balls.

The second dumbbell can be swiveled so that each of its spheres is in close proximity to one of the spheres of the other dumbbell; the gravitational attraction between the two sets of spheres twists the fiber, and it is the measure of this twist that allows the magnitude of the gravitational force to be calculated.

The most famous of those experiments, published in 1798, was to determine the density of the Earth and became known as the Cavendish experiment.

C. A. Coulomb, Premier Mémoire sur l'Électricité et le Magnétisme, Histoire de l'Académie Royale des Sciences, 569–577 (1785).
Other scientists used his experimental setup to determine the value of G. The setup consisted of a torsion balance to attract lead balls together, measuring the torque on a wire and then equating it to the gravitational force between the balls.

When the apparatus is used quantitatively, the swing-time method is usually employed to calculate G. figure 2.

The Cavendish experiment was the first experiment to measure the force of gravity between masses in the laboratory and the first to yield accurate values for the gravitational constant. His experiment gave the first accurate values for these geophysical constants.

The results of the Cavendish process can either be smoked by themselves or used as a blending component to add body to a mixture.

It is related to the torque by τ = F(L/2), where L is the length of the small dumbbell.

Lower the support arms so that they do not interfere with the dumbbells.

Like all of the other existing dogma, it has surrounded itself with a nearly impenetrable slag heap of boasting and idolatry, most if not all of it sloppy and unanalyzed.

Copyright © 2021 The President and Fellows of Harvard College, Harvard Natural Sciences Lecture Demonstrations.

Wait until the dumbbell has made its full excursion in the direction of the needed adjustment to minimize added oscillation.

By measuring m₁, m₂, d, and F_grav, the value of G could be determined.

The Cavendish (or de Cavendish) family (/ˈkævəndɪʃ/) is a British noble family, of Anglo-Norman origins (though with an Anglo-Saxon name, originally a place name in Suffolk).
Carefully re-tighten the thumbscrew (not too tight) and dampen the vibrating dumbbell as necessary.

The way Cavendish did it would seem to be simplest: it can be done with 18th-century technology.

Cavendish Experiment (Home, Apparatus, Results, Conclusions): Isaac Newton's (1642–1727) theory of gravitation explained the motion of terrestrial objects and celestial bodies by a force of mutual attraction between all pairs of massive objects, proportional to the product of the two masses and inversely proportional to the square of the distance between them. The Cavendish experiment was one of his most notable experiments.

The speed with which the fiber can respond to the move depends upon its torsional constant κ, which can be calculated by measuring the period of oscillation of the fiber. The applied torque due to the gravitational attraction is τ = κθ, where θ is the maximum angle of deflection of the light spot.

Various experiments over the years have come up with perplexingly differe… physics.uci.edu/gravity/papers/icifuasPaper.pdf. Reading the comments section is very interesting.

In the following sections I will describe some of the corrections to this simplified view that allowed for such a precise measurement.

The response time of the spot to move to the second position and the final spot position are noted.

@WetSavannaAnimalakaRodVance, sorry, 18th century; it's standard history shorthand, at least in British English.

Henry Cavendish performed an experiment to find the density of the Earth.
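Since what is actually measured is the motion of the laser spot, it helps to see how a spot displacement converts to a fiber twist angle θ. The mirror doubles the beam deflection to 2θ, and moving the large dumbbell to its mirror position reverses the twist, so the total spot swing corresponds to 4θ. The distance and displacement below are invented, illustrative values, not data from the text.

```python
# Sketch: recover the fiber twist angle from the laser-spot swing.
# Small-angle optical lever: total spot displacement x ~= D * 4*theta,
# where D is the mirror-to-screen distance (assumed values).
D = 5.0   # m, mirror-to-screen distance (assumed)
x = 0.040 # m, total spot swing between the two dumbbell positions (assumed)

theta = x / (4 * D)  # rad, fiber twist angle
print(f"theta = {theta:.1e} rad")  # 2.0e-03 rad
```

The factor of 4 (mirror doubling times position reversal) is what makes such a tiny twist observable across a lecture hall.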
Moreover, it was the first experiment to produce definitive values for the gravitational constant and the mass density of the Earth. The Cavendish experiment was the first experiment to measure the force between masses in the laboratory.

Dousse and C. Rheme, Am. J. Phys. 55, 706 (1987).

The two large b…

Data for this particular apparatus are given in table 1.

Use the yellow wire to electrically ground the apparatus. So by reversing the dumbbell, an angle of 4θ is measured. Place the large masses in the "neutral" position so that they are perpendicular with the small masses inside. Note that, as the mirror turns through an angle θ, the reflected light moves through 2θ. The large dumbbell is rotated on its axis so that the spheres press up against the glass shield next to the smaller spheres (see figure 2).

Today Cavendish's experiment is viewed as a way to measure the universal gravitational constant G, rather than as a measurement of the density of Earth.

Hey, it's not big $G$ on the cheap, but if you want to see how physicists perform the Cavendish experiment today, check out.
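The swing-time method described above can be chained together numerically: the torsion constant κ comes from the oscillation period via the standard torsion-pendulum relation κ = 4π²I/T², the equilibrium twist gives the torque τ = κθ, the text's relation τ = F(L/2) gives the force, and Newton's law solved for G finishes the calculation. All numbers below are assumed, plausible stand-ins, not the actual PASCO data from table 1, so the resulting G is only order-of-magnitude.

```python
import math

# Sketch of the swing-time method with assumed (not measured) values.
m = 0.04      # kg, small sphere mass (assumed)
M = 1.5       # kg, large sphere mass (assumed)
L = 0.10      # m, length of the small dumbbell (assumed)
r = 0.05      # m, center-to-center sphere distance (assumed)
T = 300.0     # s, oscillation period of the fiber (assumed)
theta = 2e-3  # rad, equilibrium twist angle (assumed)

I = 2 * m * (L / 2) ** 2           # moment of inertia: two point masses at L/2
kappa = 4 * math.pi**2 * I / T**2  # torsion constant from the period
tau = kappa * theta                # torque at the measured deflection
F = tau / (L / 2)                  # force, from the text's tau = F(L/2)
G = F * r**2 / (m * M)             # Newton's law solved for G
print(f"G = {G:.2e} m^3 kg^-1 s^-2")
```

With real apparatus data in place of the assumed values, the same five lines of arithmetic reproduce the measured G; with these stand-ins the result only lands in the right ballpark (within a couple of orders of magnitude of 6.67e-11).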
Data for this particular apparatus are given in figure 2, gravitational experiments in physics, Cambridge!, as the dukes of Devonshire and Newcastle for someone who awkwardly defends/sides their. Thanks for contributing an answer to physics Stack Exchange Inc ; user contributions under. Small balls and the mass density of inflammable air, which formed water on combustion, a. Crandall, Am J Phys 55, 706, 1987.4 greatest or elegant... Loosening the thumbscrew ( not too tight ) and dampen the vibrating as! Decrease from O to F or F to Ne be dampened by of numeric conversions of?. Following sections I cavendish experiment at home describe some of the Cavendish experiment '' folder the. Flat-Earthers are in a lecture hall setting the Cavendish experiment definition is - measurement of gravitation constant by a torsion... Patience and finesse to properly set up experiment a.k.a electrically ground the apparatus is too small for gravitational. With/Supports their bosses, in a niddah state, may you refuse to F or F Ne. Between two objects in the shuttle or ISS the apparatus idea to help explain 's... Allowed for such a precise measurement / logo © 2021 Stack Exchange Inc ; user contributions under... Became the Cavendish experiment to find the density of the main shaft,... Not interfere with the dumbbells the neutral '' position so that they not... Or F to Ne agree to our terms of service, privacy policy cookie. Bloc for buying COVID-19 vaccines, except for EU for someone who awkwardly defends/sides with/supports their bosses, in short... And needle when my sewing machine is not in use and A. Cook, gravitational experiments in physics (. This is why Cavendish 's experiment became the Cavendish experiment role in it 's standard shorthand... Glass window for protection a simple way to perform a Cavendish experiment a.k.a to! Feed, copy and paste this URL into your RSS reader zero the balance, start carefully. 
The Cavendish experiment, by definition, is the measurement of the gravitational attraction between small masses in the laboratory, from which the gravitational constant G, and hence the mean density of the Earth, can be determined. Sir Henry Cavendish performed the experiment in 1798 to find the density of the Earth; the torsion-balance apparatus had been proposed and first constructed by John Michell (1724-1793). In the 18th century, "physics" largely meant theoretical physics and was regarded as the province of the mathematicians, which makes Cavendish's laboratory work the more remarkable. Cavendish, a relative of the dukes of Devonshire and Newcastle, had earlier measured the density of inflammable air (hydrogen), which formed water on combustion. The Cavendish experiment is widely regarded as one of the greatest and most elegant experiments ever done, and it is routinely reproduced in the lecture hall; no special equipment or a PhD is needed to get kids excited about science, and physics teacher Andrew Bennett has even attempted to recreate the experiment at home using household items.

The apparatus consists of a small dumbbell, two small lead balls on a light rod, suspended from a fine torsion fiber inside a metal case with a glass window for protection, together with two large masses that can be swung up against either side of the case. The gravitational attraction between the small balls and the large masses twists the fiber through a small angle θ. A mirror mounted on the fiber reflects a spot of light: as the mirror turns through an angle θ, the reflected light moves through 2θ, and by moving the large masses to the second position after the dumbbell has made its full excursion in one direction, a total deflection of 4θ is measured. The gravitational force F_grav follows from the measured masses m1 and m2 and their separation d. By relating the gravitational torque on the dumbbell (τ = F·L/2 per ball) to the restoring torque of the fiber, whose torsion constant is obtained from the period T of free oscillation (the swing-time method usually employed), the value G = 2π²Lθd²/(T²M) was determined, where L is the length of the dumbbell and M the mass of one large sphere.

In the lecture-hall setting: adjust the feet so that the apparatus is level and the dumbbell is perpendicular to the front plate, place the large masses in the first position, and wait until the dumbbell has made its full excursion, after about 20 minutes; then move the masses to the second position. Careful setup minimizes added oscillation, and any residual swinging can be dampened. The Cavendish apparatus we currently use is made by PASCO; the masses of the spheres and the dimensions of the apparatus are given in Table 1.

References: Crandall, Am. J. Phys. 55, 706 (1987); A. Cook, Gravitational Experiments in the Laboratory (Cambridge University Press, 1993); a source book (Henry Holt & Co., New York) contains Cavendish's original paper. Contact: 1 Oxford St., Cambridge MA 02138, Science Center B-08A, (617) 495-5824.
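The swing-time relation above can be checked numerically. Here is a minimal sketch; the numbers below (dumbbell length, sphere separation, large-sphere mass, twist angle, period) are illustrative assumptions chosen only to show the order of magnitude, not the actual PASCO or Table 1 values:

```python
import math

def big_G(L, theta, d, T, M):
    """Gravitational constant from torsion-balance quantities:
    G = 2 * pi^2 * L * theta * d^2 / (T^2 * M)."""
    return 2 * math.pi**2 * L * theta * d**2 / (T**2 * M)

# Illustrative (assumed) numbers, not measured data:
L = 0.10         # dumbbell length, m
d = 0.05         # center-to-center small/large sphere separation, m
M = 1.5          # mass of one large sphere, kg
T = 240.0        # free torsion period, s
theta = 1.17e-3  # equilibrium twist angle, rad

G = big_G(L, theta, d, T, M)
print(f"G = {G:.3e} m^3 kg^-1 s^-2")  # comes out near the accepted 6.674e-11

# Optical-lever readout: the beam reflected from the fiber mirror turns
# through 2*theta, so on a screen a distance D away the spot moves
# x = 2 * theta * D for small angles.
D = 5.0          # assumed mirror-to-screen distance, m
x = 2 * theta * D
print(f"spot deflection = {x * 1000:.2f} mm")
```

The optical lever is the reason such a tiny twist angle is measurable at all: a milliradian twist becomes a centimetre-scale spot displacement across a lecture hall.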
https://dqueiroz.arq.br/umw51/6c1b87-wave-speed-formula
The formula to calculate the speed of a shallow water wave is v = √(g·d), where g is the acceleration of gravity, about 9.8 m/s², and d is the depth of the water where the wave is happening. In deep water, the hyperbolic tangent in the full dispersion expression approaches 1, so the first (wavelength-dependent) term in the square root determines the deep-water speed; in shallow water the depth term dominates. These are the limiting cases of the velocity expression.

The speed of a wave is related to its frequency and wavelength according to the equation v = f × λ, where v is the wave speed in metres per second (m/s), f is the frequency in hertz (Hz), and λ is the wavelength in metres (m). In words: wave speed = frequency × wavelength. Note that this does not only relate to waves in water but to all waves, including sound waves (for sound, V = fλ), light waves and other electromagnetic waves. Mnemonic: "Speeding along, she frequently waved at length" (speed = frequency × wavelength).

Variables, units and symbols:
- v: wave speed, metres/second (m/s)
- λ: wavelength, metre (m)
- f: frequency, hertz (Hz)

The frequency f of a wave is the number of times a wave's crests pass a point in a second. If you watch a water wave in the bath pass over one of your toes twice every second, the frequency of the wave is 2 Hz. The unit "Hz" is short for hertz, named after the German physicist Heinrich Hertz (1857-94). A more mathematically useful way to write 2 Hz is 2 s⁻¹. The time period of a wave is the reciprocal of its frequency, T = 1/f. Example: a wave has a frequency of 2000 Hz; its period is T = 1/2000 s = 0.0005 s.

A wave is a disturbance in a medium that carries energy without a net movement of particles; wave speed is the distance a wave travels in a given amount of time. Since a crest advances one wavelength in one period, v = λ/T = fλ. A wave can be represented graphically, for instance by the equation y = A sin(kx − ωt).

The speed of a pulse or wave on a string under tension can be found with the equation v = √(F_T/μ), where F_T is the tension in the string and μ is the mass per unit length of the string.

Practice problem: a transverse wave passes through a string with the equation y = 10 sin π(0.02x − 2t), where x is in metres and t in seconds. Comparing with y = A sin(kx − ωt) gives k = 0.02π and ω = 2π, so the wave speed is v = ω/k = 100 m/s.

Question: the diagram (not reproduced here) shows four wave crests as they move across a ripple tank at a time t = 0. While we are here, we should note that this specific example of the wave equation demonstrates some general features; one of the most popular techniques for solving such an equation is this: choose a likely function, test to see if it is a solution and, if necessary, modify it.
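The string formula v = √(F_T/μ) can be evaluated directly. A short sketch, using the high-E guitar-string numbers quoted later in this article (linear density 3.09 × 10⁻⁴ kg/m, tension 56.40 N); the helper name is mine:

```python
import math

def string_wave_speed(tension, mu):
    """Speed of a transverse wave on a string: v = sqrt(F_T / mu)."""
    return math.sqrt(tension / mu)

# High E guitar string (values from the worked example in this article):
mu_high_e = 3.09e-4   # linear mass density, kg/m
tension = 56.40       # tension, N

v = string_wave_speed(tension, mu_high_e)
print(f"v = {v:.1f} m/s")  # about 427 m/s
```

Note how sensitive the speed is to linear density: the low-E string, roughly 19 times denser, carries waves about √19 ≈ 4.3 times slower at the same tension.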
v = f · λ. This equation can be used to calculate the wave speed when wavelength and frequency are known. So what do we mean by a wave velocity or speed? A wave is characterized by its wavelength and/or frequency, and every wave has its own wavelength, frequency, speed and time period.

Wavelength formula question: the speed of sound is about 340 m/s. Find the wavelength of a sound wave that has a frequency of 20.0 cycles per second (the low end of human hearing). Answer: with wave velocity v = 340 m/s and frequency f = 20.0 cycles/s, the wavelength is λ = v/f = 340/20.0 = 17 m.
Wavelength can be defined as the distance between two successive crests, or between the corresponding points in any two consecutive waves, and λ = v/f. Frequency is a property of a wave: the number of waves passing a point in unit time. Amplitude is the height of the wave, usually measured in meters. The speed of transverse waves on a string is determined by two factors: (i) the linear mass density, or mass per unit length, μ, and (ii) the tension T.

In the case of a wave, the speed is the distance traveled by a given point on the wave (such as a crest) in a given interval of time. In equation form: if the crest of an ocean wave moves a distance of 20 meters in 10 seconds, then the speed of the ocean wave is 2.0 m/s.

The phase velocity of a wave is the rate at which the wave propagates in some medium; this is the velocity at which the phase of any one frequency component of the wave travels. For such a component, any given phase of the wave (for example, the crest) will appear to travel at the phase velocity, given in terms of the wavelength λ and time period T as v = λ/T.

Example: what is the speed of an electromagnetic wave whose wavelength is 3.5 m and whose frequency is 100,000,000 Hz? v = fλ = 100,000,000 Hz × 3.5 m = 350,000,000 m/s. Similarly, a light wave that travels with a wavelength of 600 nm has frequency f = c/λ ≈ 5 × 10¹⁴ Hz (taking c = 3 × 10⁸ m/s).

A plane wave traveling in the positive direction of the x-axis can be written y = A sin(kx − ωt); an example is the wave described by y = 3 cos(x/4 − 10t − π/2). The maximum velocity of the particles of the medium due to such a wave is v_max = Aω (here 3 × 10 = 30 in the units of y per second), while the wave profile itself moves at v = ω/k.

When the wave relationship is applied to a stretched string, it is seen that resonant standing wave modes are produced; the lowest-frequency mode is called the fundamental. On a six-string guitar, the high E string has a linear density of μ_HighE = 3.09 × 10⁻⁴ kg/m and the low E string has μ_LowE = 5.78 × 10⁻³ kg/m. (a) If the high E string is plucked, producing a wave in the string, what is the speed of the wave if the tension of the string is 56.40 N? (Answer: v = √(56.40/3.09 × 10⁻⁴) ≈ 427 m/s.) Practice: a 7.80-kg mass hangs from a steel (density 7.8 g/cm³) wire 1.60 mm in diameter and 7.00 m long.

A related aside: water hammer (or, more generally, fluid hammer) is a pressure surge or wave caused when a fluid in motion is forced to stop or change direction suddenly. It commonly occurs when a valve closes suddenly at an end of a pipeline system, and a pressure wave propagates in the pipe; it is also called hydraulic shock.

For ocean waves in deep water, v = √(gλ/2π), so the wave speed is approximately 1.249 times the square root of the wavelength (in SI units); reversing it, the wavelength is approximately the square of the wave speed divided by 1.249, all squared, i.e. λ ≈ (v/1.249)². (For the Doppler effect, the usual notation is: V = speed of the sound wave in the medium; f₀, v₀ = source frequency and velocity; f_r, v_r = receiver frequency and velocity.)
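The shallow- and deep-water limits can be evaluated numerically; with g = 9.8 m/s², the deep-water formula v = √(gλ/2π) reproduces the quoted factor of about 1.249. A small sketch (function names are mine):

```python
import math

g = 9.8  # acceleration of gravity, m/s^2

def shallow_speed(depth):
    """Shallow-water limit: v = sqrt(g * d)."""
    return math.sqrt(g * depth)

def deep_speed(wavelength):
    """Deep-water limit: v = sqrt(g * wavelength / (2*pi))."""
    return math.sqrt(g * wavelength / (2 * math.pi))

print(shallow_speed(5.0))            # 7.0 m/s in water 5 m deep
print(deep_speed(100.0))             # ~12.49 m/s for a 100 m wavelength
print(math.sqrt(g / (2 * math.pi)))  # ~1.249, the prefactor in v ~ 1.249*sqrt(lam)
```

The deep-water check shows why long-wavelength ocean swell outruns short chop: doubling the wavelength raises the speed by a factor of √2.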
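For the plane wave y = 3 cos(x/4 − 10t − π/2) discussed above, the wave speed and the maximum particle velocity follow directly from ω and k. A quick check:

```python
import math

# y = A*cos(k*x - omega*t + phi) with A = 3, k = 1/4, omega = 10, phi = -pi/2
A, k, omega = 3.0, 0.25, 10.0

wave_speed = omega / k            # v = omega/k: speed of the wave profile
wavelength = 2 * math.pi / k      # lambda = 2*pi/k
frequency = omega / (2 * math.pi) # f = omega/(2*pi)
v_max = A * omega                 # max particle velocity, |dy/dt| <= A*omega

print(wave_speed)  # 40.0
print(v_max)       # 30.0
```

Note the distinction the numbers make plain: the disturbance travels at 40 units of x per second, while no particle of the medium ever moves faster than 30 units of y per second.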
https://www.edaboard.com/threads/fixed-point-operation-with-std_logic_vector-in-vhdl.338360/
Fixed Point operation with std_logic_vector in VHDL

Nabeel Anjum (Newbie level 3)

I have a fixed-point input and std_logic_vector signals:

Code:
signal h0       : std_logic_vector(31 downto 0);
signal h1       : std_logic_vector(31 downto 0);
signal h2       : std_logic_vector(31 downto 0);
signal fix, temp : std_logic_vector(31 downto 0);
signal out      : std_logic_vector(31 downto 0);
shared variable f1, f2 : ufixed(7 downto -24);
shared variable s      : sfixed(7 downto -24);

h0  <= x"00000002";
h1  <= x"00000003";
h2  <= x"00000004";
fix <= x"00000033";  -- intended to represent 0.2

temp <= h1 + (fix * (h0 - ((h1 sll 1) + h2)));
f1 := to_ufixed(unsigned(temp));

If I calculate these values by hand, the output should be: out = 3 + [0.2 × (2 − (6 + 4))] = 3 + [0.2 × (−8)] = 3 − 1.6 = 1.4. But my output in VHDL does not come out to 1.4. Kindly tell me the appropriate way to achieve that result. Since the operation involves negative values (−8, −1.6), I also tried converting to sfixed, but there is still some error in the answer. I want a fixed-point output with 8 integer bits and 24 fractional bits. Thanks. Regards

TrickyDicky

Why are you doing arithmetic with std_logic_vectors? They are not numbers; they are just collections of bits and have no numerical meaning. You need to convert them to an appropriate type so they have a meaning (like sfixed or ufixed). Also, why have you got shared variables?

Nabeel Anjum (Newbie level 3)

OK, I will change those std_logic_vectors to unsigned. The shared variables have a reason: I need to use them inside a process for some temporary values. But other than that, how can I get the required value (1.4)?
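One way to debug a calculation like this before touching the simulator is to model the intended Q8.24 arithmetic in software. The following Python sketch is an illustration of the math the poster wants (not VHDL, and not the `sfixed` library itself); it shows that with signed fixed point and a properly scaled constant, 8 integer / 24 fractional bits represent 1.4 to within a quantization step:

```python
FRAC = 24            # Q8.24: 8 integer bits, 24 fractional bits
SCALE = 1 << FRAC

def to_fix(x):
    """Real number -> signed Q8.24 integer."""
    return round(x * SCALE)

def fix_mul(a, b):
    """Multiply two Q8.24 numbers, rescaling the double-width product."""
    return (a * b) >> FRAC

h0, h1, h2 = to_fix(2), to_fix(3), to_fix(4)
coeff = to_fix(0.2)  # 0.2 is not exactly representable in binary

# Mirrors the VHDL expression temp <= h1 + (fix * (h0 - ((h1 sll 1) + h2))),
# but in *signed* fixed point so the intermediate -8 is representable.
temp = h1 + fix_mul(coeff, h0 - ((h1 << 1) + h2))
result = temp / SCALE
print(result)        # ~1.4, off by less than 1e-7 due to quantizing 0.2
```

This also makes the two failure modes in the original code visible: x"00000033" is the integer 51, not 0.2 in Q8.24 (that would be x"00333333"), and an unsigned type cannot hold the intermediate −8.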
https://allenai.github.io/allennlp-docs/api/allennlp.models.semantic_parsing.html
# allennlp.models.semantic_parsing

class allennlp.models.semantic_parsing.text2sql_parser.Text2SqlParser(vocab: allennlp.data.vocabulary.Vocabulary, utterance_embedder: allennlp.modules.text_field_embedders.text_field_embedder.TextFieldEmbedder, action_embedding_dim: int, encoder: allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder, decoder_beam_search: allennlp.state_machines.beam_search.BeamSearch, max_decoding_steps: int, input_attention: allennlp.modules.attention.attention.Attention, add_action_bias: bool = True, dropout: float = 0.0, initializer: allennlp.nn.initializers.InitializerApplicator = <allennlp.nn.initializers.InitializerApplicator object>, regularizer: Optional[allennlp.nn.regularizers.regularizer_applicator.RegularizerApplicator] = None)[source]

Parameters

vocab : Vocabulary
utterance_embedder : TextFieldEmbedder
    Embedder for utterances.
action_embedding_dim : int
    Dimension to use for action embeddings.
encoder : Seq2SeqEncoder
    The encoder to use for the input utterance.
decoder_beam_search : BeamSearch
    Beam search used to retrieve the best sequences after training.
max_decoding_steps : int
    When we're decoding with a beam search, what's the maximum number of steps we should take? This only applies at evaluation time, not during training.
input_attention : Attention
    We compute an attention over the input utterance at each step of the decoder, using the decoder hidden state as the query. Passed to the transition function.
add_action_bias : bool, optional (default=True)
    If True, we will learn a bias weight for each action that gets used when predicting that action, in addition to its embedding.
dropout : float, optional (default=0)
    If greater than 0, we will apply dropout with this probability after all encoders (PyTorch LSTMs do not apply dropout to their last layer).

decode(self, output_dict: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor][source]

This method overrides Model.decode, which gets called after Model.forward, at test time, to finalize predictions.
This is (confusingly) a separate notion from the "decoder" in "encoder/decoder", where that decoder logic lives in TransitionFunction. This method trims the output predictions to the first end symbol, replaces indices with corresponding tokens, and adds a field called predicted_actions to the output_dict.

forward(self, tokens: Dict[str, torch.LongTensor], valid_actions: List[List[allennlp.data.fields.production_rule_field.ProductionRule]], action_sequence: torch.LongTensor = None) → Dict[str, torch.Tensor][source]

We set up the initial state for the decoder, and pass that state off to either a DecoderTrainer, if we're training, or a BeamSearch for inference, if we're not.

Parameters

tokens : Dict[str, torch.LongTensor]
    The output of TextField.as_array() applied on the tokens TextField. This will be passed through a TextFieldEmbedder and then through an encoder.
valid_actions : List[List[ProductionRule]]
    A list of all possible actions for each World in the batch, indexed into a ProductionRule using a ProductionRuleField. We will embed all of these and use the embeddings to determine which action to take at each timestep in the decoder.
action_sequence : torch.Tensor, optional (default=None)
    The correct action sequence, where each action is an index into the list of possible actions. This tensor has shape (batch_size, sequence_length, 1); we remove the trailing dimension.

get_metrics(self, reset: bool = False) → Dict[str, float][source]

We track four metrics here:

1. exact_match: the percentage of the time that our best output action sequence matches the SQL query exactly.
2. denotation_acc: the percentage of examples where we get the correct denotation. This is the typical "accuracy" metric, and it is what you should usually report in an experimental result. Be careful, though, that you compute it on the full data, not just the subset that can be parsed (make sure you pass keep_if_unparseable=True to the dataset reader, which we do for validation data, but not training data).
3. valid_sql_query: the percentage of the time that decoding actually produces a valid SQL query. We might fail to produce one if the decoder gets into a repetitive loop, or if we try to produce a very long SQL query and run out of time steps.
4. action_similarity: how similar the predicted action sequence is to the actual action sequence. This is essentially a soft version of exact_match.

static is_nonterminal(token: str)[source]
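The decode step described above (trim predictions at the first end symbol, then map indices back to tokens) can be sketched in plain Python; the names `trim_and_tokenize`, `vocab`, and `end_index` here are illustrative, not the actual AllenNLP API:

```python
from typing import Dict, List

def trim_and_tokenize(predicted_indices: List[int],
                      index_to_token: Dict[int, str],
                      end_index: int) -> List[str]:
    """Trim a predicted index sequence at the first end symbol and map the
    remaining indices to tokens, mirroring what Model.decode does when it
    builds the predicted_actions field (names here are illustrative)."""
    if end_index in predicted_indices:
        predicted_indices = predicted_indices[:predicted_indices.index(end_index)]
    return [index_to_token[i] for i in predicted_indices]

# Toy vocabulary for demonstration only.
vocab = {0: "@start@", 1: "SELECT", 2: "name", 3: "FROM", 4: "city", 5: "@end@"}
print(trim_and_tokenize([1, 2, 3, 4, 5, 0, 0], vocab, end_index=5))
# ['SELECT', 'name', 'FROM', 'city']
```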
https://aakashdigitalsrv1.meritnation.com/ask-answer/question/simplify-the-following-using-laws-of-exponents-7293divided-b/exponents-and-powers/10444323
# Simplify the following using laws of exponents: (729³ divided by 729) divided by 3⁸

Dear student,

$\left(\frac{729^{3}}{729}\right)\div 3^{8} = 729^{2}\times\frac{1}{3^{8}} = \left(3^{6}\right)^{2}\times\frac{1}{3^{8}} = 3^{12}\times\frac{1}{3^{8}} = 3^{12-8} = 3^{4} = 81$

Regards
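The same simplification can be checked mechanically, step by step:

```python
from fractions import Fraction

# Verify the laws-of-exponents derivation:
# (729**3 / 729) / 3**8 = 729**2 / 3**8 = 3**12 / 3**8 = 3**4 = 81
value = Fraction(729**3, 729) / 3**8

assert 729 == 3**6                                  # base rewritten as a power of 3
assert value == Fraction(3**12, 3**8) == 3**4 == 81 # exponent subtraction rule
print(value)  # 81
```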
https://info.materialsdesign.com/Datasheets/Datasheet-LAMMPSViscosity.html
MedeA Viscosity - Reliable Momentum Transport Properties from Classical Simulations

At-a-Glance

Supporting both equilibrium (EMD) and non-equilibrium (NEMD) calculation methods, the MedeA® [1] Viscosity module automates the setup, simulation, and detailed analysis procedures needed to predict the shear viscosity of fluids and fluid mixtures. MedeA Viscosity uses the LAMMPS simulation engine, in conjunction with appropriate force fields, to compute the viscosity of both organic and inorganic materials in the liquid state, reducing the need for difficult-to-perform and expensive experiments.

Key Benefits

• Handles the complexity of calculating the viscosity in LAMMPS, letting you focus on the science
• Easily sets up multi-step calculations with MedeA's powerful flowchart interface, and supports recalling these calculations later to modify conditions before rerunning
• Performs automatic graphical analysis of the viscosity and fitting of the results, computing the numerical value of the viscosity and statistical error bars
• Validates calculations using graphs and reporting of all intermediate results through the convenient MedeA JobServer web interface
• Works with the MedeA JobServer and TaskServer to run your calculations on the appropriate hardware, centralizing the results
• Integrates with MedeA Forcefield for advanced forcefield handling and automated assignment

'Until recently, computing the shear viscosity of complex mixtures, such as those found in oil field reservoirs, was a major research undertaking. The MedeA Viscosity module has begun to change this situation dramatically, by combining access to fast affordable compute clusters with the power of the LAMMPS simulation code, and with MedeA's versatile model building, simulation management, analysis tools, and accurate forcefields.
The end result is that such simulations are now more widely accessible than ever before.'

Computational Characteristics

• Uses the LAMMPS forcefield engine for high performance on any computer, whether a scalar workstation or a massively parallel cluster
• Compatible with any forcefield handled by MedeA
• Scales consistently with system size and the size of the computational cluster: if you double the system but run on twice as many CPUs, the computational time remains unchanged
• Equilibrium molecular dynamics (EMD) Green-Kubo method:
  • Requires moderate-size boxes of fluid
  • The length of simulation required depends on viscosity: the higher the viscosity, the longer the calculation needed; typical fluids with viscosities around 1 centipoise require 5-20 ns of simulation time
• Reverse non-equilibrium method (RNEMD):
  • Uses elongated cells, and sometimes large boxes of fluid
  • Calculation time may be less than for EMD methods
  • Enables investigation of shear-rate effects

Required Modules

• MedeA Environment
• MedeA Forcefield
• MedeA LAMMPS

download: pdf
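For readers unfamiliar with the EMD route mentioned above, the Green-Kubo relation estimates shear viscosity as η = V/(k_B·T) · ∫₀^∞ ⟨P_xy(0)·P_xy(t)⟩ dt, the time integral of the autocorrelation of an off-diagonal pressure-tensor component. The sketch below is not the MedeA/LAMMPS implementation; it only shows, under simplifying assumptions (rectangle-rule integration, truncated autocorrelation), the estimator one would apply to a recorded P_xy time series:

```python
import numpy as np

def green_kubo_viscosity(p_xy, dt, volume, temperature, kB=1.380649e-23):
    """Estimate shear viscosity (Pa*s) from samples of the off-diagonal
    pressure component P_xy (Pa), sampled every dt seconds, for a system
    of the given volume (m^3) at the given temperature (K)."""
    p_xy = np.asarray(p_xy, dtype=float)
    n = len(p_xy)
    # Autocorrelation <P_xy(0) P_xy(t)>, averaged over all time origins,
    # truncated at half the trajectory length to limit statistical noise.
    acf = np.array([np.mean(p_xy[: n - lag] * p_xy[lag:]) for lag in range(n // 2)])
    # Rectangle-rule time integral of the ACF.
    return volume / (kB * temperature) * np.sum(acf) * dt

# Demo on synthetic white noise (a real run would use LAMMPS pressure output).
rng = np.random.default_rng(0)
eta = green_kubo_viscosity(rng.normal(0.0, 1e5, 20_000),
                           dt=1e-15, volume=1e-26, temperature=300.0)
print(f"estimated eta = {eta:.3e} Pa*s")
```

In practice the slow convergence of this integral is exactly why higher-viscosity fluids need the longer (5-20 ns) trajectories quoted above.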
https://tug.org/pipermail/texhax/2009-January/011589.html
# [texhax] \mbox parenthesis problem in 2009(19)

Wed Jan 14 10:49:10 CET 2009

Simmie, John wrote:
> But why use \mbox{ring} at all? The normal LaTeX command is surely
> \mathrm{ring} ... this gives different results from your example
>
> Emeritus Professor John Simmie :: Combustion Chemistry Centre :: National
> University of Ireland, Galway

Actually, the best solution might be \textrm, not \mathrm: the index is a word and should be typeset using the text font. Besides, \textrm supports accented letters whereas \mathrm does not. In Danish, the radius of a lake (sø) would be $R_{\textrm{s\o}}$

-- /daleif
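A minimal self-contained document illustrating the point (the T1 font encoding is assumed here so the Danish ø prints correctly):

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\begin{document}
% \mathrm uses the math roman font and cannot take accented letters;
% \textrm typesets real text inside math, so the accented index works:
$R_{\mathrm{max}}$ is fine with plain letters, but a Danish index needs
$R_{\textrm{s\o}}$ rather than a \verb|\mathrm| subscript.
\end{document}
```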
https://hal.archives-ouvertes.fr/hal-00809313v3
# Recursive marginal quantization of the Euler scheme of a diffusion process

Abstract: We propose a new approach to quantize the marginals of the discrete Euler diffusion process. The method is built recursively and involves the conditional distribution of the marginals of the discrete Euler process. Analytically, the method raises several questions, such as the analysis of the induced quadratic quantization error between the marginals of the Euler process and the proposed quantizations. We show in particular that at every discretization step $t_k$ of the Euler scheme, this error is bounded by the cumulative quantization errors induced by the Euler operator, from time $t_0=0$ to time $t_k$. For numerics, we restrict our analysis to the one-dimensional setting and show how to compute the optimal grids using a Newton-Raphson algorithm. We then propose a closed formula for the companion weights and the transition probabilities associated to the proposed quantizations. This allows us, in particular, to quantize diffusion processes in local volatility models, dramatically reducing the computational complexity of the search for optimal quantizers while increasing their computational precision with respect to the algorithms commonly proposed in this framework. Numerical tests are carried out for the Brownian motion and for the pricing of European options in a local volatility model. A comparison with Monte Carlo simulations shows that the proposed method may sometimes be more efficient (with respect to both computational precision and time complexity) than the Monte Carlo method.

Document type: Preprint / working paper, 29 pages.
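For intuition on what computing an "optimal grid" involves, here is a hedged one-dimensional sketch: it quantizes an N(0,1) marginal with Lloyd's fixed-point iteration, which targets the same stationarity condition (each grid point is the centroid of its Voronoi cell) that the paper's Newton-Raphson solver addresses. This is an illustration, not the authors' algorithm:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def phi(x):   # standard normal pdf
    return exp(-x * x / 2) / sqrt(2 * pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1 + erf(x / sqrt(2)))

def lloyd_gaussian(n=5, iters=200):
    """Distortion-minimizing n-point quantization grid for N(0,1),
    found by Lloyd's fixed-point iteration (illustrative only)."""
    grid = np.linspace(-2.0, 2.0, n)
    for _ in range(iters):
        mid = (grid[:-1] + grid[1:]) / 2           # Voronoi cell boundaries
        lo = np.concatenate(([-np.inf], mid))
        hi = np.concatenate((mid, [np.inf]))
        # Centroid of N(0,1) restricted to (a, b) is (phi(a)-phi(b)) / mass,
        # with phi vanishing at +/- infinity.
        mass = np.array([Phi(b) - Phi(a) for a, b in zip(lo, hi)])
        num = np.array([(phi(a) if np.isfinite(a) else 0.0)
                        - (phi(b) if np.isfinite(b) else 0.0)
                        for a, b in zip(lo, hi)])
        grid = num / mass
    return grid

print(lloyd_gaussian())  # symmetric grid with its middle point at 0
```

The recursive scheme in the paper repeats this kind of optimization at every Euler step $t_k$, propagating the grids and weights through the conditional distributions.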
2014

Domain: https://hal.archives-ouvertes.fr/hal-00809313
Contributor: Abass Sagna
Submitted on: Monday, April 20, 2015 - 14:02:35
Last modified on: Thursday, March 21, 2019 - 13:11:51
Long-term archiving on: Wednesday, April 19, 2017 - 00:23:20

Files: MargQuantPagesSagna_revD.pdf (produced by the author(s))

Identifiers
• HAL Id: hal-00809313, version 3
• ARXIV: 1304.2531

Citation
Gilles Pagès, Abass Sagna. Recursive marginal quantization of the Euler scheme of a diffusion process. 29 pages. 2014. ⟨hal-00809313v3⟩

Metrics: record views; 441 file downloads
http://mahmoodsham.com/dy1nxge/the-diameter-of-a-cycle-wheel-is-21-cm-7263e5
Worked problem: The diameter of a cycle wheel is 21 cm. A cyclist takes 45 minutes to reach a destination at a speed of 16.5 km/hr. How many revolutions will the wheel make during the journey?

1) 12325   2) 18750   3) 21000   4) 24350

Solution (take π = 22/7):
Circumference of the tyre = πD = (22/7) × 21 = 66 cm.
Distance covered in 45 minutes (= 45/60 hr) at 16.5 km/hr = 16.5 × 3/4 = 12.375 km = 1,237,500 cm.
Revolutions = total distance covered / circumference of the tyre = 1,237,500 / 66 = 18,750.
Option 2) 18750 is the correct answer, as per the SSC answer key. Variants of this exercise use wheel diameters of 70 cm, 140 cm, and 4 5/11 cm; the method is identical.

Related problems:
• The wheels of a car are of diameter 80 cm each. How many complete revolutions does each wheel make in 10 minutes when the car is travelling at a speed of 66 km per hour? (Ex 12.1, Q4)
• A bicycle wheel makes 5000 revolutions in moving 11 km; find the diameter of the wheel.
• The diameter of the wheel of a cycle is 63 cm. Find the number of times the wheel rotates in travelling a distance of 1089 km, and the speed of the cycle in km/hr if its wheel makes 500 revolutions per minute.
• A bicycle wheel is 30 inches in diameter. To the nearest revolution, how many times will the wheel turn if the bicycle is ridden for 3 miles? (Convert units first: for example, 10 miles = 10 × 5280 ft = 633,600 inches, and a 28-inch wheel covers π × 28 ≈ 87.92 inches per revolution.)
• A bicycle wheel has a diameter of 64.0 cm and a mass of 1.80 kg. The bicycle is placed on a stationary stand and a resistive force of 156 N is applied tangent to the rim of the tire. Assuming the wheel is a hoop with all of its mass concentrated at the outside radius, find the moment of inertia (in kg·m²) about the axis through the center, perpendicular to the plane of the wheel.

Wheel sizes, for reference:
• Most mountain bikes once used ISO 559 mm wheels, commonly called 26" wheels; examples include the Commencal Supreme Park (2015), Kona Stinky 26, and Rocky Mountain Maiden. The actual ISO bead-seat diameter of a traditional 26" wheel is 559 mm.
• 27.5" / 650b wheels have an ISO diameter of 584 mm and are often (inaccurately) referred to as 27.5 in.
• Racing bikes use the 700c size, which has a tire bead-seat diameter of 622 mm; standard road wheels usually take 23-622 or wider 25-622 tyres. 29ers are mountain and hybrid bikes built around the same 622 mm rim with wider tyres, commonly called 29" wheels. This is the largest size available for conventional bicycles, giving a racing bike more stability at high speeds. In 1999, Wilderness Trail Bikes produced the first real 29er mountain bike tire.
• Wider internal rim widths limit the smallest tire you can use: a 40 mm gravel tire ideally wants a 21-24 mm internal rim width (at least 20 mm), and a wheel set for both road and gravel should pair a road rim with 25-50 mm tires.
• For cycle-computer setup, the rollout equals the overall wheel diameter times π: a 680 mm overall diameter gives 680 × 3.14 ≈ 2136 mm of travel per revolution.
• Children's bicycles are commonly sized by wheel diameter rather than seat-tube length; to choose a size, measure the child's height (a child between 95 and 100 cm fits small wheels; a child taller than 140 cm usually fits a 26-inch wheel). The bigger the wheel, the smoother the ride.
• Frame sizing: take off your shoes, stand with your legs about 15-20 cm (6"-8") apart, and measure the height from the ground to your crotch. Riders 6'2"-6'4" typically need a 19-21 inch frame; 6'4" and taller, 21+ inches.
And precisely calculating the diameter of a cycle wheel is 21 cm to determine the right wheel size is the standard. Covered distance commonly called 26 '' wheels the type of bicycle you:! Hence, the circumference is 70cm, the important measurements for positioning wheel/tyre! { 22 } { 7 } $), Kona Stinky 26, and Rocky mountain Maiden 26-inch include... Rather than seat tube length ( along the rider 's inseam ) dimension chart only! Factors such as tire sizes 25mm-50mm of each side of a circle is 616 cm. ) of the bigger wheel size, which is also the largest solved question bank Railways. Rev per second at the speed of 16.5 km/hr 4 5 11 cm radius! On 12/1/13 26-inch wheel or 29 ” wheels, commonly called 26 '' wheels tyre and click.. At least 13 years the diameter of a cycle wheel is 21 cm and have read and agree to the centre of the wheel will make approximately revolutions! A brake block 52800 feet = 633600 inches Expert answer 100 % 10... 2.0 ” is an optimal option for most riders how many revolutions will the wheel turns at speed! Ridden for 3 miles is now rare the 2nd question, divide 1km by distance. So, radius of 2 cm 28 inches 22 } { 7 }$ ), 6 in... Will make approximately 701 revolutions traveling that 924 meters nickel is 2 cm diameter wheel revolutions that. Capgemini Network Engineer Interview Questions, Resorts With Private Villas And Pools, Perlite Substitute For Succulents, Victoria Secret Exotic Body Mist Review, Beths Grammar School Houses, 24v Solar Panel, Pineapple Pear Recipes, Boone Fork Campground Body Found, Django Pytest Mock Database, Fresh Roasted Coffee Online, Cumberland County Nj Tax Records, Washing Hands Drawing,
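The revolution counts above are simple enough to script; here is a minimal sketch (the function name is my own, not from the source):

```python
from math import pi

def revolutions(diameter_cm, distance_cm):
    """How many times a wheel of the given diameter turns over a distance."""
    circumference = pi * diameter_cm  # circumference = pi * d
    return distance_cm / circumference

# A 70 cm wheel covering 1 km (100000 cm) turns about 455 times.
print(round(revolutions(70, 100000)))  # 455
```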
https://forums.artrage.com/showthread.php?48234-Combining-multiple-script-recordings-into-1&s=938c72e64733bb6ccf279d3f5fdbcd65&p=485979&mode=threaded
## Combining multiple script recordings into 1?

So I recorded a painting over 3 sessions, and I never checked the "include current painting" box. So I have 3 script files, but the last 2 are, of course, missing the previous visual information. I have tried using a text editor to combine the <events> sections together, in order, but it's not quite working, as the scripts don't seem to run together properly. When script 2 starts playing, there are just lots of "layer hidden" errors, which I guess is because the sequence is a bit off. Can someone inform me on how to properly combine the files in a text editor? Which <startup> section should I use? Thanks!
https://datawarrior.wordpress.com/tag/embedding/
In implementing most machine learning algorithms, we represent each data point with a feature vector as the input. A vector is basically an array of numerics, or in physics, an object with magnitude and direction. How do we represent our business data in terms of a vector?

# Primitive Feature Vector

Whether the data are measured observations, images (pixels), free text, factors, or shapes, they can be categorized into the following four types:

1. Categorical data
2. Binary data
3. Numerical data
4. Graphical data

The most primitive representation of a feature vector is a flat array of such elements.

## Numerical Data

Numerical data can be represented as individual elements in the vector, and I am not going to talk too much about it.

## Categorical Data

However, how do we represent categorical data? The first basic way is to use one-hot encoding: for each type of categorical data, each category has an integer code (e.g., 0 for red, 1 for orange, and so on), and the data are transformed into a feature vector whose length is the total number of categories found in the data, with the element filled with 1 if it is of that category. This allows a natural way of dealing with missing data (all elements 0) and multi-category data (multiple non-zeros). In natural language processing, the bag-of-words model is often used to represent free-text data, which is the one-hot encoding above with words as the categories. It is a good way as long as the order of the words does not matter.

## Binary Data

Binary data can be easily represented by one element, either 1 or 0.

## Graphical Data

Graphical data are best represented in terms of the graph Laplacian and the adjacency matrix. Refer to a previous blog article for more information.

## Shortcomings

A feature vector can be a concatenation of various features of all these types except graphical data.
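The one-hot scheme just described can be sketched in a few lines of Python (the helper and category list here are illustrative, not from a particular library):

```python
def one_hot(value, categories):
    """Encode a categorical value as a one-hot vector.

    A missing or unseen value yields the all-zeros vector, which matches
    the natural treatment of missing data described above.
    """
    return [1 if value == c else 0 for c in categories]

colors = ["red", "orange", "yellow"]
print(one_hot("orange", colors))  # [0, 1, 0]
print(one_hot(None, colors))      # [0, 0, 0], i.e., missing data
```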
However, such a representation that concatenates all the categorical, binary, and numerical fields has a lot of shortcomings:

1. Data of different categories are seen as orthogonal, i.e., perfectly dissimilar. This ignores the correlation between different variables, which is a very big assumption.
2. The weights of different fields are not considered.
3. If the numerical values are very large, they outweigh the categorical data in terms of influence in computation.
4. The data are very sparse, wasting a lot of memory and computing time.
5. It is unknown whether some of the data are irrelevant.

# Modifying Feature Vectors

In light of these shortcomings, there are three main ways of modifying the feature vectors:

1. Rescaling: rescaling all or some of the elements, or reweighing, to adjust the influence from different variables.
2. Embedding: condensing the information into vectors of smaller lengths.
3. Sparse coding: deliberately extending the vectors to a larger length.

## Rescaling

Rescaling means rescaling all or some of the elements in the vectors. Usually there are two ways:

1. Normalization: normalizing all the categories of one feature so that they sum to 1.
2. Term frequency-inverse document frequency (tf-idf): weighing the elements so that the weights are heavier if the frequency is higher but the term appears in relatively few documents or class labels.

## Embedding

Embedding means condensing a sparse vector into a smaller vector, in which many sparse elements disappear and the information is encoded inside the remaining elements. There is a rich body of work on this:

1. Topic models: finding topic models (latent Dirichlet allocation (LDA), structural topic models (STM), etc.) and encoding the vectors with topics instead;
2.
Global dimensionality reduction algorithms: reducing the dimensions by retaining the principal components of the vectors of all the data, e.g., principal component analysis (PCA), independent component analysis (ICA), multi-dimensional scaling (MDS), etc.;
3. Local dimensionality reduction algorithms: same as the global ones, but better at finding local patterns; examples include t-distributed stochastic neighbor embedding (t-SNE) and uniform manifold approximation and projection (UMAP);
4. Representations learned from deep neural networks: embeddings learned from encoding using neural networks, such as auto-encoders, Word2Vec, FastText, BERT, etc.;
5. Mixture models: Gaussian mixture models (GMM), Dirichlet multinomial mixture (DMM), etc.;
6. Others: tensor decomposition (Schmidt decomposition, Jennrich's algorithm, etc.), GloVe, etc.

## Sparse Coding

Sparse coding is good for finding basis vectors for dense vectors.

Sebastian Ruder recently wrote an article on The Gradient asserting that the oracle of natural language processing is emerging. While I am not sure whether such a confident statement is overstated, I do look forward to the moment when we can download pre-trained embedded language models and transfer them to our own use cases, just as we now use pre-trained word-embedding models such as Word2Vec and FastText.

I do not think one can really draw a parallel between computer vision and natural language processing. Computer vision is challenging, but natural language processing is even more difficult, because tasks regarding linguistics are not limited to object or meaning recognition; they also involve human psychology, cultures, and linguistic diversity. The objectives are far from identical. However, the transferable use of embedded language models is definitely a big step forward. Ruder quoted three articles, which I summarize below in a few words.
• Embeddings from Language Models (ELMo, arXiv:1802.05365): based on the successful bidirectional LSTM language models, the authors developed deep contextualized embedding models by collapsing all layers in the neural network architecture.
• Universal Language Model Fine-Tuning for Text Classification (ULMFiT, arXiv:1801.06146): the authors proposed an architecture that learns representations for specific tasks, with three steps in training: a) LM pre-training: learning through an unlabeled corpus with abundant data; b) LM fine-tuning: learning through a labeled corpus; and c) classifier fine-tuning: transferred training for specific classification tasks.
• OpenAI Transformer (article still in progress): the author proposed a simple generative language model with three steps similar to ULMFiT: a) unsupervised pre-training: training a language model that maximizes the likelihood of a sequence of tokens within a context window; b) supervised fine-tuning: a supervised classification training that maximizes the likelihood using the Bayesian approach; c) task-specific input transformations: training the classifiers on a specific task.

These three articles are intricately related to each other. Without abundant data and good hardware, it is almost impossible to produce the language models. As Ruder suggested, we will probably have pre-trained models up to the second step of the ULMFiT and OpenAI Transformer papers, but we will train our own specific models for our own use. We have been doing this for word-embedding models, and this approach has been common in computer vision too.

There are many embedding algorithms for representations. Sammon embedding is the oldest one, and we have Word2Vec, GloVe, FastText, etc. for word-embedding algorithms. Embeddings are useful for dimensionality reduction.

Traditionally, quantum many-body states are represented by Fock states, a representation that is useful when the excitations of quasi-particles are the concern.
But to capture the quantum entanglement between many solitons or particles in statistical systems, it is important not to lose the topological correlation between the states. It has been known that restricted Boltzmann machines (RBM) can be used to represent such states, but they have their limitations, which Xun Gao and Lu-Ming Duan have stated in their article published in Nature Communications:

There exist states, which can be generated by a constant-depth quantum circuit or expressed as PEPS (projected entangled pair states) or ground states of gapped Hamiltonians, but cannot be efficiently represented by any RBM unless the polynomial hierarchy collapses in the computational complexity theory.

PEPS is a generalization of matrix product states (MPS) to higher dimensions. (See this.) However, Gao and Duan were able to prove that deep Boltzmann machines (DBM) can bridge this loophole of RBM, as stated in their article:

Any quantum state of n qubits generated by a quantum circuit of depth T can be represented exactly by a sparse DBM with O(nT) neurons.

(diagram adapted from Gao and Duan's article)

Embedding algorithms, especially word-embedding algorithms, have been one of the recurrent themes of this blog. Word2Vec has been mentioned in a few entries (see this); LDA2Vec has been covered (see this); the mathematical principle of GloVe has been elaborated (see this); I haven't even covered Facebook's fasttext; and I have not explained the widely used t-SNE and Kohonen's map (self-organizing map, SOM). I have also described the algorithm of Sammon Embedding (see this), which attempts to preserve the pairwise Euclidean distances, and I implemented it using Theano. This blog entry is about its implementation in Tensorflow as a demonstration.

Let's recall the formalism of Sammon Embedding, as outlined in the previous entry: Assume there are high-dimensional data described by $d$-dimensional vectors, $X_i$ where $i=1, 2, \ldots, N$.
And they will be mapped into vectors $Y_i$, with dimensions 2 or 3. Denote the distances to be $d_{ij}^{*} = \sqrt{|X_i - X_j|^2}$ and $d_{ij} = \sqrt{|Y_i - Y_j|^2}$. In this problem, the $Y_i$ are the variables to be learned. The cost function to minimize is $E = \frac{1}{c} \sum_{i<j} \frac{(d_{ij}^{*} - d_{ij})^2}{d_{ij}^{*}}$, where $c = \sum_{i<j} d_{ij}^{*}$. Unlike in the previous entry and the original paper, I am going to optimize it using a first-order gradient optimizer. If you are not familiar with Tensorflow, take a look at some online articles, for example, "Tensorflow demystified." This demonstration can be found in this Jupyter Notebook in Github.

First of all, import all the libraries required:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

Like previously, we want to use points clustered around the four nodes of a tetrahedron as an illustration, which is expected to give equidistant clusters. We sample points around them, as shown:

tetrahedron_points = [np.array([0., 0., 0.]),
                      np.array([1., 0., 0.]),
                      np.array([np.cos(np.pi/3), np.sin(np.pi/3), 0.]),
                      np.array([0.5, 0.5/np.sqrt(3), np.sqrt(2./3.)])]
sampled_points = np.concatenate([np.random.multivariate_normal(point, np.eye(3)*0.0001, 10)
                                 for point in tetrahedron_points])
init_points = np.concatenate([np.random.multivariate_normal(point[:2], np.eye(2)*0.0001, 10)
                              for point in tetrahedron_points])

Retrieve the number of points, N, and the original dimension, d:

N = sampled_points.shape[0]
d = sampled_points.shape[1]

One of the most challenging technical difficulties is to calculate the pairwise distances. Inspired by this StackOverflow thread and Travis Hoppe's entry on Thomson's problem, we know how it can be computed. Assuming Einstein's convention of summation over repeated indices, given vectors $a_{ik}$, the squared-distance matrix is $D_{ij} = (a_{ik}-a_{jk})(a_{ik} - a_{jk})^T = a_{ik} a_{ik} + a_{jk} a_{jk} - 2 a_{ik} a_{jk}$, where the first and last terms are simply the norms of the vectors.
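Before wiring this identity into Tensorflow, it can be sanity-checked in plain NumPy (this check is my own, not part of the original notebook):

```python
import numpy as np

a = np.random.rand(6, 3)  # six 3-dimensional vectors

# Vectorized squared distances: D_ij = a_i.a_i + a_j.a_j - 2 a_i.a_j
sq = np.sum(a * a, axis=1)
D_vec = sq[:, None] + sq[None, :] - 2.0 * (a @ a.T)

# The same quantity computed pair by pair with the definition |a_i - a_j|^2
D_loop = np.array([[np.sum((a[i] - a[j]) ** 2) for j in range(6)]
                   for i in range(6)])

assert np.allclose(D_vec, D_loop)
```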
After computing the matrix, we will flatten it to vectors of the upper-triangular entries, to avoid gradient overflow:

X = tf.placeholder('float')
Xshape = tf.shape(X)
sqX = tf.reduce_sum(X*X, 1)
sqX = tf.reshape(sqX, [-1, 1])
sqDX = sqX - 2*tf.matmul(X, tf.transpose(X)) + tf.transpose(sqX)
sqDXarray = tf.stack([sqDX[i, j] for i in range(N) for j in range(i+1, N)])
DXarray = tf.sqrt(sqDXarray)

Y = tf.Variable(init_points, dtype='float')
sqY = tf.reduce_sum(Y*Y, 1)
sqY = tf.reshape(sqY, [-1, 1])
sqDY = sqY - 2*tf.matmul(Y, tf.transpose(Y)) + tf.transpose(sqY)
sqDYarray = tf.stack([sqDY[i, j] for i in range(N) for j in range(i+1, N)])
DYarray = tf.sqrt(sqDYarray)

And DXarray and DYarray are the vectorized pairwise distances. Then we define the cost function according to the definition, together with the first-order gradient-descent training step:

Z = tf.reduce_sum(DXarray)*0.5
numerator = tf.reduce_sum(tf.divide(tf.square(DXarray-DYarray), DXarray))*0.5
cost = tf.divide(numerator, Z)
train = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
init = tf.global_variables_initializer()

The last line initializes all variables in the Tensorflow session when it is run. Then start a Tensorflow session, and initialize all variables globally:

sess = tf.Session()
sess.run(init)

Then run the algorithm:

nbsteps = 1000
c = sess.run(cost, feed_dict={X: sampled_points})
print "epoch: ", -1, " cost = ", c
for i in range(nbsteps):
    sess.run(train, feed_dict={X: sampled_points})
    c = sess.run(cost, feed_dict={X: sampled_points})
    print "epoch: ", i, " cost = ", c

Then extract the points and close the Tensorflow session:

calculated_Y = sess.run(Y, feed_dict={X: sampled_points})
sess.close()

Plot it using matplotlib:

embed1, embed2 = calculated_Y.transpose()
plt.plot(embed1, embed2, 'ro')

This gives, as expected, four equidistant clusters. This code for Sammon Embedding has been incorporated into the Python package mogu, which is a collection of numerical routines.
You can install it, and call:

from mogu.embed import sammon_embedding
calculated_Y = sammon_embedding(sampled_points, init_points)

Word embedding has been a frequent theme of this blog. But the original embedding algorithms performed a non-linear mapping of higher-dimensional data to lower dimensions. In this entry I will talk about one of the oldest and most widely used ones: Sammon Embedding, published in 1969. This is an embedding algorithm that preserves the distances between all points. How is it achieved?

Assume there are high-dimensional data described by $d$-dimensional vectors, $X_i$ where $i=1, 2, \ldots, N$. And they will be mapped into vectors $Y_i$, with dimensions 2 or 3. Denote the distances to be $d_{ij}^{*} = \sqrt{|X_i - X_j|^2}$ and $d_{ij} = \sqrt{|Y_i - Y_j|^2}$. In this problem, the $Y_i$ are the variables to be learned. The cost function to minimize is $E = \frac{1}{c} \sum_{i<j} \frac{(d_{ij}^{*} - d_{ij})^2}{d_{ij}^{*}}$, where $c = \sum_{i<j} d_{ij}^{*}$. To minimize this, use Newton's method: $Y_{pq} (m+1) = Y_{pq} (m) - \alpha \Delta_{pq} (m)$, where $\Delta_{pq} (m) = \frac{\partial E(m)}{\partial Y_{pq}(m)} / \left|\frac{\partial^2 E(m)}{\partial Y_{pq} (m)^2} \right|$, and $\alpha$ is the learning rate.

To implement it, use the Theano package of Python to define the cost function for the sake of optimization, and then implement the learning with numpy.
Define the cost function with the outline above:

import theano
import theano.tensor as T

# define variables
mf = T.dscalar('mf')  # magic factor / learning rate

# coordinate variables
Xmatrix = T.dmatrix('Xmatrix')
Ymatrix = T.dmatrix('Ymatrix')

# number of points and dimensions (user specify them)
N, d = Xmatrix.shape
_, td = Ymatrix.shape

# grid indices
n_grid = T.mgrid[0:N, 0:N]
ni = n_grid[0].flatten()
nj = n_grid[1].flatten()

# cost function
c_terms, _ = theano.scan(lambda i, j: T.switch(T.lt(i, j),
                                               T.sqrt(T.sum(T.sqr(Xmatrix[i]-Xmatrix[j]))),
                                               0),
                         sequences=[ni, nj])
c = T.sum(c_terms)
s_term, _ = theano.scan(lambda i, j: T.switch(T.lt(i, j),
                                              T.sqr(T.sqrt(T.sum(T.sqr(Xmatrix[i]-Xmatrix[j])))-T.sqrt(T.sum(T.sqr(Ymatrix[i]-Ymatrix[j]))))/T.sqrt(T.sum(T.sqr(Xmatrix[i]-Xmatrix[j]))),
                                              0),
                        sequences=[ni, nj])
s = T.sum(s_term)
E = s / c

# function compilation and optimization
Efcn = theano.function([Xmatrix, Ymatrix], E)
gradEfcn = theano.function([Xmatrix, Ymatrix], T.grad(E, Ymatrix))

And implement the update algorithm with the following function:

import numpy as np

# training
def sammon_embedding(Xmat, initYmat, alpha=0.3, tol=1e-8, maxsteps=500, return_updates=False):
    N, d = Xmat.shape
    NY, td = initYmat.shape
    if N != NY:
        raise ValueError('Number of vectors in Ymat ('+str(NY)+') is not the same as Xmat ('+str(N)+')!')

    # iteration
    Efcn_X = lambda Ymat: Efcn(Xmat, Ymat)
    step = 0
    oldYmat = initYmat
    oldE = Efcn_X(initYmat)
    update_info = {'Ymat': [initYmat], 'cost': [oldE]}
    converged = False
    while (not converged) and step <= maxsteps:
        # first-order gradient step, standing in for the Newton-style update above
        newYmat = oldYmat - alpha * gradEfcn(Xmat, oldYmat)
        newE = Efcn_X(newYmat)
        if np.all(np.abs(newE - oldE) < tol):
            converged = True
        oldYmat = newYmat
        oldE = newE
        step += 1
        print 'Step ', step, '\tCost = ', oldE
        update_info['Ymat'].append(oldYmat)
        update_info['cost'].append(oldE)

    # return results
    if return_updates:
        update_info['num_steps'] = step
        return oldYmat, update_info
    else:
        return oldYmat

The above code is stored in the file sammon.py. We can test the algorithm with an example. Remember the tetrahedron, a three-dimensional object with four points equidistant from one another.
We expect the embedding will reflect this by sampling points around these four points. With the code tetrahedron.py, we implemented it this way:

import argparse
import numpy as np
import matplotlib.pyplot as plt
import sammon as sn

argparser = argparse.ArgumentParser('Embedding points around tetrahedron.')
argparser.add_argument('--output_figurename',
                       default='embedded_tetrahedron.png',
                       help='file name of the output plot')
args = argparser.parse_args()

tetrahedron_points = [np.array([0., 0., 0.]),
                      np.array([1., 0., 0.]),
                      np.array([np.cos(np.pi/3), np.sin(np.pi/3), 0.]),
                      np.array([0.5, 0.5/np.sqrt(3), np.sqrt(2./3.)])]
sampled_points = np.concatenate([np.random.multivariate_normal(point, np.eye(3)*0.0001, 10)
                                 for point in tetrahedron_points])
init_points = np.concatenate([np.random.multivariate_normal(point[:2], np.eye(2)*0.0001, 10)
                              for point in tetrahedron_points])

embed_points = sn.sammon_embedding(sampled_points, init_points, tol=1e-4)
X, Y = embed_points.transpose()
plt.plot(X, Y, 'x')
plt.savefig(args.output_figurename)

It outputs a graph with the four expected equidistant clusters. There are other such non-linear mapping algorithms, such as t-SNE (t-distributed stochastic neighbor embedding) and Kohonen's mapping (SOM, self-organizing map).

On August 1, my friends and I attended a meetup hosted by DC Data Science, titled "Predicting and Understanding Law with Machine Learning." The speaker was John Nay, a Ph.D. candidate at Vanderbilt University. He presented his research, which is an application of natural language processing to legal enactment documents. His talk was very interesting, from the similarity of presidents and the chambers, to the kinds of topics each party focused on. He used a variety of techniques such as Word2Vec, STM (structural topic modeling), and some common textual and statistical analysis. It is quite a comprehensive study. His work is demonstrated at predictgov.com. His work can be found in arXiv.

There are many learning algorithms that perform classification tasks.
However, very often the situation is that one classifier is better on certain data points, but another is better on others. It would be nice if there were ways to combine the best of all the available classifiers.

# Voting

The simplest way of combining classifiers to improve the classification is democracy: voting. When there are n classifiers that output the same classes, the result can simply be decided by a democratic vote. This method works quite well in many problems. Sometimes, we may need to give various weights to different classifiers to improve the performance.

# Bagging and Boosting

Sometimes we can generate many classifiers from the limited amount of data available with bagging and boosting. In bagging and boosting, different classifiers are built with the same learning algorithm but with different datasets. "Bagging builds different versions of the training set by sampling with replacement," and "boosting obtains the different training sets by focusing on the instances that are misclassified by the previously trained classifiers." [Sesmero et al. 2015]

# Fusion

The performance of classifiers depends not only on the learning algorithms and the data, but also on the set of features used. While feature generation itself is a bigger and more important problem (not to be discussed here), we do have various ways to combine different features. Sometimes we separate features into different classifiers whose answers are to be combined, or combine all the features into one classifier. The former is called late fusion, the latter early fusion.

# Stacking

We can also treat the prediction results of various classifiers as features of another classifier. It is called stacking. [Wolpert 1992] "Stacking generates the members of the Stacking ensemble using several learning algorithms and subsequently uses another algorithm to learn how to combine their outputs." [Sesmero et al. 2015] Some recent implementations in computational epidemiology employ stacking as well.
[Russ et al. 2016]

# Hidden Topics and Embedding

There is also a special type of feature generation for a classifier, using hidden topics or embeddings as the latent vectors. We can generate a set of latent topics from the available data using latent Dirichlet allocation (LDA) or correlated topic models (CTM), and describe each dataset using these topics as the input to another classifier. [Phan et al. 2011] Another way is to represent the data using embedding vectors (such as time-series embedding, Word2Vec, or LDA2Vec) as the input of another classifier. [Czerny 2015]

Embedding has been hot in recent years, partly due to the success of Word2Vec (see the demo in my previous entry), although the idea has been around in academia for more than a decade. The idea is to transform a vector of integers into continuous, or embedded, representations. Keras, a Python package that implements neural network models (including ANN, RNN, CNN, etc.) by wrapping Theano or TensorFlow, implements it, as shown in the example below (which converts integer features with 200 possible values into continuous vectors of 10):

from keras.layers import Embedding
from keras.models import Sequential

# define and compile the embedding model
model = Sequential()
model.add(Embedding(200, 10))  # 200 possible integer features, embedded into 10 dimensions
model.compile('rmsprop', 'mse')  # optimizer: rmsprop; loss function: mean-squared error

We can then convert any features from 0 to 199 into vectors of 10, as shown below:

import numpy as np
model.predict(np.array([10, 90, 151]))

It outputs:

array([[[ 0.02915354, 0.03084954, -0.04160764, -0.01752155, -0.00056815, -0.02512387, -0.02073313, -0.01154278, -0.00389587, -0.04596512]],
       [[ 0.02981793, -0.02618774, 0.04137352, -0.04249889, 0.00456919, 0.04393572, 0.04139435, 0.04415271, 0.02636364, -0.04997493]],
       [[ 0.00947296, -0.01643104, -0.03241419, -0.01145032, 0.03437041, 0.00386361, -0.03124221, -0.03837727, -0.04804075, -0.01442516]]])

Of course, one must not omit a similar algorithm called GloVe, developed by the Stanford NLP group.
Their code has been wrapped in both Python (a package called glove) and R (a library called text2vec).

Besides Word2Vec, there are other word-embedding algorithms that try to complement it, although many of them are more computationally costly. Previously, I introduced LDA2Vec in an earlier entry, an algorithm that combines the locality of words and their global distribution in the corpus. In fact, word-embedding algorithms with similar ideas have also been invented by other scientists, as I have introduced in another entry.

However, new word-embedding algorithms keep coming out. Since most English words carry more than a single sense, different senses of a word might be best represented by different embedded vectors. Incorporating word sense disambiguation, a method called sense2vec has been introduced by Trask, Michalak, and Liu (arXiv:1511.06388). Matthew Honnibal wrote a nice blog entry demonstrating its use. There is also other related work, such as wang2vec, which is more sensitive to word order.

Big Bang Theory (Season 2, Episode 5): Euclid Alternative
DMV staff: Application?
Sheldon: I'm actually more of a theorist.

Note: feature image taken from Big Bang Theory (CBS).
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-5-section-5-3-special-products-exercise-set-page-367/10
Introductory Algebra for College Students (7th Edition), published by Pearson. Chapter 5 - Section 5.3 - Special Products - Exercise Set - Page 367: 10

Answer: $14x^2-31x-10$

Work Step by Step: Using $(a+b)(c+d)=ac+ad+bc+bd$ or the FOIL Method, the product of the given expression, $(2x-5)(7x+2),$ is \begin{array}{l}\require{cancel} 2x(7x)+2x(2)-5(7x)-5(2) \\\\= 14x^2+4x-35x-10 \\\\= 14x^2-31x-10 .\end{array}
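The expansion can also be verified mechanically by convolving coefficient lists; a short Python sketch (the `poly_mul` helper is hypothetical, not part of the textbook):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, constant term first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (2x - 5) * (7x + 2): coefficients listed constant-term first
product = poly_mul([-5, 2], [2, 7])
print(product)  # [-10, -31, 14], i.e. 14x^2 - 31x - 10
```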
https://math.stackexchange.com/questions/3097748/finding-the-position-vector-of-a-point-on-a-line
# Finding the position vector of a point on a line

Question goes: Find the position vector of the point P on the line AB such that OP is perpendicular to AB. A has position vector 7i-8j+7k, B has 4i+7j+4k, and O is the origin.

I started by finding the line BA, which is: r = <7,-8,7> + t<1,-5,1>. Why is it BA though, and not AB? I first tried to make it AB (which is b-a), but got the wrong position vector and direction vector. AB is what it's supposed to be, isn't it? I know which formula I probably have to use: OP · AB = 0 means the lines are perpendicular. I thought I'd have to denote P by (x,y,z) and find OP as p - o, which would also result in (x,y,z). After this I got lost and couldn't get the right answer. The final answer should be 5i+2j+5k.

$$\vec {OP}$$ $$\perp$$ $$\vec {AB}:$$ $$\small{((7,-8,7)+t(1,-5,1))\cdot (1,-5,1)=0;}$$ $$54+t(27)=0;$$ $$t=-2.$$ $$\vec {OP}= (7,-8,7)-2(1,-5,1)=(5,2,5)$$.

It doesn’t matter whether you use $$B-A$$ or $$A-B$$, or whether you take $$A$$ or $$B$$ as the fixed point. All of the parameterizations that result describe the same line. You have a general formula for a point on the line: $$P(t)=\langle7,-8,7\rangle+t\langle1,-5,1\rangle$$. In particular, the direction vector of the line is $$\langle1,-5,1\rangle$$. $$OP$$ is given by exactly the same expression. As you’ve written, for this to be perpendicular to the line, we must have $$OP\cdot\langle1,-5,1\rangle=0$$. This expands into a simple linear equation that you can solve for $$t$$ and substitute back into your formula for $$P(t)$$.
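The accepted computation — substitute the general point of the line into the perpendicularity condition and solve for t — can be sketched numerically in a few lines of Python, using the vectors from this question:

```python
def dot(u, v):
    """Dot product of two vectors given as sequences."""
    return sum(a * b for a, b in zip(u, v))

A = (7, -8, 7)  # fixed point on the line
d = (1, -5, 1)  # direction vector of the line (B - A, scaled by 1/3)

# P(t) = A + t*d; require OP . d = 0  =>  (A + t*d) . d = 0  =>  t = -(A.d)/(d.d)
t = -dot(A, d) / dot(d, d)
P = tuple(a + t * di for a, di in zip(A, d))
print(t, P)  # -2.0 (5.0, 2.0, 5.0)
```

This reproduces the answer 5i+2j+5k; scaling the direction vector only rescales t, not the foot of the perpendicular.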
https://mathforums.com/threads/linear-isomorphism.347799/
# Linear isomorphism

#### Lauren1231
How do I show the below is a linear transformation? I’ve shown it’s 1-1.

#### Greens
$[R_{\pi/3}]_{\mathcal{B}}= \begin{pmatrix} \frac{1}{2} & -\frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & \frac{1}{2} \end{pmatrix}$

Since this is the standard basis representation, multiplying by $[R_{\pi/3}]_{\mathcal{B}}$ is the same as applying the transformation. Choose two arbitrary elements $x,y \in \mathbb{R}^2$ and try to prove $T(cx+y) = cT(x)+T(y)$, which is equivalent to $[R_{\pi/3}]_{\mathcal{B}}(cx+y) = c[R_{\pi/3}]_{\mathcal{B}}x + [R_{\pi/3}]_{\mathcal{B}}y$.

#### Lauren1231
I know I want to choose (1,0) and (0,1), but I think I have to perform some form of transformation first, and I don't know how to do that.

#### Greens
You can't only choose $(1,0)$ and $(0,1)$. These are two vectors in $\mathbb{R}^2$, but you need vectors that can represent the entirety of $\mathbb{R}^2$: $x$ and $y$ need to represent ANY vector in $\mathbb{R}^2$, so $x= (x_1 , x_2)$ and $y=(y_1 , y_2)$ are good to use. Multiplying $[R_{\pi/3}]_{\mathcal{B}}$ by the vectors $x$ and $y$ is the transformation you need. Try to show that for any $c \in \mathbb{R}$, $[R_{\pi/3}]_{\mathcal{B}} \cdot (cx+y) = [R_{\pi/3}]_{\mathcal{B}} \cdot cx + [R_{\pi/3}]_{\mathcal{B}} \cdot y$.

#### Lauren1231
Thank you so much.
Also, a quick question: any idea how I’d draw the diagram that’s asked for?

#### Greens
The notation $[ R_{\pi /3}]_{\mathcal{B}}$ means "the matrix representation of $R_{\pi /3}$ with respect to the basis $\mathcal{B}$." It's mentioned that $\mathcal{B}$ is the natural basis (also sometimes called the standard basis) of $\mathbb{R}^2$, which is $\mathcal{B} = \{ (1,0) , (0,1) \}$. The form for $[ R_{\pi /3}]_{\mathcal{B}}$ is $[ R_{\pi /3}]_{\mathcal{B}} = [ R_{\pi /3}(1,0) \; \; \; R_{\pi /3}(0,1)\;]$, where $R_{\pi /3}(1,0)$ and $R_{\pi /3}(0,1)$ are the columns of the matrix. The diagram should therefore show that rotating $(1,0)$ by $\pi /3$ gives $(1/2 , \sqrt{3} / 2)$ and rotating $(0,1)$ by $\pi /3$ gives $(-\sqrt{3}/2 , 1/2)$.

#### Lauren1231
Thanks so much. Bit of a different question, but do you know how I got to the answer below? I'm trying to find $[T]_B$. I think this is the answer, but I don't know how I got to it. You may need more info.

#### Attachments
• 830.3 KB Views: 9
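The two facts discussed in this thread — the columns of $[R_{\pi/3}]_{\mathcal{B}}$ are the rotated standard basis vectors, and multiplying by the matrix is linear — can be spot-checked numerically. A small Python sketch, not part of the original exercise:

```python
import math

theta = math.pi / 3
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def apply(M, v):
    """Matrix-vector product M @ v for a 2x2 matrix M."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Columns of R are the images of the standard basis vectors
e1_rot = apply(R, [1, 0])  # ~ (1/2, sqrt(3)/2)
e2_rot = apply(R, [0, 1])  # ~ (-sqrt(3)/2, 1/2)

# Linearity: R(c*x + y) == c*R(x) + R(y) for arbitrary x, y, c
x, y, c = [2.0, -1.0], [0.5, 3.0], 4.0
lhs = apply(R, [c * x[0] + y[0], c * x[1] + y[1]])
rhs = [c * a + b for a, b in zip(apply(R, x), apply(R, y))]
print(all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs)))  # True
```

A numerical check is not a proof, of course; the algebraic argument with $x = (x_1, x_2)$ and $y = (y_1, y_2)$ is still what the exercise asks for.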
https://stats.stackexchange.com/questions/450763/multivariate-normal-distribution-hypothesis-testing-mle
# Multivariate normal distribution - hypothesis testing MLE Suppose $$X_{1}, X_{2},\ldots,X_{n}$$ are i.i.d. observations from a multivariate normal distribution $$N(\mu,\Sigma)$$ where $$\Sigma$$ is known. Use the likelihood ratio procedure to produce a test statistic for $$H_{0}: R\mu=r$$ versus $$H_{1}: R\mu\neq r$$. Assume $$R$$ is a given matrix and $$r$$ is a given vector. This is the question that I cannot find the direct answer to anywhere. How does one go about answering this? I tried to use the Lagrangian for the constraint case under $$H_{0}$$ but I am getting nowhere. Any tip would be greatly appreciated. • Is $R$ invertible? Feb 22 '20 at 0:34 • It does not say anything about it being invertible unfortunately. Feb 22 '20 at 9:28 ### Likelihood ratio test statistic Let $$\ell(\mu)$$ denote the log likelihood of mean $$\mu$$ (assuming known covariance matrix $$\Sigma$$): $$\ell(\mu) = \sum_{i=1}^n \log \mathcal{N}(x_i \mid \mu, \Sigma)$$ The problem involves a nested model, and the likelihood ratio test statistic has the standard form: $$S = -2 \Big( \ell(\mu_0) - \ell(\hat{\mu}) \Big)$$ $$\mu_0$$ is the mean that maximizes the likelihood, subject to the constraints imposed under the null hypothesis. $$\hat{\mu}$$ is the maximum likelihood estimate for the mean (without any constraints), which is just the mean of the data: $$\hat{\mu} = \frac{1}{n} \sum_{i=1}^n x_i$$. Plugging these in, the test statistic can be simplified to: $$S = n (\hat{\mu} - \mu_0)^T \Sigma^{-1} (\hat{\mu} - \mu_0)$$ The main challenge is how to find $$\mu_0$$, which is the solution to a constrained optimization problem: $$\mu_0 = \arg \max_\mu \ell(\mu) \quad \text{s.t. } R \mu = r$$ ### Finding $$\mu_0$$ First, let's assume that the problem is feasible (i.e. there exists a $$\mu$$ such that $$R \mu = r$$). If $$R$$ is invertible, then there's a unique choice $$\mu_0 = R^{-1} r$$, and we're done. 
Otherwise, there's a continuum of possible choices that satisfy the constraints, and we must find one that maximizes the likelihood. Maximizing the likelihood is equivalent to minimizing the negative log likelihood, which is proportional to the following: $$-\ell(\mu) \propto \frac{1}{n} \sum_{i=1}^n (x_i-\mu)^T \Sigma^{-1} (x_i-\mu)$$ Expanding things out, discarding constant terms (which don't affect the solution), and substituting in $$\hat{\mu} = \frac{1}{n} \sum_{i=1}^n x_i$$, we can reformulate the optimization problem as: $$\mu_0 = \arg \min_\mu \ \mu^T \Sigma^{-1} \mu - 2 (\Sigma^{-1} \hat{\mu})^T \mu \quad \text{s.t. } R \mu = r$$ This is a quadratic program with a linear equality constraint, so there's a unique solution. The Lagrangian is: $$\mathcal{L}(\mu, \lambda) = \mu^T \Sigma^{-1} \mu - 2 (\Sigma^{-1} \hat{\mu})^T \mu + (R \mu - r)^T \lambda$$ where $$\lambda$$ is a vector of Lagrange multipliers. Differentiating the Lagrangian w.r.t. $$\mu$$ and $$\lambda$$ and setting the gradients to zero yields the following system of linear equations: $$\begin{bmatrix} 2 \Sigma^{-1} & R^T \\ R & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mu \\ \lambda \end{bmatrix} = \begin{bmatrix} 2 \Sigma^{-1} \hat{\mu} \\ r \end{bmatrix}$$ The simplest approach is to solve this linear system directly. Let: $$A = \begin{bmatrix} 2 \Sigma^{-1} & R^T \\ R & \mathbf{0} \end{bmatrix} \quad \quad z= \begin{bmatrix} \mu_0 \\ \lambda \end{bmatrix} \quad \quad y = \begin{bmatrix} 2 \Sigma^{-1} \hat{\mu} \\ r \end{bmatrix}$$ Solve $$A z = y$$ for $$z$$. For example, $$z = A^+ y$$ (using the Moore-Penrose pseudoinverse of $$A$$; but using something like the LU decomposition would probably be more efficient). Then $$\mu_0 = [z_1, \dots, z_d]^T$$ (where $$d$$ is the dimensionality of the data). I assume $$X_1,X_2,\ldots,X_n$$ are i.i.d. $$p$$-variate normal $$N_p(\mu,\Sigma)$$.
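As a concrete sanity check of the constrained optimum: in the special case $$\Sigma = I$$ with a single linear constraint, the Lagrangian solution reduces to the orthogonal projection of $$\hat{\mu}$$ onto the constraint hyperplane. A pure-Python sketch with made-up numbers:

```python
def dot(u, v):
    """Dot product of two vectors given as sequences."""
    return sum(a * b for a, b in zip(u, v))

def constrained_mean(mu_hat, R, r):
    """mu_0 = argmin ||mu - mu_hat||^2  subject to  R . mu = r
    (Sigma = I, single linear constraint: an orthogonal projection)."""
    lam = (dot(R, mu_hat) - r) / dot(R, R)
    return [m - lam * Ri for m, Ri in zip(mu_hat, R)]

mu_hat = [2.0, 0.0]     # unconstrained MLE (sample mean), made up
R, r = [1.0, 1.0], 1.0  # constraint: mu_1 + mu_2 = 1
mu0 = constrained_mean(mu_hat, R, r)
print(mu0)  # [1.5, -0.5]; note R . mu0 == r
```

For general $$\Sigma$$ and a constraint matrix $$R$$ with several rows, one would solve the full KKT linear system given above instead.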
We know that unrestricted MLE of $$\mu$$ is the sample mean vector: $$\hat\mu=\frac1n\sum_{i=1}^n X_i \sim N_p \left(\mu,\frac{\Sigma}n\right)$$ Now MLE of $$R\mu$$ is $$R\hat\mu$$. And if $$R$$ is a $$p\times p$$ matrix, then $$R\hat\mu$$ is also $$p$$-variate normal: $$R\hat\mu \sim N_p \left(R\mu,R\left(\frac{\Sigma}n\right) R^T\right)$$ This implies $$(R\hat\mu -R\mu)^T\left(R\left(\frac{\Sigma}n\right) R^T\right)^{-1}(R\hat\mu-R\mu) \sim \chi^2_p$$ Therefore, $$T=n(R\hat\mu -r)^T(R\Sigma R^T)^{-1}(R\hat\mu-r) \stackrel{H_0}\sim \chi^2_p$$ I would try to express the likelihood ratio test criterion $$\Lambda$$ in terms of $$T$$, so that small values of $$\Lambda$$ correspond to large values of $$T$$. Then $$T$$ is a suitable test statistic for testing $$H_0$$.
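The statistic $$T$$ is easy to compute directly. A Python sketch for the simplest case $$p = 1$$, $$R = 1$$ (so $$T = n(\hat{\mu} - r)^2/\sigma^2$$, compared against a $$\chi^2_1$$ critical value), with made-up numbers:

```python
# One-dimensional case: R = 1, Sigma = sigma^2, H0: mu = r
n = 25
mu_hat = 1.3   # sample mean (made up)
r = 1.0        # hypothesized mean under H0
sigma2 = 0.5   # known variance

T = n * (mu_hat - r) ** 2 / sigma2
crit = 3.84    # upper 5% point of chi^2_1 (standard table value)
print(T)           # ~4.5
print(T > crit)    # True: reject H0 at the 5% level
```

In higher dimensions the scalar division becomes the quadratic form $$n(R\hat\mu - r)^T(R\Sigma R^T)^{-1}(R\hat\mu - r)$$, requiring a matrix inverse or linear solve.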
https://projects.coin-or.org/CppAD/changeset/3348
# Changeset 3348

Timestamp: Sep 21, 2014 5:24:45 AM (6 years ago)

Message: Fix some errors in the documentation for the general reverse mode sweep.

Location: trunk. Files: 3 edited.

Summary of the documentation change: here $n$ is the number of independent variables and $m$ the number of dependent variables, and $u^{(k)}$ denotes the value of `x_k` in the previous call of the form `f.Forward(k, x_k)`. Define $X : {\bf R}^{n \times d} \rightarrow {\bf R}^n$ by $X(t, u) = u^{(0)} + u^{(1)} t + \cdots + u^{(d)} t^d$ and $Y : {\bf R}^{n \times d} \rightarrow {\bf R}^m$ by $Y(t, u) = F[ X(t, u) ]$. The scalar function previously documented as
$$G(u) = \frac{1}{d!} \frac{\partial^d}{\partial t^d} \left[ \sum_{i=1}^m w_i F_i ( u^{(0)} + u^{(1)} t + \cdots + u^{(d)} t^d ) \right]_{t=0}$$
is replaced by $W : {\bf R}^{n \times d} \rightarrow {\bf R}$,
$$W(u) = \sum_{k=0}^{d} ( w^{(k)} )^{\rm T} \frac{1}{k!} \frac{\partial^k}{\partial t^k} Y(0, u) ,$$
where the matrix $w$ is defined below under the heading Partial. Note that the scale factor $1/k!$ converts the $k$-th partial derivative to the $k$-th order Taylor coefficient. This routine computes the derivative of $W(u)$ with respect to all the Taylor coefficients $u^{(k)}$ for $k = 0, \ldots, d$.

Input: the last $m$ rows of `Partial` are inputs and specify $w$. Previously, for $i = 0, \ldots, m-1$, `Partial[(numvar - m + i) * K + d] = v_i` and, for $k = 0, \ldots, d-1$, `Partial[(numvar - m + i) * K + k] = 0`; now, for $i = 0, \ldots, m-1$ and $k = 0, \ldots, d$, `Partial[(numvar - m + i) * K + k] = w[i,k]`. For $j = 1, \ldots, n$ and $k = 0, \ldots, d$, `Partial[j * K + k]` is the partial derivative of $W(u)$ with respect to $u_j^{(k)}$.

Notation: for $k = 0, \ldots, q-1$, the vector $u^{(k)} \in B^n$ (previously written $x^{(k)}$) is defined as the value of `x_k` in the previous calls of the form `f.Forward(k, x_k)`. If there is no previous call with $k = 0$, then $u^{(0)}$ is the value of the independent variables when the corresponding AD of Base operation sequence was recorded.

Changelog entry (10-21): Fix a typo in the documentation for any order reverse. To be specific, $x^{(k)}$ was changed to $u^{(k)}$.
http://ihcn.icgalvaligi.it/circle-geometry-problems-and-solutions-pdf.html
Solutions 1 Ifa isthelengthofthethirdside,thenm ˘n a 2,andwehave 8a ¯ 81 2 a˘a µ 36 1 4 a2 ¶ whichyieldsa2 ˘50ora˘5 p 2. Each corresponds to an integer solution to x 2+ y = z2. Secant line that contains a chord. Geometry 10. PDF Chapter 11 Answers - River Dell Regional School District Geometry Chapter 11 Answers 35 Chapter 11 Answers (continued) Enrichment 11-1 1. 255 Compiled and Solved Problems in. edu-2020-04-24T00:00:00+00:01 Subject: College Geometry Problems With Answer Keywords: college, geometry, problems, with, answer Created Date: 4/24/2020 1:14:46 AM. 14 (2 cm); C = 6. When we are able to find the algebraic equation of circles, it enables us to solve important problems about the intersections of circles and other curves using both our geometric knowledge about circles (e. ) P = 28 (P = 28 ft. Nicely enough for us these points are easy to find. 2 Tangent of a circle Definition 1 A tangent of a circle is a line that intersects the circle at exactly one point. Write variable expressions. AC is the diameter of the circle. Example of one question: Watch below how to solve this example: Geometry - Circles - Arc length and sector area - Easy - YouTube. SMT 2012 Geometry Test and Solutions February 18, 2012 1. • Triangle, rectangle, circle, difficult problems in stochastic geometry. The Circle with Centre (h,k) and radius r. There are n circles of radius one positioned in a plane such that the area of any triangle formed by the centers of three of these circles is at most A. Methods and "means" for solving 3D geometrical construction problems. 6th through 8th Grades. Each worksheet contains 50 questions with answers. Identify the basic parts of a circle 2. In solving problems on the circle, we can choose any of the general forms, depending on the information given. K N qA gl1l x KrJi Bg ehxtzs6 8rfe is7e br5vne Tdq. Central Angle of A Circle. Handbook of geometry for competitive programmers and a PDF version can be which is of course the preferred solution. 
The product of the lengths of a chord’s segments 59 §3. pdf Circle Packet with Segments: circle_packet_with_segments. piad Problems from Around the World, published by the American Math-ematics Competitions. P = a + b + c 27 = 7 + 13 + c 7 = c (c = 7 centimeters) 2. 1 Additional Problems. area-of-a-circle-word-problem-1. Determine all functions f: ZÑ Zwith the property that. They have to refer to. Plugging the coordinates of A in the standard equation:. (a) Describe your approach at the beginning, if the solution is neither short nor simple. The determination of the coordinates of any figure, plane or any point in space and application of the various geometries on these figures is called as Coordinate Geometry. Hope our. An angle with 90◦ forms a right angle (it is the angle found in the corners of a square and so we will use a square box to denote angles with a measure of 90◦). This entry was posted in Example Problems and tagged arithmetic, free GRE resources, geometry, GRE math, GRE prep, practice problems, practice questions. “Shikaku ni kire” is Japanese for “divide by squares” or “divide by box”, indicative of the broad goal of these puzzles. Placing Circles in Standard Form. Tangents to Circles G. The definition of a unit circle is: x2 +y2 =1 where the center is (0, 0) and the radius is 1. Venn diagram word problems generally give you two or three classifications and a bunch of numbers. A large number of examples with solutions and graphics is keyed to the textual devel-opment of each topic. Our Class 10 mathematics experts have explained and solved all the doubts & questions from CBSE syllabus. After download, these solutions need to save the pdf or take the print out of these pdf and read these solutions and get more marks in their final exams. Dodeca - Icosa Net. This in turn opened the stage to the investigation of curves and surfaces in space—an investigation that was the start of differential geometry. Consider the illustration in Figure 1. 
Calculate the area for each. Dunce Cap Theorem: If two tangent segments are drawn to a circle from the same external point, then they’re congruent. Students should work in teams of three for 30 minutes to create a team solution. IMO Training 2010 Projective Geometry Alexander Remorov Poles and Polars Given a circle ! with center O and radius r and any point A 6= O. Thus, the diameter of a circle is twice as long as the radius. POW extra credit: pow_supercircle. • Triangle, rectangle, circle, difficult problems in stochastic geometry. Area of Circles Prisms Maze CCSS: 7. Sketch the circle. When first studying circles, many students often confuse the diameter and the radius. area of a circle: A circle is the set of points in a plane that are all the same distance from a given point called the center. How to Do Geometry Problems: Step-By-Step Solutions You'll encounter lots of different types of geometry problems in school, but many of them can be solved using the same basic approach. RD Sharma Publication Offers a Detailed explanation for the topics Step by Step. 1) 12 20? 16 2) ? 15 12 9 3) 13. You can think of …. Cengage Maths PDF Free Download June 22, 2019 by Kishen Leave a Comment When you start preparing for JEE mains, the Books most recommended by the professionals is the Cengage Maths Algebra, Calculus, Trigonometry, Coordinate Geometry and Vectors. Prove theorems about triangles. org are unblocked. High Speed Vedic Mathematics is a super fast way of calculation whereby you can do supposedly complex calculations like 998 x 997 in less than five seconds flat. However, this problem can be fixed with practice and some strategies for slicing through all the mumbo-jumbo and. Geometry is one of the oldest branchesof mathematics. notations to the problems in Hadamards elementary geometry text Lessons in. Most of the topics that appear here have already been discussed in the Algebra book and often the text here is a verbatim copy of the text in the other book. 
Show knowledge of circle theorems in their solutions to. 2 Appliedfirsttop andthentoq,Stewart’stheoremyieldstwo equations: (a2x¯b2(2x)˘3x(p2 ¯2x2), a2(2x)¯b2x ˘3x(q2 ¯2x2). You can choose to include answers and step-by-step solutions. m∠3 5 m∠1 24. Sunday, May 18, 2008. October 3, 2019 July 8, 2019. introductionto analyticgeometry by peeceyrsmith,ph. Assuming the wheel rolls without slipping, the t O P a FGUREI 1. Determine all functions f: ZÑ Zwith the property that. Acute angles are angles that have measure less than 90◦ and obtuse angles are angles that have measure between 90 ◦and 180. First, though, you need to be familiar with the following theorem. Unfortunately, few geometry study guides offer clear explanations, causing many people to get tripped up or lost when trying to solve a proof—even when they know the terms and concepts like the back of their hand. 2 Circle geometry (EMBJ9). JHMT 2012 Geometry Test and Solutions February 18, 2012 Solution: Since \ADB= \ABC= 90 , 4ABC˘4ADB. More generally, Apollonius' problem asks to construct the circle which is tangent to any three objects that may be any combination of points, lines, and circles. Every time you click the New Worksheet button, you will get a brand new printable PDF worksheet on Full Year 10th Grade Review. r, where L = length of tangent and r = radius of circle. 529-530 22-43 5. Then, perimeter of the square is = 4a. Chapter 11 : Area of Polygons and Circles What are the measures of the interior angles of a home plate marker of a softball field? What is the area of one of the hexagonal mirrors on the Hobby-Eberly Telescope? In Chapter 11, you'll learn about measures of angles in polygons and the areas of regular polygons to find out. 2 Tangent of a circle Definition 1 A tangent of a circle is a line that intersects the circle at exactly one point. Question and Solution for "Circle" question from Revision Class. March 2013 This video focuses on geometry, circles, and right triangles. 
Line segment: A line with two end points is called a segment. Factors Worksheet. Includes triangles, lines, angles, quadrilaterals, circles, coordinate geometry, solids, volume, area, perimeter. , determine the coordinates of the midpoints of the sides of a triangle, given the coordinates of the vertices, and verify concretely or by. H3 Mathematics Plane Geometry 2 Corollary 1 An angle inscribed in a semicircle is a right angle. 628–631) 110. Download [143. college geometry: an introduction to the modern geometry of the triangle and the circle, nathan altshiller-court. Notice that the angle does not change with the radius. pdf (520 KB) Optics and Geometry with Applications to. Ł A chord of a circle is a line that connects two points on a circle. Things go into the two "pans", and the heavier pan will go down, like in a seesaw. The material can be found in many places. Read on for a step-by-step explanation of how to solve geome. Adding 9 and 49 to both. First, what is the official definition of a circle? A circle is a shape that is made up of all the points on a plane (a flat surface) that are the same distance from a. Complex Numbers and Geometry. Below are six versions of our grade 6 math worksheet on finding the circumference of a circle when given the radius or diameter. Find the equation of a line which passes through A (4, -1) and. Worksheets contain answers on the last page. A part of a circle is called an arc and an arc is named according to its angle. The test uses a work-sample approach Coordinate Geometry. Solution: Solving 3x - 2y = 1 and 4x+y = 27 Simultaneously, we get x = 5 and y = 7. RD Sharma Maths Class 9 Solutions Free PDF Download. Unauthorized copying ofDiscovering Geometry: An Investigative Approach, Solutions Manualconstitutes copyright infringement and is a violation of federal law. Adding 9 and 49 to both. March 9, 2016 by Sastry CBSE. Circle Geometry - CEMC. 
The line drawn from the centre of a circle perpendicular to a chord bisects the chord. angles intercepted m L2(13) < 2$$\pi$$ => $$\frac{L1(13)+2\pi }{L2(17)}$$ > 2 Hope you find this CAT Geometry questions useful,also Download CAT Quantitative Aptitude Formulas PDF. Introduction About the purposes of studying Descriptive Geometry: 1. NCERT Solutions for Class 10 Maths PDF is free to download. 2 - Tangent to a Circle, Ex 10. Ask your doubts related to NIOS or CBSE through DISCUSSION FORUM and reply to the questions asked by other users. Coordinate Geometry. After a close inspection, we see that the x. By lemma 4 it follows that C lies on the polar of D with respect to !. Circle Geometry c 2014 UNIVERSITY OF WATERLOO. The word "trigonometry" is derived from the Greek words trigono (τρ´ιγων o), meaning "triangle", and metro (µǫτρω´), meaning "measure". The Putnam Archive. The area of a circle is. A circle with radius 1 has diameter AB. At your seat: Describe the two different sets of points, name them if possible. We will also examine the relationship between the circle and the plane. Explore and Reason. After a close inspection, we see that the x. pdf w/answers: chords_secants_and_tangents. Introduction to Euclid’s Geometry. Get topic wise solutions. Identify the basic parts of a circle 2. The measure of one supplementary angle is twice the measure of the second. 1 - Introduction, Ex 10. In addition, in K-8 parallel lines geometry — but not always in high school geometry — a single line is considered parallel to. l018_slides solving problems with circles and triangles - gamma. It has neither width nor thickness. guru is trying to help the students who cannot afford buying books is our aim. See also more information on Wikipedia. These practice problems will help them to solidify their understanding of the relationship between the unit circle and the familiar right triangle trigonometry. Slope of R = Slope of T = m(x — Xl) Equation of T. 
Synthetic geometry and coordinate geometry are used in real life to help us understand the dimensions and transformations of shapes and figures: lines, triangles, polygons and circles. These include lines, circles and triangles of two dimensions, measured in standard units. But what use are the solutions? Problems should be solved and not looked up! Example (point-slope form): the line which passes through the point (1, 2) having slope 4 is y - 2 = 4(x - 1). On the geometry page you will learn how to solve coordinate-geometry problems such as midpoints, slope and distance, and you will see the formulas for figures like circles, triangles, squares, rectangles and cones, the equation of a circle in standard form, and inscribed angles. You will find that the line XY always intersects the line OI at the same point P. Pen-and-paper repetition is the best way to get this right. From the square example: a² = 121, so a = 11.
Answers are in the back of the book. Optimization problems in 2D geometry are those in which we want to find the largest or smallest value of a function. The length of an arc, l, is determined by plugging the degree measure v of the central angle into l = (v/360) · 2πr. If z is a nonzero complex number, the inverse of z with respect to the unit circle is 1/z̄. Proving that a line is a tangent to a circle: a line is a tangent to a circle if the perpendicular distance from the centre of the circle to the line is equal to the radius. The China Mathematical Competition, a national event, is held on the second Sunday of October every year. In Chapter 11 (Area of Polygons and Circles) you'll learn about measures of angles in polygons and the areas of regular polygons. Solution to Problem 2: r = 2 and (x - 3)² + (y - 7)² = 2². In order to graph a circle, all we really need are the right-most, left-most, top-most and bottom-most points on the circle.
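The arc-length rule above turns into a one-line function, and the sector area follows the same angle-fraction reasoning. This is a generic sketch, not tied to any particular worksheet problem.

```python
import math

def arc_length(radius, degrees):
    # The arc is the fraction degrees/360 of the full circumference 2*pi*r.
    return (degrees / 360) * 2 * math.pi * radius

def sector_area(radius, degrees):
    # The sector is the same fraction of the full area pi*r^2.
    return (degrees / 360) * math.pi * radius ** 2

# A 90-degree arc of a radius-6 circle is a quarter of the circumference: 3*pi.
print(arc_length(6, 90))
# The matching quarter sector has area 9*pi.
print(sector_area(6, 90))
```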
Worksheets to calculate the radius from the diameter, the diameter from the radius, or the radius and diameter from the area or circumference. In the cone example, the diameter of the circular base is 12 inches, which means that the radius of the circular base is r = d/2 = 12/2 = 6 inches. Complete y² - 14y to (y - 7)² by adding 49. Answer: y = (2/3)x + 7/3 is the equation of the line. The circumference of a circle is 2πr, where r is the radius. Centres of similitude of two circles: consider two circles O(R) and I(r), whose centres O and I are at a distance d apart. Animate a point X on O(R) and construct a ray through I oppositely parallel to the ray OX to intersect the circle I(r) at a point Y. When first studying circles, many students often confuse the diameter and the radius. In particular, integral calculus led to general solutions of the ancient problems of finding the arc length of plane curves and the area of plane figures.
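The worksheet conversions above can be collected into three tiny helpers; the names are illustrative, and the only facts used are d = 2r, C = 2πr and A = πr².

```python
import math

def radius_from_diameter(d):
    return d / 2

def radius_from_circumference(c):
    # C = 2*pi*r  =>  r = C / (2*pi)
    return c / (2 * math.pi)

def radius_from_area(a):
    # A = pi*r^2  =>  r = sqrt(A / pi)
    return math.sqrt(a / math.pi)

print(radius_from_diameter(12))  # the cone example: d = 12 in -> r = 6.0 in
print(radius_from_circumference(2 * math.pi * 5))  # round-trips to 5.0
print(radius_from_area(math.pi * 9))               # round-trips to 3.0
```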
Balance puzzle: the square weighs _____. Two points determine a line segment. A circle is named by its centre: the circle to the right is called circle A since its centre is at point A. Two tangents can be drawn to a circle from one external point. Solution: the area of the square is 121 square units, so a² = 121 and the side is a = 11. The perpendicular bisector of a chord passes through the centre of the circle. A right triangle has an angle of 30º. In the world of spherical geometry, two "parallel" lines on great circles intersect twice, the sum of the three angles of a triangle on the sphere's surface exceeds 180° due to positive curvature, and the shortest route to get from one point to another is not a straight line on a map but a line that follows the minor arc of a great circle. In diagram problems you use the given information to populate the diagram and figure out the remaining information.
Geometry is the area of mathematics that deals with questions of shape, size, relative position and the properties of space for objects and shapes. A secant is a line that contains a chord. Similarly, BC also divides the circle into two parts, and we will denote the smaller one as Region II. One of the most common types of geometry problems you'll be asked to solve is the kind in which you calculate a property of a shape. By Power of a Point, BC = √(30 · 54) = 18√5. The diameter of its circular base is 12 inches. Where do clines go? So far we have derived only a few very basic properties of inversion, nothing that would suggest it could be a viable method of attack for a problem. Grade 12 geometry problems with detailed solutions are presented.
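The Power-of-a-Point computation quoted above is pure arithmetic and easy to verify. The original figure is not reproduced here; this sketch only checks that √(30 · 54) really simplifies to 18√5.

```python
import math

# BC^2 = 30 * 54 by Power of a Point (figure not shown in this text).
bc = math.sqrt(30 * 54)

# 30 * 54 = 1620 = 324 * 5, so BC = 18 * sqrt(5).
print(bc)
print(math.isclose(bc, 18 * math.sqrt(5)))  # -> True
```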
I used the "circle through 3 points" tool in GeoGebra to construct the inner circle using points A, B, and C (this figure does have axes and an origin, but they are only shown in the radii of the two dashed circles). The calculus of arc lengths and areas in turn opened the stage to the investigation of curves and surfaces in space, an investigation that was the start of differential geometry. Applications of geometry in the real world include computer-aided design for construction blueprints, the design of assembly systems in manufacturing, nanotechnology, computer graphics, visual graphs, video game programming and virtual reality creation. In solving problems on the circle, we can choose any of the general forms of its equation, depending on the information given. Co-ordinate geometry studies lines and circles by reference to a fixed set of co-ordinates. Since OC = 1, MP = 1/2 always (alternatively, note that we have a homothety from CD to MP with ratio 1/2). When we are able to find the algebraic equation of circles, it enables us to solve important problems about the intersections of circles and other curves, using both our geometric knowledge about circles (e.g. that the tangent to a circle is perpendicular to the radius) and our algebraic knowledge of simultaneous equations. Two lines are parallel in spherical geometry if they never intersect. Give equations for the following lines in both point-slope and slope-intercept form.
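GeoGebra's "circle through 3 points" tool can be mimicked in a few lines: expanding |P - C|² = r² for each point and subtracting pairs leaves a 2 x 2 linear system for the centre. The function and the sample points below are ours; they are not the A, B, C of the original figure.

```python
def circle_through_3_points(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Coefficients of the linear system for the centre (cx, cy).
    a, b = x2 - x1, y2 - y1
    c, d = x3 - x1, y3 - y1
    e = (x2**2 - x1**2 + y2**2 - y1**2) / 2
    f = (x3**2 - x1**2 + y3**2 - y1**2) / 2
    det = a * d - b * c
    if det == 0:
        raise ValueError("points are collinear")
    cx = (e * d - b * f) / det
    cy = (a * f - e * c) / det
    r = ((x1 - cx) ** 2 + (y1 - cy) ** 2) ** 0.5
    return (cx, cy), r

# For a right triangle the circumcentre is the midpoint of the hypotenuse.
centre, r = circle_through_3_points((0, 0), (6, 0), (0, 8))
print(centre, r)  # -> (3.0, 4.0) 5.0
```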
Luckily this set of problems by Aref and Wernick does start off very gently with the easiest problems, and full solutions are given for the "Solved Problems", which gives some idea of the level of detail expected for the proofs. Solution note: one of the first rules of solving these types of problems involving circles is to carefully note whether we are dealing with the radius or the diameter. More generally, Apollonius' problem asks us to construct the circle which is tangent to any three objects that may be any combination of points, lines, and circles; the same question can be asked in space, too. Analytic geometry stands in contrast to synthetic geometry, in which no coordinates or formulas are used. A circle is an important shape in geometry. In a Shikaku puzzle, a rectangular grid is shown with white numbers placed in black circles in various squares throughout the grid. For some problems in geometric knot theory see [2].
Example 1: given the equation of a circle, state the coordinates of the centre and the radius. Show that the intersection of two great circles is a pair of antipodal points. The surface area of a cube with edge l is 6l². Fencing problem: if the picture below is the playground, how much fencing needs to go up to keep the kids in the circle? This book does contain "spoilers" in the form of solutions presented directly after the problems themselves; if possible, try to figure out each problem on your own before peeking. A circle is the same as 360°. Past board exam problems in solid geometry are also included.
10 mixed questions with moderate challenge require pupils to choose the correct formula and use the correct number (sometimes the radius is given, sometimes the diameter); 2 of medium difficulty require pupils to reverse the formula to find the radius or diameter. First, though, you need to be familiar with the following theorem: the angle subtended by an arc at the centre of a circle is double the size of the angle subtended by the same arc at the circumference. Arcs are divided into minor arcs (0° < v < 180°), major arcs (180° < v < 360°) and semicircles (v = 180°). The primitive circle problem is the variant of the circle-counting problem restricted to lattice points visible from the origin. In analytic geometry, geometrical objects are defined using coordinates. Its diameter was 250 feet. You may refer to the Reference Sheet on pages 4 and 5 as needed.
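The minor/major/semicircle classification above is a direct case split on the central angle. A minimal sketch (the function name is ours):

```python
def classify_arc(v):
    """Classify an arc by its central angle v in degrees, 0 < v < 360."""
    if not 0 < v < 360:
        raise ValueError("central angle must be strictly between 0 and 360")
    if v < 180:
        return "minor arc"
    if v == 180:
        return "semicircle"
    return "major arc"

print(classify_arc(100))  # -> minor arc
print(classify_arc(180))  # -> semicircle
print(classify_arc(250))  # -> major arc
```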
Circle Theorems (GCSE Higher, KS4) with answers and solutions. Note: you must give reasons for any answers provided. For other problems in differential geometry or geometric analysis see [40]. Nicely enough for us, these points are easy to find. In "Compiled and Solved Problems in Geometry and Trigonometry" one can navigate back and forth from the text of a problem to its solution using bookmarks. The skills and concepts are in the areas of arithmetic, algebra, geometry, and data analysis. You can divide a circle into smaller portions. Exercise: prove that a complete graph with n vertices contains n(n - 1)/2 edges. In the construction, if the parallel does not intersect the circle, there is no solution. CIRCLE GEOMETRY: a guide for teachers. Assumed knowledge: introductory plane geometry involving points and lines, parallel lines and transversals, angle sums of triangles and quadrilaterals, and general angle-chasing. Complex numbers can be applied to solving geometry problems. You can apply equations and algebra (that is, use analytic methods) to circles that are positioned in the x-y coordinate system. Geometry is concerned with the metric, the way things are measured. Two Tangent Circles and a Square: problem with solution.
If we overlay a set of coordinate axes on this circle, with the origin on the centre, we see that the adjacent side of the right triangle is a distance along the x-axis and the opposite side is a distance parallel to the y-axis. A variety of geometry word problems along with step-by-step solutions will help you practise lots of skills in geometry. Geometry Problem 1212: equilateral triangle, equilateral hexagon, concurrent lines. When a line meets a circle, there are three possibilities for the number of solutions: zero, one (a tangent) or two (a secant). In elementary school, many geometric facts are introduced by folding, cutting, or measuring exercises, not by logical deduction. Introduce as few variables as possible into your solution. Then the perimeter of the square is 4a.
However, the examples will be oriented toward applications and so will take some thought. "Geometry" actually means the measurement of the earth, and originally that is exactly what it was, before the Greeks. An angle of 90° forms a right angle (it is the angle found in the corners of a square, and so we will use a square box to denote angles with a measure of 90°). Example: x² + y² + 8x + 6y = 0; completing the square gives (x + 4)² + (y + 3)² = 25, a circle with centre (-4, -3) and radius 5. We use the fact that the tangent to a circle is perpendicular to the radius, together with our algebraic knowledge of simultaneous equations (we can find the intersections by solving the system). In this problem, the circle is described using the diameter, which is 4 inches. In particular, I have aimed to deliver something more than "just another problems book". The perimeter of a triangle is p = l₁ + l₂ + l₃. "Shikaku ni kire" is Japanese for "divide by squares" or "divide by box", indicative of the broad goal of these puzzles. A triangle could possibly intersect a circle at anywhere from zero up to six points, since each of its three sides can meet the circle at most twice.
Looking for an engaging way for students to practise finding the area of circles? A maze works well. Proof: see Problem 2. Construction: copy a line segment and draw a circle. Five real-life application problems involving circles, with solutions. Answers for multiple-choice questions in analytic geometry: parabola, ellipse and hyperbola. To solve any math problem, follow these four steps: (a) understand the problem, (b) devise a plan, (c) carry out the plan, and (d) look back. Before you get started, take this readiness quiz. Prove the ice cream cone theorem: given a circle G and a point P outside the circle, there are two points of tangency from P to G; call them A and B. A radius is drawn on each circle shape. When dealing with geometry problems where lines are tangent to circles, you can use a walk-around approach to solve them. Geometry Problem 1210: circle, tangent line, secant, chord, collinear points.
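In the ice-cream-cone setup, both tangent segments PA and PB have the same length: since the radius meets a tangent at a right angle, Pythagoras gives |PA| = √(|PO|² - r²). A hedged sketch with illustrative names and sample numbers:

```python
import math

def tangent_length(px, py, ox, oy, r):
    """Length of either tangent segment from external point P = (px, py)
    to the circle with centre O = (ox, oy) and radius r."""
    d2 = (px - ox) ** 2 + (py - oy) ** 2
    if d2 <= r * r:
        raise ValueError("point is not strictly outside the circle")
    return math.sqrt(d2 - r * r)

# P = (8, 0) and a circle of radius 5 centred at the origin: |PO| = 8,
# so each tangent segment has length sqrt(64 - 25) = sqrt(39).
print(tangent_length(8, 0, 0, 0, 5))
```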
A chord of a circle is a line that connects two points on the circle; equivalently, a chord is a straight line joining the ends of an arc. Worked example: if x is half the length of AB, r is the radius of the small circle and R the radius of the large circle, then by Pythagoras' theorem r² + x² = R²; with r = 6 and R = 10 this gives 6² + x² = 10², so x = 8. Mensuration formulas and tips cover the concepts of coordinate geometry, lines, triangles, various theorems and areas, and volumes of different geometrical figures. Problem: there are n circles of radius one positioned in a plane such that the area of any triangle formed by the centres of three of these circles is at most A.
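The Pythagoras step in the chord example generalizes: for concentric circles, half the chord of the large circle tangent to the small one is √(R² - r²). A minimal sketch (the function name is ours):

```python
import math

def half_chord(R, r):
    """Half-length of a chord of the large circle (radius R) tangent to the
    concentric small circle (radius r): x = sqrt(R^2 - r^2)."""
    if R <= r:
        raise ValueError("need R > r")
    return math.sqrt(R ** 2 - r ** 2)

# The worked example: r = 6, R = 10 gives x = 8, so AB = 16.
print(half_chord(10, 6))  # -> 8.0
```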
In spherical geometry, a line is defined to be a great circle of the surface of a sphere. Biangles: from now on, the sphere S has centre O and radius 1, and all circles are circles included in the surface S unless otherwise specified. A stimulating collection of unusual problems deals with congruence and parallelism, the Pythagorean theorem, circles, area relationships, Ptolemy and the cyclic quadrilateral, collinearity and concurrency, and many other topics. Although this is a very broad content area, we present only a brief outline of some of the more elementary results of the geometry of the circle. A right triangle has an angle of 30°; what are its other angles? (60° and 90°.) Polya's first step: (a) understand the problem. Example 2: find the centre and radius of the circle x² - 6x + y² - 14y = -54. In Paper 2, Euclidean geometry comprises 35 marks of a total of 150 in Grade 11 and 40 out of 150 in Grade 12. A chord of a circle is a line segment joining any two points on the circle. An invaluable supplement to a basic geometry textbook.
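Example 2 is a completing-the-square exercise: adding 9 and 49 to both sides of x² - 6x + y² - 14y = -54 yields (x - 3)² + (y - 7)² = 4. The same steps work for any circle given in general form, as this sketch shows (the function name is ours):

```python
import math

def centre_radius_from_general(D, E, F):
    """Centre and radius of x^2 + y^2 + D*x + E*y + F = 0.

    Completing the square: (x + D/2)^2 + (y + E/2)^2 = (D/2)^2 + (E/2)^2 - F.
    """
    cx, cy = -D / 2, -E / 2
    r2 = cx ** 2 + cy ** 2 - F
    if r2 <= 0:
        raise ValueError("equation does not describe a real circle")
    return (cx, cy), math.sqrt(r2)

# Example 2: x^2 - 6x + y^2 - 14y = -54  <=>  x^2 + y^2 - 6x - 14y + 54 = 0.
centre, r = centre_radius_from_general(-6, -14, 54)
print(centre, r)  # -> (3.0, 7.0) 2.0
```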
Example 1: 3 ml of a sugar solution was mixed with 1 ml of a 56% sugar solution to make a 65% sugar solution; find the concentration of the original solution. There are circles all around us in the real world. Much of the mathematics in the analytic geometry chapter will be review. Mathematical induction: the natural numbers N = {0, 1, 2, 3, ...} are the set of all non-negative integers. If we draw a radius of the small circle to the point of tangency, it will be at right angles to the chord. Chapter 15 (Circles) reviews the parts of a circle, including radius and diameter, and uses them to find the circumference and area of a circle, as well as the area of a sector and the length of an arc. On the right is a circle with centre (0, 0), radius r and (x, y) any point on the circle, so x² + y² = r². Angle-chasing answers: m∠5 + m∠3 = 90°, and since m∠5 + 50° = 90° we get m∠5 = 40°; also m∠4 = m∠2, and with m∠3 = 50°, m∠4 = 130°. Day 4 review warm-up, Example 1: in the diagram of circle O, a chord is parallel to the diameter and the marked arc measures 30°. Some problems and many references may also be found in [6].
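The mixture example reduces to conservation of sugar: 3p + 1 · 56 = 4 · 65, so p = 68%. A small sketch generalizing that equation (function name and parameters are ours):

```python
def unknown_concentration(v1, v2, c2, c_mix):
    """Concentration (%) of the first solution, given its volume v1, the
    second solution's volume v2 and concentration c2, and the mixture
    concentration c_mix. Sugar balance: v1*c1 + v2*c2 = (v1 + v2)*c_mix."""
    return ((v1 + v2) * c_mix - v2 * c2) / v1

# Example 1: 3 ml of p% solution + 1 ml of 56% -> 4 ml of 65%.
print(unknown_concentration(3, 1, 56, 65))  # -> 68.0
```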
On this page you will find circle-topic geometry questions with detailed solutions for competitive exams. Let ABC be a triangle with circumcircle G. When an angle is drawn in standard position, its reference angle is the positive acute angle measured from the x-axis. Just like with any other kind of plane geometry figure, the perimeter of a triangle is the sum of its outer sides, the triangle's three legs. The book takes the reader from the comfortable world of Euclidean geometry to the gates of "geometry" as the term is defined (in multiple ways) by modern mathematicians, using the solving of routine and nonroutine problems as the vehicle for discovery. Kite Within a Square: problem with solution. In a Shikaku puzzle the broad goal is to divide the grid into rectangles matching the given numbers. It provides a connection between algebra and geometry through graphs of lines and curves.
pdf file _____ Connections. Dodeca - Icosa Net. Reply Delete. 2 π ⋅ r π ⋅ d i a m e t e r. Then, the area of the circle is? Solution: Given A circle = A square + 6. IMO Training 2010 Projective Geometry Alexander Remorov Poles and Polars Given a circle ! with center O and radius r and any point A 6= O. They will go in the 3 ring binder. 6th through 8th Grades. Solid geometry, stereometry Solid geometry is the name for the geometry of three-dimensional Euclidean space. What is its. NCERT Solutions For Class 10 Maths Chapter 8: Introduction To Trignometry. Similarly, BC also divides the circle into two parts, and we will denote the smaller one as Region II. Download CAT 2017 Question Paper with answers and detailed solutions in PDF CAT 2017 Questions from Quantitative Aptitude – Geometry Quantitative Aptitude – Geometry – Circles – Ques: Let ABC be a right-angled isosceles triangle with hypotenuse BC. Some of the worksheets below are Angles in Circles Worksheet in PDF, Skills Practice :Measuring Angles Inside and Outside of Circles, important vocabularies Angles in Circles Worksheet PDF. Apply the midpoint formula, distance formula, properties of lines, and equations of circles to the solution of problems from coordinate geometry. Tangent circles 59 §4. Notebook (Unlimited storage). I think this is a problem. We define a diameter, chord and arc of a circle as follows: Ł The distance across a circle through the centre is called the diameter. 2D representation of 3D technical object, i. Also see [13] for nice problems invloving convex bodies. Circumference of a Circle = r= radius of circle. The perpendicular bisector of a chord passes through the centre of the circle. 
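The circle formulas and the mixture problem above can be checked with a few lines of Python (a sketch with our own illustrative values):

```python
import math

# Circle formulas: C = 2*pi*r = pi*d and A = pi*r^2.
def circumference(r):
    return 2 * math.pi * r

def circle_area(r):
    return math.pi * r ** 2

# Mixture example: 3 ml of an x% solution + 1 ml of a 56% solution gives
# 4 ml at 65%. Sugar balance: 3x + 1*56 = 4*65, so x = (4*65 - 56) / 3.
x = (4 * 65 - 56) / 3   # concentration of the first solution, in percent
```

Running this gives x = 68.0, i.e., the first solution was 68% sugar.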
http://www.quanty.org/documentation/basics/spectra?rev=1476085251&do=diff
====== Spectra ======

Spectra are implemented by calculating the Green's function. We calculate the complex energy dependent quantity:
$$
G(\omega) = \bigg\langle \psi_i \bigg| T^{\dagger} \frac{1}{\omega - H + \imath \Gamma/2} T \bigg| \psi_i \bigg\rangle,
$$
with $T$ and $H$ operators given in second quantization and $\psi_i$ a many-particle wavefunction.

  -- Creating a spectrum from a starting state psi,
  -- a transition operator T,
  -- and a Hamiltonian H
  G = CreateSpectra(H, T, psi)

For photoemission the transition operator $T$ would be an annihilation operator, for absorption the product of a creation and an annihilation operator, and for inverse photoemission a creation operator. In the section on [[documentation:standard_operators:start|standard operators]] we describe several possible transition operators related to real experimental situations.

===== Index =====
  - [[documentation:basics:basis|]]
  - [[documentation:basics:operators|]]
  - [[documentation:basics:wave_functions|]]
  - [[documentation:basics:expectation_values|]]
  - [[documentation:basics:eigen_states|]]
  - Spectra
  - [[documentation:basics:resonant_spectra|]]
  - [[documentation:basics:fluorescence_yield|]]
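Numerically, evaluating this Green's function amounts to solving a linear system at each energy. The following Python sketch is our own toy model, not Quanty code; the Hamiltonian, transition operator, starting state, and broadening Γ are made-up illustrative values. It computes G(ω) on an energy grid and takes −Im G/π as the spectrum:

```python
import numpy as np

# Toy evaluation of G(w) = <psi| T^dagger (w - H + i*Gamma/2)^(-1) T |psi>.
# All quantities below are made-up illustrative values.
H = np.diag([0.0, 1.0, 2.5])            # model Hamiltonian (here already diagonal)
T = np.array([[0.0, 1.0, 0.3],
              [1.0, 0.0, 0.0],
              [0.3, 0.0, 0.0]])          # model transition operator
psi = np.array([1.0, 0.0, 0.0])          # starting state |psi_i>
Gamma = 0.1                              # Lorentzian broadening

def G(omega):
    # Solve (w + i*Gamma/2 - H) x = T psi, then contract with (T psi)^dagger.
    A = (omega + 1j * Gamma / 2) * np.eye(len(psi)) - H
    rhs = T @ psi
    return np.vdot(rhs, np.linalg.solve(A, rhs))

omegas = np.linspace(-1.0, 4.0, 501)
spectrum = np.array([-G(w).imag / np.pi for w in omegas])  # non-negative spectral weight
```

With these values the spectrum shows Lorentzian peaks at the excitation energies that `T|psi>` reaches (here ω = 1.0 and ω = 2.5), each of width Γ.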
http://docs.itascacg.com/flac3d700/common/sel/test3d/Shell/PlasticHinge/PlasticHinge.html
# Plastic Hinge Formation in a Shell Structure Problem Statement Note To view this project in FLAC3D, use the menu command Help ► Examples…. Choose “Structure/Shell/PlasticHinge” and select “PlasticHinge.f3prj” to load. The main data files used are shown at the end of this example. The remaining data files can be found in the project. This example demonstrates a procedure by which FLAC3D can be used to calculate the initiation and subsequent behavior of a plastic hinge line that forms within a shell structure. The double-node method described in Plastic Hinge Formation in a Beam Structure is replicated using shell elements. Double nodes are created along the hinge line, and these nodes are then appropriately linked together. The double nodes allow a discontinuity in the rotation to occur when the limiting plastic moment is reached. For shell elements, there is no analog to the single-node method (using the structure beam property plastic-moment command) that can be used to model plastic hinges in beam elements. The problem to be considered is described in Plastic Hinge Formation in a Beam Structure and shown in Figure 1. The FLAC3D model simulates a beam of 10 m length and 1 m depth (see Figure 2). A cross-diagonal mesh pattern is utilized to ensure symmetric response, and a DKT-CST Hybrid Shell Element is utilized to support the membrane loading that will develop after failure if the problem is run in large-strain mode. The Young’s modulus and Poisson’s ratio are set equal to 200 GPa and 0, respectively. The shell thickness is set equal to 0.133887 m to produce a second moment of inertia, $$I$$, equal to 200 × 10⁻⁶ m⁴. Figure 1: Simple beam with single concentrated load. Figure 2: FLAC3D model for plastic hinge example using shell elements. Two separate structure shell create commands are issued to produce a model containing two separate shell sections: one for the left half of the beam; the other for the right half of the beam.
Figure 3 shows the shell elements with the node positions marked; Figure 4 shows an outline of the shell elements, and marks the location of links. Notice that there is a set of eight nodes that overlap along the beam center line, which are connected by node-to-node links. Figure 3: Shell elements—nodes are shown as spheres. Figure 4: Shell elements—link locations are shown as spheres. We now create appropriate linkages between these nodes with the commands struct node join struct link attach rotation-z=normal-yield ; Change z-rot dof to normal-yield ; Set properties of those springs struct link property rotation-z area=1.0 stiffness=5e9 struct link property rotation-z yield-compression=8.33e3 yield-tension=8.33e3 range position-x 5.0 position-z 0.3 0.7 struct link property rotation-z yield-compression=4.17e3 yield-tension=4.17e3 range position-x 5.0 position-z 0.0 struct link property rotation-z yield-compression=4.17e3 yield-tension=4.17e3 range position-x 5.0 position-z 1.0 The first command creates a node-to-node link at each node that lies in the same location as another. The links are shown in Figure 4 and are rigid in all directions by default. The next command affects all links by setting the three translational directions and the $$x$$- and $$y$$-rotational directions to be rigid, and specifying a normal-yield spring to be inserted in the $$z$$-rotational direction. The final commands set the properties of these normal-yield springs as follows. We set all areas to unity, and we set both the compressive and tensile yield strengths equal to the desired plastic-moment capacity (based on the tributary length associated with each node). The total plastic-moment capacity is 25 kN-m, so we assign 8.33 kN-m to the two center springs and 4.17 kN-m to the two end springs.
Finally, we set the spring stiffness equal to a value that is large enough to make the spring deformation small relative to the shell deformation. We choose a value of 5 × 10⁹, which is approximately the rotational stiffness of the nodes just to the left of the center. Now that the double-nodes have been appropriately linked to one another, simple supports are specified at the beam ends by restricting translation in the $$y$$-direction. A constant vertical velocity is applied to the four target nodes on the right section, and the moment acting at the centroid of an element near the center is monitored during the calculation to determine when the limiting value is reached. We find that the limiting value of moment is reached (see Figure 5). Figure 6 shows that the value at the beam center is 24.89 kN-m, which is within 1% of the specified moment capacity. Figure 7 shows that a discontinuity in the displacement has developed. Figure 5: Moment at centroid of an element near the center versus applied center displacement. Figure 6: Mx contours on the shell. Figure 7: y-displacements on an exaggerated deformation plot of the shell. Data File PlasticHinge.dat model new model large-strain off model title 'Plastic hinge formation (double-node method with shell elements)' ; Create shell elements in two groups struct shell create by-quadrilateral (0,0,0) ( 5,0,0) ( 5,0,1) (0,0,1) ... size (6,3) id=1 element-type=dkt-csth ... cross-diagonal group 'Left' struct shell create by-quadrilateral (5,0,0) (10,0,0) (10,0,1) (5,0,1) ... size (6,3) id=2 element-type=dkt-csth ...
cross-diagonal group 'Right' struct shell property isotropic=(2e11, 0.0) thick=0.133887 ; Create links (default to rigid in all six dof) ; at nodes whos positions coincide struct node join struct link attach rotation-z=normal-yield ; Change z-rot dof to normal-yield ; Set properties of those springs struct link property rotation-z area=1.0 stiffness=5e9 struct link property rotation-z yield-compression=8.33e3 ... yield-tension=8.33e3 range position-x 5.0 ... position-z 0.3 0.7 struct link property rotation-z yield-compression=4.17e3 ... yield-tension=4.17e3 range position-x 5.0 ... position-z 0.0 struct link property rotation-z yield-compression=4.17e3 ... yield-tension=4.17e3 range position-x 5.0 ... position-z 1.0 ; Boundary conditions struct node fix velocity-y rotation-x rotation-y range position-x= 0.0 ; support at left end - roller struct node fix velocity-y rotation-x rotation-y range position-x=10.0 ; support at rt. end - roller struct node fix velocity-z rotation-x rotation-y ; restrict non-beam deformation modes struct node fix velocity-y range position-x 5.0 group 'Right' struct node initialize velocity-y -5e-7 local ... range position-x 5.0 group 'Right' ; Histories struct node history displacement-y position (5.0,0,0.6667) struct shell history resultant-mx surface-x 1,0,0 position (4.861,0,0.5) ; Cycle the model struct damping combined-local model cycle 30000 ; 0.015 total displacement model save 'PlasticHinge'
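The tributary-length bookkeeping used above to split the 25 kN-m plastic-moment capacity over the hinge-line springs can be sketched in a few lines of Python (our own helper, not part of FLAC3D): with the 1 m deep hinge line meshed into 3 segments, each end node collects half a segment's tributary length and each interior node a full segment.

```python
def tributary_moments(total_moment, n_segments):
    """Distribute a total plastic-moment capacity over the n_segments + 1 nodes
    of a hinge line: end nodes get half a segment's share, interior nodes a
    full share, so the shares always sum back to total_moment."""
    per_segment = total_moment / n_segments
    return [per_segment / 2] + [per_segment] * (n_segments - 1) + [per_segment / 2]

# Hinge line of 1 m depth meshed with 3 elements, total capacity 25 kN-m:
caps = tributary_moments(25.0e3, 3)
# caps is approximately [4.17e3, 8.33e3, 8.33e3, 4.17e3] N-m, matching the
# yield-compression/yield-tension values assigned in the data file above.
```

This reproduces the 8.33 kN-m assigned to the two center springs and the 4.17 kN-m assigned to the two end springs.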
https://www.researchgate.net/publication/261480188_Discovering_reporting_and_fixing_performance_bugs
Conference Paper

# Discovering, reporting, and fixing performance bugs

## Abstract

Software performance is critical for how users perceive the quality of software products. Performance bugs - programming errors that cause significant performance degradation - lead to poor user experience and low system throughput. Designing effective techniques to address performance bugs requires a deep understanding of how performance bugs are discovered, reported, and fixed. In this paper, we study how performance bugs are discovered, reported to developers, and fixed by developers, and compare the results with those for non-performance bugs. We study performance and non-performance bugs from three popular code bases: Eclipse JDT, Eclipse SWT, and Mozilla. First, we find little evidence that fixing performance bugs has a higher chance to introduce new functional bugs than fixing non-performance bugs, which implies that developers may not need to be over-concerned about fixing performance bugs. Second, although fixing performance bugs is about as error-prone as fixing non-performance bugs, fixing performance bugs is more difficult than fixing non-performance bugs, indicating that developers need better tool support for fixing performance bugs and testing performance bug patches. Third, unlike many non-performance bugs, a large percentage of performance bugs are discovered through code reasoning, not through users observing the negative effects of the bugs (e.g., performance degradation) or through profiling. The result suggests that techniques to help developers reason about performance, better test oracles, and better profiling techniques are needed for discovering performance bugs.

## Excerpts from citing articles

... However, these studies are not specifically designed for PBs, and thus only capture some partial characteristics of PBs in DL systems.
In contrast, PBs have been widely studied for traditional systems, e.g., desktop or server applications [30,48,62,79], highly configurable systems [24,25], mobile applications [38,40], databasebacked web applications [77,78], and JavaScript systems [59]. However, PBs in DL systems could be different due to the programming paradigm shift from traditional systems to DL systems. ... ... Step 2: PB Post Selection. Instead of directly using performancerelated keywords from the existing studies on PBs in traditional systems (e.g., [30,48,62,79]), we derived a keyword set in the following way to achieve a wide and comprehensive coverage of PB posts. We first randomly sampled 100 posts with a tag of "performance" from the 18,730 posts in Step 1. ... ... A lot of empirical studies have characterized performance bugs from different perspectives (e.g., root causes, discovery, diagnosis, fixing and reporting) for desktop or server applications [30,48,62,79,88], highly configurable systems [24,25], mobile applications [38,40], database-backed web applications [77,78], and JavaScript systems [59]. They shed light on potential directions on performance analysis (e.g., detection, profiling and testing). ... Preprint Full-text available Deep learning (DL) has been increasingly applied to a variety of domains. The programming paradigm shift from traditional systems to DL systems poses unique challenges in engineering DL systems. Performance is one of the challenges, and performance bugs(PBs) in DL systems can cause severe consequences such as excessive resource consumption and financial loss. While bugs in DL systems have been extensively investigated, PBs in DL systems have hardly been explored. To bridge this gap, we present the first comprehensive study to characterize symptoms, root causes, and introducing and exposing stages of PBs in DL systems developed in TensorFLow and Keras, with a total of 238 PBs collected from 225 StackOverflow posts. 
Our findings shed light on the implications for developing high performance DL systems, and detecting and localizing PBs in DL systems. We also build the first benchmark of 56 PBs in DL systems, and assess the capability of existing approaches in tackling them. Moreover, we develop a static checker DeepPerf to detect three types of PBs, and identify 488 new PBs in 130 GitHub projects. 62 and 18 of them have been respectively confirmed and fixed by developers. ... Table 2 lists web client subjects used in the prior research. Nistor et al. [17] study over 600 bugs from three open-source projects. They compare and contrast how performance bugs and non-performance bugs are discovered, reported, and fixed. ... Configurable software systems complicate performance testing. A prior study [17] shows that performance bugs in configurable software systems are more complex and take a longer time to fix. The sheer size of the configuration space makes the quality of software even harder to achieve. ... It is not unusual to see discussions in a bug report that a performance bug was introduced a few versions ago but only surfaced in the bug report recently. Bug Detection Nistor et al. [17] report that most (up to 57%) performance bugs are discovered with code reasoning. Code reasoning involves code understanding. ... For example, if a code change introduces a security vulnerability, security measures to counteract this may be implemented elsewhere (Williams et al. 2018; Mahrous and Malhotra 2018; Ping et al. 2011). If a code change introduces a performance issue, this performance issue may be fixed and improved in a different part of the system (Nistor et al. 2013; Jin et al. 2012), for example by changing configuration parameters. Non-functional bugs can be harder to fix than their functional counterparts.
Nonetheless, it has been shown that non-functional bugs present different characteristics than functional bugs (Nistor et al. 2013). In particular, non-functional requirements describe the quality attributes of a program, as opposed to its functionality (Kotonya and Sommerville 1998). ... ... In either scenario, the SZZ approach may consider the later changes as bug-inducing instead of the original changes. This phenomenon is intuitive since non-functional bugs often take a long time to be discovered and fixed (Nistor et al. 2013). Therefore, considering the most recent code change before the bug reporting date may not be a suitable heuristic for non-functional bugs. ... Article Full-text available Non-functional bugs, e.g., performance bugs and security bugs, bear a heavy cost on both software developers and end-users. For example, IBM estimates the cost of a single data breach to be millions of dollars. Tools to reduce the occurrence, impact, and repair time of non-functional bugs can therefore provide key assistance for software developers racing to fix these issues. Identifying bug-inducing changes is a critical step in software quality assurance. In particular, the SZZ approach is commonly used to identify bug-inducing commits. However, the fixes to non-functional bugs may be scattered and separate from their bug-inducing locations in the source code. The nature of non-functional bugs may therefore make the SZZ approach a sub-optimal approach for identifying bug-inducing changes. Yet, prior studies that leverage or evaluate the SZZ approach do not consider non-functional bugs, leading to potential bias on the results. In this paper, we conduct an empirical study on the results of the SZZ approach when used to identify the inducing changes of the non-functional bugs in the NFBugs dataset. We eliminate a majority of the bug-inducing commits as they are not in the same method or class level. 
We manually examine whether each identified bug-inducing change is indeed the correct bug-inducing change. Our manual study shows that a large portion of non-functional bugs cannot be properly identified by the SZZ approach. By manually identifying the root causes of the falsely detected bug-inducing changes, we uncover root causes for false detection that have not been found by previous studies. We evaluate the identified bug-inducing changes based on three criteria from prior research, i.e., the earliest bug appearance, the future impact of changes, and the realism of bug introduction. We find that prior criteria may be irrelevant for non-functional bugs. Our results may be used to assist in future research on non-functional bugs, and highlight the need to complement SZZ to accommodate the unique characteristics of non-functional bugs.

... Performance failures pose an enormous challenge for developers. Compared to functional failures, they take considerably longer to be discovered [Jin et al., 2012], are harder to reproduce and debug [Zaman et al., 2012; Nistor et al., 2013a], take longer to fix [Zaman et al., 2011; Nistor et al., 2013a; Liu et al., 2014; Mazuera-Rozo et al., 2020], and require more, as well as more experienced, developers to do so [Zaman et al., 2011; Nistor et al., 2013a]. Users are similarly affected; e.g., they are more likely to leave an application if it takes longer to load [Akamai Technologies Inc., 2017; Artz, 2009]. ...

Article Software performance faults have severe consequences for users, developers, and companies. One way to unveil performance faults before they manifest in production is performance testing, which ought to be done on every new version of the software, ideally on every commit. However, performance testing faces multiple challenges that inhibit it from being applied early in the development process, on every new commit, and in an automated fashion. In this dissertation, we investigate three challenges of software microbenchmarks, a performance testing technique on unit granularity which is predominantly used for libraries and frameworks. The studied challenges affect the quality aspects (1) runtime, (2) result variability, and (3) performance change detection of microbenchmark executions. The objective is to understand the extent of these challenges in real-world software and to find solutions to address them. To investigate the challenges’ extent, we perform a series of experiments and analyses. We execute benchmarks in bare-metal as well as multiple cloud environments and conduct a large-scale mining study on benchmark configurations.
The results show that all three challenges are common: (1) benchmark suite runtimes are often longer than 3 hours; (2) result variability can be extensive, in some cases up to 100%; and (3) benchmarks often only reliably detect large performance changes of 60% or more. To address the challenges, we devise targeted solutions as well as adapt well-known techniques from other domains for software microbenchmarks: (1) a solution that dynamically stops benchmark executions based on statistics to reduce runtime while maintaining low result variability; (2) a solution to identify unstable benchmarks that does not require execution, based on statically-computable source code features and machine learning algorithms; (3) traditional test case prioritization (TCP) techniques to execute benchmarks earlier that detect larger performance changes; and (4) specific execution strategies to detect small performance changes reliably even when executed in unreliable cloud environments. We experimentally evaluate the solutions and techniques on real-world benchmarks and find that they effectively deal with the three challenges. (1) Dynamic reconfiguration enables runtime to be drastically reduced by between 48.4% and 86.0% without changing the results of 78.8% to 87.6% of the benchmarks, depending on the project and statistic used. (2) The instability prediction model effectively identifies unstable benchmarks when relying on random forest classifiers, having a prediction performance between 0.79 and 0.90 area under the receiver operating characteristic curve (AUC). (3) TCP applied to benchmarks is effective and efficient, with APFD-P values for the best technique ranging from 0.54 to 0.71 and a computational overhead of 11%.
(4) Batch testing, i.e., executing the benchmarks of two versions on the same instances interleaved and repeated as well as repeated across instances, enables reliable detection of performance changes of 10% or less, even when using unreliable cloud infrastructure as the execution environment. Overall, this dissertation shows that real-world software microbenchmarks are considerably affected by all three challenges (1) runtime, (2) result variability, and (3) performance change detection; however, deliberate planning and execution strategies effectively reduce their impact. ... As a result, the number of research papers analyzing the specific characteristics of performance bugs has grown significantly in the last decade. A performance bug is defined as a programming or configuration error that causes significant performance degradation, leading to undesirable effects like low system throughput, memory bloat, Graphical User Interface (GUI) lagging, or energy drain [1], [2]. Preventing performance bugs, or implementing effective tools to detect and fix them, requires a wide understanding of the nature of these issues in real-world programs. ... Well-tested applications such as Microsoft SQL Server, Apache HTTPD and Mozilla Firefox, among others, are affected by hundreds of performance bugs [2], [23]. A performance bug is a programming error that causes significant performance degradation in a program, leading to slow and/or inefficient software [1], [2]. These bugs can cause GUI lagging, memory bloat or excessive energy consumption, among others, and consequently they may cause a poor user experience and a loss of customers and money to companies. ...
Article Full-text available The detection of performance bugs, like those causing an unexpected execution time, has gained much attention in the last years due to their potential impact in safety-critical and resource-constrained applications. Much effort has been put into trying to understand the nature of performance bugs in different domains as a starting point for the development of effective testing techniques. However, the lack of a widely accepted classification scheme of performance faults and, more importantly, the lack of well-documented and understandable datasets makes it difficult to draw rigorous and verifiable conclusions widely accepted by the community. In this paper, we present TANDEM, a dual contribution related to real-world performance bugs. Firstly, we propose a taxonomy of performance bugs based on a thorough systematic review of the related literature, divided into three main categories: effects, causes and contexts of bugs. Secondly, we provide a complete collection of fully documented real-world performance bugs. Together, these contributions pave the way for the development of stronger and reproducible research results on performance testing. ... Performance problems have been studied for several decades in the literature, and software performance engineering emerged as the discipline focused on fostering the specification of performance-related factors [95,8,94] and reporting experiences related to their management [81,41,78,3]. Performance bugs, i.e., suboptimal implementation choices that create significant performance degradation, have been demonstrated to hurt the satisfaction of end-users in the context of desktop applications [67]. These bugs, which are pervasive and difficult to understand, can cause delays, failures on deployment, redesigns, even a new implementation of the system or abandonment of projects, which lead to significant costs [93,58]. ... Nistor et al.
[67] present an empirical study on three popular code bases (Eclipse JDT, Eclipse SWT, and Mozilla) with the goal of investigating how performance and non-performance bugs are discovered, reported and fixed by developers. Three main findings are outlined: (i) fixing performance bugs may introduce new functional bugs, similarly to fixing non-performance bugs; (ii) fixing performance bugs is more difficult than fixing non-performance bugs; (iii) unlike non-performance bugs, many performance bugs are found by code reasoning and profiling, not through direct observation of the bug's negative effects. ... Olivo et al. [70] also investigate performance bugs, specifically traversal bugs that arise if a program fragment repeatedly iterates over a data structure, such as an array or list, that has not been modified between successive traversals. Such performance bugs are typically easy to fix and often only require the ...

| Study | Summary | Limitation | Performance metrics |
| --- | --- | --- | --- |
| [98] | Analysis of the collaboration among project members to detect and fix performance bugs | Limited to browser (Mozilla Firefox and Google Chrome) performance issues | service latency |
| Nistor et al. [67] | Performance bugs are demonstrated to be more difficult than functional bugs; bugs are found through code reasoning, not by the direct observation of profiling data | | system throughput |
| Liu et al. [57] | Empirical study of performance bugs from smartphone applications | Limited to Android applications | service latency, system throughput, resource utilization |
| Hecht et al. [27] | Empirical study on the impact of code smells on performance metrics | Limited to Android applications | service latency, system throughput |
| Cruz et al. [9] | Empirical study on the impact of performance best practices on energy consumption | Limited to Android applications | energy consumption |
| Olivo et al. [70] | Static detection of performance bugs in collections of redundant traversals | Limited to data structures wrongly used | compression ratio |
| Jovic et al. [36] | Look for causes of long-latency performance bugs | Limited to Java applications | service latency |
| Killian et al. [39] | Detection of performance bugs in distributed systems | Network delays are simulated and can hide some software-specific bugs | communication and bandwidth |

... Article Full-text available Recent research showed that mobile apps represent nowadays 75% of the whole usage of mobile devices. This means that the mobile user experience, while tied to many factors (e.g., hardware device, connection speed, etc.), strongly depends on the quality of the apps being used. With “quality” here we do not simply refer to the features offered by the app, but also to its non-functional characteristics, such as security, reliability, and performance. The latter is particularly important considering the limited hardware resources (e.g., memory) mobile apps can exploit. In this paper, we present the largest study to date investigating performance bugs in mobile apps. In particular, we (i) define a taxonomy of the types of performance bugs affecting Android and iOS apps; and (ii) study the survivability of performance bugs (i.e., the number of days between the bug introduction and its fixing). Our findings aim to help researchers and app developers in building performance-bug detection tools and focusing their verification and validation activities on the most frequent types of performance bugs. ... Compared to functional faults, performance bugs are significantly harder to detect and require more time and effort to be fixed [3]. This is partly due to the lack of test oracles, that is, mechanisms to decide whether the performance of the program with a given input is acceptable, i.e., the oracle problem [7,8]. ... For instance, Nistor et al.
[3] analyzed 210 performance bugs from three mature open-source projects and concluded that "better oracles are needed for discovering performance bugs". In contrast to functional bugs, performance bugs do not usually produce wrong results or crashes in the program under test, and therefore they cannot be detected by simply inspecting the program output. ... ... Performance bugs are programming errors that can cause significant performance degradation, like excessive memory consumption [1] or energy leaks [2,3]. Performance bugs affect key non-functional properties of programs such as execution time or memory consumption. ... Article Performance bugs are known to be a major threat to the success of software products. Performance tests aim to detect performance bugs by executing the program through test cases and checking whether it exhibits a noticeable performance degradation. The principles of mutation testing, a well-established testing technique for the assessment of test suites through the injection of artificial faults, could be exploited to evaluate and improve the detection power of performance tests. However, the application of mutation testing to assess performance tests, henceforth called performance mutation testing (PMT), is a novel research topic with numerous open challenges. In previous papers, we identified some key challenges related to PMT. In this work, we go a step further and explore the feasibility of applying PMT at the source-code level in general-purpose languages. To do so, we revisit concepts associated with classical mutation testing and design seven novel mutation operators to model known bug-inducing patterns. As a proof of concept, we applied traditional mutation operators as well as performance mutation operators to open-source C++ programs. The results reveal the potential of the new performance mutants to help assess and enhance performance tests when compared to traditional mutants.
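The core idea of a performance mutant can be sketched in a few lines (an illustrative example, not one of the seven operators proposed in the paper): a mutation that preserves functional behavior but silently degrades performance, so that a functional oracle cannot kill it while a timing-based oracle can.

```python
import timeit

def contains(values, target):
    """Original: linear scan with early exit."""
    for v in values:
        if v == target:
            return True
    return False

def contains_mutant(values, target):
    """Hypothetical performance mutant: the early exit is removed, so the
    function is functionally equivalent but always scans the whole list."""
    found = False
    for v in values:
        if v == target:
            found = True  # keeps scanning instead of returning
    return found

data = list(range(50_000))

# A functional oracle cannot tell the two versions apart...
assert contains(data, 5) == contains_mutant(data, 5) is True

# ...but a timing-based performance oracle can: on an early-hit input
# the mutant does vastly more work than the original.
t_orig = timeit.timeit(lambda: contains(data, 5), number=100)
t_mut = timeit.timeit(lambda: contains_mutant(data, 5), number=100)
print(t_mut > 2 * t_orig)  # the mutant is "killed" when this is True
```

A performance test with only functional assertions would leave this mutant alive, which is exactly the gap PMT is meant to expose.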
A review of live mutants in these programs suggests that they can induce the design of special test inputs. In addition to these promising results, our work brings a whole new set of challenges related to PMT, which will hopefully serve as a starting point for new contributions in the area. ... A performance bug [22] is defined as a "programming error that causes significant performance degradation." The performance degradation includes poor user experience, laggy application responsiveness, lower system throughput, and wasted computational resources [18]. ... ... The authors of [22] showed that a performance bug needs more time to be fixed than a non-performance bug. ... ... As we confirmed that "vulnerability" and "unauthorized access" attracted relatively high attention from the developers, security bugs are also considered high-impact by researchers and have been well studied [6,12,16,17,19,29,35]. "Lower performance" in the "Effect" category is also well studied [20][21][22][35] as performance bugs. However, to the best of our knowledge, there is no study on "data loss" in the "Data" category, which is of relatively high concern to FLOSS developers (5%). ... Chapter Full-text available In recent years, many researchers in the SE community have been devoting considerable effort to providing FLOSS developers with means to quickly find and fix various kinds of bugs in FLOSS products, such as security and performance bugs. However, it is not exactly clear which bugs FLOSS developers think should be removed preferentially. Without a full understanding of FLOSS developers' perceptions of bug finding and fixing, researchers' efforts might remain far removed from FLOSS developers' needs.
In this study, we interview 322 notable GitHub developers about high-impact bugs to understand FLOSS developers' needs for bug finding and fixing, and we manually inspect and classify developers' answers (bugs) by symptom and root cause. As a result, we show that security and breakage bugs are highly crucial for FLOSS developers. We also identify what kinds of high-impact bugs should be newly studied by the SE community to help FLOSS developers. ... Figure 5 shows the project-wise distribution of C++ script smells and settings smells. Based on the analysis, we identified that for each project the median number of CPP scripts is 1. 32 contains more than 11 performance smells, which shows the necessity of performance-bottleneck detection tools such as UEPerf-Analyzer to improve the performance of XR applications. Among the analyzed projects, the sandisk/GabrielPaliari project has 60 settings smells, the highest among the analyzed projects. ... ... The study pointed out that performance issues are difficult to reproduce and also require more discussion to fix. Nistor et al. [32] also performed a similar study on performance and non-performance bugs from three popular codebases: Eclipse JDT, Eclipse SWT, and Mozilla. The work concluded that fixing performance bugs is more challenging than fixing non-performance bugs. ... Preprint Extended Reality (XR) includes Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR). XR is an emerging technology that simulates a realistic environment for users. XR techniques have provided revolutionary user experiences in various application scenarios (e.g., training, education, product/architecture design, gaming, remote conference/tour, etc.).
Due to the high computational cost of rendering real-time animation on limited-resource devices and the constant interaction with user activity, XR applications often face performance bottlenecks, and these bottlenecks have a negative impact on the user experience of XR software. Thus, performance optimization plays an essential role in many industry-standard XR applications. Even though identifying performance bottlenecks in traditional software (e.g., desktop applications) is a widely explored topic, those approaches cannot be directly applied to XR software due to the different nature of XR applications. Moreover, XR applications developed in different frameworks, such as Unity and Unreal Engine, show different performance-bottleneck patterns, and thus bottleneck patterns of Unity projects cannot be applied to Unreal Engine (UE)-based XR projects. To fill the knowledge gap for XR performance optimization of Unreal Engine-based XR projects, we present the first empirical study of performance optimizations from seven UE XR projects, 78 UE XR discussion issues, and three sources of UE documentation. Our analysis identified 14 types of performance bugs, including 12 types related to UE settings issues and two types of CPP source-code-related issues. To further assist developers in detecting performance bugs based on the identified bug patterns, we also developed a static analyzer, UEPerfAnalyzer, that can detect performance bugs in both configuration files and source code. ... The number of bug reports in this study (310) is reasonable, as it is large enough to make statistically significant claims, and small enough to allow for a reasonably thorough analysis of each bug report. Indeed, there is also precedent from prior research, including a study conducted by the present author, in which 317 bug reports were analyzed (Ocariza et al. 2013), and those conducted by (Nistor et al.
2013) and (Selakovic and Pradel 2016), each of which analyzed fewer than 300 performance-related bug reports. ... ... Lastly, (Nistor et al. 2013) looked at performance bugs from different code bases, and analyzed how they are detected, reported, and fixed, in comparison to non-performance bugs. For instance, the authors found that performance bugs are generally more difficult to fix than non-performance bugs, and concluded that better tool support is needed for the former. ... Article Full-text available Performance regressions can have a drastic impact on the usability of a software application. The crucial task of localizing such regressions can be achieved using bisection, which attempts to find the bug-introducing commit using binary search. This approach is used extensively by many development teams, but it is an inherently heuristic approach when applied to performance regressions, and therefore does not have correctness guarantees. Unfortunately, bisection is also time-consuming, which implies the need to assess its effectiveness prior to running it. To this end, the goal of this study is to analyze the effectiveness of bisection for performance regressions. This goal is achieved by first formulating a metric that quantifies the probability of a successful bisection, and extracting a list of input parameters – the contributing properties – that potentially impact its value; a sensitivity analysis is then conducted on these properties to understand the extent of their impact. Furthermore, an empirical study of 310 bug reports describing performance regressions in 17 real-world applications is conducted, to better understand what these contributing properties look like in practice. The results show that while bisection can be highly effective in localizing real-world performance regressions, this effectiveness is sensitive to the contributing properties, especially the choice of baseline and the distributions at each commit.
The results also reveal that most bug reports do not provide sufficient information to help developers properly choose values and metrics that can maximize the effectiveness, which implies the need for measures to fill this information gap. ... Developers often spend a substantial amount of time diagnosing a configurable software system to localize and fix a performance bug, or to determine that the system was misconfigured [8,11,26,30,32,33,55,58,59,86]. This struggle is quite common when maintaining configurable software systems. ... ... Our goal is to support developers in the process of debugging the performance of configurable software systems, in particular when developers do not even know which options or interactions in their current configuration cause an unexpected performance behavior. When performance issues occur in software systems, developers need to identify relevant information to debug the unexpected performance behavior [8,11,27,55]. For this task, in addition to using off-the-shelf profilers [15,53,74], some researchers suggest using more targeted profiling techniques [10,12,13,21,84] and visualizations [2,6,12,21,62,70] to identify and analyze the locations of performance bottlenecks. ... Preprint Full-text available Determining whether a configurable software system has a performance bug or was misconfigured is often challenging. While there are numerous debugging techniques that can support developers in this task, there is limited empirical evidence of how useful these techniques are in addressing the actual needs developers have when debugging the performance of configurable software systems; most techniques are evaluated in terms of technical accuracy rather than usability. In this paper, we take a human-centered approach to identify, design, implement, and evaluate a solution to support developers in the process of debugging the performance of configurable software systems.
We first conduct an exploratory study with 19 developers to identify the information needs that developers have during this process. Subsequently, we design and implement a tailored tool, adapting techniques from prior work, to support those needs. Two user studies, with a total of 20 developers, validate and confirm that the information we provide helps developers debug the performance of configurable software systems. ... Compared to the number of functional bugs, the number of performance bugs in software projects is typically relatively small (Ding et al. 2020; Radu and Nadi 2019; Jin et al. 2012; Nistor et al. 2013). Therefore, the lack of data for building JIT bug prediction models for performance bugs may become a common challenge in practice. ... ... Our paper analyzes the performance bugs in Cassandra and Hadoop, the SZZ approach's ability to determine the bug-inducing changes, and the impact of these changes on predictive models. Nistor et al. (2013) studied software performance, since performance is critical to how users perceive the quality of software products. Performance bugs lead to poor user experience and low system throughput (Molyneaux 2009; Bryant and O'Hallaron 2015). ... Article Full-text available Performance bugs impose a heavy cost on both software developers and end-users. Tools that reduce the occurrence, impact, and repair time of performance bugs can therefore provide key assistance for software developers racing to fix these bugs. Classification models that focus on identifying defect-prone commits, an approach referred to as Just-In-Time (JIT) quality assurance, are known to be useful in allowing developers to review risky commits. These commits can be reviewed while they are still fresh in developers' minds, reducing the cost of developing high-quality software. JIT models, however, leverage the SZZ approach to identify whether or not a change is bug-inducing.
The fixes to performance bugs may be scattered across the source code, separated from their bug-inducing locations. The nature of performance bugs may therefore make SZZ a sub-optimal approach for identifying their bug-inducing commits. Yet, prior studies that leverage or evaluate the SZZ approach do not distinguish performance bugs from other bugs, leading to potential bias in the results. In this paper, we conduct an empirical study on JIT defect prediction for performance bugs. We concentrate on SZZ's ability to identify the bug-inducing commits of performance bugs in two open-source projects, Cassandra and Hadoop. We verify whether the bug-inducing commits found by SZZ are truly bug-inducing commits by manually examining the identified commits. Our manual examination includes cross-referencing fix commits and JIRA bug reports. We evaluate the performance of JIT models by using them to identify bug-inducing code commits for performance-related bugs. Our findings show that JIT defect prediction classifies non-performance bug-inducing commits better than performance bug-inducing commits, i.e., the SZZ approach does introduce errors when identifying bug-inducing commits. However, we find that manually correcting these errors in the training data only slightly improves the models. In the absence of a large number of correctly labelled performance bug-inducing commits, our findings show that combining all available training data (i.e., truly performance bug-inducing commits, non-performance bug-inducing commits, and non-bug-inducing commits) yields the best classification results. ... This piece of code looks innocent. However, there is an outer loop in function my_xml_parse(), which parses the input string str into XML_NODEs (lines 18–32). The outer loop keeps calling xml_parent() using the next sibling of the previous XML_NODE, which has O(N²) complexity in the number of children of a parent XML_NODE. ... ...
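The quadratic pattern described above can be reduced to a minimal sketch (hypothetical code, not the actual my_xml_parse() implementation): repeatedly re-scanning an unmodified structure to find each node's parent gives O(N²) total work, while building a parent index once makes it linear.

```python
# Hypothetical sketch of the O(N^2) redundant-traversal pattern: the outer
# loop re-traverses a data structure that is never modified between passes.

def find_parent_slow(nodes, child):
    """Re-scans the whole node list on every call: O(N) per lookup."""
    for node in nodes:
        if child in node["children"]:
            return node
    return None

def parents_quadratic(nodes, children):
    # N lookups x O(N) scan each = O(N^2) total work.
    return [find_parent_slow(nodes, c) for c in children]

def parents_linear(nodes, children):
    # The structure is unchanged between traversals, so one pass can
    # build a child -> parent index, making the total work O(N).
    index = {c: node for node in nodes for c in node["children"]}
    return [index.get(c) for c in children]

nodes = [{"id": i, "children": [f"c{i}a", f"c{i}b"]} for i in range(1000)]
children = [f"c{i}a" for i in range(1000)]

# Both versions compute the same result; only the complexity differs.
assert parents_quadratic(nodes, children) == parents_linear(nodes, children)
```

As the cited work notes, such bugs are often easy to fix once spotted: the repair is typically just hoisting or caching the repeated traversal.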
Combating performance bugs depends on a good understanding of them. Many empirical studies have been conducted to understand different types of performance bugs [1,2,3,20,21,22,23,24]. They provide important findings, which deepen researchers' understanding and guide the technical design of approaches to fight performance bugs from different aspects. ... Article Full-text available Complexity problems are a common type of performance issue, caused by algorithmic inefficiency. Algorithmic profiling aims to automatically attribute execution complexity to an executed code construct. It can identify code constructs with superlinear complexity to facilitate performance optimization and debugging. However, existing algorithmic profiling techniques suffer from several severe limitations, missing the opportunity to be deployed in production environments and failing to effectively pinpoint root causes of performance failures caused by complexity problems. In this paper, we design a tool, ComAir, which can effectively conduct algorithmic profiling in a production environment. We propose several novel instrumentation methods to significantly lower the runtime overhead and enable production-run usage. We also design an effective ranking mechanism to help developers identify the root causes of performance failures due to complexity problems. Our experimental results show that ComAir can effectively identify root causes and generate accurate profiling results in a production environment, while incurring negligible runtime overhead. ... We use a simple heuristic and select issues that contain the keyword "leak" in the issue title or issue description. Keyword search is a popular method used by previous empirical studies [57,92,147] to filter the issues of interest from the others. It is worth mentioning that we also investigated other leak-related keywords (unreleased, out-of-memory, OOM, closed, and others). ...
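The keyword-based filtering used in these studies amounts to a simple substring match over issue titles and descriptions; a minimal sketch with made-up issue records might look like:

```python
# Minimal sketch of keyword-based issue filtering (the issue records and
# keyword list are invented for illustration).
KEYWORDS = ("leak", "unreleased", "out-of-memory", "oom")

def matches(issue, keywords=KEYWORDS):
    """True if any keyword occurs in the issue title or description."""
    text = (issue["title"] + " " + issue["description"]).lower()
    return any(kw in text for kw in keywords)

issues = [
    {"id": 1, "title": "Socket leak on reconnect", "description": "..."},
    {"id": 2, "title": "NPE in parser", "description": "crash on empty input"},
    {"id": 3, "title": "Job fails", "description": "worker dies with Out-of-Memory"},
]

leak_related = [i["id"] for i in issues if matches(i)]
print(leak_related)  # [1, 3]
```

Note that substring matching is a heuristic with false positives (e.g., "oom" also matches "zoom"), which is one reason the cited studies pair it with manual inspection of the selected issues.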
To assess the time to resolve (TTR) of an issue report, we adopted the methodology used in previous studies [57,92,110]. We collect two timestamps from each issue report: the time it was created (recorded in the issue tracker) and the time it was resolved (labeled as resolved). ... Thesis Modern software systems evolve steadily. Software developers change the software codebase every day to add new features, improve performance, or fix bugs. Despite extensive testing and code inspection processes before releasing a new software version, the chance of introducing new bugs is still high. Code that worked yesterday may not work today, or it can show degraded performance, causing a software regression. The laws of software evolution state that complexity increases as software evolves. Such increasing complexity makes software maintenance harder and more costly. In a typical software organization, the cost of debugging, testing, and verification can easily range from 50% to 75% of the total development costs. Given that human resources are the main cost factor in software maintenance and that the software codebase evolves continuously, this dissertation tries to answer the following question: How can we help developers localize software defects more effectively during software development? We answer this question in three respects. First, we propose an approach to localize failure-inducing changes for crashing bugs. Assume we are given the source code of a buggy version, a failing test, the stack trace of the crash site, and a previous correct version of the application. We leverage program analysis to contrast the behavior of the two software versions under the failing test. The difference set consists of the code statements which contribute to the failure site with high probability. Second, we extend the version comparison technique to detect leak-inducing defects caused by software changes.
Assume we are given two versions of a software codebase (a previous non-leaky version and the current leaky version) and the existing test suite of the application. First, we compare the memory footprints of the code locations between the two versions. Then, we use a confidence score to rank the suspicious code statements, i.e., those statements which may be the root causes of memory leaks. The higher the score, the more likely the code statement is a potential leak. Third, our review of the related work on debugging and fault localization reveals that there is no empirical study which characterizes the properties of leak-inducing defects and their repairs. Understanding the characteristics of real defects caused by resource and memory leaks can help both researchers and practitioners improve current techniques for leak detection and repair. To fill this gap, we conduct an empirical study on 491 reported resource and memory leak defects from 15 large Java applications. We use our findings to draw implications for leak avoidance, detection, localization, and repair. ... We use a simple heuristic and select issues that contain the keyword "leak" in the issue title or issue description. Keyword search is a well-known method used by previous empirical studies (Jin et al. 2012a; Zhong and Su 2015; Nistor et al. 2013) to filter the issues of interest from the others. It is worth mentioning that we also investigated other leak-related keywords (unreleased, out-of-memory, OOM, closed, etc.). ... ... The higher the entropy, the more complex the repair patch. To assess the time to resolve (TTR) of an issue report, we adopted the methodology used in previous studies (Song and Lu 2014; Nistor et al. 2013; Jin et al. 2012b). We collect two timestamps from each issue report: the time it was created (recorded in the issue tracker) and the time it was resolved (labeled as resolved). ...
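The TTR methodology described above, i.e., subtracting the creation timestamp from the resolution timestamp, can be sketched as follows (the field names and timestamps are assumed for illustration):

```python
from datetime import datetime

def time_to_resolve(issue):
    """TTR = resolution timestamp minus creation timestamp."""
    created = datetime.fromisoformat(issue["created"])
    resolved = datetime.fromisoformat(issue["resolved"])
    return resolved - created

# Hypothetical issue record with the two tracker timestamps.
issue = {"created": "2019-03-01T10:00:00", "resolved": "2019-03-08T16:30:00"}
ttr = time_to_resolve(issue)
print(ttr.days)  # 7
```

Aggregating such deltas across issue populations (e.g., leak vs. non-leak, performance vs. non-performance) is what lets these studies compare how long different classes of bugs take to fix.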
Article Full-text available Despite huge software engineering efforts and programming language support, resource and memory leaks are still a troublesome issue, even in memory-managed languages such as Java. Understanding the properties of leak-inducing defects, how the leaks manifest, and how they are repaired is an essential prerequisite for designing better approaches for avoidance, diagnosis, and repair of leak-related bugs. We conduct a detailed empirical study on 491 issues from 15 large open-source Java projects. The study proposes taxonomies for the leak types, for the defects causing them, and for the repair actions. We investigate, under several aspects, the distributions within each taxonomy and the relationships between them. We find that manual code inspection and manual runtime detection are still the main methods for leak detection. We find that most of the errors manifest on error-free execution paths, and that developers repair leak defects in a shorter time than non-leak defects. We also identify 13 recurring code transformations in the repair patches. Based on our findings, we draw a variety of implications on how developers can avoid, detect, isolate and repair leak-related bugs. ... Many empirical studies [2], [5], [17] have been conducted on performance bugs, but little research focuses on synchronization performance bugs in cloud distributed systems. We collect 26 performance issues in distributed systems and analyze their root causes, fix strategies, and time complexity in order to better understand these synchronization performance bugs. ... ... They proposed different methods to study the reported bugs.
Some [2] focus on the lifecycle of a performance bug, i.e., what the root cause is, how the bug is introduced, how it is exposed, and how it is fixed, and find that performance problems take a long time to be diagnosed and that the help from profilers is very limited; some [16], [17] look at how performance problems are noticed and reported by end users; and some [5] compare the qualitative differences between performance bugs and non-performance bugs across impact, fix, and fix validation. Besides, some research has been done on specific code structures, like loops. ... Article Full-text available In today's information society, the Internet of Things (IoT) plays an increasingly important role in our daily lives. With such a huge number of deployed IoT devices, CPS calls for powerful distributed infrastructures to supply big-data computing, intelligence, and storage services. With increasingly complex distributed software infrastructures, new intricate bugs continue to manifest, causing huge economic losses. Synchronization performance problems, meaning that improper synchronization may degrade performance and even lead to service exceptions, heavily influence the entire distributed cluster, imperiling the reliability of the system. As one kind of performance problem, synchronization performance problems are acknowledged to be difficult to diagnose and fix. We collect 26 performance issues from 3 real-world distributed systems: HDFS, Hadoop MapReduce, and HBase, and analyze their root causes, fix strategies, and algorithmic complexity in order to better understand these synchronization performance bugs. Then we implement a static detection tool comprising a critical-section identifier, a loop identifier, an inner-loop identifier, an expensive-loop identifier, and a pruning component. After that, we evaluate our detection tool on these three distributed systems with sampled bugs. In the evaluation, our detection tool accurately finds all the target bugs.
Besides, it points out more new potential performance problems than previous works. With its strictly limited runtime overhead, our detection tool proves to be highly efficient. ... There are a few empirical studies on performance bugs [8], [26]–[29], their root causes [8], [26], [30], their fixing strategies [8], [26], [27], their impact or relevance [8], [29], and both static- and dynamic-analysis-based detection approaches [31]–[34]. Researchers have also suggested various ways of optimizing data access to improve the performance of database-backed web applications using caching and prefetching techniques. ... Preprint Data-intensive systems handle variable, high-volume, and high-velocity data generated by humans and digital devices. Like traditional software, data-intensive systems are prone to technical debt introduced to cope with the pressure of time and resource constraints on developers. Data access is a critical component of data-intensive systems, as it determines the overall performance and functionality of such systems. While data-access technical debt is getting attention from the research community, technical debt affecting performance is not well investigated. Objective: identify, categorize, and validate data-access performance issues in the context of NoSQL-based and polyglot-persistence data-intensive systems using a qualitative study. Method: we collect issues from NoSQL-based and polyglot-persistence open-source data-intensive systems, identify data-access performance issues using inductive coding, and build a taxonomy of the root causes. Then, we validate the perceived relevance of the newly identified performance issues using a developer survey. ... Misconfigurations are typically caused by interactions between software and hardware, resulting in non-functional faults: degradations in non-functional system properties such as latency and energy consumption.
These non-functional faults, unlike regular software bugs, do not cause the system to crash or exhibit any obvious misbehavior [76,85,99]. Instead, misconfigured systems remain operational but degrade in performance [16,71,75,86]. ... Preprint Full-text available Modern computer systems are highly configurable, with a variability space sometimes larger than the number of atoms in the universe. Understanding and reasoning about the performance behavior of highly configurable systems is challenging due to this vast variability space. State-of-the-art methods for performance modeling and analysis rely on predictive machine-learning models; therefore, they (i) become unreliable in unseen environments (e.g., different hardware, workloads), and (ii) produce incorrect explanations. To this end, we propose a new method, called Unicorn, which (a) captures intricate interactions between configuration options across the software-hardware stack and (b) describes how such interactions impact performance variations via causal inference. We evaluated Unicorn on six highly configurable systems, including three on-device machine learning systems, a video encoder, a database management system, and a data analytics pipeline. The experimental results indicate that Unicorn outperforms state-of-the-art performance optimization and debugging methods. Furthermore, unlike the existing methods, the learned causal performance models reliably predict performance in new environments. ... Other tools perform security assessment, automated test-case generation, and detection of non-functional issues such as energy consumption [270], [271]. While fixing non-functional performance bugs, developers need to consider the threat of introducing functional bugs [272] and of hindering code maintainability [273]. In this context, Linares et al. [274] suggested that developers rarely implement micro-optimizations (e.g., changes at the statement level). ...
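The option-level performance reasoning discussed above, i.e., how individual configuration options and their interactions shift a performance metric, can be caricatured with a tiny synthetic system. The options, timings, and the naive averaging estimator below are all invented for illustration; tools like Unicorn rely on causal inference rather than this kind of marginal averaging.

```python
from itertools import product

# Synthetic stand-in for benchmarking a real configurable system
# (invented option names and timings, in seconds).
def measure(cache, compress, encrypt):
    time = 10.0
    time -= 4.0 if cache else 0.0            # caching saves time
    time += 3.0 if compress else 0.0         # compression costs CPU
    time += 1.0 if encrypt else 0.0
    time += 2.0 if (compress and encrypt) else 0.0  # interaction term
    return time

options = ["cache", "compress", "encrypt"]
configs = list(product([False, True], repeat=len(options)))
perf = {c: measure(*c) for c in configs}

def influence(i):
    """Average change in measured time when option i is switched on,
    over all settings of the remaining options (naive estimate)."""
    diffs = []
    for c in configs:
        if c[i]:
            off = tuple(False if j == i else v for j, v in enumerate(c))
            diffs.append(perf[c] - perf[off])
    return sum(diffs) / len(diffs)

for i, name in enumerate(options):
    print(f"{name}: {influence(i):+.1f}s")
```

Note how the compress/encrypt interaction inflates both options' marginal estimates above their individual terms; disentangling such interactions is exactly what the more sophisticated approaches are designed for.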
Article Nowadays there is a mobile application for almost everything a user may think of, ranging from paying bills and gathering information to playing games and watching movies. In order to ensure user satisfaction and the success of applications, it is important to provide highly performant applications. This is particularly important for resource-constrained systems such as mobile devices. Thereby, non-functional performance characteristics, such as energy and memory consumption, play an important role in user satisfaction. This paper provides a comprehensive survey of non-functional performance optimization for Android applications. We collected 155 unique publications, published between 2008 and 2020, that focus on the optimization of the non-functional performance of mobile applications. We target our search at four performance characteristics, in particular: responsiveness, launch time, memory consumption, and energy consumption. For each performance characteristic, we categorize optimization approaches based on the method used in the corresponding publications. Furthermore, we identify research gaps in the literature for future work. ... Configuring software systems is often challenging. In practice, many users run systems with configurations that are inefficient in terms of performance and, often directly correlated, energy consumption [22,23,33,54]. While users can adjust configuration options to trade off performance against the system's functionality, this configuration task can be overwhelming; many systems, such as databases, web servers, and video encoders, have numerous configuration options that may interact, possibly producing unexpected and undesired behavior. ... Preprint Full-text available Performance-influence models can help stakeholders understand how and where configuration options and their interactions influence the performance of a system. With this understanding, stakeholders can debug performance behavior and make deliberate configuration decisions.
Current black-box techniques to build such models combine various sampling and learning strategies, resulting in tradeoffs between measurement effort, accuracy, and interpretability. We present Comprex, a white-box approach to build performance-influence models for configurable systems, combining insights of local measurements, dynamic taint analysis to track options in the implementation, compositionality, and compression of the configuration space, without relying on machine learning to extrapolate incomplete samples. Our evaluation on four widely used, open-source projects demonstrates that Comprex builds similarly accurate performance-influence models to the most accurate and expensive black-box approach, but at a reduced cost and with additional benefits from interpretable and local models. ... Performance bugs have also been studied in software systems, where they are detected by users or through code reasoning [43]. A machine learning approach has been developed for evaluating software performance degradation due to code changes [4]. ... Preprint Processor design validation and debugging is a difficult and complex task that consumes the lion's share of the design process. Design bugs that affect processor performance rather than its functionality are especially difficult to catch, particularly in new microarchitectures. This is because, unlike functional bugs, the correct processor performance of new microarchitectures on complex, long-running benchmarks is typically not deterministically known. Thus, when performance benchmarking new microarchitectures, performance teams may assume that the design is correct when the performance of the new microarchitecture exceeds that of the previous generation, despite significant performance regressions existing in the design. In this work, we present a two-stage, machine learning-based methodology that is able to detect the existence of performance bugs in microprocessors.
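The core intuition above, that a new microarchitecture can beat the previous generation on average while individual benchmarks regress, can be sketched as a simple per-benchmark screen. This is a simplified heuristic, not the paper's two-stage ML methodology, and all IPC figures are hypothetical.

```python
def screen_for_perf_bugs(ipc_old, ipc_new, expected_uplift=1.10, slack=0.05):
    """Per-benchmark IPC ratio new/old; flag benchmarks that fall more than
    `slack` short of the expected generational uplift, even when the
    design-wide average looks healthy."""
    flagged = {}
    for bench, old in ipc_old.items():
        ratio = ipc_new[bench] / old
        if ratio < expected_uplift - slack:
            flagged[bench] = round(ratio, 3)
    return flagged

# hypothetical IPC numbers: the average improves, but 'mcf' quietly regresses
ipc_old = {"gcc": 1.20, "mcf": 0.80, "x264": 2.10}
ipc_new = {"gcc": 1.38, "mcf": 0.78, "x264": 2.45}
print(screen_for_perf_bugs(ipc_old, ipc_new))  # {'mcf': 0.975}
```

A screen like this only localizes suspicion to a benchmark; attributing the regression to a specific design bug is the hard part that motivates the learned models in the paper.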
Our results show that our best technique detects 91.5% of microprocessor core performance bugs whose average IPC impact across the studied applications is greater than 1% versus a bug-free design, with zero false positives. When evaluated on memory system bugs, our technique achieves 100% detection with zero false positives. Moreover, the detection is automatic, requiring very little performance engineer time. ... Incorrect configuration (misconfiguration) elicits unexpected interactions between software and hardware, resulting in non-functional faults, i.e., faults in non-functional system properties such as latency, energy consumption, and/or heat dissipation. These non-functional faults, unlike regular software bugs, do not cause the system to crash or exhibit obvious misbehavior [70,78,88]. Instead, misconfigured systems remain operational while being compromised, resulting in severe performance degradation in latency, energy consumption, and/or heat dissipation [16,66,69,80]. ... Preprint Full-text available Modern computing platforms are highly configurable, with thousands of interacting configuration options. However, configuring these systems is challenging. Erroneous configurations can cause unexpected non-functional faults. This paper proposes CADET (short for Causal Debugging Toolkit), which enables users to identify, explain, and fix the root cause of non-functional faults early and in a principled fashion. CADET builds a causal model by observing the performance of the system under different configurations. Then, it uses causal path extraction followed by counterfactual reasoning over the causal model to: (a) identify the root causes of non-functional faults, (b) estimate the effects of various configurable parameters on the performance objective(s), and (c) prescribe candidate repairs to the relevant configuration options to fix the non-functional fault. We evaluated CADET on five highly configurable systems deployed on three NVIDIA Jetson systems-on-chip.
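The intervene-and-rank idea behind counterfactual repair prescription can be sketched minimally: given a performance model, toggle one option at a time (an intervention do(option = 1 - value)) and rank candidate single-option repairs by predicted improvement. This deliberately skips CADET's causal-graph construction; the latency model and the option names (swap, big_cores, gpu_growth) are invented for illustration.

```python
def rank_single_option_repairs(model, config):
    """Estimate each binary option's effect by intervening on it alone and
    rank candidate repairs by predicted latency reduction."""
    baseline = model(config)
    repairs = []
    for opt, val in config.items():
        patched = dict(config, **{opt: 1 - val})   # do(opt = 1 - val)
        delta = baseline - model(patched)           # positive = latency reduced
        repairs.append((opt, round(delta, 3)))
    return sorted(repairs, key=lambda r: -r[1])

# hypothetical latency model: swap hurts, big_cores helps, and there is
# an interaction between swap and gpu_growth
def latency(c):
    return 100 + 40 * c["swap"] - 25 * c["big_cores"] + 30 * c["swap"] * c["gpu_growth"]

faulty = {"swap": 1, "big_cores": 0, "gpu_growth": 1}
print(rank_single_option_repairs(latency, faulty))
```

Because of the interaction term, flipping swap fixes most of the fault on its own, which is the kind of root-cause ranking the abstract describes.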
We compare CADET with state-of-the-art configuration optimization and ML-based debugging approaches. The experimental results indicate that CADET can find effective repairs for faults in multiple non-functional properties with (at most) 17% more accuracy, 28% higher gain, and a 40x speed-up over other ML-based performance debugging methods. Compared to multi-objective optimization approaches, CADET can find fixes (at most) 9x faster with comparable or better performance gain. Our case study of non-functional faults reported in NVIDIA's forum shows that CADET can find 14% better repairs than the experts' advice in less than 30 minutes. ... Recent studies have shown that performance problems caused by misconfiguration are still prevalent [4], [13], [17]. Performance issues can cause significant performance degradation, which leads to long response times and low program throughput [7], [17], [24]. ... Preprint Performance is an important non-functional aspect of software requirements. Modern software systems are highly configurable, and misconfigurations may easily cause performance issues. A software system that suffers performance issues may exhibit low program throughput and long response times. However, the sheer size of the configuration space makes it challenging for administrators to manually select and adjust the configuration options to achieve better performance. In this paper, we propose ConfRL, an approach to tune software performance automatically. The key idea of ConfRL is to use reinforcement learning to explore the configuration space by a trial-and-error approach and to use the feedback received from the environment to tune configuration option values to achieve better performance. To reduce the cost of reinforcement learning, ConfRL employs sampling, clustering, and dynamic state reduction techniques to keep states in a large configuration space manageable.
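The trial-and-error loop described for ConfRL can be sketched as an epsilon-greedy bandit over candidate configurations: try a configuration, observe throughput, and gradually exploit the best one. This is a much-simplified stand-in for ConfRL's reinforcement learning with state reduction, and the worker_threads knob and throughput curve are invented.

```python
import random

def tune(measure, configs, episodes=500, eps=0.2, seed=0):
    """Epsilon-greedy trial-and-error: keep a running mean reward
    (throughput) per configuration, mostly exploit the current best."""
    rng = random.Random(seed)
    mean = {c: 0.0 for c in configs}
    count = {c: 0 for c in configs}
    for _ in range(episodes):
        if rng.random() < eps:
            c = rng.choice(configs)                    # explore
        else:
            c = max(configs, key=lambda k: mean[k])    # exploit
        r = measure(c, rng)
        count[c] += 1
        mean[c] += (r - mean[c]) / count[c]            # incremental mean
    return max(configs, key=lambda k: mean[k])

# hypothetical server knob: noisy throughput that peaks at 8 worker threads
def throughput(workers, rng):
    return 1000 - 12 * (workers - 8) ** 2 + rng.gauss(0, 20)

best = tune(throughput, configs=[1, 2, 4, 8, 16, 32])
print(best)  # converges to the throughput-optimal setting
```

Real systems have many interacting knobs, so the flat arm-per-configuration view explodes combinatorially; that is exactly why the paper adds sampling, clustering, and dynamic state reduction.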
Our evaluation of four real-world highly configurable server programs shows that ConfRL can efficiently and effectively guide software systems to achieve higher long-term performance. ... In addition, these approaches rely on a heuristic to pinpoint the performance regression-causes, while Zam reasons its way through the timeline to find the cause. In this sense, Zam's methodology is consistent with the results of an empirical study by Nistor et al. [36], which found that performance issues are fixed mainly through code reasoning. ... Article Full-text available A performance regression in software is defined as an increase in an application step's response time as a result of code changes. Detecting such regressions can be done using profiling tools; however, investigating their root cause is a mostly manual and time-consuming task. This statement holds true especially when comparing execution timelines, which are dynamic function call trees augmented with response time data; these timelines are compared to find the performance regression-causes – the lowest-level function calls that regressed during execution. When done manually, these comparisons often require the investigator to analyze thousands of function call nodes. Further, performing these comparisons on web applications is challenging due to JavaScript's asynchronous and event-driven model, which introduces noise in the timelines. In response, we propose a design – Zam – that automatically compares execution timelines collected from web applications to identify performance regression-causes. Our approach uses a hybrid node matching algorithm that recursively attempts to find the longest common subsequence in each call tree level, then aggregates multiple comparisons' results to eliminate noise.
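The level-wise longest-common-subsequence matching at the heart of that hybrid node matching algorithm can be sketched as follows. This is a minimal sketch over sibling call names only, ignoring Zam's response-time data and noise aggregation; the call names are hypothetical.

```python
from functools import lru_cache

def match_level(old, new):
    """Match two lists of sibling call names from two timeline snapshots
    via their longest common subsequence; returns matched index pairs."""
    @lru_cache(maxsize=None)
    def lcs(i, j):
        if i == len(old) or j == len(new):
            return 0
        if old[i] == new[j]:
            return 1 + lcs(i + 1, j + 1)
        return max(lcs(i + 1, j), lcs(i, j + 1))

    pairs, i, j = [], 0, 0
    while i < len(old) and j < len(new):   # trace back one optimal alignment
        if old[i] == new[j]:
            pairs.append((i, j))
            i, j = i + 1, j + 1
        elif lcs(i + 1, j) >= lcs(i, j + 1):
            i += 1
        else:
            j += 1
    return pairs

# hypothetical sibling calls at one level of two execution timelines
old = ["parse", "render", "fetch", "paint"]
new = ["parse", "fetch", "layout", "paint"]
print(match_level(old, new))  # [(0, 0), (2, 1), (3, 3)]
```

Matched pairs can then be compared on response time and the matching recursed into each pair's children, which is the "each call tree level" part of the description; unmatched nodes ("render", "layout") are candidate additions or removals.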
Our evaluation of Zam on 10 web applications indicates that it can identify performance regression-causes with a path recall of 100% and a path precision of 96%, while performing comparisons in under a minute on average. We also demonstrate the real-world applicability of Zam, which the performance and reliability team at SAP has used to successfully complete performance investigations. ... They studied how users perceive the bugs, how bugs are reported, and what developers discuss about the bug causes and patches. Their study is similar to that of Nistor et al. [17], but they go further by analyzing additional information from the bug reports. Nguyen et al. [16] interviewed the performance engineers responsible for an industrial software system to understand these regression-causes. ... Article Context Software performance may suffer regressions caused by source code changes. Measuring performance at each new software version is useful for early detection of performance regressions. However, systematically running benchmarks is often impractical (e.g., long-running executions, prioritizing functional correctness over non-functional properties). Objective In this article, we propose Horizontal Profiling, a sampling technique to predict when a new revision may cause a regression by analyzing the source code and using run-time information of a previous version. The goal of Horizontal Profiling is to reduce the performance testing overhead by benchmarking only software versions that contain costly source code changes. Method We present an evaluation in which we apply Horizontal Profiling to identify performance regressions of 17 software projects written in the Pharo programming language, totaling 1,288 software versions. Results Horizontal Profiling detects more than 80% of the regressions by benchmarking less than 20% of the versions.
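The gating idea behind Horizontal Profiling, benchmarking a new version only when its changes touch code that was costly in a previous version's profile, can be sketched like this. The method names and timings are invented, and the hot-set heuristic is an assumption for illustration, not the paper's exact metric.

```python
def should_benchmark(changed_methods, profile, hot_fraction=0.9):
    """Benchmark a new version only if it changes a method that accounted
    for a meaningful share of run time in a previous version's profile."""
    total = sum(profile.values())
    # methods that together cover `hot_fraction` of the previous run time
    hot, acc = set(), 0.0
    for method, secs in sorted(profile.items(), key=lambda kv: -kv[1]):
        hot.add(method)
        acc += secs
        if acc >= hot_fraction * total:
            break
    return bool(hot & set(changed_methods))

# hypothetical per-method run time (seconds) from version N's profile
profile = {"Parser.parse": 6.0, "Cache.get": 3.0, "Log.debug": 0.5, "Util.pad": 0.5}
print(should_benchmark({"Log.debug"}, profile))   # False: only cheap code changed
print(should_benchmark({"Cache.get"}, profile))   # True: a costly method changed
```

The trade-off is visible even in the sketch: a regression introduced in previously cheap code slips through the gate, which is why the paper reports detection and benchmarking rates rather than claiming completeness.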
In addition, our experiments show that Horizontal Profiling has better precision and executes the benchmarks on fewer versions than the state-of-the-art tools, under our benchmarks. Conclusions We conclude that by adequately characterizing the run-time information of a previous version, it is possible to determine whether a new version is likely to introduce a performance regression. As a consequence, a significant fraction of the performance regressions are identified by benchmarking only a small fraction of the software versions. ... Nistor et al. [22] conducted a comprehensive study to compare performance and non-performance bugs regarding how they are discovered, reported, and fixed, answering questions left open by previous studies. More precisely, they manually inspected and compared 210 performance bugs and 210 non-performance bugs from three mature code bases: Eclipse Java Development Tools (JDT), Eclipse Standard Widget Toolkit (SWT), and the Mozilla project. ... Conference Paper Quality is a multi-faceted aspect of software. As described by international standards, the process of quality assurance is concerned not only with the functionality of the software, but also with performance, security, maintainability, reliability, and others. However, during the maintenance phase of software development, developers usually focus on one aspect at a time, for example improving design or fixing bugs, either due to time constraints or because of specific priorities. In this work, we present a study to show that quality issues do not occur in isolation during development. We study the source code of 10 Android applications and explore problems around reliability, maintainability, and security. In addition, we study the impact of maintenance activities around these problems on other quality aspects like performance and energy consumption.
Our first objective is to find whether quality problems of different types occur together and whether there is a correlation between specific types. Secondly, we want to see whether fixing problems always has a monotonically positive effect on overall quality, or whether special attention needs to be paid when fixing specific problems. Our long-term goal is to create tool support for the multi-dimensional analysis and assurance of software quality. ... The relationship between code smells and design problems has been widely discussed in the technical literature. Although bug is a buzzword used in software engineering research and practice with several meanings [26,27,39], we found that the references to bugs made by practitioners address problems in the execution of the source code and semantic issues. Therefore, we may interpret that practitioners believe that code smells hamper maintenance activities by contributing to the incidence of bugs [10]. ... Conference Paper Full-text available ... Motivation: Developers cannot treat all bugs with the same priority, since some bugs can have a high impact on a variety of activities in the bug management process, on products, and on end-users. Thus, in order to prioritize effectively, software engineering researchers have introduced different types of High-Impact Bugs (HIBs) based on their impact on software processes, products, or end-users, such as security bugs [13], performance bugs [14], breakage bugs [15], surprise bugs [15], dormant bugs [16], and blocker bugs [17]. A previous study revealed that different types of bugs (e.g., performance and security bugs) differ from each other [18]. ... Conference Paper Full-text available Bug reports are the primary means through which developers triage and fix bugs. To achieve this effectively, bug reports need to clearly describe those features that are important for the developers. However, previous studies have found that reporters do not always provide such features.
Therefore, we first perform an exploratory study to identify the key features that reporters frequently miss in their initial bug report submissions. Then, we plan to propose an automatic approach for supporting reporters in writing good bug reports. For our initial studies, we manually examine bug reports of five large-scale projects from two ecosystems: Apache (Camel, Derby, and Wicket) and Mozilla (Firefox and Thunderbird). As initial results, we identify five key features that reporters often miss in their initial bug reports and that developers require for fixing bugs. We build and evaluate classification models using four different text-classification techniques. The evaluation results show that our models can effectively predict the key features. Our ongoing research focuses on developing an automatic feature recommendation model to improve the contents of bug reports. Article Context: Software performance is crucial for ensuring the quality of software products. As a non-functional requirement, software performance has often been neglected until a later phase in the software development life cycle (SDLC), with few efforts devoted to it. The lack of clarity about what software performance research literature is available prevents researchers from understanding what software performance research fields exist. It also makes it difficult for practitioners to adopt state-of-the-art software performance techniques. Software performance research is not as organized as other established research topics such as software testing. Thus, it is essential to conduct a systematic mapping study as a first step to provide an overview of the latest research literature available in software performance. Objective: The objective of this systematic mapping study is to survey and map software performance research literature into suitable categories and to synthesize the literature data for future access and reference.
Method: This systematic mapping study conducts a manual examination by querying research literature in notable journals and proceedings in software engineering from the past decade. We examine each paper manually and identify primary studies for further analysis and synthesis according to the pre-defined inclusion criteria. Lastly, we map the primary studies based on their corresponding classification category. Results: This systematic mapping study provides a state-of-the-art literature mapping in software performance research. We have carefully examined 222 primary studies out of 2,000+ publications. We have identified six software performance research categories and 15 subcategories. We generate the primary study mapping and report five research findings. Conclusions: Unlike in established research fields, it is unclear what types of software performance research categories are available to the community. This work takes the systematic mapping study approach to survey and map the latest software performance research literature. The study results provide an overview of the paper distribution and a reference for researchers to navigate research literature on software performance. Article Bug reports are submitted by software stakeholders to foster the location and elimination of bugs. However, in large-scale software systems, it may be impossible to track and solve every bug, and thus developers should pay more attention to High-Impact Bugs (HIBs). Previous studies analyzed textual descriptions to automatically identify HIBs, but they ignored the quality of code, which may also indicate the cause of HIBs. To address this issue, we integrate features reflecting the quality of production code (i.e., CK metrics) and test code (i.e., test smells) into our textual-similarity-based model to identify HIBs. Our model outperforms the compared baseline by up to 39% in terms of AUC-ROC and 64% in terms of F-Measure.
Then, we explain the behavior of our model by using SHAP to calculate the importance of each feature, and we apply case studies to empirically demonstrate the relationship between the most important features and HIBs. The results show that several test smells (e.g., Assertion Roulette, Conditional Test Logic, Duplicate Assert, Sleepy Test) and product metrics (e.g., NOC, LCC, PF, and ProF) make important contributions to HIB identification. Thesis System availability and efficiency are critical aspects in the oil and gas sector, as any fault affecting those systems may cause operations to shut down, which negatively impacts operational resources as well as costs, human resources, and time. Therefore, it is important to investigate the reasons for such errors. This study examines software errors and maintenance. End-user errors are targeted after finding that the number of these errors is projected to increase. The factors that affect end-user behavior in oil and gas systems are also investigated, and the relation between system availability and end-user behavior is evaluated. An investigation has been performed following the descriptive methodology in order to gain insights into the human error factor encountered by various international oil and gas companies around the Middle East and North Africa. This was conducted by distributing a questionnaire to 120 employees of the companies in this study; 81 responded. The questionnaire contained questions related to software/hardware errors and errors due to the end user. In short, the study shows that there is a relation between end-user behavior and system availability and efficiency. Factors including training, experience, education, work shifts, system interface, and I/O devices were identified in the study as affecting end-user behavior. Moreover, the study contributes new knowledge by identifying a new factor that leads to system unavailability, namely memory sticks.
This thesis presents valuable knowledge that explains how errors occur and the reasons for their occurrence. Major limitations of this research include company policies, legal issues, and information resources. Chapter The technology-enabled service industry is emerging as one of the most dynamic sectors in the world's economy. Various service sector industries, such as financial services, banking solutions, telecommunication, and investment management, rely completely on large-scale software for their smooth operation. Any malware or bugs in this software are an issue of major concern and can have serious financial consequences. This chapter addresses the problem of bug handling in service sector software. Predictive analysis is a helpful technique for keeping software systems error-free. Existing research in bug handling focuses on various predictive analysis techniques, such as data mining, machine learning, information retrieval, and optimisation, for bug resolution. This chapter provides a detailed analysis of bug handling in large service sector software. The main emphasis of this chapter is to discuss research involved in applying predictive analysis for bug handling. The chapter also presents some possible future research directions in bug resolution using mathematical optimisation techniques. Article Mutation testing has been widely used to assess the fault-detection effectiveness of a test suite, as well as to guide test case generation or prioritization. Empirical studies have shown that, while mutants are generally representative of real faults, an effective application of mutation testing requires "traditional" operators designed for programming languages to be augmented with operators specific to an application domain and/or technology. The case of Android apps is no exception.
Therefore, in this paper we describe the process we followed to create (i) a taxonomy of mutation operators and (ii) two tools, MDroid+ and MutAPK, for mutant generation for Android apps. To this end, we systematically devise a taxonomy of 262 types of Android faults grouped in 14 categories by manually analyzing 2,023 software artifacts from different sources (e.g., bug reports, commits). Then, we identified a set of 38 mutation operators and implemented them in two tools, the first enabling mutant generation at the source code level and the second designed to perform mutations at the APK level. The rationale for this dual approach is that source code is not always available when conducting mutation testing. Thus, mutation testing for APKs enables new scenarios in which researchers/practitioners only have access to APK files. The taxonomy, proposed operators, and tools have been evaluated in terms of the number of non-compilable, trivial, equivalent, and duplicate mutants generated and their capacity to represent real faults in Android apps as compared to other well-known mutation tools. Preprint Full-text available Bug reports are the primary means through which developers triage and fix bugs. To achieve this effectively, bug reports need to clearly describe those features that are important for the developers. However, previous studies have found that reporters do not always provide such features. Therefore, we first perform an exploratory study to identify the key features that reporters frequently miss in their initial bug report submissions. Then, we propose an approach that predicts whether reporters should provide certain key features to ensure a good bug report. A case study of the bug reports for the Camel, Derby, Wicket, Firefox, and Thunderbird projects shows that Steps to Reproduce, Test Case, Code Example, Stack Trace, and Expected Behavior are the additional features that reporters most often omit from their initial bug report submissions. We also find that these features significantly affect the bug-fixing process. Based on our findings, we build and evaluate classification models using four different text-classification techniques to predict key features by leveraging historical bug-fixing knowledge. The evaluation results show that our models can effectively predict the key features. Our comparative study of different text-classification techniques shows that NBM outperforms the other techniques. Our findings can help reporters improve the contents of bug reports. Article Performance is one of the key non-functional qualities, as performance bugs can cause significant performance degradation and lead to poor user experiences.
While bug reports are intended to help developers understand and fix bugs, they are also extensively used by researchers for finding benchmarks to evaluate their testing and debugging approaches. Although researchers spend a considerable amount of time and effort finding usable performance bugs in bug repositories, they often get only a few. Reproducing performance bugs is difficult, even for performance bugs that are confirmed by developers with domain knowledge. The amount of information disclosed in a bug report may not always be sufficient for researchers to reproduce the performance bug, which hinders the usability of the bug repository as a resource for finding benchmarks. In this paper, we study the characteristics of confirmed performance bugs by reproducing them using only information available from the bug report, to examine the challenges of bug reproduction from the perspective of researchers. We spent more than 800 hours over the course of six months to study and attempt to reproduce 93 confirmed performance bugs, randomly sampled from two large-scale open-source server applications. We (1) studied the characteristics of the reproduced performance bug reports; (2) summarized the causes of failed-to-reproduce performance bug reports from the perspective of researchers by reproducing bugs that have been solved in bug reports; (3) shared our experience on suggesting workarounds to improve the bug reproduction success rate; and (4) delivered a virtual machine image that contains a set of 17 ready-to-execute performance bug benchmarks. The findings of our study provide guidance and a set of suggestions to help researchers understand, evaluate, and successfully replicate performance bugs. Conference Paper Full-text available Changes, a rather inevitable part of software development, can cause maintenance implications if they introduce bugs into the system.
By isolating and characterizing these bug-introducing changes, it is possible to uncover potentially risky source code entities or issues that produce bugs. In this paper, we mine the bug-introducing changes in the Android platform by mapping bug reports to the changes that introduced the bugs. We then use the change information to look for both potentially problematic parts and dynamics in development that can cause maintenance implications. We believe that the results of our study can help better manage Android software development. Article Full-text available A recent study finds that errors of omission are harder for programmers to detect than errors of commission. While several change recommendation systems already exist to prevent or reduce omission errors during software development, there have been very few studies on why errors of omission occur in practice and how such errors could be prevented. In order to understand the characteristics of omission errors, this paper investigates a group of bugs that were fixed more than once in open source projects — those bugs whose initial patches were later considered incomplete and to which programmers applied supplementary patches. Our study on Eclipse JDT core, Eclipse SWT, and Mozilla shows that a significant portion of resolved bugs (22% to 33%) involves more than one fix attempt. Our manual inspection shows that the causes of omission errors are diverse, including missed porting changes, incorrect handling of conditional statements, and incomplete refactorings. While many consider that missed updates to code clones often lead to omission errors, only a very small portion of supplementary patches (12% in JDT, 25% in SWT, and 9% in Mozilla) have content similar to their initial patches. This implies that supplementary change locations cannot be predicted by code clone analysis alone.
Furthermore, 14% to 15% of files in supplementary patches are beyond the scope of immediate neighbors of their initial patch locations — they did not overlap with the initial patch locations nor had direct structural dependencies on them (e.g. calls, accesses, subtyping relations, etc.). These results call for new types of omission error prevention approaches that complement existing change recommendation systems. Article Full-text available Software performance is one of the important qualities that makes software stand out in a competitive market. However, in earlier work we found that performance bugs take more time to fix, need to be fixed by more experienced developers and require changes to more code than non-performance bugs. In order to be able to improve the resolution of performance bugs, a better understanding is needed of the current practice and shortcomings of reporting, reproducing, tracking and fixing performance bugs. This paper qualitatively studies a random sample of 400 performance and non-performance bug reports of Mozilla Firefox and Google Chrome across four dimensions (Impact, Context, Fix and Fix validation). We found that developers and users face problems in reproducing performance bugs and have to spend more time discussing performance bugs than other kinds of bugs. Sometimes performance regressions are tolerated as a tradeoff to improve something else. Article Full-text available In this paper we present a profiling methodology and toolkit for helping developers discover hidden asymptotic inefficiencies in the code. From one or more runs of a program, our profiler automatically measures how the performance of individual routines scales as a function of the input size, yielding clues to their growth rate. The output of the profiler is, for each executed routine of the program, a set of tuples that aggregate performance costs by input size. 
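The aggregation-and-fitting idea described above can be sketched minimally: record the worst cost observed per input size for each routine, then estimate a growth exponent as the least-squares slope in log-log space. This is a stand-in for aprof's actual input-size metric and cost model; the routine name and costs below are synthetic.

```python
import math
from collections import defaultdict

def record(profile, routine, input_size, cost):
    # aggregate: keep the worst (max) cost observed per input size
    profile[routine][input_size] = max(profile[routine].get(input_size, 0), cost)

def growth_exponent(samples):
    """Least-squares slope in log-log space: cost ~ size**slope."""
    xs = [math.log(n) for n in samples]
    ys = [math.log(samples[n]) for n in samples]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

profile = defaultdict(dict)
for n in (10, 100, 1000, 10000):
    record(profile, "sort", n, 2.0 * n * n)   # hypothetical quadratic routine
print(round(growth_exponent(profile["sort"]), 2))  # 2.0
```

An exponent near 2 on such aggregated tuples is the kind of clue to asymptotic behavior the abstract describes; in practice a slope from noisy measurements only suggests, rather than proves, a growth rate.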
The collected profiles can be used to produce performance plots and derive trend functions by statistical curve fitting or bounding techniques. A key feature of our method is the ability to automatically measure the size of the input given to a generic code fragment: to this aim, we propose an effective metric for estimating the input size of a routine and show how to compute it efficiently. We discuss several case studies, showing that our approach can reveal asymptotic bottlenecks that other profilers may fail to detect and characterize the workload and behavior of individual routines in the context of real applications. To prove the feasibility of our techniques, we implemented a Valgrind tool called aprof and performed an extensive experimental evaluation on the SPEC CPU2006 benchmarks. Our experiments show that aprof delivers comparable performance to other prominent Valgrind tools, and can generate informative plots even from single runs on typical workloads for most algorithmically critical routines. Article Full-text available Given limited resources and time before software release, development-site testing and debugging become more and more insufficient to ensure satisfactory software performance. As a counterpart for debugging in the large pioneered by the Microsoft Windows Error Reporting (WER) system focusing on crashing/hanging bugs, performance debugging in the large has emerged thanks to available infrastructure support to collect execution traces with performance issues from a huge number of users at the deployment sites. However, performance debugging against these numerous and complex traces remains a significant challenge for performance analysts. In this paper, to enable performance debugging in the large in practice, we propose a novel approach, called StackMine, that mines callstack traces to help performance analysts effectively discover highly impactful performance bugs (e.g., bugs impacting many users with long response delay).
As a successful technology-transfer effort, since December 2010, StackMine has been applied in performance-debugging activities at a Microsoft team for performance analysis, especially for large numbers of execution traces. Based on real-adoption experiences of StackMine in practice, we conducted an evaluation of StackMine on performance debugging in the large for Microsoft Windows 7. We also conducted another evaluation on a third-party application. The results highlight the substantial benefits offered by StackMine in performance debugging in the large for large-scale software systems.

Conference Paper Full-text available

Customizable programs and program families provide user-selectable features to allow users to tailor a program to an application scenario. Knowing in advance which feature selection yields the best performance is difficult because a direct measurement of all possible feature combinations is infeasible. Our work aims at predicting program performance based on selected features. However, when features interact, accurate predictions are challenging. An interaction occurs when a particular feature combination has an unexpected influence on performance. We present a method that automatically detects performance-relevant feature interactions to improve prediction accuracy. To this end, we propose three heuristics to reduce the number of measurements required to detect interactions. Our evaluation consists of six real-world case studies from varying domains (e.g., databases, encoding libraries, and web servers) using different configuration techniques (e.g., configuration files and preprocessor flags). Results show an average prediction accuracy of 95%.

Article Full-text available

A goal of performance testing is to find situations where applications unexpectedly exhibit worsened characteristics for certain combinations of input values.
A fundamental question of performance testing is how to select a manageable subset of the input data so as to find performance problems in applications automatically and quickly. We offer a novel solution for finding performance problems in applications automatically using black-box software testing. Our solution is an adaptive, feedback-directed learning testing system that learns rules from execution traces of applications and then uses these rules to select test input data automatically for these applications, finding more performance problems when compared with exploratory random testing. We have implemented our solution and applied it to a medium-size application at a major insurance company and to an open-source application. Performance problems were found automatically and confirmed by experienced testers and developers.

Conference Paper Full-text available

A good understanding of the impact of different types of bugs on various project aspects is essential to improving software quality research and practice. For instance, we would expect that security bugs are fixed faster than other types of bugs due to their critical nature. However, prior research has often treated all bugs as similar when studying various aspects of software quality (e.g., predicting the time to fix a bug), or has focused on one particular type of bug (e.g., security bugs) with little comparison to other types. In this paper, we study how different types of bugs (performance and security bugs) differ from each other and from the rest of the bugs in a software project. Through a case study on the Firefox project, we find that security bugs are fixed and triaged much faster, but are reopened and tossed more frequently. Furthermore, we also find that security bugs involve more developers and impact more files in a project. Our work is the first to empirically study performance bugs and compare them to the frequently studied security bugs.
Our findings highlight the importance of considering the different types of bugs in software quality research and practice.

Conference Paper Full-text available

The relationship between various software-related phenomena (e.g., code complexity) and post-release software defects has been thoroughly examined. However, to date these predictions have seen limited adoption in practice. The most commonly cited reason is that the prediction identifies too much code to review without distinguishing the impact of these defects. Our aim is to address this drawback by focusing on high-impact defects for customers and practitioners. Customers are highly impacted by defects that break pre-existing functionality (breakage defects), whereas practitioners are caught off guard by defects in files that had relatively few pre-release changes (surprise defects). The large commercial software system that we study already had an established concept of breakages as the highest-impact defects; however, the concept of surprises is novel and not as well established. We find that surprise defects are related to incomplete requirements and that the common assumption that a fix is caused by a previous change does not hold in this project. We then fit prediction models that are effective at identifying files containing breakages and surprises. The number of pre-release defects and file size are good indicators of breakages, whereas the number of co-changed files and the amount of time between the latest pre-release change and the release date are good indicators of surprises. Although our prediction models are effective at identifying files that have breakages and surprises, we learn that the prediction should also identify the nature or type of defects, with each type being specific enough to be easily identified and repaired.
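The indicator metrics named in the breakage/surprise study above can be turned into a simple per-file risk ranking. The sketch below is a minimal illustration under assumed metric names and a naive lexicographic ordering; it is not the authors' actual statistical prediction models:

```python
# Hedged sketch: ranking files by the indicators the study names
# (pre-release defect count and file size for breakages; number of
# co-changed files and time since the last pre-release change for
# surprises). Metric names and the ordering scheme are illustrative
# assumptions, not the paper's fitted models.

def rank_by(files, keys):
    """Return file names sorted so the riskiest file comes first."""
    return [f["name"] for f in sorted(files, key=lambda f: tuple(-f[k] for k in keys))]

files = [
    {"name": "parser.c", "pre_release_defects": 9, "size": 4200,
     "co_changed": 3, "days_quiet": 12},
    {"name": "ui.c", "pre_release_defects": 1, "size": 800,
     "co_changed": 14, "days_quiet": 190},
]

breakage_candidates = rank_by(files, ["pre_release_defects", "size"])
surprise_candidates = rank_by(files, ["co_changed", "days_quiet"])
print(breakage_candidates[0])  # parser.c: many pre-release defects, large file
print(surprise_candidates[0])  # ui.c: many co-changes, long quiet period
```

A real model would weight and combine these indicators statistically; the point here is only that breakage and surprise candidates are ranked by different signals, so the same file set yields different review orderings.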
Conference Paper Full-text available

Software engineering researchers have long been interested in where and why bugs occur in code, and in predicting where they might turn up next. Historical bug-occurrence data has been key to this research. Bug tracking systems and code version histories record when, how, and by whom bugs were fixed; from these sources, datasets that relate file changes to bug fixes can be extracted. These historical datasets can be used to test hypotheses concerning processes of bug introduction, and also to build statistical bug prediction models. Unfortunately, processes and humans are imperfect, and only a fraction of bug fixes are actually labelled in source code version histories, and thus become available for study in the extracted datasets. The question naturally arises: are the bug fixes recorded in these historical datasets a fair representation of the full population of bug fixes? In this paper, we investigate historical data from several software projects, and find strong evidence of systematic bias. We then investigate the potential effects of "unfair, imbalanced" datasets on the performance of prediction techniques. We draw the lesson that bias is a critical problem that threatens both the effectiveness of processes that rely on biased datasets to build prediction models and the generalizability of hypotheses tested on biased data.

Conference Paper Full-text available

Robust distributed systems commonly employ high-level recovery mechanisms enabling the system to recover from a wide variety of problematic environmental conditions such as node failures, packet drops, and link disconnections. Unfortunately, these recovery mechanisms also effectively mask additional serious design and implementation errors, disguising them as latent performance bugs that severely degrade end-to-end system performance.
These bugs typically go unnoticed due to the challenge of distinguishing between a bug and an intermittent environmental condition that must be tolerated by the system. We present techniques that can automatically pinpoint latent performance bugs in systems implementations, in the spirit of recent advances in model checking by systematic state space exploration. The techniques proceed by automating the process of conducting random simulations, identifying performance anomalies, and analyzing anomalous executions to pinpoint the circumstances leading to performance degradation. By focusing our implementation on the MACE toolkit, MACEPC can be used to test our implementations directly, without modification. We have applied MACEPC to five thoroughly tested and trusted distributed systems implementations. MACEPC was able to find significant, previously unknown, long-standing performance bugs in each of the systems, and led to fixes that significantly improved the end-to-end performance of the systems.

Conference Paper Full-text available

Reproducing bug symptoms is a prerequisite for performing automatic bug diagnosis. Do bugs have characteristics that ease or hinder automatic bug diagnosis? In this paper, we conduct a thorough empirical study of several key characteristics of bugs that affect reproducibility at the production site. We examine randomly selected bug reports of six server applications and consider their implications for automatic bug diagnosis tools. Our results are promising. From the study, we find that nearly 82% of bug symptoms can be reproduced deterministically by re-running with the same set of inputs at the production site. We further find that very few input requests are needed to reproduce most failures; in fact, just one input request after session establishment suffices to reproduce the failure in nearly 77% of the cases.
We describe the implications of the results for reproducing software failures and designing automated diagnosis tools for production runs.

Conference Paper Full-text available

Program analysis and automated test generation have primarily been used to find correctness bugs. We present complexity testing, a novel automated test generation technique to find performance bugs. Our complexity testing algorithm, which we call WISE (Worst-case Inputs from Symbolic Execution), operates on a program accepting inputs of arbitrary size. For each input size, WISE attempts to construct an input which exhibits the worst-case computational complexity of the program. WISE uses exhaustive test generation for small input sizes and generalizes the result of executing the program on those inputs into an "input generator." The generator is subsequently used to efficiently generate worst-case inputs for larger input sizes. We have performed experiments to demonstrate the utility of our approach on a set of standard data structures and algorithms. Our results show that WISE can effectively generate worst-case inputs for several of these benchmarks.

Conference Paper Full-text available

With the ubiquity of multi-core processors, software must make effective use of multiple cores to obtain good performance on modern hardware. One of the biggest roadblocks to this is load imbalance, or the uneven distribution of work across cores. We propose LIME, a framework for analyzing parallel programs and reporting the cause of load imbalance in application source code. This framework uses statistical techniques to pinpoint load imbalance problems stemming from both control flow issues (e.g., unequal iteration counts) and interactions between the application and hardware (e.g., unequal cache miss counts).
We evaluate LIME on applications from widely used parallel benchmark suites, and show that LIME accurately reports the causes of load imbalance, their nature and origin in the code, and their relative importance.

Conference Paper Full-text available

Most Java programmers would agree that Java is a language that promotes a philosophy of “create and go forth”. By design, temporary objects are meant to be created on the heap, possibly used, and then abandoned to be collected by the garbage collector. Excessive generation of temporary objects is termed “object churn” and is a form of software bloat that often leads to performance and memory problems. To mitigate this problem, many compiler optimizations aim at identifying objects that may be allocated on the stack. However, most such optimizations miss large opportunities for memory reuse when dealing with objects inside loops or when dealing with container objects. In this paper, we describe a novel algorithm that detects bloat caused by the creation of temporary container and String objects within a loop. Our analysis determines which objects created within a loop can be reused. Then we describe a source-to-source transformation that efficiently reuses such objects. Empirical evaluation indicates that our solution can reduce up to 40% of temporary object allocations in large programs, resulting in a performance improvement that can be as high as a 20% reduction in run time, specifically when a program has a high churn rate or when the program is memory intensive and needs to run the GC often.

Conference Paper Full-text available

Performance analysts profile their programs to find methods that are worth optimizing: the "hot" methods. This paper shows that four commonly used Java profilers (xprof, hprof, jprofile, and yourkit) often disagree on the identity of the hot methods. If two profilers disagree, at least one must be incorrect.
Thus, there is a good chance that a profiler will mislead a performance analyst into wasting time optimizing a cold method with little or no performance improvement. This paper uses causality analysis to evaluate profilers and to gain insight into the source of their incorrectness. It shows that these profilers all violate a fundamental requirement for sampling-based profilers: to be correct, a sampling-based profiler must collect samples randomly. We show that a proof-of-concept profiler, which collects samples randomly, does not suffer from the above problems. Specifically, we show, using a number of case studies, that our profiler correctly identifies methods that are important to optimize; in some cases other profilers report that these methods are cold and thus not worth optimizing.

Conference Paper Full-text available

Calling context trees (CCTs) associate performance metrics with paths through a program's call graph, providing valuable information for program understanding and performance analysis. Although CCTs are typically much smaller than call trees, in real applications they might easily consist of tens of millions of distinct calling contexts: this sheer size makes them difficult to analyze and might hurt execution times due to poor access locality. For performance analysis, accurately collecting information about hot calling contexts may be more useful than constructing an entire CCT that includes millions of uninteresting paths. As we show for a variety of prominent Linux applications, the distribution of calling context frequencies is typically very skewed. In this paper we show how to exploit this property to reduce the CCT size considerably. We introduce a novel run-time data structure, called the Hot Calling Context Tree (HCCT), that offers an additional intermediate point in the spectrum of data structures for representing interprocedural control flow. The HCCT is a subtree of the CCT that includes only hot nodes and their ancestors.
We show how to compute the HCCT without storing the exact frequency of all calling contexts, by using fast and space-efficient algorithms for mining frequent items in data streams. With this approach, we can distinguish between hot and cold contexts on the fly, while obtaining very accurate frequency counts. We show both theoretically and experimentally that the HCCT achieves a similar precision as the CCT in a much smaller space, roughly proportional to the number of distinct hot contexts: this is typically several orders of magnitude smaller than the total number of calling contexts encountered during a program's execution. Our space-efficient approach can be effectively combined with previous context-sensitive profiling techniques, such as sampling and bursting.

Conference Paper

Concurrency bugs are widespread in multithreaded programs. Fixing them is time-consuming and error-prone. We present CFix, a system that automates the repair of concurrency bugs. CFix works with a wide variety of concurrency-bug detectors. For each failure-inducing interleaving reported by a bug detector, CFix first determines a combination of mutual-exclusion and order relationships that, once enforced, can prevent the buggy interleaving. CFix then uses static analysis and testing to determine where to insert what synchronization operations to force the desired mutual-exclusion and order relationships, with a best effort to avoid deadlocks and excessive performance losses. CFix also simplifies its own patches by merging fixes for related bugs. Evaluation using four different types of bug detectors and thirteen real-world concurrency-bug cases shows that CFix can successfully patch these cases without causing deadlocks or excessive performance degradation. Patches automatically generated by CFix are of similar quality to those manually written by developers.
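The "order relationship" that CFix enforces can be illustrated with a minimal sketch. The Python code and names below are illustrative assumptions only (CFix itself analyzes and patches C/C++ programs and chooses synchronization placement automatically); the sketch just shows how signalling turns a failure-inducing interleaving into an impossible one:

```python
# Hedged sketch of an enforced order relationship, one of the two
# relationship types the CFix abstract describes. Without the Event,
# the consumer could read log[0] before the producer wrote it.
import threading

log = []
done = threading.Event()

def producer():
    log.append("init")   # must happen before the consumer reads
    done.set()           # signal: initialization is finished

def consumer():
    done.wait()          # enforced order: block until the producer signals
    log.append("use:" + log[0])

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t1.start(); t2.start()
t1.join(); t2.join()
print(log)  # ['init', 'use:init'] — the use-before-init interleaving is prevented
```

The other relationship type, mutual exclusion, would instead wrap conflicting accesses in a common lock so they cannot interleave at all.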
Conference Paper

Many bugs, even those that are known and documented in bug reports, remain in mature software for a long time due to the lack of development resources to fix them. We propose a general approach, R2Fix, to automatically generate bug-fixing patches from free-form bug reports. R2Fix combines past fix patterns, machine learning techniques, and semantic patch generation techniques to fix bugs automatically. We evaluate R2Fix on three projects, i.e., the Linux kernel, Mozilla, and Apache, for three important types of bugs: buffer overflows, null pointer bugs, and memory leaks. R2Fix generates 57 patches correctly, 5 of which are new patches for bugs that have not been fixed by developers yet. We reported all 5 new patches to the developers; 4 have already been accepted and committed to the code repositories. The 57 correct patches generated by R2Fix could have shortened and saved up to an average of 63 days of bug diagnosis and patch generation time.

Article

Traditional profilers identify where a program spends most of its resources. They do not provide information about why the program spends those resources or about how resource consumption would change for different program inputs. In this paper we introduce the idea of algorithmic profiling. While a traditional profiler determines a set of measured cost values, an algorithmic profiler determines a cost function. It does that by automatically determining the "inputs" of a program, by measuring the program's "cost" for any given input, and by inferring an empirical cost function.

Article

There are more bugs in real-world programs than human programmers can realistically address. This paper evaluates two research questions: “What fraction of bugs can be repaired automatically?” and “How much does it cost to repair a bug automatically?” In previous work, we presented GenProg, which uses genetic programming to repair defects in off-the-shelf C programs.
To answer these questions, we: (1) propose novel algorithmic improvements to GenProg that allow it to scale to large programs and find repairs 68% more often, (2) exploit GenProg's inherent parallelism using cloud computing resources to provide grounded, human-competitive cost measurements, and (3) generate a large, indicative benchmark set to use for systematic evaluations. We evaluate GenProg on 105 defects from 8 open-source programs totaling 5.1 million lines of code and involving 10,193 test cases. GenProg automatically repairs 55 of those 105 defects. To our knowledge, this evaluation is the largest available of its kind, and is often two orders of magnitude larger than previous work in terms of code or test suite size or defect count. Public cloud computing prices allow our 105 runs to be reproduced for $403; a successful repair completes in 96 minutes and costs $7.32, on average.

Article

Many applications suffer from run-time bloat: excessive memory usage and work to accomplish simple tasks. Bloat significantly affects scalability and performance, and exposing it requires good diagnostic tools. We present a novel analysis that profiles the run-time execution to help programmers uncover potential performance problems. The key idea of the proposed approach is to track object references, starting from object creation statements, through assignment statements, and eventually statements that perform useful operations. This propagation is abstracted by a representation we refer to as a reference propagation graph. This graph provides path information specific to reference producers and their run-time contexts. Several client analyses demonstrate the use of reference propagation profiling to uncover runtime inefficiencies. We also present a study of the properties of reference propagation graphs produced by profiling 36 Java programs.
Several case studies discuss the inefficiencies identified in some of the analyzed programs, as well as the significant improvements obtained after code optimizations.

Article

Developers frequently use inefficient code sequences that could be fixed by simple patches. These inefficient code sequences can cause significant performance degradation and resource waste, referred to as performance bugs. Meager increases in single-threaded performance in the multi-core era and an increasing emphasis on energy efficiency call for more effort in tackling performance bugs. This paper conducts a comprehensive study of 110 real-world performance bugs that are randomly sampled from five representative software suites (Apache, Chrome, GCC, Mozilla, and MySQL). The findings of this study provide guidance for future work to avoid, expose, detect, and fix performance bugs. Guided by our characteristics study, efficiency rules are extracted from 25 patches and are used to detect performance bugs. 332 previously unknown performance problems are found in the latest versions of MySQL, Apache, and Mozilla applications, including 219 performance problems found by applying rules across applications.

Conference Paper

Software bugs affect system reliability. When a bug is exposed in the field, developers need to fix it. Unfortunately, the bug-fixing process can also introduce errors, which leads to buggy patches that further aggravate the damage to end users and erode software vendors' reputations. This paper presents a comprehensive characteristic study of incorrect bug fixes from large operating system code bases including Linux, OpenSolaris, FreeBSD, and also a mature commercial OS developed and evolved over the last 12 years, investigating not only the mistake patterns during bug fixing but also the possible human reasons in the development process when these incorrect bug fixes were introduced.
Our major findings include: (1) at least 14.8%–24.4% of sampled fixes for post-release bugs in these large OSes are incorrect and have made impacts on end users. (2) Among several common bug types, concurrency bugs are the most difficult to fix correctly: 39% of concurrency bug fixes are incorrect. (3) Developers and reviewers of incorrect fixes usually do not have enough knowledge about the involved code. For example, 27% of the incorrect fixes are made by developers who have never touched the source code files associated with the fix. Our results provide useful guidelines for designing new tools and also for improving the development process. Based on our findings, the commercial software vendor whose OS code we evaluated is building a tool to improve the bug fixing and code reviewing process.

Conference Paper

Framework-intensive applications (e.g., Web applications) heavily use temporary data structures, often resulting in performance bottlenecks. This paper presents an optimized blended escape analysis to approximate object lifetimes and thus to identify these temporaries and their uses. Empirical results show that this optimized analysis on average prunes 37% of the basic blocks in our benchmarks, and achieves a speedup of up to 29 times compared to the original analysis. Newly defined metrics quantify key properties of temporary data structures and their uses. A detailed empirical evaluation offers the first characterization of temporaries in framework-intensive applications. The results show that temporary data structures can include up to 12 distinct object types and can traverse as many as 14 method invocations before being captured.

Conference Paper

Every bug has a story behind it. The people who discover and resolve it need to coordinate, to get information from documents, tools, or other people, and to navigate through issues of accountability, ownership, and organizational structure.
This paper reports on a field study of coordination activities around bug fixing that used a combination of case study research and a survey of software professionals. Results show that the histories of even simple bugs are strongly dependent on social, organizational, and technical knowledge that cannot be solely extracted through automation of electronic repositories, and that such automation provides incomplete and often erroneous accounts of coordination. The paper uses rich bug histories and survey results to identify common bug-fixing coordination patterns and to provide implications for tool designers and researchers of coordination in software development.

Conference Paper

Concurrent programming is increasingly important for achieving performance gains in the multi-core era, but it is also a difficult and error-prone task. Concurrency bugs are particularly difficult to avoid and diagnose, and therefore in order to improve methods for handling such bugs, we need a better understanding of their characteristics. In this paper we present a study of concurrency bugs in MySQL, a widely used database server. While previous studies of real-world concurrency bugs exist, they have centered their attention on the causes of these bugs. In this paper we provide a complementary focus on their effects, which is important for understanding how to detect or tolerate such bugs at run-time. Our study uncovered several interesting facts, such as the existence of a significant number of latent concurrency bugs, which silently corrupt data structures and are exposed to the user potentially much later. We also highlight several implications of our findings for the design of reliable concurrent systems.

Conference Paper

We present a study of operating system errors found by automatic, static, compiler analysis applied to the Linux and OpenBSD kernels.
Our approach differs from previous studies that consider errors found by manual inspection of logs, testing, and surveys, because static analysis is applied uniformly to the entire kernel source, though our approach necessarily considers a less comprehensive variety of errors than previous studies. In addition, automation allows us to track errors over multiple versions of the kernel source to estimate how long errors remain in the system before they are fixed. We found that device drivers have error rates up to three to seven times higher than the rest of the kernel. We found that the largest quartile of functions have error rates two to six times higher than the smallest quartile. We found that the newest quartile of files have error rates up to twice that of the oldest quartile, which provides evidence that code "hardens" over time. Finally, we found that bugs remain in the Linux kernel an average of 1.8 years before being fixed.

Conference Paper

Load tests aim to validate whether system performance is acceptable under peak conditions. Existing test generation techniques induce load by increasing the size or rate of the input. Ignoring the particular input values, however, may lead to test suites that grossly mischaracterize a system's performance. To address this limitation we introduce a mixed symbolic execution based approach that is unique in how it (1) favors program paths associated with a performance measure of interest, (2) operates in an iterative-deepening beam-search fashion to discard paths that are unlikely to lead to high-load tests, and (3) generates a test suite of a given size and level of diversity. An assessment of the approach shows it generates test suites that induce program response times and memory consumption several times worse than the compared alternatives, it scales to large and complex inputs, and it exposes a diversity of resource-consuming program behavior.

Conference Paper

Many large-scale Java applications suffer from runtime bloat.
They execute large volumes of methods and create many temporary objects, all to execute relatively simple operations. There are large opportunities for performance optimizations in these applications, but most are being missed by existing optimization and tooling technology. While JIT optimizations struggle for a few percent, performance experts analyze deployed applications and regularly find gains of 2× or more. Finding such big gains is difficult, for both humans and compilers, because of the diffuse nature of runtime bloat. Time is spread thinly across calling contexts, making it difficult to judge how to improve performance. Bloat results from a pile-up of seemingly harmless decisions. Each adds temporary objects and method calls, and often copies values between those temporary objects. While data copies are not the entirety of bloat, we have observed that they are excellent indicators of regions of excessive activity. By optimizing copies, one is likely to remove the objects that carry copied values, and the method calls that allocate and populate them. We introduce copy profiling, a technique that summarizes runtime activity in terms of chains of data copies. A flat copy profile counts copies by method. We show how flat profiles alone can be helpful. In many cases, diagnosing a problem requires data flow context. Tracking and making sense of raw copy chains does not scale, so we introduce a summarizing abstraction called the copy graph. We implement three client analyses that, using the copy graph, expose common patterns of bloat, such as finding hot copy chains and discovering temporary data structures. We demonstrate, with examples from a large-scale commercial application and several benchmarks, that copy profiling can be used by a programmer to quickly find opportunities for large performance gains.

Conference Paper

Fixing software bugs has always been an important and time-consuming process in software development.
Fixing concurrency bugs has become especially critical in the multicore era. However, fixing concurrency bugs is challenging, in part due to non-deterministic failures and tricky parallel reasoning. Beyond correctly fixing the original problem in the software, a good patch should also avoid introducing new bugs, degrading performance unnecessarily, or damaging software readability. Existing tools cannot automate the whole fixing process and provide good-quality patches. We present AFix, a tool that automates the whole process of fixing one common type of concurrency bug: single-variable atomicity violations. AFix starts from the bug reports of existing bug-detection tools. It augments these with static analysis to construct a suitable patch for each bug report. It further tries to combine the patches of multiple bugs for better performance and code readability. Finally, AFix's run-time component provides testing customized for each patch. Our evaluation shows that patches automatically generated by AFix correctly eliminate six out of eight real-world bugs and significantly decrease the failure probability in the other two cases. AFix patches never introduce new bugs and usually have performance similar to manually designed patches.

Article

This study focuses largely on two issues: (a) improved syntax for iterations and error exits, making it possible to write a larger class of programs clearly and efficiently without "go to" statements; (b) a methodology of program design, beginning with readable and correct, but possibly inefficient, programs that are systematically transformed, if necessary, into efficient and correct, but possibly less readable, code. The discussion brings out opposing points of view about whether or not "go to" statements should be abolished; some merit is found on both sides of this question. Finally, an attempt is made to define the true nature of structured programming, and to recommend fruitful directions for further study.
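AFix's target bug class, the single-variable atomicity violation, and the mutual-exclusion style of fix it constructs can be sketched minimally. Python and the names below are illustrative assumptions (AFix itself patches C/C++ programs and places the synchronization automatically); the sketch only shows the bug pattern and the lock that repairs it:

```python
# Hedged sketch: a non-atomic read-modify-write on one shared variable
# (the bug class), and the lock-based critical section that fixes it.
import threading

counter = 0
lock = threading.Lock()

def buggy_increment(n):
    """Atomicity violation: another thread can run between read and write."""
    global counter
    for _ in range(n):
        tmp = counter      # read
        counter = tmp + 1  # write — updates can be lost under interleaving

def fixed_increment(n):
    """The fix: the read-modify-write becomes one critical section."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=fixed_increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 with the lock; the buggy version can lose updates
```

The interesting engineering in AFix, per the abstract, is not the lock itself but deciding where the critical section begins and ends from a detector's bug report, and doing so without introducing deadlocks.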
Article

Many popular software systems automatically report failures back to the vendors, allowing developers to focus on the most pressing problems. However, it takes a certain period of time to assess which failures occur most frequently. In an empirical investigation of the Firefox and Thunderbird crash report databases, we found that only 10 to 20 crashes account for the large majority of crash reports; predicting these "top crashes" thus could dramatically increase software quality. By training a machine learner on the features of top crashes of past releases, we can effectively predict the top crashes well before a new release. This allows for quick resolution of the most important crashes, leading to improved user experience and better allocation of maintenance efforts.

Conference Paper

We test the hypothesis that generic recovery techniques, such as process pairs, can survive most application faults without using application-specific information. We examine in detail the faults that occur in three large, open-source applications: the Apache Web server, the GNOME desktop environment, and the MySQL database. Using information contained in the bug reports and source code, we classify faults based on how they depend on the operating environment. We find that 72–87% of the faults are independent of the operating environment and are hence deterministic (non-transient). Recovering from the failures caused by these faults requires the use of application-specific knowledge. Half of the remaining faults depend on a condition in the operating environment that is likely to persist on retry, and the failures caused by these faults are also likely to require application-specific recovery. Unfortunately, only 5–14% of the faults were triggered by transient conditions, such as timing and synchronization, that naturally fix themselves during recovery.
Our results indicate that classical application-generic recovery techniques, such as process pairs, will not be sufficient to enable applications to survive most failures caused by application faults.

Article

We present a study of operating system errors found by automatic, static, compiler analysis applied to the Linux and OpenBSD kernels. Our approach differs from previous studies that consider errors found by manual inspection of logs, testing, and surveys, because static analysis is applied uniformly to the entire kernel source, though our approach necessarily considers a less comprehensive variety of errors than previous studies. In addition, automation allows us to track errors over multiple versions of the kernel source to estimate how long errors remain in the system before they are fixed. We found that device drivers have error rates up to three to seven times higher than the rest of the kernel. We found that the largest quartile of functions have error rates two to six times higher than the smallest quartile. We found that the newest quartile of files have error rates up to twice that of the oldest quartile, which provides evidence that code "hardens" over time. Finally, we found that bugs remain in the Linux kernel an average of 1.8 years before being fixed.
https://lx.interconsult.it/python-examples-of-numpy-arctan2/
As we progress through this article, numpy arctan2 will become clearer for you. Next, let's look at the syntax associated with it. First, we used the atan2 function directly on both a positive integer and a negative integer. The following statements find the angle for the corresponding values. Python Pool is a platform where you can learn and become an expert in every aspect of the Python programming language, as well as in AI, ML, and Data Science. If we perform the calculations concerning our outputs, we get answers of 45 and 30 degrees. The answers match, and hence the output is verified. With that, we are done with all the theory related to NumPy arctan2. Atan2 actually provides the correct quadrant with respect to the unit circle for your angle. I hope this article was able to clear all of your doubts. But in case you have any unsolved queries, feel free to write them below in the comment section. Done reading this? Why not read about the identity matrix next. In this section we will discuss the difference between two NumPy functions. These are two key points that differentiate arctan2 from the arctan function. A more detailed comparison between the two is discussed later in the article. In addition, we also saw its syntax and parameters. In the end, we can conclude that NumPy arctan2 is a function that helps us find the inverse tangent between two points. By default it returns the value in radians, but we can convert it to degrees using the methods discussed above.

## The Atan2 Function

This function is not defined for complex-valued arguments; for the so-called argument of complex values, use angle. The 0th element of the derivative vectors will correspond to the derivative with respect to the 0th element of the first argument.
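The 45- and 30-degree results mentioned above can be reproduced with a short sketch (the input values here are illustrative, chosen so the expected angles come out to 45° and 30°):

```python
import numpy as np

# arctan2(y, x) takes the two coordinates separately, so it can pick
# the correct quadrant that plain arctan(y / x) cannot distinguish.
y = np.array([1.0, 1.0])
x = np.array([1.0, np.sqrt(3)])

angles = np.degrees(np.arctan2(y, x))
print(np.round(angles, 6))  # ≈ [45. 30.]
```

Note that `np.degrees` (or `np.rad2deg`) is what converts the radian output of `arctan2` into degrees.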
Subsequent derivative vector elements correspond first to subsequent elements of the first input argument, and so on for subsequent arguments.

• But in case it is equal to true, the output array will be set to the computed universal-function value at that position.
• In addition, the label_lines function does not account for lines which have not had a label assigned in the plot command (or, more accurately, if the label contains "_line").

If the vector's length is 0, it won't have an angle between it and the horizontal axis (so it won't have a meaningful sine and cosine). After that, deltaX will be the cosine of the angle between the vector and the horizontal axis. Use the inner product and the determinant of the two vectors. This is really what you should understand if you want to understand how this works. Yes, I am calling it many times at the same angular location, but it's tough to know you're at the same angular location without calculating the angle. I'm not sure if it's a new bug, but when trying a new install from zero, including dependencies, I keep getting this message. I think there are some functions from 'math' being called from 'numpy', as the function mentioned in the error, in the module numpy, is called 'arctan2'. It calculates the element-wise arctangent of arr1 / arr2 by choosing the correct quadrant. Taking this into account, it gives a value between -π and π. These are the two key points that distinguish arctan2 from arctan. Arctan takes only one input value, and therefore cannot determine which of the two quadrants the angle lies in.

## Convert Complex Number To Polar Coordinates

Raises TypeError if either of the arguments are not integers.
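The "inner product and determinant" recipe above can be sketched as a small helper (the function name `signed_angle` is my own, not from the article):

```python
import numpy as np

def signed_angle(u, v):
    """Signed angle from 2-D vector u to 2-D vector v, in radians.

    The determinant (2-D cross product) is proportional to sin(theta),
    the inner product to cos(theta); arctan2 combines them with the
    correct sign and quadrant.
    """
    det = u[0] * v[1] - u[1] * v[0]   # |u||v| * sin(theta)
    dot = u[0] * v[0] + u[1] * v[1]   # |u||v| * cos(theta)
    return np.arctan2(det, dot)

# Rotating from the x-axis to the y-axis is +90 degrees.
print(np.degrees(signed_angle(np.array([1.0, 0.0]), np.array([0.0, 1.0]))))  # ≈ 90.0
```

Swapping the arguments flips the sign, which is exactly the clockwise-versus-counterclockwise information that the plain `arctan` ratio loses.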
Raises ValueError if either of the arguments are negative. Convert r and theta back into the original complex number. The radius r and the angle theta are the polar coordinate representation of 4 + 3i. Annotation bounding boxes sometimes interfere undesirably with other curves, as shown by the 1 and 10 annotations in the top left plot. Keyword arguments passed to labelLines or labelLine are passed on to the text function call. This is the length of the vector from the origin to the point given by the coordinates. The result is calculated in a way which is accurate for x near zero. If x is equal to zero, return the smallest positive denormalized representable float (smaller than the minimum positive normalized float, sys.float_info.min). When the iterable is empty, return the start value. This function is intended specifically for use with numeric values and may reject non-numeric types. Return the integer square root of the nonnegative integer n. This is the floor of the exact square root of n, or equivalently the greatest integer a such that a² ≤ n.

## Output:

Now let us go line by line and understand how we achieved the output. After that, we have defined our two sets of arrays. Using the syntax of our function and a print statement, we get our desired output. Returns an AutoDiff matrix given a matrix of values and a gradient matrix. The Python atan2 function returns the angle from the X-axis to the specified point. In this section, we discuss how to use the atan2 function in the Python programming language with an example. In this very short post I want to point you to some code for calculating the centroid and distance to that centroid for a set of points in numpy. Note that I already blogged about the centroid function in a previous post. In addition, the label_lines function does not account for lines which have not had a label assigned in the plot command (or, more accurately, if the label contains "_line").
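The polar-coordinate round trip for 4 + 3i described above can be sketched with the standard-library `cmath` module, whose `polar` function uses atan2 internally for the angle:

```python
import cmath
import math

z = 4 + 3j
r, theta = cmath.polar(z)        # r = |z| = 5.0, theta = atan2(3, 4)
print(r, math.degrees(theta))    # ≈ 5.0 36.87

# cmath.rect converts r and theta back into the original complex number.
z2 = cmath.rect(r, theta)
```

The radius is the length of the vector from the origin, and the angle is measured from the positive real axis, matching the atan2 convention.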
The convention is to return the angle z whose real part lies in [-pi/2, pi/2]. In the determinant computation, you're concatenating the two vectors to form a 2 × 2 matrix, for which you're computing the determinant. Similarly, if $$\tan(\theta)$$ is negative, that could mean that the angle is in either Quadrant II or Quadrant IV, where $$\sin(\theta)$$ and $$\cos(\theta)$$ have opposite signs. For more information on the math behind arctan, click here, or for arctan2, click here. In Mathematica, the form ArcTan is used where the one-parameter form supplies the normal arctangent. However, it still outputs an angle between [-pi, pi], which is not always useful (positive for the 1st and 2nd quadrants, and negative [-pi, 0] for the 3rd and 4th). The following functions are provided by this module. Except when explicitly noted otherwise, all return values are floats. It seems that module overlays the base numpy ufuncs for sqrt, log, log2, logn, log10, power, arccos, arcsin, and arctanh. The underlying design reason why it is done like that is probably buried in a mailing-list post somewhere. The sign of deltaY will tell you whether the sine described in step 4 is positive or negative. The sign of deltaX will tell you whether the cosine described in step 3 is positive or negative. This difference will be between -2π and 2π, so in order to get a positive angle between 0 and 2π you can then take the modulo against 2π. Finally, you can convert radians to degrees using np.rad2deg. The math.atan2() method returns the arc tangent of y/x, in radians. We hope this article has clarified all your doubts. But in case you have any unresolved questions, feel free to write them down in the comments section. Having read that, why not read about the identity matrix next.
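Mapping arctan2's [-π, π] output onto a positive [0, 2π) range via the modulo, and then converting to degrees with np.rad2deg, can be sketched as (the sample points are illustrative):

```python
import numpy as np

# Two points, one in quadrant II and one in quadrant III.
angles = np.arctan2([1.0, -1.0], [-1.0, -1.0])   # ≈ [ 3*pi/4, -3*pi/4 ]

# np.mod wraps negative angles around, giving values in [0, 2*pi).
positive = np.mod(angles, 2 * np.pi)

print(np.rad2deg(positive))                      # ≈ [135. 225.]
```

The quadrant-III angle, which arctan2 reports as -135°, becomes 225° after the wrap, which is often what you want for compass-style headings.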
Along with that, for a better general understanding, we will also look at its syntax and parameters. Then, we will see the application of the whole theoretical part through some examples. However, it turns out that there are two different definitions of the inverse tangent, and which one you use matters if x and y can both be positive or negative. This is exactly the case when using the Cartesian components of a location on a sphere to determine the longitude and latitude of that point. Regardless, I have written the following module which allows for semi-automatic plot labelling. It requires only numpy and a couple of functions from the standard math library. You could do this for your points A and B, then subtract the second angle from the first to get the signed clockwise angular difference. The numpy.arctan2() method computes the element-wise arc tangent of arr1/arr2, choosing the quadrant correctly. By default, the labelLines function assumes that all data series span the range specified by the axis limits. Take a look at the blue curve in the top left plot of the pretty picture. That being said, it gives out a value between -π and π. The range of this function is -180 to 180 degrees.
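The difference between the two inverse-tangent definitions discussed above can be seen directly: arctan only sees the ratio y/x, so quadrants I and III (and II and IV) collapse together, while arctan2 keeps them apart.

```python
import numpy as np

# For the point (-1, -1) in quadrant III, the ratio y/x is 1, so
# arctan cannot tell it apart from the point (1, 1) in quadrant I.
print(np.degrees(np.arctan(-1.0 / -1.0)))   # ≈ 45.0 (wrong quadrant)
print(np.degrees(np.arctan2(-1.0, -1.0)))   # ≈ -135.0 (quadrant III, correct)
```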
http://www.sciforums.com/threads/the-debt-ceiling-this-is-getting-ridiculous.159874/page-3
# The Debt Ceiling: This is getting ridiculous

Discussion in 'Politics' started by Xelor, Sep 7, 2017.

1. ### Xelor, Registered Senior Member

Messages: 155

To say that is to reject one of the core assumptions of economics: that people/entities/nations, when making economic decisions, exhibit, as best they can, rational utility-maximizing behavior. No. What my remarks reflect is a careful read of your assertion and a rigorous refutation of its legitimacy. Seriously? I presented you with empirical research (I even provided hyperlinks to the document) that shows that some of them are and some of them are not, and for those that are not, that there is no material impact on the growth/productivity of the affected economy. Your assertion that gave rise to my doing so is that natural disasters have impact, an "overwhelmingly" positive one.

Last edited: Sep 9, 2017

Messages: 155

???

5. ### joepistole (Deacon Blues), Valued Senior Member

Messages: 22,906

If the risk approaches 100% then it's no longer insurance, and that's when government steps in with subsidies. That's the traditional approach. That's what has happened in the healthcare industry. As the costs spiral and the product becomes unaffordable, government steps in with subsidies. It's cost shifting. It has already occurred in the home-ownership arena, e.g. flood insurance. Most of these insurers are publicly traded companies, so we can see their financials. We can see their losses, beginning next month when they report for this quarter. Insurance companies have money reserved for these losses based on a set of actuarial data. So it's not like insurers have been blindsided. They've known for a long time this day was coming, and they have been preparing for it. Longer term, this issue of rising sea levels will be an even bigger issue. The water will not drain away. It will be permanent. We will then need to decide how it will be handled. Do we abandon the land to the sea or do we build defensive structures?
I don't see how we can build a defensive structure around most of our coastline.

7. ### joepistole (Deacon Blues), Valued Senior Member

Messages: 22,906

And I presented you with a paper from the Federal Reserve. Did you understand your reference? I previously addressed your paper. Nothing has changed.

8. ### Xelor, Registered Senior Member

Messages: 155

Indeed, you did. You also completely ignored the introductory sentence in the prior paragraph that set the context for the one you shared. What does that sentence say? The critical part of it that you ignored: the word "temporary." The two paragraphs together: "temporary increases" accruing from the rebuilding efforts do not militate for one construing that, as you assert, "loss[es] from a natural disaster would be overwhelmed by the billions of dollars spent in reconstruction..." Moreover, Kliesen presented no quantification of the "temporary increases" nor of the "before and after" outputs and productivity. That quantification and the analysis of it is precisely what is found in the papers, 21st-century ones, I referenced and described, and the quantification does not support the emboldened assertion shown above. That the essay you cited is from the late 20th century is relevant. The late twentieth century was the heyday whereby talented and facile theorists set the intellectual agenda. Their very facility enabled them to build models with virtually any implication, which meant that policy makers could pick and choose at their convenience. Theory turned out to be too malleable, in other words, to provide reliable guidance for policy. In contrast, the 21st century has become the age wherein information-processing advances (mostly increases in computations per second) have given empiricists sway, and economic conclusions and advice are grounded in concrete observation of markets and their inhabitants' actual behavior.
Work in economics, including the abstract model building in which theorists engage, has become guided more powerfully by the resulting real-world observation. Simply put, the notions Kliesen posited in The Economics of Natural Disasters have been empirically tested by the researchers whose work I cited. Their work shows that while the ideas Kliesen expressed are applicable, they are not applicable to the extent you've overstated them by writing "loss[es] from a natural disaster would be overwhelmed by the billions of dollars spent in reconstruction..."

9. ### Quantum Quack (Life's a tease...), Valued Senior Member

Messages: 18,366

A couple of things. It is not so much rising sea levels that I am talking about, as this is longer term. It is more to do with regular and frequent >Cat5 hurricane events that are trending to occur every year with an accompanying damages bill in excess of many billions, not to mention the regular need to evacuate. Premised on the atmosphere's water saturation (mass) passing a threshold due to increased levels of evaporation from hotter oceans, leading to the observed trend of massive changes in climate dynamics. And... has it occurred to you that the money being allocated to Houston's disaster relief and rebuilding could have been better spent on failing infrastructure across the nation instead of fixing something a storm tore down? To suggest that the climate disasters somehow continue to allow the money available to be utilized in a way that is most effective and efficient would be silly, yes? Basically this sort of uninsured storm damage is very costly to any nation, and to think otherwise is ridiculous. Printing money doesn't solve the fundamental expense but only hides it... (a bit like firing off a 2 million dollar missile in Iraq is money literally up in smoke; print it just to burn it, sort of thingo).

Last edited: Sep 9, 2017

10.
### Quantum Quack (Life's a tease...), Valued Senior Member

Messages: 18,366

The point related to this thread is that the USA has to do a lot more than raise the debt ceiling and believe that life is going to go on as it has before. It won't, and change (adaptation) is a must if the USA is to avoid the financial hemorrhaging that climate change events are causing and are going to increasingly cause.

11. ### joepistole (Deacon Blues), Valued Senior Member

Messages: 22,906

The money spent thus far is a pittance. It's big money for an individual, but for the country it's a pittance. A few billion to a 19 trillion dollar economy isn't much. Even if the cost exceeds 100 billion dollars, it's still a pittance in the overall scope of things. The last disaster of this magnitude occurred in 2005.

12. ### joepistole (Deacon Blues), Valued Senior Member

Messages: 22,906

The US needs to be fiscally and ethically responsible. That's the bottom line, but that's a much bigger issue than just a few disasters. We have many big issues that need to be addressed, e.g. our political system and wealth inequality. These natural disasters are the least of our problems. The misinformation injected into our political system with the rise of the right-wing entertainment industry is a much, much bigger problem. It's why it is so difficult to raise the debt ceiling. It's why it's so difficult to act in a fiscally responsible manner. Just a few decades ago, before the rise of right-wing entertainment, we didn't have these problems. Now we have a right-wing cult controlling our government.

13. ### Quantum Quack (Life's a tease...), Valued Senior Member

Messages: 18,366

Maybe I am being a tad too hypothetical with the question, but it gets summed up by asking: How many times do you evacuate an area/region/city per year before you abandon it altogether? (rhetorical) How many times a year do you evacuate Houston before you start to realize something...? (rhetorical)

Messages: 5,359

15.
### joepistole (Deacon Blues), Valued Senior Member

Messages: 22,906

The current situation aside, I don't recall the last time we evacuated Houston. It doesn't happen that often. I don't think there is an answer to your question. When hurricane Katrina hit New Orleans, some folks asked that question and doubted the wisdom of rebuilding a coastal city which was already below sea level. But we rebuilt the city nonetheless. These people, the people who live in the affected states, believe climate change is a hoax. I guess we will find out. But how do you abandon an entire state?

16. ### iceaura, Valued Senior Member

Messages: 27,107

Irma and Jose are both Atlantic hurricanes, not Gulf or Caribbean. The slope of the regression line appears to depend on one outlying year. The hurricane count does not measure severity, which is the actual issue in climate-change storm warnings.

17. ### Quantum Quack (Life's a tease...), Valued Senior Member

Messages: 18,366

I am not sure why you think frequency of hurricanes is an issue when size and strength is... Also the data stops at 2012, which in climate-change terms, given recent events, is 5 years obsolete. I would anticipate, as the weather moves towards more and more extreme outcomes, that frequency may decline even more so, to accommodate or balance the uptick in strength. Less frequent but considerably more damaging. Well pointed out... this is my point about money being pretty much impotent when we are talking about abandoning real estate. Often people consider only rising sea levels but fail to consider that whole regions could be severely affected mainly due to weather severity. If weather records continue to break then it is inevitable that entire regions may need to be abandoned... This is hellishly expensive... an expense that those suffering the uptick in monsoonal flooding in Asia are having to seriously consider... right now.

18. ### joepistole (Deacon Blues), Valued Senior Member

Messages: 22,906

Welcome to global warming.
Global warming is an existential threat to the species. We are in the early stages. These events are relatively rare but are increasing in severity and frequency. We will do as we have always done. We will build stronger structures and build walls to protect low-lying areas. We are already taking action to mitigate our carbon footprints, but I fear it is too little, too late. We need to get more aggressive in our efforts to address global warming; whether we do or not remains to be seen.

Messages: 5,359

20. ### iceaura, Valued Senior Member

Messages: 27,107

So? The Galveston hurricane became a hurricane in the Gulf, during what was probably its first encounter with very warm water (possibly because although AGW was definitely taking effect in 1900, it had not yet warmed the open Atlantic ocean as it has since), although wind shear and other factors also figure in. Not only was it smaller and weaker than Irma, but it did not ramp up to hurricane strength over the open Atlantic as Irma did. It hit Antigua as a "very strong thunderstorm", for example. https://en.wikipedia.org/wiki/1900_Galveston_hurricane That's how it surprised people, and killed so many: it blew up quickly, just before coming ashore. Modern day, with modern weather forecasting and other governmental preparations, it surprises nobody, and kills about as many people as just got killed in the Keys by Irma, or Houston by Harvey, or Cuba by Gustav (actually, probably more than Gustav; different government). Which brings us to the debt ceiling: the reductions in government spending proposed by the Party attempting to use that ceiling as leverage include defunding of current climate research as well as Federal aid and disaster preparations, building codes and advance infrastructure regulations, etc. They have specifically referred to the Federal and State governance in place in 1900 in Galveston, Texas as a desirable goal or target.

21.
### Quantum Quack (Life's a tease...), Valued Senior Member

Messages: 18,366

Falling victim to the temptation to conflate, exaggerate, and dramatize an issue is really easy to do with regard to such serious issues. There is no doubt that the climate scientists have done their calculations over water evaporation rates etc. as the oceans gain temperature. Likewise, one can assume, if not paranoid, that those same scientists would inform the public if there was any predicted immediate- to short-term threat detected. So far there appears to be none, so I ask myself, "Why am I so concerned when the scientists appear not to be?" So I apologize for raising the issue of potentials, perhaps prematurely, due to excessive speculation (paranoia) on my part. It is concerning, however, that the main demonstration of climate-change denial is also evident in economic policy, I guess at all levels. A human condition thingo perhaps... The recently published estimated cost of both Harvey and Irma runs to $290 billion,* and as some have suggested, maybe just printing money and raising the debt ceiling will deal with the loss... this year. But I wonder what happens if it is repeated more so this year and the next and so on...

* src: https://www.accuweather.com/en/weat...ost-of-harvey-irma-to-be-290-billion/70002686

It has been estimated that Lloyd's of London will suffer a loss of $200 billion (Harvey, Irma and Jose), which brings me to the point about what happens when insurers decide enough is enough and refuse to offer insurance due to an excessive predicted risk of loss. (re: https://www.businessinsider.com.au/hurricanes-irma-and-harvey-to-cost-lloyds-of-london-insurers-150-billion-2017-9?r=US&IR=T)

22. ### sculptor, Valued Senior Member

Messages: 5,359

or:

1. The Great Havana Hurricane of 1846

Main article: Great Havana Hurricane of 1846

In October, a major hurricane, likely a Category 5, moved through the Caribbean Sea. This Great Havana Hurricane struck western Cuba on October 10.
It was the most intense tropical cyclone in recorded history for 78 years and the first known Category 5-strength hurricane to strike Cuba. Unusual in many aspects, the 1846 Havana hurricane was the most intense of its time; atmospheric pressure readings in Cuba reached as low as 916 mbar. Although no reliable wind measurements were available at the time, a separate study also estimated that it produced Category 5-strength winds. In Cuba, the storm caused hundreds of deaths, capsized dozens of ships, obliterated buildings, uprooted trees, and ruined crops. Many towns were wholly destroyed or flattened and never recovered, while others disappeared entirely. It hit the Florida Keys on 11 October, destroying the old Key West lighthouse, the Sand Key Light, and Fort Zachary Taylor. In Key West, widespread destruction was noted, with 40 deaths, many vessels rendered unfit, and widespread structural damage, with all but eight of the 600 houses in Key West damaged or destroyed. Few supplies arrived in the following days and relief efforts were gradual, with few resources within the town's vicinity. The hurricane was so destructive that years afterward, greenery on the key was sparse, and little native vegetation existed. Signs of ecological damage remained even in the early 1880s. The hurricane then headed northward and on October 13 hit Tampa Bay as a major hurricane. As it approached, it sucked the water out of the bay, causing the Manatee River to be so low that people walked horses across it. The hurricane moved across Florida and remained inland over Georgia, South Carolina, and North Carolina. It moved up the Chesapeake Bay, causing extensive damage through Virginia, Maryland, Washington D.C., and Pennsylvania. It caused around 163 deaths and damage throughout the areas it affected.

23. ### pjdude1219 (The biscuit has risen), Valued Senior Member

Messages: 15,997

Do you have a point or are you just rambling for the sake of your own self-aggrandizement?
https://socratic.org/questions/point-p-10-14-is-a-point-external-to-a-circle-points-a-5-6-b-x-y-c-1-4-are-on-th
# Point P(-10,14) is a point external to a circle. Points A(5,6), B(x,y), C(1,4) are on the circle. Line PC is tangent to circle and P, A, and B are collinear, with B between P and A. Find the coordinates of point B. Give your answers to two decimal places?

### This answer has been featured!

Featured answers represent the very best answers the Socratic community can create.

CW · Dec 16, 2017

$B \left(x , y\right) = \left(\frac{25}{17} , \frac{134}{17}\right) = \left(1.47 , 7.88\right)$

#### Explanation:

By the Tangent-Secant theorem, we know that $P {C}^{2} = P A \cdot P B$.

$P {C}^{2} = {\left(4 - 14\right)}^{2} + {\left(1 + 10\right)}^{2} = 100 + 121 = 221$

$P A = \sqrt{{\left(6 - 14\right)}^{2} + {\left(5 + 10\right)}^{2}} = \sqrt{64 + 225} = \sqrt{289} = 17$

$\implies P B = \frac{P {C}^{2}}{P A} = \frac{221}{17} = 13$

$\implies P B : B A = 13 : \left(17 - 13\right) = 13 : 4$

$\implies B$ divides line $P A$ in the ratio $13 : 4$.

Section formula: if a point $B \left(x , y\right)$ divides a line joining $P \left({x}_{1} , {y}_{1}\right)$ and $A \left({x}_{2} , {y}_{2}\right)$ in the ratio $m : n$, then

$B \left(x , y\right) = \left(\frac{m \cdot {x}_{2} + n \cdot {x}_{1}}{m + n} , \frac{m \cdot {y}_{2} + n \cdot {y}_{1}}{m + n}\right)$

$\implies B \left(x , y\right) = \left(\frac{13 \cdot 5 + 4 \cdot \left(- 10\right)}{13 + 4} , \frac{13 \cdot 6 + 4 \cdot 14}{13 + 4}\right) = \left(\frac{25}{17} , \frac{134}{17}\right) = \left(1.47 , 7.88\right)$
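As a quick numeric cross-check of the tangent-secant and section-formula steps (a sketch, not part of the original answer):

```python
import math

P = (-10.0, 14.0)
A = (5.0, 6.0)
C = (1.0, 4.0)

PC2 = (C[0] - P[0]) ** 2 + (C[1] - P[1]) ** 2   # squared tangent length = 221
PA = math.hypot(A[0] - P[0], A[1] - P[1])       # secant length PA = 17
PB = PC2 / PA                                   # PB = 13, from PC^2 = PA * PB

# Section formula: B divides PA in the ratio m:n = PB : (PA - PB) = 13 : 4.
m, n = PB, PA - PB
B = ((m * A[0] + n * P[0]) / (m + n),
     (m * A[1] + n * P[1]) / (m + n))

print(round(B[0], 2), round(B[1], 2))           # 1.47 7.88
```

This agrees with the exact coordinates (25/17, 134/17) derived above.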
https://testbook.com/question-answer/consider-the-following-statements1-soil-with-h--60520f342903c077eb6e6307
# Consider the following statements:

1. Soil with a high void ratio always has a higher coefficient of permeability than soil with a lower void ratio.
2. The constant head permeability test is used for fine-grained soil.
3. As temperature increases, the coefficient of permeability of soil also increases.
4. As the specific surface area of soil particles increases, the coefficient of permeability decreases.

Which of these statements are correct?

This question was previously asked in GPSC AE CE 2018 Official Paper (Part B - Civil).

1. 1 and 2
2. 2 and 3
3. 1, 3 and 4
4. 4 and 1

Correct answer: Option 3 (1, 3 and 4)

## Detailed Solution

Explanation:

Permeability is the property of a porous material that permits the passage (seepage) of water or other fluids through its interconnecting voids. The factors affecting the permeability of soil can be studied using the following equation:

$$K = \frac{1}{Z}\,\frac{e^3}{1 + e}\,\frac{\gamma_w}{\mu}\,\frac{1}{S^2}$$

where Z = a constant, e = void ratio, μ = dynamic viscosity of water, S = specific surface, and γw = unit weight of water.

| Parameter | Effect on permeability |
| --- | --- |
| Size of particle | The larger the particle size, the higher the permeability |
| Specific surface area | The higher the specific surface area, the lower the permeability |
| Void ratio | The higher the void ratio, the higher the permeability |
| Viscosity of water | The higher the viscosity, the lower the permeability; the viscosity of water increases as temperature decreases, so permeability drops with falling temperature |
| Degree of saturation | The higher the degree of saturation, the higher the permeability |
| Entrapped gases | The more gas entrapped in the soil mass, the lower the permeability |

Temperature:

- With an increase in temperature, the viscosity of the pore water decreases. Since permeability is inversely proportional to viscosity (see the relation above), the permeability of soil increases with temperature.

Tests for permeability:

- In the constant head permeameter, flow through the specimen takes place under a constant head, and the permeability is computed from the measured discharge.
- This method is suitable for coarse-grained soil, whose high permeability yields a substantial discharge within the test duration.
- For fine-grained soil, the time required to collect a measurable discharge is impractically long, so the constant head test is unsuitable; statement 2 is therefore incorrect.
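The trends in the table follow directly from the permeability relation quoted above. The sketch below encodes it as a function; the unit system and the constant Z are arbitrary here, chosen only so that relative comparisons are meaningful:

```python
def permeability(e, S, mu, Z=1.0, gamma_w=9.81):
    """K = (1/Z) * e^3/(1+e) * (gamma_w/mu) * (1/S^2).

    e: void ratio, S: specific surface, mu: dynamic viscosity of water,
    gamma_w: unit weight of water. Units are arbitrary in this sketch;
    only relative comparisons are meaningful.
    """
    return (1.0 / Z) * (e ** 3 / (1.0 + e)) * (gamma_w / mu) / S ** 2

# Statement 1: higher void ratio -> higher permeability
assert permeability(e=0.9, S=1.0, mu=1.0) > permeability(e=0.5, S=1.0, mu=1.0)
# Statement 3: higher temperature -> lower viscosity -> higher permeability
assert permeability(e=0.6, S=1.0, mu=0.8) > permeability(e=0.6, S=1.0, mu=1.0)
# Statement 4: higher specific surface -> lower permeability
assert permeability(e=0.6, S=2.0, mu=1.0) < permeability(e=0.6, S=1.0, mu=1.0)
```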
https://theory.leeds.ac.uk/leeds-loughborough-nottingham-seminar/leeds-loughborough-nottingham-seminars-archive/
# Past seminars in 2020/21 and 2021/22 • ### 11/05: Benedikt Kloss (Flatiron) Abstract: Whether many-body localization (MBL) can exist in one-dimensional systems with local interactions in the absence of quenched disorder is an open question. Recently, interacting systems in a tilted field have emerged as a candidate to exhibit MBL-like behaviour. In this talk I will provide a thorough numerical analysis of this proposition. In particular, I will discuss the role of an additional discrete symmetry in the case of a purely linear field and its implications for localization. • ### 04/05: Miguel Frías-Pérez (MPG) Abstract: I will present a new family of Markov chains based on tensor network contractions, and demonstrate that it allows one to greatly mitigate some of the limitations of more traditional approaches, such as slow convergence in the presence of competing interactions. Results of applying the method to the two- and three-dimensional Ising models will be shown, along with some preliminary results on continuous models. • ### 27/04: Romain Vasseur (UMass Amherst) Abstract: Monitored random circuits (MRCs) exhibit a measurement-induced phase transition between area-law and volume-law entanglement scaling as a function of the measurement rate. In this talk, I will first review our understanding of such measurement-induced phase transitions. I will argue that MRCs with a conserved charge additionally exhibit two distinct volume-law entangled phases that cannot be characterized by equilibrium notions of symmetry-breaking or topological order, but rather by the non-equilibrium dynamics and steady-state distribution of charge fluctuations.
These include a charge-fuzzy phase in which charge information is rapidly scrambled leading to Luttinger-liquid-like spatial fluctuations of charge in the steady state, and a charge-sharp phase in which measurements collapse quantum fluctuations of charge without destroying the volume-law entanglement of neutral degrees of freedom. I will present some statistical mechanics and effective field theory approaches to such charge-sharpening transitions. • ### 13/04: Lei Ying (Zhejiang) Abstract: Quantum many-body scarring (QMBS), as a novel coherent state mitigating the quantum thermalization, exhibits potential applications in quantum information. As yet, existing experimental realizations of the QMBS are based on kinetically-constrained systems, realized on atomic platforms. In this talk, we will introduce a distinct kind of QMBS states by approximately decoupling a part of the many-body Hilbert space in the computational basis. Utilizing a programmable superconducting processor with 30 qubits and 29 tunable couplings, we realize such a Hilbert space scarring in a non-constrained model with different lattice geometries, including a linear chain as well as a quasi-one-dimensional comb configuration. We provide direct evidence for QMBS states by measuring qubit population dynamics, quantum fidelity and entanglement entropy. Our experimental findings broaden the realm of QMBS mechanisms and pave the way to exploiting correlations in QMBS states for applications in quantum information technology. References: arXiv:2201.03438v2. • ### 13/04: Silvia Pappalardi (ENS Paris) Abstract: In the past few years, there has been considerable activity around a set of quantum bounds on transport coefficients (viscosity, conductivity) and chaos (Lyapunov exponents), relevant at low temperatures. The interest comes from the fact that black-hole models seem to saturate all of them. 
However, the relation between the different bounds and the physical properties of the systems saturating them is still a matter of ongoing research. In this talk, I will discuss how one can gain physical intuition by studying classical and quantum free dynamics on curved manifolds. Thanks to the curvature, such models display chaotic dynamics up to low temperatures and, as I will show, they violate the bounds in the classical limit. The talk aims to discuss three different ways in which quantum effects arise to enforce the bounds in practice. For instance, I will show how chaotic behaviour is limited by the quantum effects of the curvature itself. As an illustrative example, I will consider the simple case of a free particle on a two-dimensional manifold, constructed by joining the surface of constant negative curvature — a paradigmatic model of quantum chaos — to a cylinder. The resulting phenomenology can be generalized to the case of several (constant) curvatures. The presence of a hierarchy of length scales enforces the bound to chaos up to zero temperature. * Pappalardi, Kurchan, Low temperature quantum bounds on simple models, arXiv:2106.13269 (2021). • ### 23/03: Sarang Gopalakrishnan (Penn State) Title: Diffusion
• ### 23/03: Bruno Bertini (Nottingham) Title: Duality Approach to the Spectral Statistics I will describe a spacetime-duality-based approach to access the spectral statistics of quantum many-body systems. The approach can be applied to generic systems in discrete time but in general it can be pushed to the end only with the aid of numerical computations. In certain cases, however, it leads to exact analytical results. I will describe two classes of local quantum circuits where this can indeed be done: dual-unitary circuits and strongly localising circuits. I will show that these two classes of circuit systems can be respectively considered minimal realisations of chaotic and localised quantum many-body systems. • ### 09 February, 3pm UK time: Guoxian Su (Heidelberg) Title: Quantum simulation of thermalization dynamics: from lattice gauge theory to many-body scars Abstract: Advances in quantum simulation have enabled experimental investigation of novel phases of matter. In particular, ultracold atoms in optical lattices present an ideal platform for simulating the physics of strongly correlated quantum many-body systems. In the first part of this talk, I will present the realization of a U(1) lattice gauge theory in a Bose–Hubbard quantum simulator. We investigate the emergent thermal equilibrium by quenching from various initial states and observe the subsequent gauge-invariant dynamics. We demonstrate the effective loss of information as different initial states with the same conserved quantity approach a common steady-state predicted by the thermal ensemble. In the second part, we study the slowed thermalization dynamics with many-body scars in the quantum simulator. We realize many-body scarring by emulating the PXP model with the tilted Bose-Hubbard model and demonstrate unconventional scarring in the presence of detuning potential. 
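As a concrete illustration of the dual-unitarity notion mentioned in the abstract above, the sketch below checks whether a two-qubit gate remains unitary after a spacetime reshuffle of its indices. This is a minimal, self-contained sketch: the index convention used for the dual gate is one common choice (conventions in the literature differ by reshuffles), and the gates tested (SWAP and the identity) are standard textbook examples, not anything specific to the talk:

```python
def idx(a, b):
    # two-qubit basis ordering: |a b> -> 2*a + b
    return 2 * a + b

def is_unitary(U, tol=1e-12):
    # check U^dagger U == identity for a 4x4 complex matrix
    for m in range(4):
        for n in range(4):
            s = sum(U[p][m].conjugate() * U[p][n] for p in range(4))
            if abs(s - (1.0 if m == n else 0.0)) > tol:
                return False
    return True

def dual(U):
    # spacetime dual: Ut[(k,l),(i,j)] = U[(j,l),(i,k)]
    Ut = [[0j] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    Ut[idx(k, l)][idx(i, j)] = U[idx(j, l)][idx(i, k)]
    return Ut

def is_dual_unitary(U):
    return is_unitary(U) and is_unitary(dual(U))

# SWAP gate: |a b> -> |b a>
SWAP = [[0j] * 4 for _ in range(4)]
for a in range(2):
    for b in range(2):
        SWAP[idx(b, a)][idx(a, b)] = 1 + 0j

# identity gate
IDENT = [[1 + 0j if m == n else 0j for n in range(4)] for m in range(4)]

assert is_dual_unitary(SWAP)                             # unitary in both directions
assert is_unitary(IDENT) and not is_dual_unitary(IDENT)  # unitary in time only
```

With this convention the SWAP gate is dual-unitary, while the identity gate, although unitary in the time direction, fails the check in the space direction.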
By fine-tuning the periodic driving parameters, we show the many-body system can retain initial state information well beyond experimentally accessible times. Our work establishes new realms for studying non-equilibrium phenomena in complex quantum systems and paves the way for exploring more complex thermalization dynamics on synthetic quantum matter devices. • ### 02 March, 3pm UK time: Thomas Bilitewski (Colorado) In this talk I will discuss many-body phenomena of topical interest using the lens of classical spin systems. I will begin with results on classical out-of-time-ordered correlators (OTOCs) in a chaotic spin liquid. I will demonstrate the persistence of chaos down to T=0, together with the emergence of the ballistic propagation of a perturbation in a system without well-defined spin-waves. To quantify the chaos in this system I will use the butterfly speed and the Lyapunov exponent characterising the spread and growth of the OTOCs, and discuss their temperature scaling. I will then discuss how this phenomenology is changed in the presence of thermal phase transitions, where a symmetry is broken and (ballistic) quasi-particles/spin-waves emerge that subsume the chaotic butterfly speed found in the paramagnetic spin-liquid. Bonus: time permitting, I will discuss recent results on long-time anomalous hydrodynamics/scaling in the classical 1D Heisenberg chain, based on Phys. Rev. Lett. 121, 250602 (2018), Phys. Rev. B 103, 174302 (2021), and arXiv:2108.11964. • ### 02 February, 3pm UK time: Netanel Lindner (Technion) Topology and Dynamical Liquid Crystallinity in Many-Body Floquet Systems “Floquet engineering” of band structures through the application of coherent time-periodic drives has recently emerged as a powerful tool for creating new types of topological phases. We show that this tool can also be used to induce non-equilibrium correlated states with dynamical spontaneously broken symmetry.
In particular, we study lightly doped semiconductors driven by a resonant driving field. We show that such a system can spontaneously develop quantum liquid crystalline order featuring extreme anisotropy whose directionality rotates as a function of time. The phase transition to this correlated state occurs in the steady state of the system achieved due to the interplay between the coherent external drive, electron-electron interactions, and dissipative processes arising from the coupling to phonons and the electromagnetic environment. Our results demonstrate how Floquet engineering can be used to induce novel non-equilibrium phases exhibiting an interplay of topology and dynamical symmetry breaking. • ### 26 January, 3pm UK time: Pietro Brighi (IST Austria) Title: Propagation of many-body localization in an Anderson insulator Abstract: In this talk, I will present our recent work on the interplay of many-body localized (MBL) systems and small baths. Recently, the fate of localized particles when coupled to a small thermalizing system, viewed as a quantum bath, received significant attention both theoretically and experimentally. In this work, we discuss the smallest possible quantum bath, consisting of a single mobile impurity, interacting locally with an Anderson insulator with finite particle density. Through perturbative arguments, we provide an approximate framework where localization is stable against the effect of the thermalizing particle. Next, we analyze the dynamics of the system both in an approximate time-dependent Hartree picture and through the quasi-exact time-evolving-block-decimation (TEBD). While the approximate dynamics, ignoring entanglement among the two particle species, results in late-time thermalization, the full dynamics presents sound evidence of localization. 
We further develop a phenomenological picture based on the localization of the mobile particle, predicting that the impurity turns the previously non-interacting Anderson insulator into an MBL phase, giving rise to non-trivial entanglement patterns in good agreement with the numerical simulations. Finally, we use an extension of the density-matrix renormalization group (DMRG) algorithm to highly excited states to sample the middle of the spectrum. Through the study of observables and entanglement in the highly excited eigenstates we confirm the picture introduced in the dynamics.  Dynamics and the DMRG-X results provide compelling evidence for the stability of localization.References:  arXiv:2109.07332, arXiv:2111.08603. • ### 24 November, 3pm UK time: Balázs Pozsgay (Eötvös Loránd University) Title: Lindblad equations with Yang-Baxter integrability Abstract: The Yang-Baxter equation is one of the cornerstones of integrability, it leads to a canonical framework for the construction of integrable spin chains and other models. In the last 5 years interest also turned towards Lindblad systems, and the question was asked, whether there are integrable Lindblad equations with an underlying Yang-Baxter structure. The Lindblad equation describes coupling with an environment, including losses or external driving, and it is a linear equation for the density matrix. In this talk we show that there are indeed Yang-Baxter integrable Lindblad systems. We focus on quantum spin chains, and give examples including the first such system found by Essler and Prosen. Afterwards we explain our recent work which shows how to find/construct such integrable equations from scratch. 
• ### 17 November, 3pm UK time: Michael Knap (TUM) Title: Anomalous transport and operator growth in constrained quantum matter Abstract: The far-from-equilibrium dynamics of generic interacting quantum systems is characterized by a handful of universal guiding principles, among them the diffusive transport of globally conserved quantities and the ballistic spreading of initial local operators. Here, we show that in certain constrained many-body systems the structure of conservation laws can cause a drastic modification of this universal behavior. In particular, we focus on a dipole-conserving “fracton” chain which exhibits a localization transition, separating an ergodic dynamical phase from a frozen one. Even in the ergodic phase, transport is anomalously slow and exhibits subdiffusive scaling. We explain this finding by developing a general hydrodynamical model that yields an accurate description of the scaling form of charge correlation functions. Furthermore, we investigate the operator growth characterized by out-of-time-ordered correlation functions (OTOCs) in this dipole-conserving system. We identify a critical point, tied to the underlying localization transition, with an unconventional, sub-ballistically moving OTOC front. We use the scaling properties at the critical point to derive an effective description of the moving operator front via a biased random walk with long waiting times and support. Our arguments are supported numerically by classically simulable automaton circuits. J. Feldmeier, P. Sala, G. de Tomasi, F. Pollmann, MK, PRL 125, 245303 (2020). • ### 10 November, 3pm UK time: Jiri Minar (Amsterdam) Title: Disorder enhanced quantum many-body scars in Hilbert hypercubes Abstract: I will start by discussing the role of phonons in lattice Rydberg gases and how they can be exploited to engineer various lattice spin models with realistic (correlated) disorder.
I will then focus specifically on a model arising in facilitated Rydberg chains, which features a Hilbert space with the topology of a d-dimensional hypercube. This allows for a straightforward interpretation of the many-body dynamics in terms of a single-particle one on the Hilbert space and provides an explicit link between the many-body and single-particle scars. Exploiting this perspective, we show that an integrability-breaking disorder enhances the scars followed by inhibition of the dynamics due to strong localization of the eigenstates in the large disorder limit. Additionally, mapping the model to the spin-1/2 XX Heisenberg chain offers a geometrical perspective on the recently proposed Onsager scars [Phys. Rev. Lett. 124, 180604 (2020)], which can be identified with the scars on the edge of the Hilbert space. Based on: arXiv:1607.06295, arXiv:1802.00379, arXiv:2012.05310 • ### 3 November, 3pm UK time: Pieter Claeys (Cambridge) Title: Absence of superdiffusion in certain random spin models Abstract: The dynamics of spin at finite temperature in the spin-1/2 Heisenberg chain was found to be superdiffusive in numerous recent numerical and experimental studies. Theoretical approaches to this problem have emphasized the role of nonabelian SU(2) symmetry as well as integrability, but the associated methods cannot be readily applied when integrability is broken. After an introduction to superdiffusion I will examine spin transport in such a spin-1/2 chain in which the exchange couplings fluctuate in space and time, breaking integrability but not spin symmetry, showing that operator dynamics in the strong noise limit can be analyzed using conventional perturbation theory. I will argue that the spin dynamics undergo enhanced diffusion with some interesting transient behavior rather than superdiffusion, comparing the dynamics with both a hydrodynamic approach and tensor network simulations. 
Based on arXiv:2110.06951 • ### 27 October, 3pm UK time: Yves Ywan (Oxford) Title: Beyond the Freshman’s Dream: Classical fractal spin liquids from matrix cellular automata in three-dimensional lattice models Abstract: We consider disorder-free Hamiltonians consisting of three-body Ising interactions on two realistic 3D lattices: trillium and hyperhyperkagome. Like the well-studied 2D Newman-Moore (NM) model, our 3D models possess trivial thermodynamics but exhibit ‘fragile’ glassy dynamics arising from the hierarchical and immobile nature of the low-energy excitations. Unlike the NM model, the structure of the ground state and its excitations cannot be described by scalar cellular automata (CA). Instead, we show how matrix CAs provide the necessary language to understand the fractal symmetries and ‘fractons’ present in our 3D models. We comment on the effect of quantum fluctuations introduced by a transverse magnetic field. This talk is based on arXiv:2109.06207. • ### 20 October, 3pm UK time: Katja Klobas (Oxford) Title: Exact thermalization dynamics in the “Rule 54” Quantum Cellular Automaton Abstract: When a generic isolated quantum many-body system is driven out of equilibrium, its local properties are eventually described by the thermal ensemble. This picture can be intuitively explained by saying that, in the thermodynamic limit, the system acts as a bath for its own local subsystems. Despite the undeniable success of this paradigm, for interacting systems most of the evidence in support of it comes from numerical computations in relatively small systems, and there are very few exact results. In the talk, I will present an exact solution for the thermalization dynamics in the “Rule 54” cellular automaton, which can be considered the simplest interacting integrable model. After introducing the model and its tensor-network formulation, I will present the main tool of my analysis: the space-like formulation of the dynamics.
Namely, I will recast the time-evolution of finite subsystems in terms of a transfer matrix in space and construct its fixed-points. I will conclude by showing two examples of physical applications: dynamics of local observables and entanglement growth. The talk is based on a recent series of papers: arXiv:2012.12256,arXiv:2104.04511, and arXiv:2104.04513. • ### 13 October, 3pm UK time: Andrea De Luca (CNRS) Title: Universal out-of-equilibrium dynamics of 1D noisy critical quantum systems Abstract: We consider critical one dimensional quantum systems initially prepared in their groundstate and perturbed by a smooth noise coupled to the energy density. By using conformal field theory, we deduce a universal description of the out-of-equilibrium dynamics. In particular, the full time-dependent distribution of any 2-pt chiral correlation function can be obtained from solving two coupled ordinary stochastic differential equations. In contrast with the general expectation of heating, we demonstrate that the system reaches a non-trivial and universal stationary state characterized by broad distributions. As an example, we analyse the local energy density: while its first moment diverges exponentially fast in time, the stationary distribution, which we derive analytically, is symmetric around a negative median and exhibits a fat tail with 3/2 decay exponent. We obtain a similar result for the entanglement entropy production associated to a given interval of size L. The corresponding stationary distribution has a 3/2 right tail for all L, and converges to a one-sided Levy stable for large L. • ### 06 October, 3pm UK time: Adam Smith (Nottingham) Title:  Identifying Correlation Clusters in Many-Body Localized Systems Abstract:  We introduce techniques for analysing the structure of quantum states of many-body localized (MBL) spin chains by identifying correlation clusters from pairwise correlations. 
These techniques proceed by interpreting pairwise correlations in the state as a weighted graph, which we analyse using an established graph theoretic clustering algorithm. We validate our approach by studying the eigenstates of a disordered XXZ spin chain across the MBL to ergodic transition, as well as the non-equilibrium dynamics in the MBL phase following a global quantum quench. We successfully reproduce theoretical predictions about the MBL transition obtained from renormalization group schemes. Furthermore, we identify a clear signature of many-body dynamics analogous to the logarithmic growth of entanglement. The techniques that we introduce are computationally inexpensive and in combination with matrix product state methods allow for the study of large scale localized systems. Moreover, the correlation functions we use are directly accessible in a range of experimental settings including cold atoms. Reference: arXiv:2108.03251 • ### 29 September, 3pm UK time: Spyros Sotiriadis (FU Berlin/Crete) Signatures of Chaos in Non-integrable Models of Quantum Field Theory Abstract: Despite the growing interest in the study of quantum chaos in many-body systems, numerical tests of chaoticity signatures, like spectral statistics, are almost exclusively limited to lattice models, leaving continuous models largely unexplored. Among them, relativistic Quantum Field Theories (QFTs) and their dynamics lie at the cornerstone of important open questions of theoretical physics, like the black hole information paradox, making the study of ergodicity in QFT a topic of fundamental interest. Here we study signatures of quantum chaos in (1+1)D QFTs and show that, even though their level spacing statistics agree with the predictions of Random Matrix Theory, their eigenvector components follow a distribution markedly different from the expected Gaussian, raising questions on the validity of the Eigenstate Thermalisation Hypothesis in these models.
To derive and validate our results we push the limits of the numerical method of Hamiltonian truncation beyond earlier studies and devise strict measures of the truncation error. • ### 22 September, 3pm UK time: Ivan Khaymovich (MPI-PKS Dresden) Title: Random-matrix approach to slow dynamics in quantum systems Abstract: In this talk, we will discuss a random-matrix approach to the description of disordered many-body systems and their Hilbert-space structure, focusing on ergodicity-breaking effects and slow dynamics in such models. As a generic example of this approach, we consider the static and the dynamical phases in a Rosenzweig-Porter random matrix ensemble with a distribution of off-diagonal matrix elements of the form of the large-deviation ansatz. We present a general theory of survival probability in such a random-matrix model and show that the averaged survival probability may decay with time as a simple exponential, as a stretched exponential, or as a power law or slower. Correspondingly, we identify the exponential, the stretch-exponential and the frozen-dynamics phases. We consider the mapping of the Anderson localization model on the Random Regular Graph, a known proxy of MBL, onto the RP model and find exact values of the stretch-exponent kappa in the thermodynamic limit. Our theory allows us to describe analytically the finite-size multifractality and to compute the critical length with the exponent 1 associated with it. Corresponding publication: I. M. Khaymovich and V. E. Kravtsov, “Dynamical phases in a ‘multifractal’ Rosenzweig-Porter model”, SciPost Physics 11, 045 (2021). • ### 9 September, 3pm UK time: Wen Wei Ho, Gordon and Betty Moore Postdoctoral Fellow, Harvard University Title: Interacting Phases of Matter protected by Multiple Time-Translation Symmetries in Quasiperiodically-driven Systems Abstract: The discrete time-translation symmetry of a periodically-driven (Floquet) system allows for the existence of novel, nonequilibrium interacting phases of matter. A well-known example is the Discrete Time Crystal, a phase distinguished by the spontaneous breaking of this time-translation symmetry.
In this talk, I will explain how quasiperiodically-driven systems, that is, systems driven with two or more incommensurate frequencies, possess a notion of *multiple* time-translation symmetries. This in turn leads to the possibility of realizing a panoply of novel nonequilibrium phases of matter characterized by such symmetries, both spontaneous symmetry-breaking (“discrete time quasi-crystals”) and topological. I will demonstrate that these phases are stable in a long-lived, ‘preheating’ regime, by outlining rigorous mathematical results establishing slow heating at high driving frequencies. These new nonequilibrium phases can readily be realized in quantum simulator platforms of today. • ### 16 September, 3pm UK time: David Luitz, MPIPKS Dresden Title: Hierarchy of Relaxation Timescales in Local Random Liouvillians Abstract: To characterize the generic behavior of open quantum systems, we consider random, purely dissipative Liouvillians with a notion of locality. We find that the positivity of the map implies a sharp separation of the relaxation timescales according to the locality of observables. Specifically, we analyze a spin-1/2 system of size ℓ with up to n-body Lindblad operators, which are n-local in the complexity-theory sense. Without locality (n=ℓ), the complex Liouvillian spectrum densely covers a “lemon”-shaped support, in agreement with recent findings [S. Denisov et al., Phys. Rev. Lett. 123, 140403 (2019)]. However, for local Liouvillians (n<ℓ), we find that the spectrum is composed of several dense clusters with random matrix spacing statistics, each featuring a lemon-shaped support wherein all eigenvectors correspond to n-body decay modes. This implies a hierarchy of relaxation timescales of n-body observables, which we verify to be robust in the thermodynamic limit. Our findings for n-locality generalize immediately to the case of spatial locality, introducing further splitting of timescales due to the additional structure.
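The general spectral facts behind this abstract are easy to check numerically for a small system: a purely dissipative Lindbladian, vectorized as a superoperator, always has eigenvalues with non-positive real parts and a zero mode (the steady state). A minimal numpy sketch (illustrative, not the paper's code; the single dense jump operator corresponds to the fully nonlocal case n = ℓ):

```python
import numpy as np

def dissipative_liouvillian(jump_ops):
    """Column-stacked superoperator for drho/dt = sum_k (J rho J+ - {J+J, rho}/2)."""
    d = jump_ops[0].shape[0]
    eye = np.eye(d)
    L = np.zeros((d * d, d * d), dtype=complex)
    for J in jump_ops:
        JdJ = J.conj().T @ J
        # vec(A rho B) = kron(B.T, A) vec(rho) for column stacking
        L += np.kron(J.conj(), J) - 0.5 * (np.kron(eye, JdJ) + np.kron(JdJ.T, eye))
    return L

rng = np.random.default_rng(7)
d = 8  # three spins-1/2; one dense random jump operator (nonlocal, n = l)
J = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2 * d)
ev = np.linalg.eigvals(dissipative_liouvillian([J]))
# all decay rates are non-negative, and a zero eigenvalue (steady state) exists
```

Plotting `ev` in the complex plane for many realizations reproduces the dense "lemon"-shaped support referred to in the abstract; restricting the jump operators to few-body terms is what splits it into clusters.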
• ### 7 October, 3pm UK time: Tom Iadecola, Iowa State University Title: Nonergodic Quantum Dynamics from Deformations of Classical Cellular Automata Abstract: Classical reversible cellular automata (CAs), which describe the discrete-time dynamics of classical degrees of freedom in a finite state-space, can exhibit exact, nonthermal quantum eigenstates despite being classically chaotic. We show that families of periodically-driven (Floquet) quantum dynamics that include a classical CA in a special limit retain certain nonthermal eigenstates of the CA. These dynamics are nonergodic in the sense that certain product states on a periodic classical orbit fail to thermalize, while generic initial states thermalize as expected in a quantum chaotic system. We demonstrate that some signatures of these effects can be probed in quantum simulators based on Rydberg atoms in the blockade regime. These results establish classical CAs as parent models for a class of quantum chaotic systems with rare nonthermal eigenstates. ### Title: Probing the onset of quantum chaos through eigenstate deformations Abstract: In this talk I will discuss our recent results on detecting integrability breaking using adiabatic deformations of the system. I will show that one can detect the onset of chaos at perturbation strengths far below those at which standard measures show random matrix theory behavior. This intermediate regime, separating ergodic and non-ergodic systems, is characterised by very slow relaxation and enhanced sensitivity to perturbations. Clean and disordered 1D systems will be discussed; the talk is based on arXiv:2004.05043 and arXiv:2009.04501. • ### 21 October, 3pm UK time: Olalla Castro Alvaredo, City University Title: Out-of-Equilibrium Entanglement Dynamics in Quantum Integrable Models Abstract: In this talk I will discuss the main results of the papers arXiv:2001.10007 and arXiv:1907.11735.
In these papers we studied, both analytically and numerically, the time dependence of the Rényi and von Neumann entropies in the integrable Ising spin chain, following different kinds of global quenches. We were interested in the continuum limit of these theories, namely the associated quantum field theories (QFTs), and we used QFT techniques, in particular branch point twist fields, to perform our analytical computations. Using these techniques we have gained access not only to the precise leading linear large-time dependence of the entropies that is observed in many integrable models, but also to oscillatory behaviour that, depending on the quench, can become the leading feature of entanglement, at least for small quenches. Although there is still much to understand in this area of research, one of our conclusions is that the integrability of the quenched model is not the sole feature determining its entanglement dynamics, in particular whether the entropies grow linearly with time or exhibit persistent undamped oscillations. • ### 28 October, 3pm UK time: Alessandro Romito, Lancaster Title: Measurement induced entanglement transition: from stroboscopic to continuous dynamics Abstract: Quantum measurements can induce an entanglement transition between extensive and sub-extensive scaling of the entanglement entropy. This transition is of great interest since it illuminates the intricate physics of thermalization and control in open interacting quantum systems. Whilst this transition is well established for stroboscopic measurements in random quantum circuits, a crucial link to physical settings is its extension to continuous observations, where, for an integrable model, it has been shown that a sub-extensive scaling appears for arbitrarily weak measurements.
In this talk, after reviewing the entanglement transitions for random unitary circuits and projective measurements, I present results for a one-dimensional quantum circuit evolving under random unitary transformations and generic positive operator-valued measurements of “variable strength”. I will show that, for stroboscopic dynamics, there is a consistent phase boundary in the space of the measurement strength and the measurement probability, with a critical value of the measurement strength below which the system is always ergodic. I will further show that the entanglement transition at finite coupling persists for a continuously measured system whose unitary evolution is randomly nonintegrable. These results open the possibility to investigate the measurement induced entanglement transition in quantum architectures accessible via continuous measurements. • ### 4 November, 3pm UK time: Sthitadhi Roy (Oxford) Title: Measurement-induced entanglement phase transitions in all-to-all quantum circuits and quantum trees Abstract: Measurements in the background of an otherwise unitary time-evolution can make a quantum system reside in an entangled or a disentangled phase, separated by a measurement-induced entanglement phase transition. I will discuss some aspects of such phase transitions in all-to-all quantum circuits with measurements. All-to-all models simplify some of the complications arising from spatial structure in low-dimensional systems, allowing for some exact results. Exploiting the underlying locally tree-like structure of the space-time graph, we quantify the quantum information flowing through the circuit via the entanglement between the apex and base of the tree. The tree-like structure of the graph allows for a recursive solution to the problem, which is analytically solvable in some cases, yielding exact results for the location of the critical point and scaling near the critical point.
Away from these cases, we present numerical results which confirm the universality of our results. Reference: arXiv:2009.11311 • ### 19 November, 3pm UK time: Yevgeny Bar Lev (Ben Gurion) Title: Transport in long-range interacting systems Abstract: In generic systems with local interactions transport is diffusive, though it can be suppressed by the addition of disorder. Introducing long-range interactions should, intuitively, enhance transport via long-range hops. Using numerically exact techniques I will show that this is not the case for a number of generic one-dimensional systems. All studied systems, for sufficiently short-range interactions, show universal behaviour of asymptotically emergent locality and a unique composite transport comprised of diffusive and superdiffusive features. Introducing disorder slows down the transport and makes it subdiffusive, similarly to the situation for local systems. • ### 25 November, 3pm UK time: Remy Dubertrand (Northumbria University) Title: Many-body semiclassics for Bose-Hubbard: spectral statistics and random wave approach Abstract: Semiclassical techniques from quantum chaos have been recently generalised to describe many-body interacting bosonic systems written as second-quantised models. To understand the emergence of new phenomena due to many-body coherent effects I will first motivate how to build a quantum/classical correspondence, and how to follow the semiclassical program from there. This will be used first to state when universal spectral statistics appear in Bose-Hubbard models. Then I will explain how to describe the eigenstates from a statistical perspective. This involves the connection with Random Matrix Theory and Berry's ansatz of random superpositions of Fock states, respectively. In particular, it will be discussed how to use this in order to tackle the issue of thermalisation in isolated systems.
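For readers who want to experiment with the spectral statistics of Bose-Hubbard models mentioned above, the Hamiltonian is straightforward to build exactly in the Fock basis for a few sites and bosons. A small self-contained sketch (my notation, not the speaker's):

```python
import itertools
import math
import numpy as np

def bose_hubbard(L, N, J=1.0, U=1.0):
    """Open-chain Bose-Hubbard: H = -J sum_j (a+_{j+1} a_j + h.c.) + (U/2) sum_j n_j(n_j - 1)."""
    # Fock basis: all occupation tuples (n_1, ..., n_L) with sum N
    basis = [b for b in itertools.product(range(N + 1), repeat=L) if sum(b) == N]
    index = {b: i for i, b in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for i, b in enumerate(basis):
        H[i, i] = 0.5 * U * sum(n * (n - 1) for n in b)   # on-site interaction
        for j in range(L - 1):                            # hop a boson from site j to j+1
            if b[j] > 0:
                c = list(b); c[j] -= 1; c[j + 1] += 1
                amp = -J * math.sqrt(b[j] * (b[j + 1] + 1))
                k = index[tuple(c)]
                H[i, k] += amp   # a+_{j+1} a_j
                H[k, i] += amp   # hermitian conjugate
    return H

H = bose_hubbard(L=3, N=3)
energies = np.linalg.eigvalsh(H)  # real spectrum of the 10-dimensional L=3, N=3 sector
```

Pushing L and N higher (the sector dimension is C(N+L-1, L-1)) and computing level-spacing statistics of `energies` is the kind of numerical check against which the semiclassical predictions can be compared.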
• ### 2 December, 3pm UK time: Marin Bukov (Sofia University) Title: Floquet (Pre-)thermalization in Many-Body Systems away from the High-Frequency Limit Abstract: We study the dynamics of periodically-driven many-body systems away from the high-frequency regime, and introduce a class of Floquet systems where the notion of prethermalization can be naturally extended to intermediate and low driving frequencies. We investigate numerically the dynamics of both integrable and non-integrable systems, and provide evidence for the formation of a long-lived prethermal plateau, akin to the high-frequency limit, where the system thermalizes with respect to an effective Hamiltonian captured by the inverse-frequency expansion (IFE). However, unlike the high-frequency regime, we find that heating rates can be either power-law or exponentially suppressed, depending on the properties of the drive Hamiltonian. We analyze the stability of the prethermal plateau to small perturbations in the periodic drive, and show that, for systems with power-law suppressed heating, the plateau duration is insensitive to the perturbation strength, in contrast to models with exponentially suppressed heating. Interestingly, any infinitesimal perturbation is enough to restore the ergodic properties of the system and eliminate residual finite-size effects. Although the regime where the Floquet system leaves the prethermal plateau and starts heating up to infinite temperature is not captured by the IFE, we find that the evolved subsystem is well described by a thermal state with respect to the IFE Hamiltonian, with a gradually changing temperature. • ### 3 February, 3pm UK time: Alexios Michailidis, IST Austria Title: Quantum scars and slow thermalization in Rydberg blockades Abstract: Recent experiments have shown that the relaxation time in Rydberg blockades depends strongly on the initial state. This feature was attributed to a set of atypical eigenstates (quantum scars) of an idealised kinetically constrained model (PXP).
I will address the atypical dynamics of PXP-type models in one and two dimensions using algebraic and variational means [1], and introduce variations of the model which further suppress thermalization [2]. I will propose a time-dependent perturbation to reduce the effects of the previously ignored long-range interactions, and present theoretical calculations and experimental observations of the enhancement of coherence in 1D and 2D lattices [3]. Finally, motivated by the time-periodic perturbations, I will discuss a novel type of Floquet dynamics based on quantum scars. This model features stable subharmonic response akin to time-crystalline behaviour and strong suppression of thermalization for a specific set of initial states. [1] PRX 10, 011055 [2] PRR 2, 022065 [3] arXiv:2012.12276 • ### 17 February, 3pm UK time: Dr Lev Vidmar (Jozef Stefan Institute) Title: Ergodicity breaking transition in finite disordered spin chains Abstract: We study the disorder-induced ergodicity breaking transition in high-energy eigenstates of interacting spin-1/2 chains. We consider several ergodicity indicators: the spectral level spacing statistics, the eigenstate entanglement entropy, and the ratio of the Thouless time to the Heisenberg time. For the latter, we argue that the ergodicity breaking transition in interacting spin chains occurs when both time scales are of the same order and their ratio becomes a system-size-independent constant. Interestingly, we observe that the ergodicity breaking transition in systems studied by exact diagonalization (with around 20 lattice sites) takes place at disorder values lower than those reported in previous works. We discuss the observation that, upon increasing the system size, scaled results for finite systems exhibit a flow towards the quantum chaotic regime.
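The level-spacing indicator used in studies like this one is usually quantified through the consecutive-gap ratio, which distinguishes Poisson statistics (localized, ⟨r⟩ ≈ 0.39) from GOE statistics (ergodic, ⟨r⟩ ≈ 0.53) without spectral unfolding. A generic numpy sketch (illustrative, not the paper's code):

```python
import numpy as np

def mean_gap_ratio(energies):
    """<r>, with r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) and s_n the consecutive spacings."""
    s = np.diff(np.sort(energies))
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

rng = np.random.default_rng(0)
# Poisson spectrum (uncorrelated levels): <r> = 2 ln 2 - 1 ~ 0.386
r_poisson = mean_gap_ratio(np.cumsum(rng.exponential(size=20000)))
# GOE spectrum of a random real symmetric matrix: <r> ~ 0.531
A = rng.normal(size=(1000, 1000))
r_goe = mean_gap_ratio(np.linalg.eigvalsh(A + A.T))
```

Applied to the eigenvalues of a disordered spin chain at increasing disorder strength, ⟨r⟩ drifting from the GOE value towards the Poisson value is the standard signature of the ergodicity breaking transition discussed in the abstract.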
• ### 24 February, 3pm UK time: Dr Hans Kessler (Universitat Hamburg) Title: Dynamical phases in an atom-cavity system Abstract: We are experimentally exploring the light-matter interaction of a Bose-Einstein condensate (BEC) with a single light mode of an ultra-high-finesse optical cavity. The key feature of our cavity is the small field decay rate (κ/2π ≈ 4.5 kHz), which is of the order of the recoil frequency (ω_rec/2π ≈ 3.56 kHz). This leads to a unique situation where the cavity field evolves on the same timescale as the atomic distribution. If the system is pumped with a steady-state light field, red-detuned with respect to the atomic resonance, the Hepp-Lieb-Dicke phase transition of the open Dicke model is realized [1]. Starting in this self-ordered density-wave phase and modulating the amplitude of the pump field, we observe a dissipative discrete time crystal, whose signature is a robust subharmonic oscillation between two symmetry-broken states [2]. On the other hand, modulation of the phase of the pump field can give rise to an incommensurate time crystal, as proposed in [3]. For pump light blue-detuned with respect to the atomic resonance, we propose an experimental realization of limit cycles. Since the model describing the system is time-independent (DC-driven), the emergence of a limit cycle phase heralds the breaking of continuous time-translation symmetry [4]. Under periodic driving, the limit cycles stabilize and the system undergoes a transition from a continuous to a discrete time crystal [5]. [1] Klinder, J., Keßler, H., Wolke, M., Mathey, L., & Hemmerich, A. (2015). Dynamical phase transition in the open Dicke model. PNAS 112 (11), 3290-3295 [2] Keßler, H., Kongkhambut, P., Georges, C., Mathey, L., Cosme, J. G., & Hemmerich, A. (2020). Observation of a dissipative time crystal. arXiv preprint arXiv:2012.08885. [3] Cosme, J. G., Skulte, J., & Mathey, L. (2019). Time crystals in a shaken atom-cavity system. Physical Review A, 100(5), 053615.
[4] Keßler, H., Cosme, J. G., Hemmerling, M., Mathey, L., & Hemmerich, A. (2019). Emergent limit cycles and time crystal dynamics in an atom-cavity system. Physical Review A, 99(5), 053605. [5] Keßler, H., Cosme, J. G., Georges, C., Mathey, L., & Hemmerich, A. (2020). From a continuous to a discrete time crystal in a dissipative atom-cavity system. New Journal of Physics, 22(8), 085002. • ### 3 March, 3pm UK time: Dr Mark Rudner (Niels Bohr Institute) Title: Double feature: ‘Prethermal quantum pumps’ and ‘The universal Lindblad equation for many-body systems’ Abstract: In the quest to control the non-equilibrium dynamics of quantum many-body systems, we are faced with many challenges of both theoretical and experimental nature. Importantly, when isolated many-body systems are subjected to time-periodic driving fields, they tend to absorb energy and heat towards featureless states of maximal entropy density. However, nontrivial behavior may still be realized transiently, or in the steady states formed when coupling to a heat bath provides a balancing channel for energy dissipation. In the first part of this talk I will discuss a novel regime of prethermal dynamics in which the heating that naturally results from driving an isolated many-body system gives rise to quasisteady states displaying universal transport characteristics that reflect the topological features of the system’s underlying Floquet band structure. The quasisteady state features interesting oscillatory entanglement dynamics, and a striking robustness to disorder. In the second part of the talk I will describe a recently formulated Lindblad-form Markovian master equation, whose validity is justified independently of any restrictions on the energy level structure of the system. This “universal Lindblad equation” thus comprises an important new tool for studying open system dynamics in both static and driven systems. 
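Once any Lindblad-form master equation is in hand (universal or otherwise), its relaxation rates can be read off from the spectrum of the vectorized generator. A single-qubit amplitude-damping sketch (illustrative, not from the talk), whose Liouvillian eigenvalues are known exactly to be 0, −γ and −γ/2 ± iω:

```python
import numpy as np

omega, gamma = 1.0, 0.2
H = 0.5 * omega * np.diag([1.0, -1.0])      # H = (omega/2) sigma_z
Jm = np.array([[0.0, 1.0], [0.0, 0.0]])     # sigma_minus: decay |1> -> |0>
I2 = np.eye(2)

# column-stacked Lindbladian: vec(drho/dt) = L vec(rho),
# using vec(A rho B) = kron(B.T, A) vec(rho)
JdJ = Jm.conj().T @ Jm
L = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))
     + gamma * (np.kron(Jm.conj(), Jm)
                - 0.5 * (np.kron(I2, JdJ) + np.kron(JdJ.T, I2))))

ev = np.linalg.eigvals(L)
rates = np.sort(ev.real)  # expected: [-gamma, -gamma/2, -gamma/2, 0]
```

The zero eigenvalue is the steady state (here the ground state), −γ sets the population relaxation (T1) and −γ/2 ± iω the decay and precession of the coherences (T2); the same bookkeeping extends directly to the many-body master equations discussed in the talk.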
• ### 10 March, 3pm UK time: Dr Francesco Piazza (MPIPKS) Title: Controlling Cavity-Mediated Superconductivity with Quantum States of Light Abstract: Recently, it has become possible to couple electrons in two-dimensional materials to the quantum electromagnetic field of optical cavities. This realises a yet unexplored regime of Quantum Electrodynamics, which is non-relativistic, non-vacuum, and strongly coupled. Among many exciting avenues, one promising idea is to use the photons in the cavity to mediate pairing between electrons, inducing superconducting states with novel properties [1,2]. An exciting prospect, which makes photons a more interesting mediator than the phonons of the standard BCS paradigm, is to exploit state-of-the-art engineering of the quantum states of light to control superconductivity. A naturally emerging question, which remains open, is whether one can enhance superconductivity by feeding the cavity with certain quantum states of the photons. We recently developed a non-equilibrium field-theory approach that allows us to tackle this question [3]. In this talk, I will describe our current understanding of the problem and our first steps towards answering this question. [1] F. Schlawin, A. Cavalleri, and D. Jaksch, Phys. Rev. Lett. 122, 133602 (2019) [2] H. Gao, F. Schlawin, M. Buzzi, A. Cavalleri, and D. Jaksch, Phys. Rev. Lett. 125, 053602 (2020) [3] Ahana Chakraborty and Francesco Piazza, arXiv:2008.06513 • ### 17 March, 3pm UK time: Dr John Goold (Trinity College Dublin) Title: Quantum transport and eigenstate thermalisation Abstract: How irreversible thermodynamics emerges from the unitary dynamics of the Schrödinger equation is a question that has been asked since the inception of quantum theory itself. One modern take on the issue is the Eigenstate Thermalisation Hypothesis.
In this talk I will discuss the Eigenstate Thermalisation Hypothesis, with special emphasis on the connection between off-diagonal matrix elements of local observables and quantum transport. I will then give an overview and discussion of results from some recent works from the TCD group on the topic, in particular exploring integrability breaking with a local perturbation. Relevant references: “High temperature coherent transport in the XXZ chain in the presence of an impurity”, M. Brenes, E. Mascarenhas, M. Rigol, J. Goold, PRB 98, 235128 (2018); “Eigenstate Thermalisation Hypothesis in a locally perturbed integrable system”, M. Brenes, T. LeBlond, J. Goold, M. Rigol, PRL 125, 070605 (2020); “Low frequency behaviour of off-diagonal matrix elements in the integrable XXZ chain and in a locally perturbed quantum chaotic chain”, M. Brenes, J. Goold, M. Rigol, PRB 102, 075127 (2020); “Out of time order correlations and fine structure of eigenstate thermalisation”, M. Brenes et al., arXiv:2103.01161 (2021). https://www.youtube.com/watch?v=CvATAzbIxBw&ab_channel=LeedsLoughboroughNottinghamNonEqulibriumSeminar • ### 14 April, 3pm UK time: Dr Graham Kells (Dublin Institute for Advanced Studies) Title: Using operator quantisation to explore topology at high temperatures and in nonequilibrium Abstract: In this talk I will discuss the notion of operator – or third – quantisation. I will start by giving a visual tour of how and why it works and then review a few of the better-known applications. In relation to our own work, I will explain how it naturally leads to the notion of generalised modes and how we have used it to study the concept of strong zero-modes and what is called localisation-enhanced topological order. I will finish by outlining how the method can be applied to an interesting model of quantum-classical transport: the transverse-XY-modified TASEP (Totally Asymmetric Simple Exclusion Process). • ### 21 April, 3pm UK time: Dr Jad C.
Halimeh, INO-CNR BEC Center and Department of Physics, University of Trento Title: Staircase Prethermalization and Constrained Dynamics in Lattice Gauge Theories Abstract: The dynamics of lattice gauge theories is characterized by an abundance of local symmetry constraints. Although errors that break gauge symmetry appear naturally in NISQ-era quantum simulators, their influence on the gauge-theory dynamics is insufficiently investigated. In this talk, we show that a small gauge breaking of strength λ induces a staircase of long-lived prethermal plateaus. The number of prethermal plateaus increases with the number of matter fields L, with the last plateau being reached at a timescale λ^(−L/2), showing an intimate relation of the concomitant slowing down of dynamics with the number of local gauge constraints. By means of a Magnus expansion, we demonstrate how exact resonances between different gauge-invariant supersectors are the main reason behind the emergence of staircase prethermalization. Our results bode well for NISQ quantum devices, as they indicate that the proliferation timescale of gauge-invariance violation is, counterintuitively, delayed exponentially in system size. From a phenomenological perspective, our work shows how prethermal behavior is significantly enriched in models with slight breaking of local gauge invariance relative to their counterparts where a global symmetry is broken. • ### 28 April, 3pm UK time: Dr Andrea Pizzi (Cambridge) Title: (Classical) Prethermal phases of matter in dimensions 1, 2, and 3 Abstract: Systems subject to a high-frequency drive can spend an exponentially long time in a prethermal regime, in which novel phases of matter with no equilibrium counterpart can be realized. Recent numerical investigations in this direction have been severely limited by the notorious computational challenges of many-body quantum mechanics. We show that prethermal non-equilibrium phases of matter also exist in classical Hamiltonian dynamics.
First, we show that the phenomenology of known 1D quantum prethermal phases of matter is virtually the same when going classical, which suggests that these phenomena should in essence be thought of as robust to quantum fluctuations, rather than dependent on them. Second, we study the interplay between dimensionality and interaction range. For instance, we provide the first numerical proof of prethermal phases of matter in a system with short-range interactions, which are only possible in dimensions 2 and 3. Concretely, we find higher-order as well as fractional discrete time crystals breaking the time-translational symmetry of the drive with unexpectedly large integer as well as fractional periods. Our work paves the way towards the exploration of novel prethermal phenomena by means of classical Hamiltonian dynamics, with virtually no limitations on size or dimensionality, and with direct implications for experiments. • ### 12 May, 3pm UK time: Dr Masud Haque (Maynooth) Title: Eigenstate Thermalization, random matrices and (non)local operators in many-body systems Abstract: The eigenstate thermalization hypothesis (ETH) is a cornerstone in our understanding of quantum statistical mechanics. The extent to which ETH holds for nonlocal operators (observables) is an open question. I will address this question using an analogy with random matrix theory. The starting point will be the construction of extremely non-local operators, which we call Behemoth operators. The Behemoths turn out to be building blocks for all physical operators. This construction allows us to derive scalings for both local operators and different kinds of nonlocal operators. • ### 19 May, 3pm UK time: Benjamin Doyon (KCL) Title: Operator ergodicity and hydrodynamic projection in many-body quantum systems Abstract: Obtaining rigorous results about the quantum dynamics of extended many-body systems is a difficult task.
In quantum lattice models, the Lieb-Robinson bound tells us that the spatial extent of operators grows at most linearly in time. But what happens within this light-cone? I will discuss new rigorous results in this direction: a universal form of “operator ergodicity” showing that operators get “thinner” almost everywhere within the light-cone, which leads to a universal hydrodynamic projection formula for the large-time behaviour of correlation functions. The results are general, applicable to any locally interacting system, at arbitrary frequency and wavelength. Work in collaboration with Dimitrios Ampelogiannis. • ### 02/06, 3pm UK time: Jens Bardarson (KTH) Title: Time-evolution of local information—thermalization dynamics of local observables Abstract: I discuss a way of organizing the flow of local information on an information lattice, consisting of the physical lattice supplemented by an extra dimension characterizing the scale of the information. Using this information lattice we observe different types of dynamics depending on the presence or absence of a finite thermalization time. This can then be used to construct algorithms for the time evolution of local information sufficient to calculate the expectation values of local observables. While this works in principle in any dimension, we focus on simple models in one dimension as a proof of principle. https://www.youtube.com/watch?v=y7i-Ycv3ULw # ****PLEASE NOTE THIS SEMINAR HAS BEEN POSTPONED FOR A LATER TIME**** Signatures of Chaos in Non-integrable Models of Quantum Field Theory • ### 16/06, 3pm UK time: Dr Alessio Lerose (Geneva) Title: Influence matrix approach to quantum many-body dynamics Abstract: A basic and ubiquitous phenomenon in nonequilibrium dynamics of isolated quantum many-body systems is local thermalization. This is commonly described as the ability of a system to act as an effective thermal bath for its local subsystems, and usually probed via global spectral characteristics. Understanding the microscopic mechanism of quantum thermalization, and above all of its failures, is currently the subject of intensive theoretical and experimental investigations. In this talk, I will introduce an approach to study quantum many-body dynamics, inspired by the Feynman-Vernon influence functional theory of quantum baths. Its central object is the influence matrix (IM), which describes the effect of a Floquet many-body system on the evolution of its local subsystems. For translationally invariant one-dimensional systems, the IM obeys a self-consistency equation.
For certain fine-tuned models, remarkably simple exact solutions appear, which physically represent perfect dephasers (PD), i.e., many-body systems acting as perfectly Markovian baths on their parts. Such PDs include certain solvable quantum circuits discovered and investigated in recent works. In the vicinity of PD points, the system is not perfectly Markovian, but rather acts as a quantum bath with a short memory time. In this case, we demonstrate that the self-consistency equation can be solved using matrix-product state (MPS) methods, as the IM temporal entanglement is low. The underlying “principle of efficiency” of quantum dynamics simulations is complementary to that of standard methods, as it only relies on short-range temporal correlations. Using a combination of analytical insights and MPS computations, we characterize the structure of the IM in terms of an effective “statistical-mechanics” description for local quantum trajectories, and illustrate its predictive power by analytically computing the relaxation rate of an impurity embedded in the system. In the last part of the talk, I will describe how to extend these ideas to study the many-body localized (MBL) phase of strongly disordered, periodically kicked interacting spin chains. This approach allows us to study exact disorder-averaged time evolution in the thermodynamic limit. MBL systems fail to act as efficient baths, and this property is encoded in their IM. I will discuss the structure of an MBL IM and link it to the onset of temporal long-range order. References: [1] Influence matrix approach to many-body Floquet dynamics arXiv:2009.10105 (2020) Phys. Rev.
X 11, 021040 [2] Characterizing many-body localization via exact disorder-averaged quantum noise arXiv:2012.00777 (2020) [3] Influence functional of many-body systems: temporal entanglement and matrix-product state representation arXiv:2103.13741 (2021) (to appear in Annals of Physics) [4] Scaling of temporal entanglement in proximity to integrability arXiv:2104.07607 (2021) • ### 23/06, 3pm UK time: Fabien Alet (Toulouse) Probing for many-body localization in two dimensional disordered constrained systems Many-body localization is a unique physical phenomenon driven by interactions and disorder for which a quantum system can evade thermalization. While the existence of a many-body localized phase is now well established in one-dimensional systems, its fate in higher dimensions is an open question. In this talk, I will present a numerical study of the possibility of a many-body localization transition in disordered quantum dimer models on the square and honeycomb lattices. I will present a critical review of our numerical results using state-of-the-art exact diagonalization and time evolution methods, probing both eigenstates and dynamical properties. We conclude in favor of the existence of a localization transition, on the available time and length scales (up to N=108 sites on the honeycomb lattice). Work done in collaboration with H. Théveniaut, G. Meyer, Z. Lan and F. Pietracaprina. • ### 22 September, 3pm UK time: Ivan Khaymovich (MPI-PKS Dresden) Title: Random-matrix approach to slow dynamics in quantum systems Abstract: In this talk, we will discuss a random-matrix approach to the description of disordered many-body systems and their Hilbert-space structure, focusing on ergodicity breaking effects and slow dynamics in such models. As a generic example of this approach, we consider the static and the dynamical phases in a Rosenzweig-Porter random matrix ensemble with a distribution of off-diagonal matrix elements of the form of the large-deviation ansatz.  
We present a general theory of survival probability in such a random-matrix model and show that the averaged survival probability may decay with time as a simple exponential, as a stretched exponential, or as a power law or slower. Correspondingly, we identify the exponential, the stretch-exponential and the frozen-dynamics phases.  We consider the mapping of the Anderson localization model on the Random Regular Graph, the known proxy of MBL, onto the RP model and find exact values of the stretch-exponent kappa in the thermodynamic limit.  Our theory allows us to describe analytically the finite-size multifractality and to compute the critical length with the exponent 1 associated with it. Corresponding publication: I. M. Khaymovich and V. E. Kravtsov “Dynamical phases in a “multifractal” Rosenzweig-Porter model” SciPost Physics 11, 045 (2021).
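As a concrete reference for the ensemble discussed in the abstract, here is a minimal sketch of a Rosenzweig-Porter matrix in its standard Gaussian parameterization; the talk's large-deviation ansatz generalizes the off-diagonal distribution and is not reproduced here.

```python
import math
import random

# Standard Rosenzweig-Porter (RP) ensemble (a sketch, not the speaker's
# large-deviation version): diagonal entries are i.i.d. N(0, 1), while
# off-diagonal entries are N(0, N**-gamma), so their relative strength is
# tuned by gamma.
def rp_matrix(N, gamma, rng=None):
    rng = rng or random.Random(0)
    scale = N ** (-gamma / 2)  # off-diagonal standard deviation
    H = [[0.0] * N for _ in range(N)]
    for i in range(N):
        H[i][i] = rng.gauss(0.0, 1.0)
        for j in range(i + 1, N):
            H[i][j] = H[j][i] = rng.gauss(0.0, scale)
    return H

# Conventional RP phase diagram: gamma < 1 ergodic, 1 < gamma < 2 fractal,
# gamma > 2 localized.
H = rp_matrix(200, gamma=1.5)
```

Probing the spectral statistics of `H` (level spacings, survival probability) would require a diagonalization routine such as `numpy.linalg.eigh`, which is beyond this sketch.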
https://chat.stackexchange.com/transcript/message/53224641
1:01 AM 0 Proposal: Unmerge complex-analysis and holomorphic-functions tags The holomorphic-function tag synonym was proposed in May 2016. It was approved in March 2019. (Information thanks to Martin Sleziak) Currently there are 30,018 questions about complex-analysis and 9,192 questions about holomor... I will also add links to some related conversations in Math Mods' Office and the exchange in the tagging chatroom. Among other things, people can find there this SEDE query which shows the dates 2016-05-28 (CreationDate) and 2019-03-23 (ApprovalDate) for this synonym. — Martin Sleziak 2 mins ago 2 hours later… 2:52 AM @EricWofsey According to the list of tag synonyms, you're the user who proposed the tag synonym $\to$ . Since there is a suggestion on meta to remove the synonym, I thought it might be polite to notify you about this - in case you have some comments about this synonym. I have sent you the chat invitation - so I'd hope that you get some kind of notification. @Isabellatrix In your post you say that there are " 9,192 questions about holomorphic-functions". Actually you linked to the search for all posts (questions and answers). If you restrict the same search to questions, you get 4689 results. I don't see any reason for holomorphic-functions to exist as a tag separate from complex-analysis. So, if they aren't going to be synonyms, then holomorphic-functions should just be destroyed. Thanks for the response! If you wish, you could perhaps mention something also in the comments on meta. (I suppose more users might notice your response there than here in the chat.) For what it's worth there also exist tags analytic-functions and analyticity which seem of dubious value and should probably be synonyms of each other. And most of their usage seems to be synonymous with holomorphic-functions. 
But they could also include real-analytic functions. Probably there should be a new separate tag real-analytic-functions (or some similar name) and analyticity and analytic-functions should be destroyed to avoid ambiguity. 3:09 AM Generally I am very skeptical of tags for specific types of functions/morphisms unless those functions really are a notable topic of study separate from their general field. I can imagine tags for some types of functions being useful. So for instance, real-analytic-functions would be a good tag because it is really a field of its own. When I search for something, tags such as or restrict the search much more than simply searching under . For instance, if I search for closed-map+product-space or for closed-map+compactness I might quickly get to some frequent questions on that topic. There is no point in a tag holomorphic-functions because the study of holomorphic functions is called complex analysis. Sure, those are good examples too. They are very specialized concepts that are not the main focus of the field. Thanks for stopping by - it's quite late in my timezone, so I should probably go. Have a nice day! 8 hours later… 11:37 AM in Math Mods' Office, yesterday, by quid @user64742 I agree, and that is actually also the standard SE guideline. Ultimately synonyms should be either merged or canceled. But it can make sense to have an evaluation-period. Let me just say that I disagree with: "Ultimately synonyms should be either merged or canceled." A perfect example to explain what I mean is the synonym between and . Both names are commonly used, so we want that if somebody starts typing in the tag field "sorgen..." or "lower..." then the autocomplete offers them this tag. (And a similar thing is true for other notions which have different names.) 
@quid There was quite a lot of discussion about merging and synonyms in the math mods office: chat.stackexchange.com/rooms/20352/conversation/… (Although it was tied to one specific synonym, some general issues were discussed too.) I have responded here - this room seems more appropriate for such topics. (I.e., discussing tag synonyms and merging in general - not about something which specifically requires some action by a moderator.) Another reason why keeping the tag might sometimes be useful is that it prevents the tag from being created again and again. (In situations where it has already been agreed that the tag should be a synonym.) 2 hours later… 5 hours later… 6:25 PM 0 Proposal: add a tag for exponential families Rationale: I've asked at least a couple of questions now that I originally tagged with exponential-family, but each of these was edited to remove the tag. However, I think such a tag deserves to exist, because the concept of an exponential family is ... 6:48 PM "Natural parameter" links here. For the usage of this term in differential geometry, see differential geometry of curves. In probability and statistics, an exponential family is a parametric set of probability distributions of a certain form, specified below. This special form is chosen for mathematical convenience, based on some useful algebraic properties, as well as for generality, as exponential families are in a sense very natural sets of distributions to consider. The term exponential class is sometimes used in place of "exponential family", or the older term Koopman-Darmois family. The... 
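The "certain form" mentioned in the quoted excerpt (the formula itself was lost in extraction) is, in its conventional statement, the following, where $h$ is the base measure, $\eta$ the natural parameter, $T$ the sufficient statistic, and $A$ the log-partition function:

```latex
f(x \mid \theta) \;=\; h(x)\,\exp\!\bigl(\eta(\theta)\cdot T(x) - A(\theta)\bigr).
```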
Perhaps we can also count this occurrence - although the tag was created there with a typo (exponential-familly): math.stackexchange.com/posts/3249848/revisions When searching for "exponential family" the majority of results are actually asking about exponential families, but they are tagged with a random grab-bag of [tag:statistics], [tag:probability], [tag:probability-distributions], [tag:statistical-inference], etc. (I think this points in favour of having a specific tag for exponential families.) https://math.stackexchange.com/search?q=exponential+family 7:03 PM @MartinSleziak Oops. Sorry I just searched for "complex-analysis" and then for "holomorphic-functions" and got such results. I am going to edit the meta post immediately @Nathaniel I'll mention that if you want to include a search for questions, you can add is:q like this: exponential family is:q I have edited the search link in your post, Isabellatrix. (So that the search and the number actually correspond to each other.) I hope that's ok. @MartinSleziak thank you. 1 hour later… 8:28 PM A new tag was created by Jess, the same user also created the tag-excerpt and the tag-wiki. In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished. There are point and interval estimators. The point estimators yield single-valued results, although this includes the possibility of single vector-valued results and results that can be expressed as a single function. This is in contrast to an interval estimator, where the result would be a range of plausible values (or vectors or functions). Estimation theory is concerned... 0 A simple question. When people talk about "the least square estimator", what is this estimator? Is it an unbiased estimator of the slope of the regression line? 
In a paper I'm reading, Let's Take the Con Out of Econometrics, the author writes Randomization implies that the least squares... 1 When the point estimator under consideration has a pdf, then $P[T=\tau(\theta)]=0$, where $\tau(.)$ is some function of the parameter $\theta$ and $T$ is an estimator of $\tau(\theta)$. But I did many exercises to find point estimators of the parameters of density functions. For example the... 1 I am having a hard time understanding what an estimator actually is (I miss the intuition). The definition (for an unbiased estimator) is as follows: $T$ is unbiased for the parameter $\theta$ if $E[T] = \theta$, irrespective of the value of $\theta$. In this case, what is $\theta$? Is it the m... 0 Suppose that $Y_1,...,Y_n$ is an IID sample from a uniform $U(\theta, 1)$ distribution. The method of moments estimator for $\theta$ is $\tilde \theta=2\bar Y-1$. The standard error of $\tilde \theta$ is $$\sigma_{\tilde \theta}=\frac{1-\theta}{\sqrt{3n}}$$ Find an unbiased estimator of \$\si...
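The last quoted exercise can be sanity-checked by simulation. The sketch below (my own illustration, not from the thread) confirms that the method-of-moments estimator $\tilde\theta = 2\bar Y - 1$ for $U(\theta, 1)$ is unbiased with standard error $(1-\theta)/\sqrt{3n}$.

```python
import math
import random

# Monte-Carlo check: draw many samples Y_1..Y_n ~ U(theta, 1), form the
# estimator 2*Ybar - 1 each time, and compare its empirical mean and
# standard deviation to theta and (1 - theta)/sqrt(3n).
def simulate(theta, n, trials, seed=1):
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        ybar = sum(rng.uniform(theta, 1.0) for _ in range(n)) / n
        estimates.append(2.0 * ybar - 1.0)
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / (trials - 1)
    return mean, math.sqrt(var)

mean, se = simulate(theta=0.3, n=100, trials=20000)
# Formula predicts SE = (1 - 0.3) / sqrt(3 * 100) ≈ 0.0404.
```

The formula follows from $\operatorname{Var}(Y) = (1-\theta)^2/12$, so $\operatorname{Var}(\tilde\theta) = 4\operatorname{Var}(\bar Y) = (1-\theta)^2/(3n)$.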
https://undergroundmathematics.org/thinking-about-functions/curve-match/teacher-notes
# Teacher Notes

### Why use this resource?

An exploration of the graphs of powers of $x$ and related functions. This problem should be attempted initially without graphing software, which can be used later as an effective tool for checking answers and for attempting the final questions, allowing students to get a feel for how to make subtle changes to a graph by editing its equation. This is an early opportunity for students to notice pervasive ideas in maths: in particular, here the idea of averages in the context of curves or equations may be intuitively obvious to some and alien to others. Further reflections on the behaviour of functions and how this affects algebraic manipulation may be prompted by the resource Inequality flip-flop.

### Preparation

• You may want to print out copies of the graph for the students.
• You will probably want the graph displayed on the board.
• You may want access to graphing software or calculators.

### Possible approach

Display the graph and ask students to identify which curves they know. (This could be done without the functions at first.) In pairs or small groups, with access to the functions, check their answers and complete the labelling. Now work on the remaining two tasks.

### Key questions

• Can you see any connection between $y=x^2$ and $y=\sqrt{x}$?
• Do the co-ordinates of points on the graph help you?
• Why should $y=\tfrac{(x^2 + x)}{2}$ lie between $y=x^2$ and $y=x$?

### Possible support

Students really struggling to visualise or sketch curves could have access to graphing software.

### Possible extension

There is plenty of scope for extension in discovering curves that fit the criteria for the two investigatory questions.
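The third key question can be checked numerically before reaching for graphing software; this short script (my own illustration, not part of the resource) confirms the "average" property on both sides of $x=1$:

```python
# The "average" curve y = (x^2 + x)/2 always lies between y = x^2 and
# y = x, since the mean of two numbers lies between them -- even though
# which of x^2 and x is larger flips at x = 1.
def average_curve_between(x):
    avg = (x ** 2 + x) / 2.0
    lo, hi = sorted((x ** 2, x))
    return lo <= avg <= hi

checks = [average_curve_between(k / 100.0) for k in range(1, 301)]
```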
https://tug.org/pipermail/macostex-archives/2002-February/003078.html
# [Mac OS X TeX] cocoAspell Mon Feb 11 01:47:54 CET 2002 >At 11:15 -0500 on 10/02/02, Gary L. Gray wrote: >>It is doing the same thing to me. For example, it doesn't know: >> >>\emph or \textbf >> >>amongst many others. >Works for me. >I followed the instructions. The first time, after the required >logout-login, TexShop complained that it could not locate the spell >checker. After a restart, however, everything worked fine. I checked >the examples mentioned, but they are ignored as they should. Strange. > >Martin Stokhof > Hmm, that's strange. So at the moment we have some people who get told that \emph and \textbf are misspelt and some who get them ignored. Is that right? I find that the following are misspelt \emph \textbf \frod but the following are OK \fred \space \section I agree with Bill McCallum - it seems to be just deleting the \ and looking for the word in the dictionary. Does anyone know if that's right? Michael -- _________________________________________________________ Assoc/Prof Michael Murray Department of Pure Mathematics Fax: 61+ 8 8303 3696 University of Adelaide Phone: 61+ 8 8303 4174 Australia 5005 Email: mmurray at maths.adelaide.edu.au PGP public key: _________________________________________________________ ----------------------------------------------------------------- To UNSUBSCRIBE, send email to <info at email.esm.psu.edu> with "unsubscribe macosx-tex" (no quotes) in the body. For additional HELP, send email to <info at email.esm.psu.edu> with "help" (no quotes) in the body. This list is not moderated, and I am not responsible for messages posted by third parties. -----------------------------------------------------------------
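Michael's hypothesis — that the checker simply deletes the backslash and looks the remainder up in the dictionary — can be sketched in a few lines. The word list below is a hypothetical stand-in for aspell's real dictionary, chosen only to reproduce the reported observations:

```python
# A sketch of the behaviour described above: strip the backslash from a
# control sequence and look the remainder up in a word list.
DICTIONARY = {"fred", "space", "section"}  # assumed dictionary entries

def flagged(control_sequence):
    word = control_sequence.lstrip("\\")
    return word not in DICTIONARY

# Reported behaviour: \emph, \textbf, \frod are flagged as misspelt,
# while \fred, \space, \section pass -- consistent with "fred", "space"
# and "section" being dictionary words but "emph", "textbf", "frod" not.
```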
https://crypto.stackexchange.com/questions/70809/is-this-papers-technique-for-factoring-rsa-2048-with-noisy-qubits-realistic/71148
# Is this paper's technique for factoring RSA 2048 with noisy qubits realistic? A paper titled How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits has just come out which proposes a technique to factor RSA keys with moduli up to 2048 bits with a design whose assumptions they stress are realistic. What are the implications of this new research? The abstract of the paper, which lists some of the assumptions: We significantly reduce the cost of factoring integers and computing discrete logarithms over finite fields on a quantum computer by combining techniques from Griffiths-Niu 1996, Zalka 2006, Fowler 2012, Ekerå-Håstad 2017, Ekerå 2017, Ekerå 2018, Gidney-Fowler 2019, Gidney 2019. We estimate the approximate cost of our construction using plausible physical assumptions for large-scale superconducting qubit platforms: a planar grid of qubits with nearest-neighbor connectivity, a characteristic physical gate error rate of $$10^{-3}$$, a surface code cycle time of 1 microsecond, and a reaction time of 10 microseconds. We account for factors that are normally ignored such as noise, the need to make repeated attempts, and the spacetime layout of the computation. When factoring 2048 bit RSA integers, our construction's spacetime volume is a hundredfold less than comparable estimates from earlier works (Fowler et al. 2012, Gheorghiu et al. 2019). In the abstract circuit model (which ignores overheads from distillation, routing, and error correction) our construction uses $$3n+0.002n\lg n$$ logical qubits, $$0.3n^3+0.0005n^3\lg n$$ Toffolis, and $$500n^2+n^2\lg n$$ measurement depth to factor $$n$$-bit RSA integers. We quantify the cryptographic implications of our work, both for RSA and for schemes based on the DLP in finite fields. How feasible would it be to design a quantum computer with these properties? 
20 million qubits is obviously significantly more than any general purpose quantum computer has right now, but the paper also points out that the qubits only need nearest neighbor connectivity, which is much simpler. • Related discussion: news.ycombinator.com/item?id=19998004 – forest May 25 '19 at 3:33 • For understanding the difficulties of building a quantum computer, best watch the invited talk of Crypto 2017 by John Martinis (lead of the Google/UCSB team) Prospects for a Quantum Factoring Machine. – j.p. May 25 '19 at 5:49 • @j.p. I know the difficulty of creating a system with ~5000 logical, fully superposed qubits (necessary for Shor's algorithm) and am aware that it is far beyond what anyone can do right now, but this recent paper proposes a way to do it using qubits which are far easier to utilize. 20 million (physical) qubits with nearest neighbor connectivity is much easier to realize than even ~5000 logical qubits in full superposition. – forest May 25 '19 at 6:07 • Did you watch the video? John Martinis speaks also about physical qubits (and error rates etc) and how you can use them. – j.p. May 25 '19 at 6:48 ## 1 Answer I'm one of the authors of the paper. In order to make the paper more approachable, we factored each major optimization out into its own paper. There are three of these sub-papers, and they each stand on their own mostly independent of the others. 1. "Approximate encoded permutations and piecewise quantum adders". We put small amounts of padding at various places in our registers so that we can perform addition operations in piecewise fashion and also avoid normalizing modular integers into the [0, N) range until the end of the computation. There are information leakage issues that we had to solve in order to apply these operations in a quantum context, and these cause the representation with padding to be approximate, but otherwise it's a known standard classical technique (e.g. it's called "nails" in GMP). 
Modular addition using these approximate representations is significantly more efficient than modular addition using the normal representation. For example, here is a comparison of the time*space of each addition when targeting a 0.1% total approximation error rate over the entire algorithm. The "runway" entries are the ones using the approximate representations. The runway entries are significantly better across the entire span of register sizes: These representations build on previous work by Zalka from 2006. 2. "Windowed quantum arithmetic". Classically, if you know what constant you are going to multiply by when producing a physical multiplication circuit, you can specialize the circuit to that factor. This allows you to make the circuit smaller and more efficient. There is a quantum equivalent to this optimization, where the quantum program can be optimized using knowledge of the classical constant you are going to multiply by. We use this to make n-bit quantum-classical multiplication programs log(n) times shorter. We then generalize the technique, apply it simultaneously to the exponentiation part of the circuit, and save another factor of log(n). This paper builds on previous work by R. Van Meter from 2005 (although we didn't actually know this when writing the paper, so it is not cited in the current version). 3. "Flexible layout of surface code computations using AutoCCZ states". This one is by far the most quantum-mechanics-y, so I won't try to explain the details. You can think of it as finding a better way to pack the computation, reducing the amount of space overhead used when routing data around and also allowing it to progress at a rate limited by the classical control system's reaction time instead of by the error correction code distance. The main contribution here may actually be saying what the layout is in the first place, as opposed to just talking about it in the abstract. 
We have an operating area where the ripple-carry addition zig-zags back and forth horizontally, while input data streams through vertically, and the operating area is gradually moved as the data is processed. Location of data (rows from top to bottom) over time (left to right): A mocked up snapshot of the 2d data layout during a small amount of time: The ideas in this paper build on previous work by Fowler from 2012. In summary, there are a lot of ideas in the paper but they are not radically new (they build on previous work) and they are not strongly coupled (if one is wrong, the others will still stand). So I see the savings as being on pretty strong footing. I'm more worried about whether or not people will be able to build quantum computers with 20M qubits than I am about the estimate being off by a factor of 2. • This is interesting, and I'm certainly going to need to read the papers you've linked, but I'm mostly curious about the feasibility of building a quantum computer with the properties described in the abstract. My non-expert understanding is that it is currently quite far away as the biggest quantum computer with nearest-neighbor connectivity has on the order of 50 qubits, but I could be wrong. – forest Jun 8 '19 at 1:56 • @forest Yes, the resource requirements are way beyond anything anyone can do right now. – Craig Gidney Jun 8 '19 at 17:52 • If I've understood correctly, does the value of this work this : A quantum computing future is unlikely, due to random hardware errors – kelalaka Dec 9 '19 at 21:03 • @kelalaka Having a hard time parsing your question, but the paper is explicitly about doing the computation in a context where noise is accounted for and corrected. The error correcting code we used is the surface code. The surface code is explained in detail in arxiv.org/abs/1208.0928 . My opinion of the article you linked is that it is simply incorrect. 
We've known that quantum error correction is possible in principle since the 90s, though the hardware is still not quite good enough to check that it really truly works in experiment. Finding out it didn't would be a huge discovery. – Craig Gidney Dec 9 '19 at 21:29 • Sorry about not being clear. I thought there was something more than the usual quantum error correction. This stuff was all around the news sites. Now, thinking clearly, either they indicate something that is not related to QEC or, as you said, they are incorrect. – kelalaka Dec 9 '19 at 21:36
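For concreteness, the abstract-circuit-model formulas quoted in the question can be evaluated numerically; this sketch (my addition, not from the paper's authors) plugs in n = 2048:

```python
import math

# Logical-resource formulas from the paper's abstract:
#   qubits   = 3n + 0.002 n lg n
#   Toffolis = 0.3 n^3 + 0.0005 n^3 lg n
#   depth    = 500 n^2 + n^2 lg n   (measurement depth)
def logical_costs(n):
    lg = math.log2(n)
    qubits = 3 * n + 0.002 * n * lg
    toffolis = 0.3 * n ** 3 + 0.0005 * n ** 3 * lg
    depth = 500 * n ** 2 + n ** 2 * lg
    return qubits, toffolis, depth

qubits, toffolis, depth = logical_costs(2048)
```

This gives roughly 6,200 logical qubits and about 2.6 billion Toffoli gates, which is the scale that the surface-code overhead inflates to the 20 million physical qubits discussed above.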
https://mathoverflow.net/questions/315987/hadamard-theorem-about-embedding
The following theorem is commonly attributed to Jacques Hadamard. Assume $$\Sigma$$ is a smooth locally convex immersed surface in the Euclidean space. Then $$\Sigma$$ is embedded and bounds a convex set. Many authors refer to Hadamard's Sur certaines propriétés des trajectoires en Dynamique (1897) (for example, James Stoker in his Über die Gestalt der positiv... (1936)). Likely the statement is there, but the paper is long, it is in French, and often the statements are not clearly marked; I searched for it for several days. I asked a friend and she said that it was there 20 years ago, but she could not find it; she also said that it was not easy to extract it from what is written ( = one has to think). [For sure the word immersion is not there.] I hope someone here knows this paper and can help me. P.S. Now I see it this way: Stoker was the first to formulate and prove the theorem; at the beginning of his paper he attributed the theorem to Hadamard because it almost follows from item 23 in his paper. After Stoker everyone did the same. • To remain in the spirit of this site, a present-day referee would probably tell Hadamard : "unclear what you're claiming" ! – Sylvain JULIEN Nov 22 '18 at 21:28 I think the relevant location is item 23, page 352, but what Hadamard aims at is stated as follows: A smooth, co-orientable surface of $$\mathbb{R}^3$$ with Gauss curvature bounded below by some $$\kappa >0$$ is simply connected. (implicitly, the surface is compact without boundary) ("Or une surface à deux côtés et sans points singuliers, à courbure partout positive (la valeur zéro et les valeurs infiniment petites étant exclues) est toujours simplement connexe.") The goal is to use the Gauss-Bonnet Formula to deduce that when curvature is positive, any two closed geodesics must meet (otherwise they would together bound a total curvature 0 region of the surface). What is not clear from the text of item 23 is whether the surface is assumed to be immersed or embedded. 
He basically says that the normal map is a global diffeomorphism, because positive curvature makes it a covering of the sphere. It seems the argument does provide the statement attributed to this paper, although it is not explicitly stated. Second edit: Mohammad Ghomi gives an argument to that effect in a comment. • The arguments in 23 do not seem to show that an immersed sphere is embedded, even informally; am I wrong? [I see also pictures on page 379 which are relevant to a proof I know, but the words around these pictures seem to be irrelevant.] – Anton Petrunin Nov 23 '18 at 4:55 • @AntonPetrunin you are probably right but I do not have much time to check in detail. I would not be surprised, given the informality of the discussion, if the attribution of this statement were somewhat of a stretch. In any case, I do not think it was the point Hadamard wanted to make (and he might assume the embedding in the first place). – Benoît Kloeckner Nov 23 '18 at 13:58 • Injectivity of the gauss map implies embeddedness of the surface via convexity. Namely if the gauss map is injective, then it is easy to see that the surface must lie on one side of its tangent planes. If not, then the height function with respect to some tangent plane must have at least 3 critical points, and so at two of these points the normals will be parallel (this is now a well-known argument, and probably was not hard for Hadamard to figure out either). Once the surface is convex, then it must be embedded. – Mohammad Ghomi Nov 24 '18 at 15:37 • @MohammadGhomi that is right, but I do not see this argument in the paper. By the way it seems that you apply basic Morse theory which was developed much later. – Anton Petrunin Nov 25 '18 at 20:04 • @AntonPetrunin: my conclusion is that the setting of the paper is not precise enough to contain the statement attributed to it. There seems to be no consideration of embedded versus immersed surfaces. – Benoît Kloeckner Nov 26 '18 at 19:40
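The Gauss-Bonnet step alluded to in the question can be written out explicitly (a reconstruction for the reader, not a quotation from Hadamard's paper): if two disjoint simple closed geodesics $\gamma_1, \gamma_2$ existed on the surface, they would bound an annulus $A$ with $\chi(A) = 0$, and since geodesics have zero geodesic curvature $k_g$,

```latex
\int_A K\,dA \;=\; 2\pi\chi(A) - \int_{\partial A} k_g\,ds \;=\; 2\pi\cdot 0 - 0 \;=\; 0,
```

contradicting $K \ge \kappa > 0$ on $A$. Hence any two closed geodesics meet, which is what forces simple connectivity.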
https://tex.stackexchange.com/tags/beamer/hot
# Tag Info

## Hot answers tagged beamer

**3** Some suggestions/observations:

- Use inline-fraction notation in the footnote instead of `\dfrac` or `\frac`
- Use `\mathsf{e}` rather than `\mathrm{e}` to denote `\exp(1)`
- Write the material in the first level-2 item as three separate formulas

As a (welcome) side-effect of these changes, the spurious page break will disappear on its own. ... `\documentclass[...`

**2** The size of the equation is too big; it flows out of the footnote area and causes a new page to be generated. The easiest way to fix this is to use `\frac` instead of `\dfrac`. MWE:

```
\documentclass[]{beamer}
\begin{document}
\begin{frame}{Header}
\begin{itemize}
\item Solve $x^2y''-2xy'+2y=2x^3\textrm{e}^x$
\begin{itemize}
\item $y_1=x^2, y_2=x, r(x)=2x\...
```

**2** Add `\tcbsetforeverylayer{autoparskip}`. In v4.40, tcolorbox changed the default vertical spaces added before and after colored boxes. The above code restores the behaviour of versions up to v4.32. For more info, see tcolorbox#115 and tcolorbox#121.

**2** ... it's an old question, but I thought it might be worth mentioning that this can also be solved by the suggestion mentioned here: https://tex.stackexchange.com/a/330980/172810 ... e.g. simply add this to your preamble:

```
\pdfstringdefDisableCommands{%
  \def\translate#1{#1}%
}
```

**1** Without any modification to the beamer code, you can use the `\raisebox` macro:

```
\documentclass{beamer}
\begin{document}
\begin{frame}
\frametitle{Title without image}
\end{frame}
\begin{frame}
\frametitle{Title \raisebox{\dimexpr \baselineskip - \totalheight}{\includegraphics[width=2cm]{example-image-a}}}
\end{frame}
\begin{frame}
\frametitle{Title \...
```

**1** @leandris has suggested I use the nicematrix package to this end. Based on a preliminary look at the package it seems perfect (looks like I can use hvlines-except-corners with some phantom elements to get the desired effect quite easily).

**1** As far as I understand your demand, you only want to remove the third horizontal line from the top of the frame. Here is a modified headline template definition taken from the existing beamerthemelined.sty theme.

```
% arara: lwpdflatex
\documentclass[compress]{beamer}
\usetheme{Szeged}
\usecolortheme{default}
\usefonttheme{default}
\makeatletter
\...
```

**1** I made this before for myself.

```
% !TEX encoding = UTF-8 Unicode
% !TEX TS-program = pdflatex
% !TEX spellcheck = English
% !TEX pdfSinglePage
\documentclass[14pt]{beamer}
\beamertemplatenavigationsymbolsempty
\usepackage{tikz}
\title[Math of Communication]{A Mathematical Theory of Communication}
\author[C.~E.~Shannon]{Claude E.~Shannon}
\institute{...
```
https://cstheory.stackexchange.com/questions/32590/compute-basis-of-vertex-set-of-polytope
# Compute basis of vertex set of polytope

I am wondering whether there is an efficient algorithm to compute a basis of the set of vertices of a polytope. Formally,

INPUT: a polytope $$\Xi=\{(\vec{a}_1\vec{x}+\vec{b}_1, \cdots, \vec{a}_m\vec{x}+\vec{b}_m)\mid C\vec{x}\leq d\}$$ and a subspace $span(E)$, where $E=\{e_1, \cdots, e_{\ell}\}$ is a given set of vectors

OUTPUT: a basis of the linear subspace spanned by $$V(\Xi)\setminus span(E),$$ where $V(\Xi)$ denotes the set of vertices of $\Xi$.

(Note that here $\Xi$ is given as an affine mapping of a polytope, which might complicate the problem a little bit.) One can solve the problem in a straightforward approach, but I am asking for an ideally polynomial-time algorithm, or any evidence that this is not possible (e.g., NP-hardness).

• What do you mean by basis? A basis for the linear subspace spanned by these points? – Sasho Nikolov Sep 21 '15 at 14:58
• Yes, this is exactly what I am looking for. I will edit to make it clear. Thanks. – user35648 Sep 21 '15 at 15:39
• Hmm. I hope this is not a homework problem... I have not figured out all the details, but something along the following lines should work... Translate/rotate space such that $E$ is the span of the first $k$ coordinates. Find a vertex in the polytope that is not 0 in the last $d-k$ coordinates, by maximizing a point in the direction of $(0, 0, \ldots, 0$ [$k$ times]$, 1, 1, \ldots, 1)$ [or something along these lines]. Add this vector to $E$, and repeat the process. As long as $E$ does not cover the polytope, you are discovering a new vertex at each step. And you should be done pretty quickly. – Sariel Har-Peled Sep 25 '15 at 2:37
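For the "straightforward approach", once the (possibly exponentially many) vertices have been enumerated, the remaining linear-algebra step is routine. The sketch below covers only that step; the function name and example are mine, not from the thread, and numpy is assumed:

```python
import numpy as np

def span_basis_outside(V, E, tol=1e-9):
    """Orthonormal basis (as rows) of span{v in V : v not in span(E)}.

    V: (k, d) array-like of already-enumerated polytope vertices;
    E: (l, d) array-like of vectors spanning the excluded subspace.
    """
    V = np.asarray(V, dtype=float)
    d = V.shape[1]
    if len(E):
        # Orthonormal basis of span(E) from the right singular vectors
        _, s, Vh = np.linalg.svd(np.asarray(E, dtype=float), full_matrices=False)
        U = Vh[: int((s > tol).sum())]
    else:
        U = np.zeros((0, d))
    # Keep only vertices with a nonzero component outside span(E)
    outside = [v for v in V if np.linalg.norm(v - U.T @ (U @ v)) > tol]
    if not outside:
        return np.zeros((0, d))
    _, s, Wh = np.linalg.svd(np.array(outside), full_matrices=False)
    return Wh[: int((s > tol).sum())]

# Unit square: vertices (0,0) and (1,0) lie in span{(1,0)}; the other two do not,
# and they already span the plane.
B = span_basis_outside([[0, 0], [1, 0], [0, 1], [1, 1]], [[1, 0]])
print(B.shape)  # (2, 2)
```

The hard part, as the comment by Sariel Har-Peled suggests, is avoiding the vertex enumeration altogether by discovering one new spanning vertex per LP solve.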
https://www.examcopilot.com/subjects/radio-navigation/ils/i-l-s-modulation
# ILS Modulation

Amplitude Modulation is used for both the localiser and the glideslope. This allows for simpler equipment. Although AM (Amplitude Modulation) is more susceptible to static, as the ILS (Instrument Landing System) operates in the VHF (Very High Frequency) and UHF (Ultra High Frequency) bands, they are (theoretically) free of static.
https://forum.math.toronto.edu/index.php?PHPSESSID=mseudlilqgak7j1s97lg38os10&action=profile;area=showposts;sa=messages;u=1713
### Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

### Messages - asdfghj

Pages: [1]

1

##### Chapter 1 / chapter 1 Problem 4 (1)
« on: January 16, 2022, 07:34:18 PM »

$uu_{xy}=u_{x}u_{y}$

Divide both sides by $uu_{x}$ and get

$u_{y}/u=u_{xy}/u_{x}$

Integrate with respect to $y$:

$\ln{u}+f(x)=\ln{u_{x}}+g(x)$

It is enough to keep one function of $x$: let $g(x)-f(x)=n(x)$. Then

$u=u_{x}\times n(x)$

$u_{x}/u=n(x)$

$\ln{u}=N(x)+m(y)$

$u=N_{1}(x)\times m(y)$ (another $m(y)$)

2

##### Chapter 1 / home assignment1 Q3 (1), (2), (3) & (4)
« on: January 16, 2022, 04:49:37 PM »

(1): $u_{xy}=0$. Denote $v=u_{x}$, so $u_{xy}=v_{y}=0$ and $v=f(x)$, hence

$u=F(x)+g(y)$ (let $F'(x)=f(x)$).

(2): $u_{xy}=2u_{x}$. Let $u_{x}=v$, so $u_{xy}=v_{y}$, therefore $v_{y}=2v$. Integrating $v_{y}/v=2$ with respect to $y$:

$2y+f_{1}(x)=\ln(v)$

$v=u_{x}=e^{2y}\times f_{2}(x)$, where $f_{2}(x)=e^{f_{1}(x)}$

$u=f_{3}(x)\times e^{2y}+g(y)$, where $f'_{3}(x)=f_{2}(x)$.

(3): $u_{xy}=e^{xy}$

$u_{x}=e^{xy}y+f(x)$

$u(x,y)=e^{xy}xy+F(x)+g(y)$

(4): $u_{xy}=2u_{x}+e^{x+y}$

$u_{xy}=u_{yx}$, $e^{xy}=D(x,y)$. Integrate on both sides:

$\int{u_{xy}}=\int{2u_{x}+D(x,y)}$

$u_{y}=2u+xD(x+y)+f(y)$

so $u=u^2+xD(x,y)+F(y)+g(x)$, and the general solution is

$u=u^2+x\times e^{xy}+F(y)+g(x)$

Pages: [1]
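The separable form found in Problem 4 (1) can be sanity-checked numerically: any $u(x,y)=F(x)G(y)$ satisfies $uu_{xy}=u_{x}u_{y}$, since both sides equal $FGF'G'$. A quick finite-difference check (my own choice of $F$ and $G$, for illustration only):

```python
import math

# Any separable u(x, y) = F(x) * G(y) should satisfy u * u_xy = u_x * u_y.
F = lambda x: math.exp(x) + x**2          # arbitrary smooth factor in x
G = lambda y: math.cos(y) + 2.0           # arbitrary smooth factor in y
u = lambda x, y: F(x) * G(y)

h = 1e-5                                  # finite-difference step
def u_x(x, y):  return (u(x + h, y) - u(x - h, y)) / (2 * h)
def u_y(x, y):  return (u(x, y + h) - u(x, y - h)) / (2 * h)
def u_xy(x, y): return (u_x(x, y + h) - u_x(x, y - h)) / (2 * h)

x0, y0 = 0.7, -0.3                        # arbitrary test point
residual = u(x0, y0) * u_xy(x0, y0) - u_x(x0, y0) * u_y(x0, y0)
print(abs(residual) < 1e-3)  # True: residual vanishes up to discretization error
```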
http://physics.stackexchange.com/questions/71976/energy-of-electron-spinning-in-a-magnetic-field
# Energy of electron spinning in a magnetic field

When an electron travels in circles in a uniform magnetic field, it must lose energy because all accelerated charges radiate, and must therefore spiral down to the center. Is this energy compensated by the magnetic field? Or where does this energy go?

- Spiral? How exactly? Is it doing a helical motion? If not, please post a diagram. – udiboy1209 Jul 23 '13 at 10:34
- Spiral motion. Whether it is spiral or helical motion, it is accelerated and releases energy. – albedo Jul 23 '13 at 10:38
- If it is a helical motion it will not release energy, even if it is accelerated, just like electrons revolving around a nucleus do not release energy. – udiboy1209 Jul 23 '13 at 10:40
- An electron moving in a magnetic field is not like the electron in atoms. According to quantum mechanics, the electrons that are bound to an atom are standing waves that completely engulf and surround the nucleus. It won't release the energy... – albedo Jul 23 '13 at 10:53

You are right. An electron in a uniform magnetic field will travel in circles (or in a helix, up to a change of frame of reference), but this means that it is an accelerated charge and it must therefore radiate and lose energy. This radiation is known as synchrotron radiation, and it is a major design issue for particle accelerators. (In fact, it is the reason for a recent trend back to linear accelerators, which are less efficient as each accelerating stage only works once per particle, but are not subject to this loss.) It can also be harnessed to make synchrotron light sources, and with some extra work one can build a free-electron laser using that principle. In short, then, the electron will spiral down to the centre and lose all its kinetic energy as electromagnetic radiation.
(For the more quantum-mechanically minded, now that Landau eigenstates have joined the fray, this means that all excited Landau states will have to decay through radiative coupling to the ground state with zero angular momentum. Once there, though, the uncertainty principle kicks in and stops the electron getting localized to radii smaller than the characteristic harmonic oscillator length $$x_0=\sqrt{\frac{\hbar}{m\omega_c}}=\sqrt{\frac{\hbar}{eB}}$$ corresponding to the cyclotron frequency $\omega_c=eB/m$.)

- Thanks, but I didn't understand what prevents the electron from stopping after a long time, if the electron is circulating in a magnetic field for a long time? – albedo Jul 23 '13 at 13:24
- @albedo The electron will spiral into the centre of the circle (slowly if it's nonrelativistic). However, at the very end it will not be perfectly localized at the centre, since that is forbidden by the uncertainty principle. Instead it will have a gaussian wavefunction of characteristic size $\sigma_x=x_0$. (This $x_0$ is chosen so that the minimum momentum uncertainty $\sigma_p=p_0=\hbar/\sigma_x$ will make the electron circle with a radius of order $x_0$.) – Emilio Pisanty Jul 23 '13 at 14:13
- So, if we somehow inject an electron bunch perpendicular to the magnetic field, this bunch of electrons will lose energy continuously. It will spiral, and eventually will we get a bunch of electrons almost concentrated at some point in the magnetic field? – albedo Jul 23 '13 at 14:27
- Yes. For a real electron bunch, though, space charge (repulsion between the different electrons) will prevent this. – Emilio Pisanty Jul 23 '13 at 15:18

I think the question is somewhat related to the Landau energy levels (one electron in a uniform magnetic field).

- Could you clarify and elaborate? – Emilio Pisanty Jul 23 '13 at 12:37
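For a sense of scale, the cyclotron frequency $\omega_c=eB/m$ and the harmonic-oscillator length $x_0=\sqrt{\hbar/(m\omega_c)}=\sqrt{\hbar/(eB)}$ can be evaluated numerically; the field value of 1 T below is my own illustrative choice (constants hard-coded from CODATA):

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant, J s
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg

B = 1.0                   # magnetic field in tesla (illustrative value)

omega_c = e * B / m_e                    # cyclotron frequency, rad/s
x0 = math.sqrt(hbar / (m_e * omega_c))   # oscillator length = sqrt(hbar / (e B))

print(f"omega_c = {omega_c:.3e} rad/s")  # ~1.76e11 rad/s
print(f"x0      = {x0 * 1e9:.1f} nm")    # ~25.7 nm
```

Even at a strong laboratory field of 1 T, the residual localization scale is tens of nanometres, far larger than atomic dimensions.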
http://comet.ucar.edu/outreach/abstract_final/9893863.htm
### Final Report

SECTION 1: PROJECT OBJECTIVES AND ACCOMPLISHMENTS

The main objective was to perform a real-time forecast experiment to predict 12-h precipitation amounts for Puerto Rico. The goals were to develop a stand-alone statistical model to compare with the current guidance products for issuing quantitative precipitation forecasts (QPFs). The purpose was to increase our understanding of tropical rainfall predictability in the presence of rugged terrain. Tasks included: (1) develop climatologies, (2) perform regionalizations, (3) develop a statistical forecast model, and (4) perform and verify a real-time forecast experiment. Based on the tasks outlined in the proposal and the tasks accomplished, the project was successful.

James Elsner was responsible for directing the project, including data analysis and decision making. He was assisted by Matthew Carter. Mr. Carter performed most of the calculations. His PhD dissertation provides a synopsis of the real-time forecast experiment, including results and lessons learned. Shawn Bennett was responsible for organizing the real-time forecast experiment. It was necessary for Mr. Bennett to emphasize to his forecasters the importance of their participation in the experiment. His expertise was consulted with regard to weather factors conducive to heavy rainfall events over the island.

Lessons learned: Scientific results from this cooperative project have shown the utility of climatology for precise specification of rainfall probabilities. The problem is that climatologies of this sort are not readily available to the forecasters, or are not available in a useful format. Results also show the diurnal variability in the relationship between quantitative rainfall probabilities and topography. Topography explains a significant portion of the geographic distribution of probabilities during the late afternoon and evening.
The probabilistic quantitative precipitation forecast experiment has provided the WFO with the objective scientific evidence and reasoning for the prevalence of heavy rainfall in certain sectors of the island. This knowledge replaces subjective reasoning and hand-waving arguments used to forecast precipitation location and quantity in Puerto Rico. This work improved the issuance of QPFs from WFO San Juan, as forecasters have begun to integrate this new scientific evidence into their daily forecast operation. In addition, this work provides a sound basis and launching point for future scientific research.

SECTION 2: SUMMARY OF UNIVERSITY/NWS EXCHANGES

Our COMET research was integrated into the CITM/SOO workshop held at FSU during the second week of May 1998. The workshop theme was directed at the University link with the NWS Offices in Florida. Our COMET work was highlighted with a joint seminar (with Shawn Bennett) in the section on QPF, with an emphasis on knowledge transfer from Puerto Rico to Florida. The workshop was used to establish a collaboration with the NWS Office in Mobile, AL. These activities were not directly funded by COMET, nor were they part of the original proposal.

In April of 1999, the PI was invited to the Second Caribbean Climate Outlook Forum to give a talk on hurricane activity. The workshop provided a forum for discussion of seasonal precipitation forecasts for the Caribbean, including Puerto Rico. The participants, representing various nations of the region, issued a consensus three-month seasonal forecast of rainfall.

A Junior Forecaster from WFO San Juan, Mr. Andy Roche, completed analysis of rainfall data collected from rain gages and WSR-88D derived rainfall estimates for specific heavy rainfall events. He reduced the data and developed the contouring and plotting routines using GEMPAK. His studies were inspired by and designed to be complementary to the COMET Cooperative Project.
His studies were funded under a NOAA/NWS University Assignment to Florida State University. Mr. Roche finished the course work necessary for a graduate degree in meteorology under this program. The project funded a joint paper submitted to the *Journal of Hydrology* concerning the forecast experiment.

SECTION 3: PRESENTATIONS AND PUBLICATIONS

1. Carter, M. M., 1999: Interannual variability of rainfall in Puerto Rico. Preprints, 23rd Conf. on Hurricanes and Tropical Meteorology, Dallas, TX, Amer. Meteor. Soc., 551–552.
2. Carter, M. M., 1999: "A Quantitative Precipitation Forecast Experiment for Puerto Rico", PhD Dissertation, Department of Meteorology.
3. Carter, M. M., J. B. Elsner, and S. Bennett, 2000: A quantitative precipitation forecast experiment for Puerto Rico. Submitted.

SECTION 4: SUMMARY OF BENEFITS AND PROBLEMS ENCOUNTERED

BENEFITS: The principal benefit to FSU has been the infusion of knowledge concerning forecasting precipitation over a tropical island. This real-time forecast project provided a focus for understanding the relationship of precipitation forecasts in support of user services. Although we have yet to complete the project, we envision that the knowledge generated will provide San Juan forecasters with an enhanced awareness of subjective biases inherent in heavy rainfall forecasts. This will lead to more useful forecasts to the public, thereby saving lives.

The study sponsored by this COMET Cooperative Project is a first of its kind for Puerto Rico. The probabilistic quantitative precipitation forecast experiment has provided the WFO with the objective scientific evidence and reasoning for the prevalence of heavy rainfall in certain sectors of the island. This knowledge replaces the heretofore subjective reasoning and hand-waving arguments used to forecast precipitation location and quantity in Puerto Rico.
This work has improved the QPF from WFO San Juan, as forecasters have begun to integrate this new scientific evidence into their daily forecast operation. In addition, this work provides a sound basis and launching point for future scientific research.

PROBLEMS: During the project period, the PI transferred his professorship to the Department of Geography. This created a bit of downtime in terms of collaboration between the PI and San Juan. Perhaps the most challenging event was the turnover in the SOO position. Shawn Bennett moved to WFO Brownsville and Rachel Gross took over as SOO at WFO San Juan. These challenges were overcome by maintaining close and open communication between the SOOs and Dr. Elsner at FSU. Shawn consulted with both FSU and San Juan on the project when needed.

The landfall of destructive Hurricane Georges during September of 1998 shifted the priority of the experiment. This caused a slight delay in the implementation of the forecast at the San Juan office. The location of Puerto Rico and the slowness of Internet connections for receipt of model data, etc., made collaboration more challenging. It was handled by judicious scheduling of communications and calculations.
https://stats.stackexchange.com/questions/69360/in-bivariate-linear-regression-is-there-a-direct-relationship-between-n-r2
# In bivariate linear regression is there a direct relationship between $n$, $r^2$ and coefficient error?

In bivariate linear regression is there a direct relationship between sample size $n$, coefficient of determination $r^2$ and $\sigma_\beta$ (the standard error of coefficient $\beta$)? Assume data have been normalized so both target and predictor variable have $\sigma=1$.

Putting the question another way, does $\sigma_\beta$ tell me something different to $r^2$, or are they measures of the same thing? Or, is it possible to have a strong, certain but unreliable link between two variables (large $\beta$, small $\sigma_\beta$, but small $r^2$)? (In multiple regression this doesn't apply, as even with high $r^2$, $\sigma_\beta$ can indicate uncertainty as to which of the multiple predictors is causing the response.)

EDIT Just got this out of my software (without standardized data):

```
regression coeff  0.023
stderr of coeff   0.0046
p = 0.000002
n = 131
multiple r2 = 0.17
predictor std = 22.5
target std = 2.24
```

Standardized coefficient is presumably $0.023 \times 22.5/2.24 = 0.23$. If the standardized coefficient is the same as the correlation, then $r^2 = 0.23^2 = 0.053$ ... not the same as the software gave. What am I doing wrong?

• Do you mean "coefficient of determination" for "correlation", & "multiple regression" for "multivariate regression"? – Scortchi Sep 6 '13 at 10:35
• Yes. (My bad on $r^2$, but isn't multiple regression the same as multivariate regression?) – Sideshow Bob Sep 6 '13 at 10:45
• No; multivariate regression means a multiple response (target or outcome or dependent variable). Having multiple predictors (independent variables) does not itself make a regression multivariate. – Nick Cox Sep 6 '13 at 10:47
• Your question is puzzling. If the predictor in bivariate regression has been standardised, then its coefficient equals the correlation: this is in essence an inevitable consequence.
The way to think of this is in terms of units of measurement or dimensional analysis. A correlation, and hence its square, has no units, but a regression coefficient has units (units of y)/(units of x). Standardising washes out both units and leaves you with dimensionless numbers. – Nick Cox Sep 6 '13 at 10:53

• OK, so you are saying it's not possible, with standardized data, to have large $\beta$ and small $r^2$, because they are both the same thing - a measure of effect size. $\sigma_\beta$ meanwhile tells me the significance. Thanks btw, learning a few things here :) – Sideshow Bob Sep 6 '13 at 11:08

Here is a silly example from R (which I do not know well, but you can download it and it amounts to a lingua franca):

```
> y = c(23,32,45,54,67,75)
> x = c(1,2,3,4,5,6)
> lm(y ~ x)

Call:
lm(formula = y ~ x)

Coefficients:
(Intercept)            x
      11.93        10.69

> cor(y, x)
[1] 0.998227
> sd(y)
[1] 20.02665
> sd(x)
[1] 1.870829
> 10.69 * sd(x) / sd(y)
[1] 0.9986273
```

There is some rounding error because I just took the printed result for the coefficient, but the principle is sound. Other software gives identical results.

• That's great, and works for me too with my data. Must be a bug somewhere in my preprocessing script. – Sideshow Bob Sep 6 '13 at 12:40
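The same check can be run in Python on the toy data from the R session above (numpy assumed); the standardized slope reproduces the correlation exactly, up to floating-point error:

```python
import numpy as np

# Same toy data as the R example
y = np.array([23, 32, 45, 54, 67, 75], dtype=float)
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)

slope = np.polyfit(x, y, 1)[0]                      # OLS slope of y on x
r = np.corrcoef(x, y)[0, 1]                         # Pearson correlation
std_slope = slope * x.std(ddof=1) / y.std(ddof=1)   # standardized coefficient

print(round(slope, 2))                # 10.69, matching R's lm()
print(round(std_slope, 4), round(r, 4))  # both 0.9982
```

This makes the dimensional-analysis point concrete: multiplying the slope by $s_x/s_y$ removes the units and leaves the (dimensionless) correlation.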
https://www.miniphysics.com/root-mean-square-values.html
# Root-mean-square values

The root-mean-square (r.m.s.) value of an alternating current is equivalent to the steady direct current that converts electrical energy to other forms of energy at the same average rate as the alternating current in a given resistance.

- The r.m.s. current of an alternating current is also known as the effective current of the a.c.
- An alternating-current ammeter reads the root-mean-square value of an alternating current.
- The r.m.s. value of an alternating current is the equivalent direct current which would achieve the same amount of heating over the same period of time for the same resistor.

For the left diagram, the power dissipated in R is

$P_{dc} = I_{dc}^{2} \, R$

For the right diagram, the average power is

$\left< P_{ac} \right> = \left< I_{ac}^{2} \right> R$

Supposing both resistors dissipate heat at the same average rate:

$P_{dc} = \left< P_{ac} \right>$

$I_{dc}^{2} \, R = \left< I_{ac}^{2} \right> R$

Cancelling $R$ from both sides of the equation:

$I_{dc}^{2} = \left< I_{ac}^{2} \right>$

$I_{dc} = \sqrt{\left< I_{ac}^{2} \right>} = I_{rms}$

The steady $I_{dc}$ is equivalent to the square root of the mean of the square of $I_{ac}$.

Three simple steps to get r.m.s. values:

1. Square the current
2. Take the mean (average)
3. Take the square root of the mean

For a sinusoidal alternating current, where $I_{o}$ is the peak value:

$$I_{rms} = \frac{I_{o}}{\sqrt{2}}$$

$$V_{rms} = \frac{V_{o}}{\sqrt{2}}$$
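The three steps can be verified numerically for a sinusoid; a short sketch (numpy assumed, peak value of 3 chosen arbitrarily):

```python
import numpy as np

I0 = 3.0                                # peak current I_o (arbitrary units)
t = np.linspace(0.0, 1.0, 100_001)      # one full period (T = 1)
i = I0 * np.sin(2 * np.pi * t)          # sinusoidal alternating current

# The three steps: square, mean, square root
i_rms = np.sqrt(np.mean(i ** 2))

print(i_rms)             # ≈ 2.1213
print(I0 / np.sqrt(2))   # ≈ 2.1213, agreeing with I_rms = I_o / sqrt(2)
```

Note that the $1/\sqrt{2}$ factor holds only for sinusoids; a square wave, for instance, has $I_{rms}=I_{o}$.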
https://answers.ros.org/question/318007/controlling-the-robotiq_85_gripper-2-fingered-gripper-in-a-gazebo-simulation/
# Controlling the robotiq_85_gripper (2-fingered gripper) in a Gazebo simulation

Hello all, I have been trying for some time now to control (open/close) the two-fingered robotiq_85_gripper, as described in ( https://github.com/DualUR5Husky/robot... ), but augmented with the transmissions and inertials described in ( https://github.com/waypointrobotics/r... ). The problem is that I am not sure how to transmit open/close commands in a Gazebo 7 environment. Do I need an extra plugin to transmit commands to the controllers? I found the following package: ( https://github.com/DualUR5Husky/robot... ), but it doesn't seem to do anything.
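For context, with URDF transmissions in place, one common wiring in Gazebo 7-era ROS setups is the `gazebo_ros_control` plugin plus a ros_control position controller. This is a sketch, not taken from the repositories above; the controller and joint names are assumptions and must match your own URDF/YAML:

```xml
<!-- In the robot's URDF/Xacro: loads ros_control's Gazebo interface -->
<gazebo>
  <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
    <robotNamespace>/</robotNamespace>
  </plugin>
</gazebo>
```

```yaml
# controllers.yaml — a position controller for the gripper's actuated joint
gripper_controller:
  type: position_controllers/JointPositionController
  joint: robotiq_85_left_knuckle_joint   # joint name assumed; check your URDF
```

After spawning the controller (e.g., `rosrun controller_manager spawner gripper_controller`), open/close commands are joint positions published as `std_msgs/Float64` to `/gripper_controller/command`.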
https://www.nature.com/articles/s41467-021-24268-5?s=09&error=cookies_not_supported&code=37e6c956-60af-4291-91b8-6a1e6dbddb42
## Introduction

Tropical cyclones (TCs) are of intense scientific interest and are a major threat to human life and property across the globe1,2,3. Of particular interest are multi-decadal changes in TC frequency arising from some combination of intrinsic variability in the weather and climate system, and the response to natural and anthropogenic climate forcing4,5,6,14,15,16,17,18,19,20,21,22,23,24,25. Even though the North Atlantic (NA) basin is a minor contributor to global TC frequency, Atlantic hurricanes (HUs) have been the topic of considerable research both because of the long-term records of their track and frequency that exist for this basin, and because of their impacts at landfall. It is convenient and common to consider Saffir-Simpson Categories 3–5 (peak sustained winds exceeding 50 ms−1) HUs separately from the overall frequency, and label them major hurricanes, or MHs. Historically, MHs have accounted for ~80% of hurricane-related damage in the United States of America (USA) despite only representing 34% of USA TC occurrences1. Globally, models and theoretical arguments indicate that in a warming world the HU peak intensity and intensification rate should increase, so that there is a tendency for the fraction of HU reaching high Saffir-Simpson Categories (3, 4, or 5) to increase in models in response to CO2 increases, yet model projections are more mixed regarding changes in the frequency of MHs in individual basins (e.g., NA)6,20,21,22,25,26,27,28,29,30. Homogenized satellite-based TC intensity observations since the early 1980s show an increase in the fraction of MH to overall TCs both in the NA and globally14, and there has also been a documented increase since the 1980s in the fraction of global and NA HU that undergo rapid intensification15.
Theoretical arguments, modeling studies, and observational analyses indicate that the overall frequency of TCs and their intensity across the tropics, and for Atlantic HUs in particular, may vary differently and exhibit distinct connections to climate drivers14,15,25,26,27,28,29,30,31,32. There is substantial spread in model projections of the 21st century response of both overall NA HU frequency and of the response of the frequency of the most intense NA HUs6,20,21,22,25,26,27,28,29,30. However, the connection between recent recorded multi-decadal changes in NA HU activity and 21st century HU projections is complicated by the fact that recent changes (e.g., since the 1970s) in NA HU and MH activity likely contain a substantial contribution from internal climate variation or non-greenhouse gas forcing16,17,18,19,20,21,22,23. Has there been a century-scale change in the number of the most intense hurricanes in the North Atlantic? Analyses of longer records (i.e., going back into the 19th century) of NA HU and MH frequency provide an additional lens with which to interpret both recent HU activity changes and projections of future hurricane activity. The North Atlantic Hurricane Database version 2 (HURDAT2; ref. 33) provides records of NA HU activity going back to 1851—a nearly 170-year record of HU activity. Using HURDAT2, one can explore secular changes in aggregate statistics of NA HU activity, such as the annual number of HU and MH strikes in the USA and the annual number of HUs and MHs in the Atlantic (or basin-wide HU and MH frequency). The USA HU strike record we use includes storms for which either hurricane-strength (vmax ≥ 33 ms−1) or major-hurricane-strength (vmax ≥ 50 ms−1) winds impacted the continental USA from the Atlantic or Gulf of Mexico; this record therefore includes storms whose center did not cross onto land.
Due to changes in observing practices, severe inhomogeneities exist in this database, complicating the assessment of long-term changes7,8,9,10,11,12,13. In particular, there has been a substantial increase in monitoring capacity over the past 170 years, so that the probability that a HU is observed is substantially higher in the present than early in the record10; the recorded increase in both Atlantic TC and HU frequency in HURDAT2 since the late-19th century is consistent with the impact of known changes in observing practices7,8,9,10,11,12. Major hurricane frequency estimates can also be impacted by changing observing systems13. We here show that recorded increases in NA HU and MH frequency, and in the ratio of MH to HU, can be understood as resulting from past changes in sampling of the NA. We build on the methodology and extend the results of ref. 10 to develop a homogenized record of basin-wide NA HU and MH frequency from 1851–2019 (see Methods section); this homogenized record indicates that the increase in NA HU and MH frequency since the 1970s is not a continuation of century-scale change, but a rebound from a deep minimum in the late 20th century.

## Results

### Recorded century-scale NA hurricane changes

Neither the number of HUs nor the number of MHs striking the USA is dominated by century-scale changes between 1851 and 2019, although each exhibits substantial year-to-year and decadal fluctuations (Fig. 1a, b). There is a decrease in the recorded number of USA HU strikes that may be statistically significant for certain periods (e.g., Table 1) or depending on the statistical model used34. Hurricane data are available from 1851 onwards, but even for USA-striking HUs and MHs there are likely to be inhomogeneities including undersampling over this period. We show the data for the full 1851–2019 record, but highlight the pre-1878 era with dark gray background shading—as 1878 was the year in which the U.S.
Signal Corps began systematic efforts to catalog all Atlantic HUs35. Furthermore, it is likely that U.S. coastal regions did not become sufficiently well-populated to fully monitor US-striking HUs and MHs until at least the year 1900 (ref. 36), so we highlight the 1878–1900 period with lighter gray shading in our figures. Basin-wide NA HU and MH frequency shows substantial year-to-year and multi-decadal variation, some of which is reflected in U.S. striking frequency (Fig. 1). In contrast to the frequency of HUs striking the USA, there is a clear and pronounced increase in the basin-wide NA HU and MH frequency recorded in the HURDAT2 database between 1851 and 2019 (Fig. 1c, d), with about triple the recorded NA MHs in recent decades compared to the mid-19th century. One possible interpretation of the distinct evolution of basin-wide and U.S.-striking HU and MH is that U.S. strikes represent a fraction of the overall NA basin-wide frequency, and redistributions of HU activity within the NA basin could result in distinct evolutions of U.S. strikes and NA basin-wide frequency37,38. An additional or alternative contribution to the U.S. striking-to-basin-wide distinction could be that changing observing practices had a larger impact on basin-wide HU than on recorded U.S. HU strikes, leading to spurious increasing trends in recorded basin-wide HU10 and MH frequency. These possible explanations for the observed behavior are further explored below.

### Hurricane and major hurricane frequency adjusted for missing storms

Previous work has led to the development of a number of methods to estimate the impact of changing observing capabilities on the recorded increase in basin-wide HU frequency between 1878 and 2008 (ref. 10). We here update the analysis of ref. 10 to build an adjustment to recorded HU counts over 1851–1971, based on the characteristics of observed HUs over 1972–2019.
We then extend that methodology to build an adjustment to recorded MH counts over 1851–1971, based on MHs recorded over 1972–2019 (see “Methods”). The methodology for the basin-wide count adjustment involves using HU (MH) tracks from an era we posit is fully sampled, along with ship-position data from the pre-fully-sampled era, to build a probabilistic estimate of the number of storms that may have occurred and not been detected in each year of the earlier era. There are a number of key assumptions that go into this methodology (see “Methods” section and refs. 9,10), including assuming that ships at sea and land would have been perfect observers, and that the types of TCs that have occurred in the fully sampled era are representative of those that could have occurred prior to the fully sampled period. After making these assumptions, and building a model for the radius of HU (≥33 ms−1) or MH (≥50 ms−1) winds, we construct our basin-wide NA HU and MH adjustment: the estimate of the time-evolving number of HUs or MHs that were likely missed before the early 1970s. Ref. 9 uses 1966 as the start of the fully sampled era because satellite pictures from the sun-synchronous Environmental Science Services Administration (ESSA) satellites became routinely available at least once a day. However, the quality of these data is not sufficient to determine intensity (maximum wind) reliably, nor is there a systematic technique (Dvorak) calibrated for these data to obtain maximum winds. In 1972, high-resolution imagery from the Applications Technology Satellite (ATS) began to be used operationally, and the Dvorak technique was invented and used operationally during daylight hours on both the ESSA and ATS imagery, which were by then available electronically rather than as fax-type imagery. However, we note that the main results in this study are not qualitatively altered by using 1966 as the start.
The estimated number of missing NA HUs grows backward in time, reaching a peak value of ~3 HUs/year between 1860 and 1880 (red lines Fig. 2a, c); the updated reconstruction shows substantial similarity to that of ref. 10, which was based on satellite era HUs over 1966–2008. Meanwhile, the estimated number of missing NA MH shows a relatively steadier value for most of the record, at around one MH per year (red lines Fig. 2b, c). Additional robustness analysis leaving out sets of satellite era years37 shows that the HU and MH adjustments are not the result of particular satellite era years (Supplementary Material). For both MH and HU there is a local maximum in the annual correction centered around both World Wars—with the World War II maximum being evident even in the smoothed data (Fig. 2); these maxima in correction reflect a minimum in ship reports in the International Comprehensive Ocean Atmosphere Data Set (ICOADS) during the World Wars. For frequency in a single year there is substantial uncertainty in both adjustments, so that it cannot be excluded at the 95% confidence level that no storms were missed, or that a few times more than the central estimate were missed (pink shading in Fig. 2a, b). However, for the 15-year running smoothed counts the 95% range on the adjustment is smaller than for annual values, and the method indicates a significant undercount in both NA HUs and MHs for the entire pre-1960s period (pink shading in Fig. 2c, d)—we note that the results are qualitatively consistent for smoothing windows between 9 and 25 years. Once the adjustment is added to the recorded number of Atlantic HUs and MHs, substantial year-to-year and decade-to-decade variability is still present in the data, with the late-19th, mid-20th and early-21st centuries showing relative maxima, and the early 20th and late 20th centuries showing local minima (Fig. 2).
However, after adjustment, the recent epoch (1995–2019) does not stand out as unprecedented in either basin-wide HU or MH frequency. There have been notable years since 2000 in terms of basin-wide HU frequency, but we cannot exclude at the 95% level that the most active years in terms of NA basin-wide HU or MH frequency occurred in either the 19th century or mid-20th century (blue lines and shading in Fig. 2a, b). Further, we cannot exclude that the most active epoch for NA HU frequency was in the late-19th century, with the mid-20th century comparable to the early-21st in terms of basin-wide HU frequency. The 19th century maximum in activity is more pronounced in overall frequency than in MH frequency, while the late-20th century multi-decadal temporary dip in MH frequency stands out relative to that in the early-20th century. Relative to the satellite era and after adjustment, overall basin-wide frequency shows a more active late-19th century than does basin-wide MH frequency. Meanwhile, after adjustment the mid-20th century active period is more pronounced in basin-wide MHs than in overall HU frequency. To evaluate secular changes in frequency, we build a Poisson regression for each of the HU and MH frequency records using time as a covariate (see Methods) and show the results in Table 1. We explore a number of start-dates for our trend estimate, and to assess the robustness of the trends, we also explore trends over 1980–2019 to place recent changes14 in the context of century-scale ones. The nominal century-scale decreases in the frequency of hurricanes striking the USA (both HU and MH) are generally not statistically significant, and differ from the 1980–2019 changes. However, the century-scale increases in HURDAT2 basin-wide HU and MH frequency are very significant and present for all start dates. 
However, once the missing storm adjustments are included, the nominal sign of the basin-wide HU trend changes for the early start dates, and is weakly significantly positive only for the 1900 start date. The adjusted basin-wide MH record retains a nominally positive trend, but the trends after 1878 are not significant, and those computed from 1851 are only marginally significant. Furthermore, the 1980–2019 increases in basin-wide HU and MH frequency are not a continuation of a longer-term trend, but reflect a recovery from a strong minimum in the 1970s and 1980s (Fig. 2)—this evolution suggests a dominant contribution to past multidecadal variations of HU and MH frequency from some combination of multi-decadal internal climate variability (such as Atlantic Multidecadal Variability tied to variations in the strength of meridional ocean heat transport in the Atlantic—refs. 16,17,18) and/or non-greenhouse gas forcing, such as variations in anthropogenic or natural aerosols19,20,21,22,23,24.

### USA hurricane strikes to basin-wide and MH/HU ratios

In the raw HURDAT2 database, the century-scale evolution of recorded basin-wide NA HUs and MHs differs considerably from that of HUs and MHs striking the USA (compare top and bottom of Fig. 1). This difference results in a century-scale decrease in the fraction of basin-wide recorded overall and major hurricanes striking the USA (gray dotted lines Fig. 3), with about 40% of basin-wide MH striking the USA as a MH (Fig. 3b). One possible interpretation of this decreasing ratio is that there has been a century-scale shift in the tracks of HU and MH, or that in recent decades HUs and MHs are losing either their intensity or tropical nature as they approach the coast of the USA38,39. An alternative interpretation is that USA HU and MH strikes have been better observed since the mid-1850s than basin-wide frequency of either, resulting in a spurious inflation of the USA strike-to-basin-wide ratio in the pre-satellite era40.
The adjusted basin-wide HU and MH records support the latter hypothesis: once we include the adjustment for likely missing storms, there is no longer a clear century-scale decrease in this ratio (black line in Fig. 3). We can assess secular changes in the fraction of basin-wide HUs and MHs that strike the USA using a Binomial regression model with time as a covariate (top four rows of Table 2, see “Methods”). For the HURDAT2 data, the century-scale decreases in USA-striking proportion are very significant (top row Table 2). After adjusting for missing storms, the century-scale decrease in USA-striking HU fraction is weaker and of modest significance, largely reflecting the influence of a maximum in the 1910s (Fig. 3). However, the century-scale changes in USA-striking MH fraction do not show any significant secular change, with around 20–30% of NA MHs over 15-year periods having struck the USA as MHs. Based on our adjusted estimates, it appears that the stationary ratio of USA-striking to basin-wide MHs reported over the late-20th century (ref. 41) is evident since the mid-19th century, and we do not see evidence for strong multi-decadal modulation of the USA-striking MH fraction40. In estimates of the sensitivity of NA HU activity to greenhouse-induced warming and 21st century projections based on dynamical or statistical-dynamical models6,20,21,22,25,26,27,28,29,30,31, there is more consistency for an increase in the fraction of HUs becoming MHs (that is, an intensification of HU) than in either the overall frequency of HUs or MHs. In the raw HURDAT2 dataset, there is a substantial century-scale increase in the NA MH/HU ratio since the late-1800s (gray line in Fig. 4). However, once the adjustment is added to both NA HUs and MHs (blue line and shading in Fig. 
4), the running 15-year MH/HU ratio is dominated by multi-decadal fluctuations, with minima of 25–30% in the mid-1850s and in the decades centered around the 1980s, and maxima of 40–50% in the early-to-mid-20th century and the early 21st century. The low values in the 1850–1878 period, while being unique in the record, also occur during the period when we have least confidence in the data—based on these considerations, we view with skepticism any century-scale trend that arises only once the 1850–1878 period is included. In our adjustment methodology, we assume that ships at sea do not aim to steer away from HU and MH winds (Assumption 6, “Methods” section)—this assumption may be less justified for MH winds, and may result in an underestimate of MH relative to HU in the record even after adjustment. Nevertheless, the recent increase in the proportion of NA HUs becoming MHs, after adjustment, which is also reflected in the results of ref. 14, is not a continuation or acceleration of a long-term trend, but rather is a rebound from a deep minimum in the decades surrounding the 1980s—see below for a discussion of possible mechanisms. We evaluate secular changes in the fraction of HUs becoming MHs through a Binomial regression model with time as a covariate (bottom three rows of Table 2; see “Methods”). The fraction of HUs striking the USA as MHs does not show a significant change for any of the epochs we explore. For both HURDAT2 and the adjusted series, there is a significant increase in basin-wide MH fraction over 1851–2019. The HURDAT2 series shows at least a nominal increase in MH fraction for all the epochs explored, though the p-value exceeds 0.1 for the 1900–2019 and 1980–2019 periods. Meanwhile, for the adjusted MH and HU records, the trends in basin-wide MH fraction are neither significant nor of consistent sign for 1878–2019 and 1900–2019.
After adjustment of the basin-wide MH and HU record, century-scale increases in basin-wide MH frequency depend on the pre-1878 era, before the U.S. Signal Corps started efforts to monitor all Atlantic HUs35.

## Discussion

One of the most consistent expectations from projected future global warming is that there should be an increase in TC intensity, such that the fraction of MH to HU increases6,20,21,22,25,26,27,28,29,30,31. This issue has become more pressing with the recent finding of a global increase in this metric since 1979 using homogenized satellite-based data14—a finding to which Atlantic HU contribute. We here build on the methods of refs. 9,10 to construct a homogenized record of Atlantic MH frequency and MH/HU ratio since the 19th century. We find here that, once we include a correction for undercounts in the pre-satellite era basin-wide NA HU and MH frequency, there are no significant increases in either basin-wide HU or MH frequency, or in the MH/HU ratio for the Atlantic basin, between 1878 (when the U.S. Signal Corps started tracking NA HUs35) and 2019. We suggest that the modestly significant 1851–2019 increase in basin-wide MH frequency and MH/HU ratio that remains after including the HU and MH adjustment reflects data inhomogeneity that our adjustment is unable to correct—rather than an actual increase in these quantities. The homogenized basin-wide HU and MH record does not show strong evidence of a century-scale increase in either MH frequency or MH/HU ratio associated with the century-scale, greenhouse-gas-induced warming of the planet. For example, the temporal evolution of the global mean temperature is not closely reflected in the temporal evolution of the adjusted MH/HU ratio shown in Fig. 4. Does this work provide evidence against the hypothesis that greenhouse-gas-induced warming may lead to an intensification of North Atlantic HUs? Not necessarily.
Substantial multi-decadal variability may obscure trends computed over the past century16,17,18,20,21, and recent studies suggest the possibility of an aerosol-driven reduction in NA HU and MH activity over the 1960s–1980s (refs. 19,20,21,22,23,24), which may have obscured any greenhouse-induced NA HU and MH intensification over the 20th century. For example, a statistical downscaling of global climate models (GCMs) that were part of the Coupled Model Intercomparison Project Phase 5 (CMIP5) shows a robust and significant projection of a greenhouse-gas-induced 21st century NA hurricane intensification; yet when that same method is applied to historical simulations, the greenhouse-induced intensification over the late-19th and 20th centuries is masked by the late-20th century aerosol-induced weakening20. Historical simulations show that aerosol forcing may have masked the 19th–20th century greenhouse-gas-induced increase in potential intensity, the theoretical upper bound on tropical cyclone intensity, even though climate models show increases in potential intensity in tropical cyclone regions in response to projected future warming24,25,26. The homogenized MH and HU data developed in the present study serve as a target for century-scale historical simulations with high-resolution dynamical and statistical models that are used for 21st century projections. The adjusted NA basin-wide MH frequency and MH/HU ratio show substantial multi-decadal variability (Figs. 2, 4), and the adjusted basin-wide MH frequency shows its lowest values over the 1960s–1980s (Fig. 2). These features show at least qualitative consistency with the notion of a strong influence of either internal multi-decadal climate variability and/or late-20th century aerosol-induced weakening of NA HU intensity during that period.
Our homogenized records also correspond with document- and proxy-based reconstructions of Antilles and Atlantic HUs, which indicate that substantial variability in HU frequency has been present in the Atlantic, and the inactive period in the late 20th century may have been the most inactive period in recent centuries42,43. The homogenized hurricane records suggest a consistent and marginally statistically significant decrease in the ratio of basin-wide hurricanes striking the USA as hurricanes (Table 2, row 3). Some models project an eastward shift in the location of NA TCs in response to increasing greenhouse gases (e.g., refs. 27,28), so this observed change may reflect the emerging impact of greenhouse warming on NA TC tracks. However, although there is a nominal decrease in the ratio of basin-wide MH striking the USA as MH (Table 2, row 4), the trends are not significant for any of the time periods explored. Caution should be taken in connecting recent changes in Atlantic hurricane activity to the century-scale warming of our planet. The adjusted records presented here provide a century-scale context with which to interpret recent studies indicating a significant recent increase in NA MH/HU ratio over 1980–2017 (ref. 14), or in the fraction of NA tropical storms that rapidly intensified over 1982–2009 (ref. 15). Our results indicate that the recent increase in NA basin-wide MH/HU ratio or MH frequency is not part of a century-scale increase. Rather it is a rebound from a deep local minimum in the 1960s–1980s. We hypothesize that these recent increases contain a substantial, even dominant, contribution from internal climate variability16,17,18,20,21, and/or late-20th century aerosol increases and subsequent decreases19,20,21,22,23,24, in addition to any contributions from recent greenhouse gas-induced warming20,22,24,44. 
It has been hypothesized, for example, that aerosol-induced reductions in surface insolation over the tropical Atlantic between the mid-20th century and the 1980s may have resulted in an inhibition of tropical cyclone activity19,20,21,22,23,24; the relative contributions of anthropogenic sulfate aerosols, dust, and volcanic aerosols to this signal (each of which would carry distinct implications for future hurricane evolution)—along with the magnitude and impact of aerosol-mediated cloud changes—remain a vigorous topic of scientific inquiry. It has also been suggested that multi-decadal climate variations connected to changes in meridional ocean overturning may have resulted in a minimum in northward heat transport in the Atlantic and a resulting reduction in Atlantic hurricane activity16,17,18,20,21. Given the uncertainties that presently exist in understanding multi-decadal climate variability, the climate response to aerosols and the impact of greenhouse gas warming on NA TC activity, care must be exercised in not over-interpreting the implications of, and causes behind, these recent NA MH increases. Disentangling the relative impact of multiple climate drivers on NA MH activity is crucial to building a more confident assessment of the likely course of future HU activity in a world where the effects of greenhouse gas changes are expected to become increasingly important.

## Methods

We extend the methodology described in refs. 9,10 to NA overall HU frequency since 1851, and adapt the methodology to NA major (Saffir-Simpson Category 3–5) hurricane frequency since 1851. For North Atlantic HU frequency, the methodology is that of ref. 10, except we use a longer HURDAT2 dataset34: from 1972 to 2019, instead of the 1972–2008 record used in ref. 10 to develop the correction. We also extend the undercount estimates to span the full HURDAT2 record of 1851–1971, instead of 1878–1971 as was done in Refs. 9,10,11,12. Using the methodology for HU adjustment of ref.
10, the undercount adjustment is developed using an observing system emulation, in which we compare HU tracks from the satellite era (1972-present) to ship track density from the International Comprehensive Ocean-Atmosphere Data Set (ICOADS; ref. 45) from the pre-satellite era (1851–1971). The probability that a given storm from the satellite era would have been missed had it occurred in a particular pre-satellite year is estimated through an ensemble by sampling across 21 different shifts in the storm’s actual date of occurrence (shifting forward and backward in the calendar by 0, 5, 10, 15, …, 45, 50 days), and by drawing 100 realizations of the radius of gale-force and hurricane-strength winds from a probability density function (PDF) based on the observations of ref. 46. For each realization, we assess that a HU would have been detected if either one land observation would have been within the parameterized radius of hurricane winds (R33), or two ship observations would have been within the model-parameterized radius of tropical storm winds (R17), with at least one being within the radius of hurricane-force winds (R33). We also require that the first detection of a tropical storm or HU must be equatorward of 40°N. Radii of 17 and 33 ms−1 winds (R17 and R33) are parameterized based on the data of ref. 46; the radii are multiplied by 0.85 to correct from maximum extent to mean extent. The average radius of tropical storm winds (R17) is parameterized such that the logarithm of the radius follows a normal distribution, with an independent random draw for each storm. As reported in ref.
9, R17 (in kilometers) is parameterized based on the wind speed of the storm (vmax) as follows, where ξ is a normally distributed random variable for each storm with a mean of zero and a standard deviation of one:

$$\mathrm{R}17=0.85\times \begin{cases}0 & v_{\max }<17\,\mathrm{ms}^{-1}\\ 90e^{\xi /1.3}+70 & 17\,\mathrm{ms}^{-1}\le v_{\max }<33\,\mathrm{ms}^{-1}\\ 90e^{\xi /1.3}+150 & 33\,\mathrm{ms}^{-1}\le v_{\max }<50\,\mathrm{ms}^{-1}\\ 90e^{\xi /1.3}+170 & 50\,\mathrm{ms}^{-1}\le v_{\max }\end{cases}$$ (1)

The average radius of hurricane (33 ms−1) winds is parameterized such that the logarithm of the radius follows a normal distribution when the storm winds exceed 33 ms−1, and is zero when the storm is weaker than hurricane strength, using the parameterization of ref. 10, where ξ is a normally distributed random variable for each storm with a mean of zero and a standard deviation of one:

$$\mathrm{R}33=0.85\times \begin{cases}0 & v_{\max }<33\,\mathrm{ms}^{-1}\\ 90e^{\xi /2.1}-15 & 33\,\mathrm{ms}^{-1}\le v_{\max }<50\,\mathrm{ms}^{-1}\\ 90e^{\xi /2.1}+5 & 50\,\mathrm{ms}^{-1}\le v_{\max }\end{cases}$$ (2)

The probability of a satellite era storm being detected is computed as the number of realizations in which the storm was detectable divided by the total number of realizations in a given pre-satellite observing-system year (21 date shifts × 100 size realizations = 2100). The mean missing storm count estimate for a given pre-satellite era year is the sum, across all storms in all satellite era years, of the probability that each storm was missed (that is, 1 minus the probability that it would have been detectable in the given year had it occurred). We build a Bootstrap uncertainty estimate for the missing storm counts by drawing 10,000 samples (with replacement) for each pre-satellite era year from the 2100 realizations × 48 satellite era years = 100,800 ensemble members.
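The radius parameterizations and the detection rule can be sketched in code. The following is a toy, flat-plane version: the straight-line track, random ship positions, and storm intensity (vmax = 40 ms−1) are hypothetical, observation timing, land points, and the 40°N rule are ignored, and only the functional forms of Eqs. (1)–(2), plus the major-hurricane radius model of Eq. (3) below, are taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def r17_km(vmax, xi):
    """Eq. (1): mean radius of tropical-storm (17 m/s) winds, in km."""
    if vmax < 17:
        return 0.0
    base = 90.0 * np.exp(xi / 1.3)
    if vmax < 33:
        return 0.85 * (base + 70.0)
    if vmax < 50:
        return 0.85 * (base + 150.0)
    return 0.85 * (base + 170.0)

def r33_km(vmax, xi):
    """Eq. (2): mean radius of hurricane (33 m/s) winds, in km."""
    if vmax < 33:
        return 0.0
    base = 90.0 * np.exp(xi / 2.1)
    return 0.85 * (base - 15.0) if vmax < 50 else 0.85 * (base + 5.0)

def r50_km(vmax, xi):
    """Eq. (3): radius of major-hurricane (50 m/s) winds, lognormal fit."""
    return 0.0 if vmax < 50 else np.exp(3.416 + 0.478 * xi)

def detected(track_km, vmax, ships_km, xi):
    """Simplified HU detection rule: at least two ships come within R17 of
    the track, at least one of them within R33 (R33 < R17, so the nested
    check below is equivalent). Timing and land points are ignored."""
    d = np.linalg.norm(ships_km[:, None, :] - track_km[None, :, :], axis=-1)
    dmin = d.min(axis=1)  # each ship's closest approach to the track
    return (dmin < r17_km(vmax, xi)).sum() >= 2 and (dmin < r33_km(vmax, xi)).sum() >= 1

# Hypothetical straight-line track and random ship positions (km, flat plane)
track = np.column_stack([np.linspace(0, 500, 50), np.linspace(0, 200, 50)])
ships = rng.uniform(low=[-300, -300], high=[800, 500], size=(40, 2))

# Detection probability over 100 size realizations (one xi draw per storm)
p = np.mean([detected(track, 40.0, ships, rng.standard_normal()) for _ in range(100)])
print(f"P(detected) ~= {p:.2f}")
```

The actual method additionally shifts each storm across 21 calendar dates against the ICOADS ship positions of each pre-satellite year; one minus the resulting detection probability is the storm's contribution to that year's missing-storm count.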
For MHs, the methodology of ref. 10 is adapted by changing the detection threshold to be a single ship or a single land point within the modeled radius of 50 ms−1 winds (see below). We do not require multiple 50 ms−1 detections, nor do we place a latitude threshold on the detection. Furthermore, we assess that the pre-satellite era for MHs is likely 1851–1971, rather than 1851–1965, although only computing the correction over the period 1851–1965 does not affect any of the principal results of this study. The probability of a satellite era MH being detected is computed in an analogous manner to that for overall HU frequency, generating an ensemble by shifting the timing of satellite era storms and producing multiple realizations of the 50 ms−1 radius.

### Major hurricane wind radius model

To build a model for the radius of 50 ms−1 winds (R50), we use the HWIND 1998–2013 estimates of wind radii47. We build the model using the 1998–2013 observations in which winds reach MH strength; note that one MH can have multiple MH observations during its lifespan. For each observation, we identify the location(s) where the wind speed exceeds 50 ms−1 and calculate the distance from the HU center to the location(s). The radius of 50 ms−1 winds for a HU observation is the average distance from the HU center to the locations where the wind speed exceeds 50 ms−1. We fit a lognormal distribution (µ = 3.416, σ = 0.478) to the radii of the 243 MH observations during the study period. The R50 parameterization is therefore as follows, where ξ is a normally distributed random variable for each storm with a mean of zero and a standard deviation of one:

$$\mathrm{R}50=\begin{cases}0 & v_{\max }<50\,\mathrm{ms}^{-1}\\ e^{3.416+0.478\xi } & 50\,\mathrm{ms}^{-1}\le v_{\max }\end{cases}$$ (3)

### Key assumptions in the hurricane adjustment methodology

The key assumptions in the HU adjustment methodology are discussed at greater length in Refs.
9,10, but we briefly list them here for the benefit of the reader:

(1) All land points and ship observations are perfect storm detectors: this will bias the storm adjustment low, particularly in the 1800s.

(2) Ship tracks in the ICOADS database44 are representative of ships that have provided meteorological data for storm identification34: this will bias the storm adjustment high if there is considerable other independent data available. We note that we include all ICOADS observations, regardless of the meteorological data reported; this could overestimate the data available for storm identification, which should partially mitigate the bias.

(3) All storms detectable by the ships have been, or will be, included in HURDAT2: this will bias the storm adjustment low.

(4) TCs are assumed radially symmetric: this will likely lead to random adjustment errors, rather than a systematic bias.

(5) Ships and land stations can perfectly measure storm winds (at least to the threshold for HU or MH identification): if there is a systematic under- (over-)estimation of winds, this will lead to an under- (over-)estimation of historical frequency.

(6) Ships did not attempt to, or were unable to, avoid storms: this assumption leads to an underestimate of the adjustment.

(7) Modern era storm tracks are representative of the storm tracks that could have occurred in the pre-satellite era: errors in this assumption will lead to reductions in any real variations and changes in HU and MH activity. This would also lead to underestimates in the time-smoothed uncertainty estimates.

(8) Sufficient information in addition to wind speed would be available to identify a HU or MH, if HU or MH winds are observed: this leads to an underestimate in the adjustment.
(9) Single HU or MH events are assumed not to have been inaccurately counted as multiple systems in HURDAT2: if this happened, the storm count for that period would be biased high, all other factors being equal.

### Trend measures

To measure the secular trend in the various measures of aggregate NA HU and MH activity, we fit statistical models using time as a covariate. For frequency statistics (e.g., the number of HUs or MHs striking the USA, and HU and MH basin-wide frequency), we model the counts through a Poisson regression model, such that the probability distribution of the annual count (Nx) for each frequency metric (x; e.g., USA HU strikes, basin-wide MHs) is:

$$p(N_{x}=k\,|\,\lambda_{x}(t))= \frac{\lambda_{x}(t)^{k}e^{-{\lambda}_{x}(t)}}{k!}\quad{\rm{for}}\;k=0,1,2,\ldots$$ (4)

for which we use the available data for each quantity and assume that the rate of occurrence (λx(t)) is a function of time through a logarithmic link function:

$$\lambda_{x}(t)={e}^{{a}_{x}+b_{x}t}$$ (5)

where t is time (measured in years C.E./100), ax gives a measure of the base rate, and bx gives a measure of the time dependence of the rate (the trend measure) for each frequency measure x (e.g., USA HU strikes, basin-wide MHs). To summarize the time dependence of the rate parameter (trend), we show in Table 1 the time-dependent coefficient (bx) of the rate parameter of the Poisson regression (λx(t)).
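The Poisson trend model of eqs. (4)–(5) can be sketched in a few lines. The coefficients below are invented purely for illustration — in the paper, a_x and b_x are estimated from the data with gamlss in R:

```python
import math

def poisson_rate(t, a, b):
    """Eq. (5): occurrence rate via the logarithmic link; t in centuries C.E."""
    return math.exp(a + b * t)

def poisson_pmf(k, lam):
    """Eq. (4): probability of observing exactly k storms in a year with rate lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

# Hypothetical coefficients: base rate e^a at t = 0, trend b per century.
a, b = math.log(6.0), -0.1
lam_1900 = poisson_rate(19.0, a, b)  # modeled rate in year 1900 (t = 19)
p_quiet = poisson_pmf(0, lam_1900)   # modeled chance of a storm-free year
```

The sign of b is what Table 1 summarizes: b < 0 corresponds to a declining rate over time.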
For ratio statistics (e.g., the MH/HU ratio), we model the counts through a Binomial distribution, such that the probability distribution of the annual count of the subset variable (Ny) for each frequency ratio metric (Ny/Nx; e.g., MH/HU ratio) is:

$$p({N}_{y}=k\,{\rm{|}}\,{\mu }_{x,y}(t),{N}_{x})=\frac{\varGamma ({N}_{x}+1)}{\varGamma (k+1)\,\varGamma ({N}_{x}-k+1)}\,{\mu }_{x,y}^{k}{(1-{\mu }_{x,y})}^{{N}_{x}-k}\quad{\rm{for}}\;{k}=0,1,2,\ldots ,{N}_{x}$$ (6)

for which we use the available data for each quantity to fit the probability of success (µx,y(t)) as a function of time, through a logistic link function:

$${\mu}_{x,y}(t)=\frac{1}{1+{e}^{-(a_{x,y}+b_{x,y}t)}}$$ (7)

where t is time (measured in years C.E./100), ax,y gives a measure of the base probability, and bx,y gives a measure of the time dependence of the probability (the trend measure) for the ratio of each frequency measure (Ny/Nx; e.g., MH/HU ratio). To summarize the time dependence of the probability (trend), we show in Table 2 the time-dependent coefficient (bx,y) of the probability of the Binomial regression (μx,y(t)). The Poisson and Binomial regression fits are performed in R (ref. 48) using the freely available gamlss package (refs. 49,50). In Tables 1 and 2, we report the values of the trend factor in the regressions (bx for the Poisson regression and bx,y for the Binomial regression), along with the p-value of the time-dependent coefficient (bx or bx,y) estimated using the gamlss package.
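Equations (6)–(7) amount to a standard binomial model with a logistic link; a minimal sketch (coefficients again illustrative, not fitted values):

```python
import math

def success_prob(t, a, b):
    """Eq. (7): logistic link for the probability that a HU is a MH; t in centuries."""
    return 1.0 / (1.0 + math.exp(-(a + b * t)))

def binomial_pmf(k, n, mu):
    """Eq. (6): probability that exactly k of n HUs in a year are MHs."""
    return math.comb(n, k) * mu**k * (1.0 - mu)**(n - k)
```

As with the Poisson fit, the trend is carried entirely by the coefficient b: b > 0 means the MH/HU ratio is modeled as rising over time.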
https://cstheory.stackexchange.com/questions/8314/comparing-shannon-fano-and-shannon-coding/19745
# Comparing Shannon-Fano and Shannon coding

I am interested in a few algorithms for creating prefix codes:

1. Shannon coding: we take $l_i=\lceil -\log p_i\rceil$.
2. Shannon-Fano coding: list probabilities in decreasing order and then split them in half at each step to keep the probability on each side balanced. Then codes/lengths come from the resulting binary tree.

My question is whether one of these algorithms always provides a better $L=\sum p_i l_i$? In a few examples I've done, Shannon-Fano seems better. Is this always true? Do you have any references or proofs? I realize these aren't the two best algorithms (Huffman coding is optimal). I'm just interested in comparing them.

Edit: Well, no interest so far? I guess this stuff isn't all that exciting, but I'd still like to know. Here's what I've come up with after some more Google searching/playing around with numbers. Practically, Shannon-Fano is often optimal for a small number of symbols with randomly generated probability distributions, or quite close to optimal for a larger number of symbols. I haven't found an example yet where Shannon-Fano is worse than Shannon coding. In Shannon's original 1948 paper (p. 17) he gives a construction (equivalent to Shannon coding above) and claims that Fano's construction (Shannon-Fano above) is substantially equivalent, without any real proof. I haven't been able to find a copy of Fano's 1949 technical report to see whether it has any analysis. Cover and Thomas's Elements of Information Theory says that Fano codes give $L \leq H +2$ and Shannon codes give $L \leq H+1$. However, the reference they give for the Fano code analysis I think applies to alphabetical codes where you're not allowed to reorder the probabilities. Stefan Moser's Information Theory Lecture Notes (pp. 50-59) agree with my historical analysis above and purport to prove that for Fano codes we have $l_i \leq \lceil -\log p_i \rceil$, which would be sufficient to prove they are better than Shannon codes.
However I don't follow the proof and I think I have a counterexample: Take probabilities $(0.4, 0.26, 0.02, 0.02, 0.02, \ldots, 0.02)$ (we have 17 0.02's so that probabilities add to 1). Then Shannon coding has lengths $(2,2,6,6,6,\ldots,6)$ while Fano coding splits between 0.4 and 0.26 and then for the 0.6 probability on the right it splits between the second and third 0.02. Continuing on we see that 0.26 is encoded with a length of 3, larger than Shannon length. However, the average length is still less for Fano than for Shannon (according to my program implementing Fano coding). So, am I doing something wrong? Can you see how to construct a probability distribution to make Shannon code perform better than Fano, or a way to prove it's not possible? • Very nice counterexample. I think it's an interesting question. I agree that Cover and Thomas are probably talking about the case where you don't reorder the probabilities. I expect the theorem is true, and that nobody here knows any references to anybody looking at anything similar, and this is why you haven't gotten any answers. – Peter Shor Sep 23 '11 at 12:37 • Thanks for the comment Peter. Glad to know somebody's reading. I'll post something if I ever figure out the answer. – Martin Leslie Sep 23 '11 at 14:15 Unfortunately, I also don't have an exact answer. After an initially wrong statement in my lecture notes, currently I do not provide a good bound on the Fano code (version 3.1 of lecture notes). I do, however, have a proof that shows that the Fano codes has an expected average codeword length of less than H(U)/log(2) + 1 - 2p_{min}. Unfortunately, I only have the proof for the case of a binary code, not a general D-ary code. This is why I haven't included this yet into the lecture notes. But I'm still working on it and hope that I will eventually be able to fix it in my lecture notes. Note, however, that my result does NOT compare Shannon codes with Fano codes. 
It only gives a general upper bound on the Fano codes. I think it is very difficult to compare the two codes directly. In particular, I do not understand the proof given above and I have my doubts that it can be made rigorous, particularly not for D>2.

Nice counterexample. A short "proof" that Shannon-Fano coding is always at least as good as Shannon coding over the long term, even though it may be worse for a few specific letters:

1. Shannon coding always sets the length $ls_i$ of each codeword to a function of how many times $f_i$ it occurs in some text of length $LN$: $ls_i=\lceil -\log (f_i/LN)\rceil$.
2. two-symbol Shannon-Fano coding and Huffman coding: always sets the codeword for one symbol to 0, and the other codeword to 1, which is optimal -- therefore it is always better than Shannon coding in this case (or equal, in the case where both probabilities are 1/2).
3. multi-symbol Shannon-Fano coding and Huffman coding: case (a): sometimes there exists some letter $i$ assigned a Fano codeword with a length $lf_i$ 1 bit longer (worse compression) than $ls_i$.
4. multi-symbol Shannon-Fano coding and Huffman coding: case (b): sometimes there exists some letter $j$ assigned a Fano codeword with a length $lf_j$ shorter (better compression) than $ls_j$.
5. Whenever case (a) occurs, case (b) also occurs at least as many times.
6. Each letter (if any) $i$ that falls under case (a) can be paired up with some other letter $j$ that not only falls under case (b), but also letter $j$ is more frequent than letter $i$.
7. Therefore, whenever case (b) occurs, the total number of bits needed to store all the letters $i$ and all the letters $j$ with Shannon-Fano coding is no worse than with Shannon coding: $f_i lf_i + f_j lf_j \leq f_i ls_i + f_j ls_j$.
8. Therefore Shannon-Fano coding is always at least as good as Shannon coding.

Alas, there's a lot of hand-waving in this "proof".
I suspect there might be a better proof in the book Yaglom and Yaglom: "Probability and information". p.s.: You might also be interested in yet another algorithm for generating prefix codes, "Polar coding" developed by Andrew Polar. • Thanks for attempting an answer David. Unfortunately I don't really understand what you are saying here :( In points 3 and 4 are you saying that your claims apply to both Shannon-Fano and Huffman coding? Or are you somehow proving something about Shannon-Fano compared to Huffman? I just don't really see reasons for any of points 3-6. – Martin Leslie Sep 28 '11 at 6:28 • I had a look at the Yaglom and Yaglom book and I don't think it had anything to add on this topic: like most books it talks about Shannon-Fano coding but all its proofs are using Shannon coding. – Martin Leslie Sep 28 '11 at 6:29 • Thanks for the polar codes reference. Do you know whether this has anything to do with polar codes as used in channel coding? As far as I know those polar codes are so named because of channel polarisation, not after someone called Polar. The Korada and Urbanke paper linked from the link you gave seems to be about those kind of polar codes, applied to lossy compression. – Martin Leslie Sep 28 '11 at 6:33 • Yes, I'm saying 3 and 4 are true for Shannon Fano coding. (I'm also going on a tangent and mentioning that it is also true for Huffman coding as well, although that is irrelevant to the proof). – David Cary Sep 29 '11 at 1:56
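For anyone who wants to check the counterexample from the question mechanically, here is a small script (my own sketch — Fano ties are broken at the first minimal split, which can affect individual lengths but not the overall comparison here):

```python
import math

def shannon_lengths(probs):
    """Shannon coding: l_i = ceil(-log2 p_i)."""
    return [math.ceil(-math.log2(p)) for p in probs]

def fano_lengths(probs):
    """Shannon-Fano: recursively split the (sorted) list at the point that
    best balances total probability on each side; each level adds one bit."""
    if len(probs) <= 1:
        return [0] * len(probs)
    total = sum(probs)
    best, run = None, 0.0
    for i in range(1, len(probs)):
        run += probs[i - 1]
        diff = abs(2.0 * run - total)  # |sum(left) - sum(right)|
        if best is None or diff < best[0]:
            best = (diff, i)
    i = best[1]
    return ([1 + l for l in fano_lengths(probs[:i])] +
            [1 + l for l in fano_lengths(probs[i:])])

probs = [0.4, 0.26] + [0.02] * 17          # the counterexample distribution
avg = lambda lengths: sum(p * l for p, l in zip(probs, lengths))
ls, lf = shannon_lengths(probs), fano_lengths(probs)
```

On this distribution the script reproduces the Shannon lengths $(2,2,6,\ldots,6)$, gives the 0.26 symbol a Fano codeword of length 3 (longer than its Shannon length), and still yields a smaller Fano average length — consistent with what the question reports.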
https://compmatsci.wordpress.com/category/cahn-hilliard-equation/
## Some interesting papers in recent issues of Acta

### November 6, 2009

A Novick-Cohen et al

Using numerical computations and asymptotic analysis, we study the effects of grain grooves on grain boundary migration in nanofilms, focusing for simplicity on axisymmetric bicrystals containing an embedded cylindrical grain located at the origin. We find there is a critical initial grain radius, R*, such that if R < R*, the shrinking grain annihilates, while if R > R*, groove growth during grain shrinkage leads to film break-up. The central cross-section of the grain boundary profile is seen to be parabolic, and an ordinary differential equation which depends on the tilt angle and the groove depth is seen to govern the location of the groove root. Near the annihilation–pinch-off transition, temporary stagnation occurs; thereafter, the shrinking grain accelerates rapidly, then disappears.

Q Y Qiu et al

The phase stability of ultra-thin (0 0 1) oriented ferroelectric PbZr1–xTixO3 (PZT) epitaxial thin films as a function of the film composition, film thickness, and the misfit strain is analyzed using a non-linear Landau–Ginzburg–Devonshire thermodynamic model taking into account the electrical and mechanical boundary conditions. The theoretical formalism incorporates the role of the depolarization field as well as the possibility of the relaxation of in-plane strains via the formation of microstructural features such as misfit dislocations at the growth temperature and ferroelastic polydomain patterns below the paraelectric–ferroelectric phase transformation temperature. Film thickness–misfit strain phase diagrams are developed for PZT films with four different compositions (x = 1, 0.9, 0.8 and 0.7) as a function of the film thickness. The results show that the so-called rotational r-phase appears in a very narrow range of misfit strain and thickness of the film.
Furthermore, the in-plane and out-of-plane dielectric permittivities ε11 and ε33, as well as the out-of-plane piezoelectric coefficients d33 for the PZT thin films, are computed as a function of misfit strain, taking into account substrate-induced clamping. The model reveals that previously predicted ultrahigh piezoelectric coefficients due to misfit-strain-induced phase transitions are practically achievable only in an extremely narrow range of film thickness, composition and misfit strain parameter space. We also show that the dielectric and piezoelectric properties of epitaxial ferroelectric films can be tailored through strain engineering and microstructural optimization. E A Lazar et al We describe a method for evolving two-dimensional polycrystalline microstructures via mean curvature flow that satisfies the von Neumann–Mullins relation with an absolute error O(Δt2). This is a significant improvement over a different method currently used that has an absolute error O(Δt). We describe the implementation of this method and show that while both approaches lead to indistinguishable evolution when the spatial discretization is very fine, the differences can be substantial when the discretization is left unrefined. We demonstrate that this new front-tracking approach can be pushed to the limit in which the only mesh nodes are those coincident with triple junctions. This reduces the method to a vertex model that is consistent with the exact kinetic law for grain growth. We briefly discuss an extension of the method to higher spatial dimensions. R Besson The aim of this work is to give the independent-point-defect thermodynamics of ordered compounds a sufficiently general flavour, adapted to and working for multicomponent alloys. 
Generalizing previous approaches, we first show that an appropriate description for a crystal with point defects allows treatment of the practically important pressure and defect volume parameters in the grand canonical framework, the equivalence of which is explicited with the closer to experiments isothermal–isobaric conditions. Since industrial applications often involve multialloyed compounds, we then derive an operational tool for atomic-scale investigations of long-range order alloys with complex crystallographies and multiple additions. J Gruber et al A critical event model for the evolution of number- and area-weighted misorientation distribution functions (MDFs) during grain growth is proposed. Predictions from the model are compared to number- and area-weighted MDFs measured in Monte Carlo simulations with anisotropic interfacial properties and several initial orientation distributions, as well as a dense polycrystalline magnesia sample. The steady-state equation of our model appears to be a good fit to all data. The relation between the grain boundary energy and the normalized average boundary area is discussed in the context of triple junction dynamics. A L Genau and P W Voorhees Spatial correlations of interfacial curvature are compared for symmetric and asymmetric two-phase mixtures produced following spinodal decomposition as given by a numerical solution to the Cahn–Hilliard equation in three dimensions. By calculating radial distribution functions of the density of interfacial area as a function of the mean interfacial curvature of these bicontinuous microstructures, it is found that long-range diffusive interactions, in combination with the morphology of the system, yield a variety of correlations and anticorrelations over a range of length scales. The asymmetric mixtures show some similarities to the symmetric mixtures, as well as other unique features. 
## Questioning Gibbs, anisotropy in phase field models and solidification under magnetic fields

### March 1, 2009

A few papers of interest — to be published in Acta and Scripta:

A Perovic et al

Our observation of the spinodal modulations in gold-50 at% nickel (Au-50Ni) transformed at high temperatures (above 600 K) contradicts non-stochastic Cahn theory with its $\approx$500 degree modulation suppression. These modulations are stochastic because simultaneous increase in amplitude and wavelength by diffusion cannot be synchronized. The present theory is framed as a 2nd order differential uphill/downhill diffusion process and has an increasing time-dependent wave number and amplitude favouring Hillert's one dimensional (1D) prior formulation within the stochastic association of wavelength and amplitude.

R S Qin and H K D H Bhadeshia

An expression is proposed for the anisotropy of interfacial energy of cubic metals, based on the symmetry of the crystal structure. The associated coefficients can be determined experimentally or assessed using computational methods. Calculations demonstrate an average relative error of <3% in comparison with the embedded-atom data for face-centred cubic metals. For body-centred-cubic metals, the errors are around 7% due to discrepancies at the {3 3 2} and {4 3 3} planes. The coefficients for the {1 0 0}, {1 1 0}, {1 1 1} and {2 1 0} planes are well behaved and can be used to simulate the consequences of interfacial anisotropy. The results have been applied in three-dimensional phase-field modelling of the evolution of crystal shapes, and the outcomes have been compared favourably with equilibrium shapes expected from Wulff's theorem.

X Li et al

Thermoelectric magnetic convection (TEMC) at the scale of both the sample (L = 3 mm) and the cell/dendrite (L = 100 μm) was numerically and experimentally examined during the directional solidification of Al–Cu alloy under an axial magnetic field (B ≤ 1 T).
Numerical results show that TEMC on the sample scale increases to a maximum when B is of the order of 0.1 T, and then decreases as B increases further. However, at the cellular/dendritic scale, TEMC continues to increase with increasing magnetic field intensity up to a field of 1 T. Experimental results show that application of the magnetic field caused changes in the macroscopic interface shape and the cellular/dendritic morphology (i.e. formation of a protruding interface, decrease in the cellular spacing, and a cellular–dendritic transition). Changes in the macroscopic interface shape and the cellular/dendritic morphology under the magnetic field are in good agreement with the computed velocities of TEMC at the scales of the macroscopic interface and cell/dendrite, respectively. This means that changes in the interface shape and the cellular morphology under a lower magnetic field should be attributed respectively to TEMC on the sample scale and the cell/dendrite scale. Further, by investigating the effect of TEMC on the cellular morphology, it has been proved experimentally that the convection will reduce the cellular spacing and cause a cellular–dendritic transition. ## Improved phase field microelasticity theory ### February 12, 2009 An improvement on the three-dimensional phase field microelasticity theory for elastically and structurally inhomogeneous solids Y Shen et al The three-dimensional phase field microelasticity theory for elastically and structurally inhomogeneous solids is improved with a simple and efficient damped iterative method. This method can be used to obtain the effective stress-free strain distribution that fully determines the stress and strain fields in the elastically and structurally inhomogeneous solids, or directly obtain the strain field from the equilibrium equation. Got to implement this some time! 
## Meshless method for phase field equations ### March 20, 2008 Title: Solving phase field equations using a meshless method Authors: J X Zhou and M E Li Abstract: The phase field equation is solved by using a meshless reproducing kernel particle method (RKPM) for the very first time. The 1D phase field equation is solved using different grid sizes and various time steps at a given grid size. The method can give accurate solutions across the interface, and allows a larger time step than explicit finite-difference method. The 2D phase field equation is computed by the present method and a classic shrinking of a circle is simulated. This shows the powerfulness and the potential of the method to treat more complicated problems. ## Moving mesh spectral method for phase field simulations ### June 12, 2007 Title: Spectral implementation of an adaptive moving mesh method for phase-field equations Authors: W M Feng, P Yu, S Y Hu, Z K Liu, Q Du and L-Q Chen Abstract: Phase-field simulations have been extensively applied to modeling microstructure evolution during various materials processes. However, large-scale simulations of three-dimensional (3D) microstructures are still computationally expensive. Among recent efforts to develop advanced numerical algorithms, the semi-implicit Fourier spectral method is found to be particularly efficient for systems involving long-range interactions as it is able to utilize the fast Fourier transforms (FFT) on uniform grids. In this paper, we report our recent progress in making grid points spatially adaptive in the physical domain via a moving mesh strategy, while maintaining a uniform grid in the computational domain for the spectral implementation. This approach not only provides more accurate treatment at the interfaces requiring higher resolution, but also retains the numerical efficiency of the semi-implicit Fourier spectral method. 
Numerical examples using the new adaptive moving mesh semi-implicit Fourier spectral method are presented for both two and three space dimensional microstructure simulations, and they are compared with those obtained by other methods. By maintaining a similar accuracy, the proposed method is shown to be far more efficient than the existing methods for microstructures with small ratios of interfacial widths to the domain size. ## Finite difference schemes for Cahn-Hilliard equations ### June 12, 2007 Title: Numerical study of the Cahn-Hilliard equation in one, two and three dimensions Authors: E V L de Mello and Otton Teixeira da Silveira Filho Abstract: The Cahn–Hilliard (CH) equation is related with a number of interesting physical phenomena like the spinodal decomposition, phase separation and phase ordering dynamics. On the other hand this equation is very stiff and the difficulty to solve it numerically increases with the dimensionality and therefore, there are several published numerical studies in one dimension (1D), dealing with different approaches, and much fewer in two dimensions (2D). In three dimensions (3D) there are very few publications, usually concentrate in some specific result without the details of the used numerical scheme. We present here a stable and fast conservative finite difference scheme to solve the CH with two improvements: a splitting potential into an implicit and explicit in time part and the use of free boundary conditions. We show that gradient stability is achieved in one, two and three dimensions with large time marching steps than normal methods. 
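To make the discretization concrete, here is a toy explicit finite-difference step for the 1D Cahn–Hilliard equation $c_t = M\nabla^2(c^3 - c - \epsilon^2\nabla^2 c)$ with periodic boundaries. This is a bare sketch with invented parameters — deliberately simpler than the semi-implicit, spectral and Crank–Nicolson schemes of the papers above, and explicit time stepping carries the severe dt ∝ dx⁴ stiffness restriction those schemes are designed to avoid:

```python
import math

def laplacian(u, dx):
    """Second-order periodic finite-difference Laplacian in 1D."""
    n = len(u)
    return [(u[(i - 1) % n] - 2.0 * u[i] + u[(i + 1) % n]) / dx**2
            for i in range(n)]

def cahn_hilliard_step(c, dt, dx, M=1.0, eps2=1.0):
    """One explicit Euler step of c_t = M * Lap(c^3 - c - eps2 * Lap c)."""
    lap_c = laplacian(c, dx)
    # chemical potential mu = f'(c) - eps2 * Lap c, with f = (c^2 - 1)^2 / 4
    mu = [ci**3 - ci - eps2 * li for ci, li in zip(c, lap_c)]
    lap_mu = laplacian(mu, dx)
    return [ci + dt * M * li for ci, li in zip(c, lap_mu)]

# small sinusoidal perturbation about the unstable uniform state c = 0
n, dx, dt = 32, 1.0, 0.05
c = [0.01 * math.cos(2.0 * math.pi * i / n) for i in range(n)]
for _ in range(200):
    c = cahn_hilliard_step(c, dt, dx)
# the conservative form keeps the total "mass" sum(c) constant to rounding error
```

The conservative update c ← c + dt·M·Lap(μ) preserves the discrete mass exactly under periodic boundaries — the property the Choo–Chung schemes below establish for their Crank–Nicolson discretizations.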
## Finite difference schemes for Cahn-Hilliard equation ### June 2, 2007 Title: Conservative nonlinear difference scheme for the Cahn-Hilliard equation (Parts I and II) Authors: S M Choo and S K Chung (Part I); S M Choo, S K Chung and K I Kim(Part II) Abstract: Part I: Numerical solutions for the Cahn-Hilliard equation is considered using the Crank-Nicolson type finite difference method. Existence of the solution for the difference scheme has been shown by Brouwer fixed-point theorem. Stability, convergence and error analysis of the scheme are shown. We also show that the scheme preserves the discrete mass, even though the linearized scheme in [1] is conditionally stable and does not preserve the mass. Part II: A nonlinear conservative difference scheme is considered for the two-dimensional Cahn-Hilliard equation. Existence of the solution for the finite difference scheme has been shown and the corresponding stability, convergence, and error estimates are discussed. We also show that the scheme preserves the discrete total mass computationally as well as analytically.
https://www.gamedev.net/forums/topic/435549-solved-datagridviewsort-is-exploding-on-me/
# [.net] [solved] DataGridView.Sort is exploding on me....

## Recommended Posts

##### Share on other sites

This doesn't happen because you need to add an item that uses IComparable? (not sure if Int32 has this by default)

EDIT - yes it does, ignore that.

EDIT 2 - if you could post some small repro code, that might help to see exactly what you're doing.

[Edited by - Niksan2 on February 14, 2007 10:28:40 AM]

##### Share on other sites

Sure. I will do it as soon as I get home (I am at school right now).

##### Share on other sites

Ok, here is the code that adds a new row. It basically calls a Dialog window that I have set up that gets some basic values, then figures out what bitmap should be displayed according to the priority.

```cs
private void CreateNewTask()
{
    // show the new task window
    NewTaskDialog newTaskDialog = new NewTaskDialog();
    DialogResult result = newTaskDialog.ShowDialog();
    if (result == DialogResult.OK)
    {
        // pick out the new task details and add it to the list, then refresh the
        // DataGridView as well as update the autosave details
        //string newTask = newTaskDialog.Task;
        Bitmap newBitmap = NoPriorityBitmap;
        if (newTaskDialog.Priority == 0) newBitmap = NoPriorityBitmap;
        if (newTaskDialog.Priority == 1) newBitmap = LowPriorityBitmap;
        if (newTaskDialog.Priority == 2) newBitmap = MediumPriorityBitmap;
        if (newTaskDialog.Priority == 3) newBitmap = HighPriorityBitmap;

        dgvDataGridView.Rows.Insert(dgvDataGridView.Rows.Count,
            NextTaskId,
            newTaskDialog.Priority.ToString(),
            newBitmap,
            newTaskDialog.Completed,
            newTaskDialog.Task,
            newTaskDialog.Details,
            newTaskDialog.DueDateNecessary,
            newTaskDialog.DueDate,
            false);

        NextTaskId++;
        AutosaveNecessary = true;
        SortTodoList(currentSortMode);
        UpdateStatusBar();
    }
}
```

Similarly, here is the code that modifies an existing row.
It's a bit more complicated (but still shouldn't be hard to understand, especially with all the comments):

```csharp
// View/Modify a task
private void ViewModifyTask()
{
    // grab the id of the task being viewed - pass it into the other window so when
    // it returns we know which task was being edited.
    // loop through the tasks in the datagrid to find the row with the matching id,
    // and update it (first find the row index using a foreach loop, then modify
    // (or delete) the contents of the row with the same id).
    // This should allow multiple tasks to be open at the same time.
    DataGridViewSelectedRowCollection selectedRows = dgvDataGridView.SelectedRows;
    foreach (DataGridViewRow selectedRow in selectedRows)
    {
        // open up a new view window for each row that is selected
        frmViewModify modifyDialog = new frmViewModify();
        modifyDialog.Id = Int32.Parse(selectedRow.Cells[0].Value.ToString());
        modifyDialog.Priority = int.Parse(selectedRow.Cells[1].Value.ToString());
        // ignore the priority image, which would be Cells[2]
        modifyDialog.Completed = bool.Parse(selectedRow.Cells[3].Value.ToString());
        modifyDialog.Task = selectedRow.Cells[4].Value.ToString();
        modifyDialog.Details = selectedRow.Cells[5].Value.ToString();
        modifyDialog.DueDateNecessary = bool.Parse(selectedRow.Cells[6].Value.ToString());
        modifyDialog.DueDate = DateTime.Parse(selectedRow.Cells[7].Value.ToString());

        DialogResult result = modifyDialog.ShowDialog();

        // depending on the result of the dialog, do different things
        if (result == DialogResult.OK)
        {
            // do something similar to the delete - grab the id out of the
            // modify/view window and use it to find the row in the DataGridView,
            // and update it
            int taskId = modifyDialog.Id;
            int rowIndex = 0;
            foreach (DataGridViewRow row in dgvDataGridView.Rows)
            {
                if (Int32.Parse(row.Cells[0].Value.ToString()) == taskId)
                {
                    // grab the row index and break
                    rowIndex = row.Index;
                    break;
                }
            }

            // now that we have the task index, go through and update each cell of
            // that task using the values from the view/modify window
            dgvDataGridView.Rows[rowIndex].Cells[1].Value = modifyDialog.Priority.ToString();
            // set the right image
            if (modifyDialog.Priority == 0) dgvDataGridView.Rows[rowIndex].Cells[2].Value = NoPriorityBitmap;
            if (modifyDialog.Priority == 1) dgvDataGridView.Rows[rowIndex].Cells[2].Value = LowPriorityBitmap;
            if (modifyDialog.Priority == 2) dgvDataGridView.Rows[rowIndex].Cells[2].Value = MediumPriorityBitmap;
            if (modifyDialog.Priority == 3) dgvDataGridView.Rows[rowIndex].Cells[2].Value = HighPriorityBitmap;
            dgvDataGridView.Rows[rowIndex].Cells[3].Value = modifyDialog.Completed;
            dgvDataGridView.Rows[rowIndex].Cells[4].Value = modifyDialog.Task;
            dgvDataGridView.Rows[rowIndex].Cells[5].Value = modifyDialog.Details;
            dgvDataGridView.Rows[rowIndex].Cells[6].Value = modifyDialog.DueDateNecessary;
            dgvDataGridView.Rows[rowIndex].Cells[7].Value = modifyDialog.DueDate;
            dgvDataGridView.Refresh();
            // don't touch the OverdueNotified column
            SortTodoList(currentSortMode);
            AutosaveNecessary = true;
        }

        // delete the task, if necessary
        // NOTE - this is kinda funky, but it works.
        // Check the Delete button for the DialogResult.
        if (result == DialogResult.Abort)
        {
            // delete the selected Task - first, figure out the id of the task
            // being deleted, then find the matching row id in the datagrid and
            // delete that one
            int taskId = modifyDialog.Id;
            int rowIndex = 0;
            foreach (DataGridViewRow row in dgvDataGridView.Rows)
            {
                if (int.Parse(row.Cells[0].Value.ToString()) == taskId)
                {
                    // grab the row index and break
                    rowIndex = row.Index;
                    break;
                }
            }
            // delete the row with that index
            dgvDataGridView.Rows.RemoveAt(rowIndex);
            SortTodoList(currentSortMode);
            AutosaveNecessary = true;
        }

        // if the dialog result was "Cancel", don't do anything
        UpdateStatusBar();
    }
}
```

ViewModifyTask is called when a user right-clicks to bring up a context menu and selects the View/Modify option (hence the name ViewModifyTask). So far, everything seems to be loading/saving properly. I did a little more testing last night, and it seems to explode only when I try to sort by the priority column. Any other column (including the id column, which is also an Int32) doesn't cause problems. It is probably something simple that I am missing.

If you notice any bad practices in there, please let me know. I have only ever taken one course in C#, and they really didn't teach us a lot (or at least I don't remember much from it).

Holy crap! I just found it! I had:

```csharp
dgvDataGridView.Rows[rowIndex].Cells[1].Value = modifyDialog.Priority.ToString();
```

...when I needed...

```csharp
dgvDataGridView.Rows[rowIndex].Cells[1].Value = Int32.Parse(modifyDialog.Priority.ToString());
```

Thanks for the help (or the willingness to help)! [smile]

:D Glad you found it. I always find that the "post code and find the bug myself within minutes" thing happens all the time, but only once you post it to a forum or newsgroup.
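The fix makes sense beyond WinForms: the priority cell held a string while the column was compared as a number, so the sort had to compare incompatible types. As a language-neutral analogy (this is Python, not the DataGridView code above, and only illustrates the failure mode):

```python
# A homogeneous column sorts fine.
cells = [3, 1, 2]
assert sorted(cells) == [1, 2, 3]

# Sneak one value in as a string -- the comparison blows up,
# much like the grid sort did before the Int32.Parse fix.
cells = [3, "1", 2]
try:
    sorted(cells)
    raised = False
except TypeError:
    raised = True
assert raised
```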
https://physics.stackexchange.com/questions/210746/differentiate-between-vrms-and-vavg-dc-of-an-ac-signal
# Differentiate between V(rms) and V(avg/dc) of an AC signal [duplicate]

Please differentiate between V(rms) and V(avg/dc) of an AC signal. Why do we use rms, and why is Vrms called the effective value rather than Vp or V(avg)? And why do we use Vrms to calculate average power?

• Why do we use Vrms to calculate average power? Because it gives the right answer! I'm sorry, I don't know the mathematical explanation of why it gives the right answer, but giving the right answer is the reason why we use it. – Solomon Slow Oct 5 '15 at 12:07
• Possible duplicate of Justification of root mean square – Kyle Kanos Oct 5 '15 at 12:43

## 1 Answer

The average voltage of a pure AC signal is 0 V. That's not a lot of use.

Vrms is widely used because it ensures that an AC voltage produces the same heating effect in a resistor as the equivalent DC voltage. This also means it will produce the same light from an incandescent lamp.
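The "same heating effect" claim can be checked numerically. A minimal sketch (the 10 V peak and 100 Ω resistor are arbitrary illustrative values, not from the question):

```python
import math

N = 100_000                 # samples over one full period of the sine
Vp = 10.0                   # assumed peak voltage, for illustration only
samples = [Vp * math.sin(2 * math.pi * k / N) for k in range(N)]

v_avg = sum(samples) / N
v_rms = math.sqrt(sum(v * v for v in samples) / N)

assert abs(v_avg) < 1e-9                        # average of a pure AC signal is 0 V
assert abs(v_rms - Vp / math.sqrt(2)) < 1e-9    # Vrms = Vp / sqrt(2) for a sine

# Equal heating: the mean dissipated power in a resistor R matches the
# DC formula evaluated with Vrms.
R = 100.0
p_avg = sum(v * v / R for v in samples) / N
assert abs(p_avg - v_rms ** 2 / R) < 1e-9
```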
https://byjus.com/question-answer/a-quadrilateral-abcd-is-drawn-to-circumscribe-a-circle-see-fig-prove-that-ab-cd-ad-bc/
Question

# A quadrilateral $ABCD$ is drawn to circumscribe a circle (see Fig.). Prove that $AB+CD=AD+BC$.

Solution

## Circle properties

Let $P, Q, R, S$ be the points of contact of the tangents $AB, BC, CD, DA$ respectively, as in the figure.

We know that the lengths of the two tangents drawn to a circle from one external point are equal. So

$$AP = AS \quad \ldots (1)$$
$$BP = BQ \quad \ldots (2)$$
$$CR = CQ \quad \ldots (3)$$
$$DR = DS \quad \ldots (4)$$

By adding $(1), (2), (3)$ and $(4)$, we get

$$AP + BP + CR + DR = AS + BQ + CQ + DS$$
$$\Rightarrow (AP + BP) + (CR + DR) = (AS + DS) + (BQ + CQ) \quad \text{(on rearranging)}$$
$$\Rightarrow AB + CD = AD + BC$$

Hence, $AB + CD = AD + BC$, as required.
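The whole proof is bookkeeping with the four equal tangent-length pairs, which a few lines of arithmetic confirm. A sketch (the four lengths below are arbitrary placeholder values, not read off the figure):

```python
# Equal tangent lengths from each vertex: AP = AS, BP = BQ, CR = CQ, DR = DS.
AP, BP, CR, DR = 3.0, 5.0, 2.0, 7.0   # arbitrary positive lengths
AS, BQ, CQ, DS = AP, BP, CR, DR

AB = AP + BP   # side AB is split by the contact point P
CD = CR + DR
AD = AS + DS
BC = BQ + CQ

assert AB + CD == AD + BC   # the identity being proved
```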
https://www.khanacademy.org/math/cc-third-grade-math/cc-third-grade-measurement/cc-third-grade-picture-graphs/v/interpreting-picture-graphs-paint-math-3rd-grade-khan-academy
- [Voiceover] Jacob charges $9.00 an hour to paint. The graph below shows the number of hours he spent painting different rooms of one house. How much did Jacob charge for painting the living room? So here's the graph. This is a picture graph, or pictograph, and it shows us how much time Jacob spent painting different rooms of a house. One super important thing is this little key right here that tells us each of these paint buckets is three hours of time. We're asked about the living room. On the graph, we can find where it says living room and see how many buckets. Remember, each of these buckets equals three hours of time. He spent three hours, plus another three hours, so he spent a total of six hours painting the living room. But the question asks us how much did Jacob charge. Jacob charges $9.00 for every hour, and he worked for six hours, so he charged $9.00 six times: $9.00 for each of the six hours he worked, which is a total of $54.00. Jacob charged a total of $54.00 to paint the living room.
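The arithmetic in the transcript, spelled out (values as stated: 2 paint-bucket symbols in the living-room row, 3 hours per symbol from the key, $9 per hour):

```python
hours_per_bucket = 3   # from the pictograph key
buckets = 2            # living-room row of the graph
rate = 9               # dollars per hour

hours = buckets * hours_per_bucket
charge = rate * hours
print(hours, charge)   # 6 54
```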
https://www.rdocumentation.org/packages/ggplot2/versions/2.2.1/topics/scale_x_discrete
# scale_x_discrete

##### Position scales for discrete data

You can use continuous positions even with a discrete position scale - this allows you (e.g.) to place labels between bars in a bar chart. Continuous positions are numeric values starting at one for the first level, and increasing by one for each level (i.e. the labels are placed at integer positions). This is what allows jittering to work.

##### Usage

```r
scale_x_discrete(..., expand = waiver(), position = "bottom")

scale_y_discrete(..., expand = waiver(), position = "left")
```

##### Arguments

...

common discrete scale parameters: name, breaks, labels, na.value, limits and guide. See discrete_scale for more details

expand

a numeric vector of length two giving multiplicative and additive expansion constants. These constants ensure that the data is placed some distance away from the axes.

position

The position of the axis: left or right for y axes, top or bottom for x axes.

##### See Also

Other position scales: scale_x_continuous, scale_x_date

##### Examples

```r
library(ggplot2)
ggplot(diamonds, aes(cut)) + geom_bar()
# The discrete position scale is added automatically whenever you
# have a discrete position.
```
```r
(d <- ggplot(subset(diamonds, carat > 1), aes(cut, clarity)) +
   geom_jitter())

d + scale_x_discrete("Cut")
d + scale_x_discrete("Cut",
  labels = c("Fair" = "F", "Good" = "G", "Very Good" = "VG",
             "Perfect" = "P", "Ideal" = "I"))

# Use limits to adjust which levels (and in what order) are displayed
d + scale_x_discrete(limits = c("Fair", "Ideal"))

# You can also use the shorthand functions xlim and ylim
d + xlim("Fair", "Ideal", "Good")
d + ylim("I1", "IF")

# See ?reorder to reorder based on the values of another variable
ggplot(mpg, aes(manufacturer, cty)) + geom_point()
ggplot(mpg, aes(reorder(manufacturer, cty), cty)) + geom_point()
ggplot(mpg, aes(reorder(manufacturer, displ), cty)) + geom_point()

# Use abbreviate as a formatter to reduce long names
ggplot(mpg, aes(reorder(manufacturer, displ), cty)) + geom_point() +
  scale_x_discrete(labels = abbreviate)
```
https://socratic.org/questions/how-do-you-find-the-slope-of-a-line-that-is-perpendicular-to-2x-y-4
# How do you find the slope of a line that is perpendicular to 2x+y=4?

Nov 30, 2016

The slope of the line is $\frac{1}{2}$.

#### Explanation:

The product of the slopes of two perpendicular lines is $-1$.

The slope of the line $2x + y = 4$, i.e. $y = -2x + 4$, is $-2$ (compare with $y = mx + c$).

Let $m_2$ be the slope of the perpendicular line. Then

$m_2 \cdot (-2) = -1 \therefore m_2 = \frac{1}{2}$

So the slope of the line is $\frac{1}{2}$. [Ans]

Nov 30, 2016

The slope of lines perpendicular to the given line is $\frac{1}{2}$.

#### Explanation:

$2x + y = 4$
$y = -2x + 4$

A straight line in slope $(m)$ and intercept $(c)$ form has the equation $y = mx + c$. Hence in this example $m = -2$.

Lines perpendicular to this one have a slope $m_1$ that satisfies $m \cdot m_1 = -1$:

$\therefore m_1 = \frac{-1}{-2} = \frac{1}{2}$
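The relation $m \cdot m_1 = -1$ used in both explanations can be sketched in a few lines; exact rational arithmetic avoids any floating-point doubt:

```python
from fractions import Fraction

m = Fraction(-2)             # slope of 2x + y = 4, i.e. y = -2x + 4
m_perp = Fraction(-1) / m    # product of perpendicular slopes is -1

assert m_perp == Fraction(1, 2)
assert m * m_perp == -1
```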
http://icpc.njust.edu.cn/Problem/Hdu/4352/
# XHXJ's LIS

Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 32768/32768 K (Java/Others)

## Description

#define xhxj (Xin Hang senior sister (学姐))

If you do not know xhxj, then carefully reading the entire description is very important.

As the strongest fighting force in UESTC, xhxj grew up in Jintang, a border town of Chengdu. Like many god cattles, xhxj has a legendary life: in April 2010, having not yet begun to learn algorithms, xhxj won the second prize in the university contest. And in that fall, xhxj got one gold medal and one silver medal at the regional contests. In the next year's summer, xhxj was invited to Beijing to attend the astar onsite. A few months later, xhxj got two gold medals and was also qualified for the World Finals. However, xhxj was defeated by zhymaoiing in the competition that determined who would go to the World Finals (there is only one team for every university to send to the World Finals). Now, xhxj is much stronger than ever, and she will go to the dreaming country to compete in the TCO final.

As you see, xhxj always keeps a short hair (reasons unknown), so she looks like a boy (I will not tell you she is actually a lovely girl), wearing a yellow T-shirt. When she is not talking, her round face feels very lovely, attracting others to touch her face gently. Unlike God Luo's, another UESTC god cattle who has a cool and noble charm, xhxj is quite approachable, lively, clever. On the other hand, xhxj is very sensitive to beautiful properties; "this problem has a very good property", she always says after ACing a very hard problem. She often helps in finding solutions, even though she is not good at the problems of that type.

Xhxj loves many games such as Dota, ocg, mahjong, StarCraft 2, Diablo 3, etc. If you can beat her in any game above, you will get her admiration and become a god cattle.
She is very concerned with her younger schoolfellows; if she saw someone on a DOTA platform, she would say: "Why do you not go improve your programming skill?" When she receives sincere compliments from others, she says modestly: "Please don't flatter me. (Please don't black.)" As she will graduate after no more than one year, xhxj also wants to fall in love. However, the man in her dreams has not yet appeared, so she now prefers girls.

Another hobby of xhxj is to yy (speculate about) some magical problems to discover their special properties. For example, when she sees a number, she thinks about whether the digits of the number are strictly increasing. If you consider the number as a string and can get a longest strictly increasing subsequence whose length is equal to k, the power of this number is k. It is very simple to determine a single number's power, but is it also easy to solve this problem for the numbers within an interval?

xhxj is a little tired, and she wants a god cattle to help her solve this problem. The problem is: determine how many numbers in [L, R] have the power value k, in O(1) time per query. For the first one to solve this problem, xhxj will upgrade favorability rate by 20.

## Input

First an integer T (T <= 10000); then T lines follow, each line with three positive integers L, R, K (0 < L <= R < 2^63 - 1 and 1 <= K <= 10).

## Output

For each query, print "Case #t: ans" in a line, in which t is the number of the test case starting from 1 and ans is the answer.

## Sample Input

1
123 321 2

## Sample Output

Case #1: 139

## Author

zhuyuanchen520

## Source

2012 Multi-University Training Contest 6
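The intended solution is a digit DP over the LIS state, but the definition of "power" itself is easy to brute-force on small ranges, which is handy for checking the sample. A sketch (only feasible for small L and R; the real constraints up to 2^63 need the digit DP):

```python
def power(n):
    # Length of the longest strictly increasing subsequence of the digit string,
    # via the classic O(len^2) DP.
    s = str(n)
    best = [1] * len(s)
    for i in range(len(s)):
        for j in range(i):
            if s[j] < s[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best)

def count_in_range(lo, hi, k):
    return sum(1 for n in range(lo, hi + 1) if power(n) == k)

print(count_in_range(123, 321, 2))  # 139, matching the sample output
```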
https://tex.stackexchange.com/questions/517199/biblatex-continuous-reference-numbering-without-duplication
# biblatex: continuous reference numbering without duplication

I am using the refsegment feature of biblatex to assemble multiple bibliographies within the same document. I would like to have continuous reference numbering across the refsegments (i.e., if the last new reference of refsegment N is assigned the number i, then the first new reference of refsegment N+1 will be assigned the number i+1). I would also like to avoid duplication of already-cited references in the sub-bibliographies. That is to say, while a given reference may be cited in more than one refsegment, it should be included only in the sub-bibliography for the refsegment in which it first occurs. Here is a simple example:

refsegment #1: The quick brown fox [1] jumps over the lazy dog [2].

Bibliography:
[1] Fox reference.
[2] Dog reference.

refsegment #2: The lazy dog [2] was busy doing nothing [3].

Bibliography:
[3] Reference on nothing.

Note that reference #2 was not included in the second bibliography, even though it was cited in the second refsegment, because it had already been included in the first bibliography.

This post proposed a solution to this problem, namely the onlynew bibliographic check, which suppresses repeated bibliographic items. The proposed solution keeps a running tally of bibliographic items (stored in the control sequence \blx@entrycount) and records the next reference number at the start of each refsegment (in the control sequence \blx@entrycount@\the\c@refsegment). Then, if the number assigned to a given bibliographic item (\thefield{labelnumber}) is less than (\ifnumless{}) the latter quantity, it is skipped in the bibliography (since it has already been referenced).

However, upon compilation (TeX Live 2019), all citations are assigned the number zero, the bibliographies are empty, and the following warning is emitted:

    LaTeX Warning: Empty bibliography on input line <n>

The below example reproduces the problem. What changes are needed to fix the code in the MWE?

```latex
\documentclass{article}
\usepackage[defernumbers=true]{biblatex}
\usepackage{filecontents}

\makeatletter
% Overall entry counter
\csnumgdef{blx@entrycount}{0}
\AtEveryBibitem{%
  \csnumgdef{blx@entrycount}{\csuse{blx@entrycount}+1}}
% Continued from this label number
\appto{\newrefsegment}{%
  \csnumgdef{blx@entrycount@\the\c@refsegment}{\csuse{blx@entrycount}+1}}
% Skip entries with label numbers less than the continued number
\defbibcheck{onlynew}{%
  \ifnumless{\thefield{labelnumber}}{\csuse{blx@entrycount@\the\c@refsegment}}
    {\skipentry}
    {}}
\makeatother

\begin{filecontents}{\jobname.bib}
@Book{companion,
  author = {Goossens, Michel and Mittelbach, Frank and Samarin, Alexander},
  title = {The LaTeX Companion},
  edition = {1},
  date = {1994}}
@Article{gillies,
  author = {Gillies, Alexander},
  title = {Herder and the Preparation of Goethe's Idea of World Literature},
  journaltitle = {Publications of the English Goethe Society},
  volume = {9},
  date = {1933},
  pages = {46--67}}
@Article{bertram,
  author = {Bertram, Aaron and Wentworth, Richard},
  title = {Gromov invariants for holomorphic maps on Riemann surfaces},
  journaltitle = {J.~Amer. Math. Soc.},
  volume = {9},
  number = {2},
  date = {1996},
  pages = {529--571}}
@Book{poetics,
  author = {Aristotle},
  editor = {Lucas, D. W.},
  title = {Poetics},
  series = {Clarendon Aristotle},
  publisher = {Clarendon Press},
  location = {Oxford},
  date = {1968}}
@Book{rhetoric,
  author = {Aristotle},
  editor = {Cope, Edward Meredith},
  commentator = {Cope, Edward Meredith},
  title = {The Rhetoric of Aristotle with a commentary by the late Edward Meredith Cope},
  volumes = {3},
  publisher = {Cambridge University Press},
  date = {1877}}
\end{filecontents}

\begin{document}
\newrefsegment
refsegment \therefsegment: \cite{companion,rhetoric}
\printbibliography[segment=\therefsegment,check=onlynew]

\newrefsegment
refsegment \therefsegment: \cite{companion,bertram,poetics}
\printbibliography[segment=\therefsegment,check=onlynew]

\newrefsegment
refsegment \therefsegment: \cite{companion,bertram,gillies,rhetoric}
\printbibliography[segment=\therefsegment,check=onlynew]
\end{document}
```

Compilation: `pdflatex && biber && pdflatex && pdflatex`

• Basically this bib system means that if in segment N there is a cite, the reader has to search for the bib entry in N reference sections starting probably on N different pages - sounds like a very effective way to discourage a reader to ever check a source. – Ulrike Fischer Nov 25 '19 at 7:48
• @UlrikeFischer: I agree. This is the requirement I face, however. – user001 Nov 25 '19 at 7:52
• Well my point of view is that if someone requires such nonsense they should pay for the implementation of their whims instead of asking the volunteers to do it for free. – Ulrike Fischer Nov 25 '19 at 8:17
• @UlrikeFischer: I think you misunderstand. The asker and requirer are separate; for all the requirer cares, this can be done in Microsoft Word. – user001 Nov 25 '19 at 9:10

The quoted answer is over eight years old and some internal things have changed since then. I can't be absolutely sure, but the main issue seems to be the defernumbers option. With that, all labelnumbers are initially set to 0, and non-zero numbers are only assigned once an entry was printed in a bibliography.
Unfortunately, the test `\ifnumless{\thefield{labelnumber}}{\csuse{blx@entrycount@\the\c@refsegment}}` will always be true if labelnumber is 0, so all entries are skipped in each bibliography, which means that the bibliographies stay empty and thus that blx@entrycount is never increased.

I suggest the following hopefully more stable solution. For each entry it records the first refsegment in which it was cited. The filter onlynew then only needs to check if this refsegment has a number smaller than the current refsegment.

```latex
\documentclass{article}
\usepackage[defernumbers=true]{biblatex}

\makeatletter
\AtEveryCitekey{%
  \ifcsundef{blx@entry@refsegment@\the\c@refsection @\thefield{entrykey}}
    {\csnumgdef{blx@entry@refsegment@\the\c@refsection @\thefield{entrykey}}{\the\c@refsegment}}
    {}}

\defbibcheck{onlynew}{%
  \ifnumless{0\csuse{blx@entry@refsegment@\the\c@refsection @\thefield{entrykey}}}{\the\c@refsegment}
    {\skipentry}
    {}}
\makeatother

\begin{document}
\newrefsegment
refsegment \therefsegment: \cite{sigfridsson,worman}
\printbibliography[segment=\therefsegment,check=onlynew]

\newrefsegment
refsegment \therefsegment: \cite{sigfridsson,geer,nussbaum}
\printbibliography[segment=\therefsegment,check=onlynew]

\newrefsegment
refsegment \therefsegment: \cite{sigfridsson,geer,pines,worman}
\printbibliography[segment=\therefsegment,check=onlynew]
\end{document}
```

Here is a solution that also works for \nocite. Since there is no \AtEveryCitekey-equivalent for \nocite, we have to hook into internal commands.
```latex
\documentclass{article}
\usepackage[defernumbers=true]{biblatex}

\makeatletter
\def\blx@citation@entry#1#2{%
  \blx@bibreq{#1}%
  \ifinlist{#1}\blx@cites
    {}
    {\blx@auxwrite\@mainaux{}{\string\abx@aux@cite{#1}}}%
  \ifinlistcs{#1}{blx@segm@\the\c@refsection @\the\c@refsegment}
    {}
    {\blx@auxwrite\@mainaux{}{\string\abx@aux@segm{\the\c@refsection}%
                                                  {\the\c@refsegment}%
                                                  {\detokenize{#1}}}}%
  \ifcsundef{blx@entry@refsegment@\the\c@refsection @#1}
    {\csnumgdef{blx@entry@refsegment@\the\c@refsection @#1}{\the\c@refsegment}}
    {}%
  \blx@ifdata{#1}
    {}
    {\ifcsdef{blx@miss@\the\c@refsection}
       {\ifinlistcs{#1}{blx@miss@\the\c@refsection}
          {}
          {\blx@logreq@active{#2{#1}}}}
       {\blx@logreq@active{#2{#1}}}}}

\defbibcheck{onlynew}{%
  \ifnumless{0\csuse{blx@entry@refsegment@\the\c@refsection @\thefield{entrykey}}}{\the\c@refsegment}
    {\skipentry}
    {}}
\makeatother

\begin{document}
\newrefsegment
refsegment \therefsegment: \cite{sigfridsson,worman}
\printbibliography[segment=\therefsegment,check=onlynew]

\newrefsegment
refsegment \therefsegment: \cite{sigfridsson,geer,nussbaum}\nocite{knuth:ct:a}
\printbibliography[segment=\therefsegment,check=onlynew]

\newrefsegment
refsegment \therefsegment: \cite{sigfridsson,geer,pines,worman}\nocite{knuth:ct:a,knuth:ct:b}
\printbibliography[segment=\therefsegment,check=onlynew]
\end{document}
```

• Thanks moewe, that works well. I'm wondering if you could clarify a few minor points: (1) Is `blx@entry@refsegment@\the\c@refsection @\thefield{entrykey}` a single control sequence (with a space in the middle)? (2) Is `entrykey` the .bib file entry key (e.g., sigfridsson)? Finally, (3) does concatenation of 0 and `\csuse{blx@entry@refsegment@\the\c@refsection @\thefield{entrykey}}` make the first argument of `\ifnumless` zero if the control sequence specified by `\csuse{}` is undefined? – user001 Nov 20 '19 at 16:50
• @user001 (1) In `...cs...` macros `blx@entry@refsegment@\the\c@refsection @\thefield{entrykey}` gets expanded before forming the control sequence name, so this becomes for example `blx@entry@refsegment@0@sigfridsson` for sigfridsson in refsection 0; the space after `\c@refsection` is swallowed by the usual TeX rules (for more on `...cs...` macros see also Phelype's explanations to your recent question tex.stackexchange.com/a/517205/35864). (2) Yes, within biblatex we can access the entry key with `\thefield{entrykey}`. (3) Yes. – moewe Nov 20 '19 at 19:13
• Many thanks for clarifying. – user001 Nov 20 '19 at 19:38
• @user001 See the updated answer, please. – moewe Nov 21 '19 at 18:49
• github.com/plk/biblatex/issues/934 – moewe Nov 21 '19 at 18:49
https://www.physicsforums.com/threads/problem-understanding-group-theory-question.406601/
# Homework Help: Problem understanding Group Theory question 1. May 29, 2010 ### twotwo Hello all, my first post, hope to be a regular forum goer. Any help understanding this problem would be appreciated. 1. The problem statement, all variables and given/known data "Consider the following functions: f(x) = 1/x ; g(x) = 1/(1-x) defined on the set R\{0,1} = (-∞,0) U (0,1) U (1,∞) How many total functions can be generated by composing combinations of any number of these two functions?" 3. The attempt at a solution What i am having trouble with is the word "combination". Does it mean any combination of adding, subtracting, multiplying and dividing? Or does it mean to take one function of another (as in, g(f(g(f(g(x)))))? I assume it means the latter, but that assumption comes merely from the limited number of functions. Once again, any help would be immensely appreciated. 2. May 29, 2010 ### Dick I think it means exactly what you think it means. It says "composing". I think the group operation is intended to be composition of functions. Last edited: May 29, 2010 3. May 30, 2010 ### psholtz Yes, composing functions is what's intended.. For instance, if you take: $$f(x) = \frac{1}{x}$$ $$f(f(x)) = x$$ $$f(f(f(x))) = \frac{1}{x}$$ So there are a total of 2 functions that can be created by composing f with itself (ad infinitum). Continue mixing combinations of these two functions, and you'll get the total number of functions that can be created.
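Following psholtz's hint, the closure can also be computed mechanically. Both f(x) = 1/x and g(x) = 1/(1-x) are Möbius maps x ↦ (ax+b)/(cx+d), so composing them is just 2×2 matrix multiplication up to a scalar. A sketch (not part of the original thread):

```python
from math import gcd

def normalize(m):
    # A Mobius map (ax+b)/(cx+d) is determined by its matrix only up to a
    # scalar, so divide out the gcd and fix the sign of the first nonzero entry.
    g = 0
    for x in m:
        g = gcd(g, abs(x))
    m = tuple(x // g for x in m)
    for x in m:
        if x != 0:
            if x < 0:
                m = tuple(-y for y in m)
            break
    return m

def compose(m, n):
    # Function composition corresponds to matrix multiplication.
    a, b, c, d = m
    e, f, g, h = n
    return normalize((a * e + b * g, a * f + b * h,
                      c * e + d * g, c * f + d * h))

f = normalize((0, 1, 1, 0))    # f(x) = 1/x
g = normalize((0, 1, -1, 1))   # g(x) = 1/(1 - x)

closure = {f, g}
while True:
    new = {compose(m, n) for m in closure for n in closure} - closure
    if not new:
        break
    closure |= new

print(len(closure))  # 6 -- the generated group is isomorphic to S_3
```

The six maps are x, 1/x, 1-x, 1/(1-x), (x-1)/x and x/(x-1), the so-called anharmonic group.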
http://public.azimuthproject.railsplayground.net/vanilla218_stem/discussion/680/overexploitation
# Overexploitation

Added a stub for Overexploitation. I will add some more on how this plays out in the dynamics of the food population as a function of (logistic) growth and consumption losses, which can lead to a fold.

Comment Source: I polished it slightly.
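The fold mentioned above can be seen in the simplest model of logistic growth minus a constant consumption rate h; the model form and all parameter values here are illustrative assumptions, not from the page. Equilibria solve r·x·(1 − x/K) = h, and they merge and disappear as h passes r·K/4: the fold.

```python
import math

def equilibria(r, K, h):
    """Equilibria of dx/dt = r*x*(1 - x/K) - h: logistic growth minus a
    constant consumption (harvest) rate h.

    Setting the right-hand side to zero gives (r/K)*x**2 - r*x + h = 0.
    The two roots merge and then vanish as h passes r*K/4: the fold.
    """
    disc = r * r - 4 * (r / K) * h
    if disc < 0:
        return []          # consumption too high: no equilibrium, collapse
    sq = math.sqrt(disc)
    return sorted([(r - sq) / (2 * r / K), (r + sq) / (2 * r / K)])

# Below the fold: an unstable lower and a stable upper equilibrium.
print(equilibria(r=1.0, K=4.0, h=0.75))   # [1.0, 3.0]
# At the fold h = r*K/4 = 1.0 the equilibria merge at K/2.
print(equilibria(r=1.0, K=4.0, h=1.0))    # [2.0, 2.0]
# Past the fold nothing remains: the population crashes for any start.
print(equilibria(r=1.0, K=4.0, h=1.25))   # []
```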
https://socratic.org/questions/how-do-you-write-the-original-function-as-a-piecewise-function-y-3-abs-x-1-4
# How do you write the original function as a piecewise function y = -3·abs(x+1) + 4?

Jul 6, 2018

$$f(x) = \begin{cases} 3x + 7 & x \le -1 \\ -3x + 1 & x > -1 \end{cases}$$

#### Explanation:

Given: $y = -3 \cdot |x + 1| + 4$

The absolute value function always returns a nonnegative value, but the quantity inside the absolute value can be either positive or negative. This gives two possible equations.

When $x + 1 \ge 0$, so that $|x+1| = x+1$:

$y = -3(x + 1) + 4 = -3x - 3 + 4$

$y = -3x + 1$

When $x + 1 < 0$, so that $|x+1| = -(x+1)$:

$y = -3(-1)(x + 1) + 4 = 3(x + 1) + 4$

$y = 3x + 3 + 4$

$y = 3x + 7$

The vertex of the absolute value occurs where the quantity inside the absolute value equals $0$:

$x + 1 = 0 \implies x = -1$

So the piecewise function is

$$f(x) = \begin{cases} 3x + 7 & x \le -1 \\ -3x + 1 & x > -1 \end{cases}$$
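The two branches above can be spot-checked against the original formula; this short sketch (not part of the original answer) compares the two forms on both sides of the vertex and at the vertex itself.

```python
def original(x):
    return -3 * abs(x + 1) + 4

def piecewise(x):
    # the vertex at x = -1 splits the domain
    return 3 * x + 7 if x <= -1 else -3 * x + 1

# spot-check on both sides of the vertex and at the vertex itself
for x in [-5, -1.5, -1, 0, 2.5]:
    assert original(x) == piecewise(x)
print("piecewise form matches")
```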
https://flrecruiter.org/heaven-hill-rqh/perceptron-learning-algorithm-example-cb3019
Sometimes the term "perceptrons" refers to feed-forward pattern recognition networks, but the original perceptron, described here, can solve only simple problems. The Perceptron is a linear machine learning algorithm for binary classification tasks and one of the simplest types of artificial neural networks: a classification algorithm that makes its predictions with a linear predictor function combining a set of weights with the feature vector. A more intuitive way to think about it is as a neural network with only one neuron. It is definitely not "deep" learning, but it is an important building block. It was introduced by Frank Rosenblatt in 1957, who proposed a perceptron learning rule based on the original MCP (McCulloch-Pitts) neuron.

The perceptron algorithm is an online algorithm: an iterative algorithm that takes a single paired example at each iteration and updates the weights according to a rule. Examples are presented one by one at each time step, and a weight update rule is applied; once all examples have been presented, the algorithm cycles through them again until convergence. The prediction is sgn(wᵀx); there is typically a bias term as well (wᵀx + b), but the bias may be treated as a constant feature and folded into w. The learning rate controls how much the weights change in each training iteration. A higher learning rate may increase training speed, but in more complex neural networks the algorithm may diverge if the learning rate is too large.

This is supervised learning on labeled data: for each of the examples used to train the perceptron, the output is known in advance. The perceptron algorithm focuses on binary classified data, objects that are either members of one class or another. If the classes are linearly separable, the algorithm converges to a separating hyperplane in a finite number of steps; this is the perceptron convergence theorem, which guarantees a weight vector w that correctly classifies every member of the training set. Two caveats: the number of steps can be very large, and when the data are separable there are many solutions, and which one is found depends on the starting values. A famous example of a simple non-linearly separable data set is the XOR problem (Minsky 1969), which a single perceptron cannot solve.

Historically, the perceptron initially generated a huge wave of excitement ("digital brains"; see The New Yorker, December 1958) and then contributed to the A.I. winter. The algorithm has since been covered by many machine learning libraries; Deep Learning Toolbox, for example, supports perceptrons for historical interest, and for nonlinearly separable problems you should instead use patternnet.

A classic worked example is learning the AND gate. Let the input be x = (I₁, ..., Iₙ), where each Iᵢ is 0 or 1, and let the output y be 0 or 1. A perceptron might be initialized with a learning rate η = 0.2 and a weight vector such as w = (0, 1, 0.5). In the AND-gate example from this article, setting the weights to 0.9 initially causes some errors; updating the weight values to 0.4 fixes them, the best weights are found in 2 rounds, and the learning procedure can then be terminated.
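The AND-gate training loop can be sketched in plain Python. The zero initialisation, the learning rate of 0.2, and the strict threshold at 0 are illustrative choices for this sketch, not the exact numbers from the article's worked example.

```python
# A minimal sketch of the perceptron learning rule on the AND gate.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # inputs
y = [0, 0, 0, 1]                        # AND targets

eta = 0.2          # learning rate (illustrative)
w = [0.0, 0.0]     # weights
b = 0.0            # bias (the threshold folded in as a constant feature)

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Present the examples one by one and cycle until a full pass is error-free;
# AND is linearly separable, so convergence is guaranteed.
converged = False
while not converged:
    converged = True
    for xi, target in zip(X, y):
        error = target - predict(xi)
        if error != 0:                  # misclassified: apply the update rule
            w = [wi + eta * error * xij for wi, xij in zip(w, xi)]
            b += eta * error
            converged = False

print([predict(xi) for xi in X])   # [0, 0, 0, 1]
```

The loop only exits after a full error-free pass, so the final predictions necessarily match the targets.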
http://www.physicsforums.com/showthread.php?t=366174
## Inverse of a Piecewise Function

**1. The problem statement, all variables and given/known data**

Find the inverse of

$$f(x) = \begin{cases} x & x \neq a_1, \dots, a_n \\ a_{i+1} & x = a_i, \quad i = 1, \dots, n-1 \\ a_1 & x = a_n \end{cases}$$

**3. The attempt at a solution**

I interchanged the variables x and y, but I am very confused as to how to solve for the rest. I don't understand how to find inverses if we aren't given an explicit formula. Can someone help please?

**Reply from LCKurtz**

It might help you to just list the ordered pairs for f. Then reverse them all and you should see a way to write f⁻¹ as a formula similar in form to the formula for f(x).

**Reply from SpringPhysics**

All right. Thanks for your help. Also, could I ask for help on finding the inverses of the following?

f(x) = x + [x] (floor), and f(x) = x/(1 - x²) for -1 < x < 1.

For the first function, there is no reversible equation for the floor operator, so could I state the inverse as simply {(x + [x], x) | (x, x + [x]) $$\in$$ f}? Would it be possible to state that any x in f be a.b, where a is an integer and b is any real number? Then f(x) = a.b + a for a ≥ 0 and f(x) = a.b + (a - 1) for a < 0. Then the inverse of f would be given by f⁻¹(x) = 1/2 (x + 0.b) for x ≥ 0 and f⁻¹(x) = 1/2 (x + 1.b - 0.(2b)) for x < 0.

For the second function, I interchanged the variables and obtained:

x(1 - y²) = y

0 = xy² + y - x

Using the quadratic formula, I got y = (-1 ± $$\sqrt{1+4x^2}$$)/2x, with -1 < y < 1. How do I know whether to take the positive or the negative root?

**Reply from HallsofIvy**

You shouldn't take either one. If your calculations are correct, you are saying that this function is NOT one to one and so does NOT have an inverse.

**Reply from SpringPhysics**

I am not sure which function you are referring to, but for the second function, the question asked to determine f⁻¹ (the inverse) for -1 < x < 1. Hence, the function is 1-1 on the specified interval. I am just not sure there is an intuitive reason why the positive root works but not the negative.

The floor function is not 1-1 so it does not have an inverse, but the question is f(x) = x + floor(x), so that the function is 1-1. However, I am not sure how to cleanly express its inverse.

EDIT: Never mind about the floor function. Can someone please explain the first function?

**Reply from LCKurtz**

Are you talking about f(x) = x + [x]? Try drawing the graph of f⁻¹. You will see that its domain has gaps and the segments of the graph are translates of f(x) = x. Does that help?

**Reply from SpringPhysics**

Sorry, I meant f(x) = x/(1 - x²).

**Reply from LCKurtz**

When you interchanged x and y and solved for y you got:

$$y = \frac {-1 \pm \sqrt{1+4x^2}}{2x}$$

One way you can tell you don't want the − choice is what happens as x approaches 0. The branch you want goes through the origin. If you look at

$$y = \frac {-1 - \sqrt{1+4x^2}}{2x}$$

as $x\rightarrow 0$ you get a −2/0 form, which indicates a vertical asymptote. On the other hand, if you let $x\rightarrow 0$ in

$$y = \frac {-1 + \sqrt{1+4x^2}}{2x}$$

you get 0, as you can see if you rationalize the numerator and take the limit. Another check that is a bit more work is to observe that with the + choice you get $-1\le y \le 1$, which also tells you you have the right branch.

**Reply from SpringPhysics**

I see now. Thank you so much for your help!
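LCKurtz's branch argument can be checked numerically. This sketch is not from the thread; `f_inv` takes the + root of the quadratic and handles x = 0 by its limit value.

```python
import math

def f(t):
    """The original function on (-1, 1)."""
    return t / (1 - t * t)

def f_inv(x):
    """Inverse via the + root of x*y**2 + y - x = 0 (x = 0 handled as the limit)."""
    if x == 0:
        return 0.0
    return (-1 + math.sqrt(1 + 4 * x * x)) / (2 * x)

# round trip on both signs, near the endpoints and at 0
for t in [-0.9, -0.5, 0.0, 0.3, 0.75]:
    assert abs(f_inv(f(t)) - t) < 1e-9
print("positive root inverts f on (-1, 1)")
```

Algebraically the round trip is exact: for t in (-1, 1), 1 + 4·f(t)² = (1 + t²)²/(1 − t²)², so the square root simplifies and the + root returns t.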
https://dsp.stackexchange.com/questions/63360/motion-compensation-for-stripmap-sar
# Motion compensation for stripmap SAR

I am trying to perform motion compensation for airborne stripmap SAR data, following the book Synthetic Aperture Radar Processing by Franceschetti and Lanari. The algorithm flowchart is on page 145. My questions are:

1. Can I use the same phase equation given for first-order MoCo for range resampling?
2. What is the right domain for performing these compensations? My understanding is that the range resampling can be done in the frequency domain to achieve a shift in the time domain, and that the phase compensations need to be done (via multiplications) in the time domain to remove the phase error $$e^{j \frac{4 \pi \delta(r)}{\lambda}}$$.

Comments:

• This is not an exactly easy-to-track book; do you think you could take a photo of the page you are referring to? – A_A, Jan 21 '20
• I am not familiar with that book, but mocomp is generally done after waveform removal and in the RF x slow-time domain, such that the mocomp locus (whether it's a point or a line) has zero phase change in this domain. Jan 21 '20
• Here is the photo link: imgur.com/P1nSlD6. I figured out the right domain for the phase corrections (as asked in Q2): it's multiplications in the time domain. However, the targets are resolved at wrong azimuth positions and aliases appear; I am not sure about the reason. Also, I still need to know the range resampling procedure (as asked in Q1). Thanks for your help. Jan 22 '20
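The time-domain phase multiplication described in Q2 can be sketched as follows. Everything here is an illustrative assumption, not from the book: the wavelength value, and the first-order simplification of one scalar track deviation δ per pulse, applied uniformly across the range line.

```python
import cmath
import math

# Illustrative wavelength (roughly X-band); not a value from the question.
WAVELENGTH = 0.03  # metres

def first_order_mocomp(range_line, delta_r):
    """Apply a first-order correction to one pulse (one range line).

    Multiplies every complex sample by exp(-j*4*pi*delta_r/lambda), cancelling
    a phase error of the form exp(+j*4*pi*delta_r/lambda) caused by a track
    deviation delta_r taken (to first order) as constant along the line.
    """
    correction = cmath.exp(-1j * 4 * math.pi * delta_r / WAVELENGTH)
    return [s * correction for s in range_line]

# Sanity check: a sample carrying exactly that phase error is restored to 1.
delta = 0.004  # metres of track deviation, illustrative
err = cmath.exp(1j * 4 * math.pi * delta / WAVELENGTH)
corrected = first_order_mocomp([err], delta)[0]
print(abs(corrected - 1) < 1e-12)   # True
```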
https://newbedev.com/why-don-t-photons-split-up-into-multiple-lower-energy-versions-of-themselves
# Why don't photons split up into multiple lower energy versions of themselves?

A photon is an elementary particle, just as elementary and just as much a particle as the electron. A single elementary particle has a fixed mass and cannot emit another particle without violating energy conservation, because its mass is fixed. In the center-of-mass frame of a massive elementary particle such as the electron, there is no energy available for an emission; for an electron radiating in a field, the energy is supplied by the field.

If a zero-mass elementary particle like the photon could split into two, an invariant mass would suddenly appear: before the split the system has zero invariant mass, after the split a measurable one. This means both momentum and energy conservation are violated, since the invariant mass is the length of the total four-vector before and after the split. A photon can also interact with a field in higher-order diagrams, but it cannot split in the sense you envisage.

Assume a photon could decay into two photons. These photons will have four-vectors, and there are two situations: either the three-momenta of the two photons are parallel in the laboratory to the original photon, or they make an angle with the original photon and with each other. In the latter case the two decay photons define a center of mass (similar to a pi0 at rest). In this frame the two momenta add up to zero, but there is energy, giving an invariant mass to the system; this violates energy conservation, since the original photon had zero invariant mass and cannot supply it. Moreover, in the center-of-mass frame of the decay photons the original photon would still be moving with velocity c, and so would have nonzero momentum; momentum conservation is therefore also violated.

In the case of two collinear photons in the lab, their invariant mass is zero only in the limit of the angle between them being exactly 0; otherwise the above argument holds. If the angle is exactly 0, no center of mass can be defined, because a zero-mass system moves with the velocity of light. So the question becomes: why does a photon of frequency nu not turn into two exactly collinear photons of lower frequency? Experimentally this has never been observed, so if it can happen at all it is a very low-probability process. In the comments Lubos Motl gives this statement: "For photons, this amplitude is 0 due to the Abelian gauge symmetry and other symmetries." I am still looking for a link on this.

The next answer excludes the collinear case by special relativity. After the hypothetical split, two photons with the same energy would propagate at an angle, consistent with momentum conservation. Then there would be a rest frame in which the angle is 180 degrees. If you stay in this rest frame and go back in time to before the split, your single photon would be at rest. That is not possible: according to relativity, the speed of light is the same in all frames. Thus a single photon cannot split into two in vacuum (i.e., without momentum transfer during the split). Mathematically, the reason is that the Lorentz group is non-compact: the parameter gamma can take any value in [1, infinity) but not infinity itself, which would correspond to a coordinate frame moving at light speed, with all massive particles having infinite kinetic energy.

Photons also come with chirality, so you should consider angular momentum conservation as well. For $1\gamma \to 2\gamma$ scattering, this will not be possible.
(I'm assuming production of collinear photons only; it's obvious that when the two photons are not collinear, energy and momentum conservation are violated.)
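The non-collinear case can be made quantitative with a standard kinematic identity (not spelled out in the original answers). With $c = 1$ and two massless daughter photons of energies $E_1, E_2$ whose momenta are separated by an angle $\theta$, the invariant mass of the pair is

```latex
M^2 = (p_1 + p_2)^2
    = 2\,p_1 \cdot p_2
    = 2 E_1 E_2 (1 - \cos\theta),
```

which vanishes only for $\theta = 0$. Since the parent photon has $M = 0$, any split with $\theta \neq 0$ cannot conserve four-momentum, which is exactly the center-of-mass argument above.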
https://gmatclub.com/forum/there-are-3-boxes-a-b-and-c-in-box-a-there-are-3-green-marbles-and-230442.html
# There are 3 boxes A, B and C. In box A, there are 3 green marbles and 5 yellow marbles. In box B, there are 2 red marbles and 3 green marbles. In box C, there are 4 white marbles and 5 green marbles. Choose a box at random, then pick a marble at random from that box. What is the probability that the picked marble is green?

A. 1/8
B. 55/96
C. 2/15
D. 551/1080
E. 551/360

Question stats: 73% correct (avg 03:06), 27% wrong (avg 02:48), based on 139 sessions. Difficulty: 55% (hard). Posted by broall, 11 Dec 2016.

fleamkt wrote: I'll post the solution later.

broall replied: Why don't you try this one?
vitaliyGMAT wrote:

$$P(A)\,P(G \mid A) + P(B)\,P(G \mid B) + P(C)\,P(G \mid C) = \frac{1}{3} \left(\frac{3}{8} + \frac{3}{5} + \frac{5}{9}\right) = \frac{551}{1080}$$

The first time I got 479; have you changed something? Thanks for posting one again.

broall replied: Yes, I've corrected some typos. Sorry about that.

A reader asked: vitaliyGMAT, why did you multiply the whole sum by 1/3?

VeritasPrepKarishma answered: The probability of picking a green marble is different for each box.
Probability of picking box A = 1/3; probability of picking a green marble from box A = 3/8.
Probability of picking box B = 1/3; probability of picking a green marble from box B = 3/5.
Probability of picking box C = 1/3; probability of picking a green marble from box C = 5/9.

Probability of picking a green marble from box A, B or C = (1/3)(3/8) + (1/3)(3/5) + (1/3)(5/9) = 1/8 + 1/5 + 5/27 = 551/1080.

The reader replied: Got it; I hadn't noticed the line about choosing a box at random first and then picking a marble. Thank you, Karishma.

Another member wrote: Hello VeritasPrepKarishma, I have solved this problem in the same way.
I wonder how to do this using only combinatorics. Your inputs would be great.
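The weighted average computed in the thread can be checked exactly with Python's fractions module:

```python
from fractions import Fraction

# Pick one of three boxes uniformly, then a marble uniformly from it.
green_prob = {
    "A": Fraction(3, 8),   # 3 green of 8 marbles
    "B": Fraction(3, 5),   # 3 green of 5 marbles
    "C": Fraction(5, 9),   # 5 green of 9 marbles
}
p_green = sum(Fraction(1, 3) * p for p in green_prob.values())
print(p_green)  # 551/1080, answer D
```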
http://wwwopt.mathematik.tu-darmstadt.de/optpde/result.php?id=7
## gcdist1 details:

Keywords: analytic solution

Geometry: easy, fixed

Design: coupled via volume data

Differential operator:

• Poisson:
• linear elliptic operator of order 2.
• Defined on a 2-dim domain in 2-dim space.
• No time dependence.

Design constraints:

• none

State constraints:

• nonlinear convex, local of order 1

Mixed constraints:

• none

Submitted on 2013-01-14 by Winnifried Wollner. Published on 2013-01-14.

## gcdist1 description:

### Introduction

This is a variation of the mother problem with additional pointwise constraints on the gradient of the state, with known analytic solution. The problem is posed on a domain $\Omega \subset \mathbb{R}^2$. The problem and its analytic solution were proposed in [Deckelnick et al. 2008, Section 5], and have been verified in Wollner [2010]. The solution is special in that no additional bounds on the control are needed.

### Variables & Notation

#### Given Data

The given data is chosen in a way which admits an analytic solution, obtained by rotation of a one-dimensional problem.
### Optimality System

The following optimality system for the state $y \in H_0^1(\Omega) \cap W^{2,p}(\Omega)$ with $p > 2$, the control $u \in L^2(\Omega)$, the adjoint state $p \in L^{p'}(\Omega)$ where $\frac{1}{p} + \frac{1}{p'} = 1$, and a Lagrange multiplier $\mu \in M(\Omega)^2 = C^*(\overline{\Omega})^2$ for the constraint on the gradient of $y$ characterizes the unique minimizer; see Casas and Fernández [1993]. Here the adjoint equation has to be understood in the very weak sense, i.e., $p$ solves

$$-\int_\Omega p \, \Delta\varphi \, \mathrm{d}x = \int_\Omega (y - y_\Omega)\,\varphi \, \mathrm{d}x + \int_\Omega \nabla\varphi \cdot \mathrm{d}\mu \qquad \forall \varphi \in H_0^1(\Omega) \cap C^1(\overline{\Omega}).$$

### Supplementary Material

The optimal state, adjoint state, control and Lagrange multiplier are known analytically:

$$y = y_\Omega, \qquad p = -u, \qquad u = \begin{cases} -1, & 0 \le |x| \le 1, \\ 0, & 1 < |x| \le 2, \end{cases}$$

$$\mu = \frac{\nabla y}{|\nabla y|}\,\mu_0, \qquad \langle \varphi, \mu_0 \rangle_{C,C^*} = \int_{|x|=1} \varphi \, \mathrm{d}s.$$

### References

E. Casas and L. A. Fernández. Optimal control of semilinear elliptic equations with pointwise constraints on the gradient of the state. Applied Mathematics and Optimization, 27:35–56, 1993. doi: 10.1007/BF01182597.

K. Deckelnick, A. Günther, and M. Hinze. Finite element approximation of elliptic control problems with constraints on the gradient. Numerische Mathematik, 111:335–350, 2008.
doi: 10.1007/s00211-008-0185-3. W. Wollner. A posteriori error estimates for a finite element discretization of interior point methods for an elliptic optimization problem with state constraints. Computational Optimization and Applications, 47(1):133–159, 2010. doi: 10.1007/s10589-008-9209-2.
https://cp4space.wordpress.com/2013/03/21/holyhedra/
## Holyhedra

Euler's formula famously relates the number of vertices, edges and faces of a polyhedron. Specifically, it gives $V - E + F = 2$, where V, E and F are the numbers of vertices, edges and faces, respectively. For example, the dodecahedron has 20 vertices, 30 edges and 12 faces, and can easily be seen to obey Euler's formula. A proof is obtained by puncturing one of the faces and 'unfolding' it into a planar graph, whence you can proceed by induction on the number of vertices and edges. It does, however, assume that the polyhedron is homeomorphic to a sphere. We can find a counter-example to Euler's formula, such as a polyhedral torus with n vertices, n faces and 2n edges. A refinement of Euler's formula is $V - E + F = \chi$, where $\chi = 2 - 2G$ is the Euler characteristic expressed in terms of the genus G (number of holes).

Even this refinement of the formula makes an assumption, namely that none of the faces contain holes. For instance, a polyhedron composed of two tetrahedra, one of which penetrates the other, has 7 vertices, 15 edges and 8 faces, but a genus of 0, and therefore disobeys Euler's formula.

John Conway and David Wilson wondered whether or not it would be possible for all faces of a polyhedron to contain holes. They coined the term 'holyhedron' to refer to this, even though no explicit examples were known at the time. Two years later, in 1999, Jade Vinson employed a 'just do it' construction to synthesise an example. The construction was incredibly fiddly, resulting in a holyhedron with 78585627 faces; it is explained very clearly in Vinson's paper. John Conway offered a prize of $\frac{10000}{F}$ USD for a holyhedron, so this would be worth 12.7 millicents.

A much more reasonable example is a 492-face beast discovered by Don Hatch. It features many interpenetrating tetrahedra and pentagonal pyramids, based around a central dodecahedron. It was constructed in layers, shown in different colours in the original image.
He has included a digraph (directed graph) detailing which layers penetrate the faces of the other layers to achieve the property of being a holyhedron. Eight vertices on the exterior do not penetrate anything; these form the convex hull of the polyhedron.
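As a quick sanity check (not from the original post), the vertex, edge and face counts quoted above can be verified directly:

```python
# Euler characteristic V - E + F for the examples in the post.
def chi(v, e, f):
    return v - e + f

dodecahedron = chi(20, 30, 12)   # genus 0, so chi should be 2
torus_n5 = chi(5, 10, 5)         # n = 5 ring: chi = 2 - 2*1 = 0
two_tetrahedra = chi(7, 15, 8)   # genus 0 yet chi = 0: Euler fails here
```

The last value illustrates the point of the second example: the interpenetrating tetrahedra have genus 0, so the refined formula predicts 2, but the actual count gives 0 because one face contains a hole.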
https://tdp.ie/larry-loves-s14/dynapack-graph/nissan/nissan-silvia/
# CA18 and SR20 tuning examples

Here are the results for Alan Lenihan's S13 drift car that was in for mapping today. It was mapped on a Motec M4. We have done a before-and-after comparison with the previous map from Tuner-X. N.b. the thin red line represents Tuner-X's map and the larger line represents TDP's map. The first image shows the torque (lbft) on the left hand side of the page, and power (BHP) on the right hand side of the page, measured at the hubs. The torque figure is the 'true torque': the torque produced at the flywheel multiplied by the gear ratio and the final drive ratio. The power figure is the power produced at the axles. Gain: 57.6bhp and 175.7lbft at the axles. The second image shows flywheel torque (lbft) on the left hand side of the page and flywheel power (BHP) on the right hand side. As the title suggests, this is the torque and power produced at the flywheel. These figures are estimated and are correct to ± 5%. Gain: 64.7bhp and 44.6lbft at the flywheel. The third image shows flywheel torque (lbft) and AFR. The torque produced at the flywheel is displayed on the left hand side of the page; the right hand side shows the air-fuel ratio. This car was running extremely rich throughout the entire rev range on Tuner-X's map. The car drives a lot smoother through the revs, has more torque and power, and is far more fuel efficient now that it has a TDP map. Finally, here is the boost graph. As you can see, the boost hasn't been raised at all. +++++++++ Here are the results for Larry Love's S14 drift car that was tuned at TDP using an Apexi Power FC ECU. The car has the following modifications: a full turbo-back exhaust, an Apexi air filter, a front mount intercooler and an Apexi Power FC. This image is of the calculated flywheel torque (left side) and flywheel BHP (right side).
(These figures are for reference only and are accurate to +/- 5%.) This image is of the axle torque (left side) and axle BHP (right side). (These figures are accurate to within +/- 0.1%.) This image is of the boost and vacuum.
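The 'true torque' definition above can be sketched numerically. The figures below are purely hypothetical; neither car's gear or final-drive ratios are given in the article:

```python
# "True torque" at the hubs = flywheel torque * gear ratio * final drive.
# All three input values are hypothetical, for illustration only.
flywheel_torque = 250.0   # lbft, assumed
gear_ratio = 1.0          # a direct 1:1 gear, assumed
final_drive = 4.08        # assumed differential ratio

hub_torque = flywheel_torque * gear_ratio * final_drive
```

This is why hub torque figures on a dyno like this read several times higher than flywheel torque: the drivetrain ratios multiply the torque on its way to the hubs.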
https://www.wisdomjobs.com/e-university/hadoop-tutorial-484/hadoop-configuration-14835.html
There are a handful of files for controlling the configuration of a Hadoop installation; the most important ones are listed in the table. These files are all found in the conf directory of the Hadoop distribution. The configuration directory can be relocated to another part of the filesystem (outside the Hadoop installation, which makes upgrades marginally easier) as long as daemons are started with the --config option specifying the location of this directory on the local filesystem. Configuration Management Hadoop does not have a single, global location for configuration information. Instead, each Hadoop node in the cluster has its own set of configuration files, and it is up to administrators to ensure that they are kept in sync across the system. Hadoop provides a rudimentary facility for synchronizing configuration using rsync (see upcoming discussion); alternatively, there are parallel shell tools that can help do this, like dsh or pdsh. Hadoop is designed so that it is possible to have a single set of configuration files that are used for all master and worker machines. The great advantage of this is simplicity, both conceptually (since there is only one configuration to deal with) and operationally (as the Hadoop scripts are sufficient to manage a single configuration setup). For some clusters, the one-size-fits-all configuration model breaks down. For example, if you expand the cluster with new machines that have a different hardware specification from the existing ones, then you need a different configuration for the new machines to take advantage of their extra resources. In these cases, you need the concept of a class of machine, and must maintain a separate configuration for each class. Hadoop doesn't provide tools to do this, but there are several excellent tools for doing precisely this type of configuration management, such as Chef, Puppet, cfengine, and bcfg2.
For a cluster of any size, it can be a challenge to keep all of the machines in sync: consider what happens if a machine is unavailable when you push out an update. Who ensures it gets the update when it becomes available? This is a big problem and can lead to divergent installations, so even if you use the Hadoop control scripts for managing Hadoop, it may be a good idea to use configuration management tools for maintaining the cluster. These tools are also excellent for doing regular maintenance, such as patching security holes and updating system packages. Control scripts Hadoop comes with scripts for running commands, and starting and stopping daemons, across the whole cluster. To use these scripts (which can be found in the bin directory), you need to tell Hadoop which machines are in the cluster. There are two files for this purpose, called masters and slaves, each of which contains a list of the machine hostnames or IP addresses, one per line. The masters file is actually a misleading name, in that it determines which machine or machines should run a secondary namenode. The slaves file lists the machines that the datanodes and tasktrackers should run on. Both masters and slaves files reside in the configuration directory, although the slaves file may be placed elsewhere (and given another name) by changing the HADOOP_SLAVES setting in hadoop-env.sh. Also, these files do not need to be distributed to worker nodes, since they are used only by the control scripts running on the namenode or jobtracker. You don't need to specify which machine (or machines) the namenode and jobtracker run on in the masters file, as this is determined by the machine the scripts are run on. (In fact, specifying these in the masters file would cause a secondary namenode to run there, which isn't always what you want.) For example, the start-dfs.sh script, which starts all the HDFS daemons in the cluster, runs the namenode on the machine the script is run on.
In slightly more detail, it: 1. Starts a namenode on the local machine (the machine that the script is run on) 2. Starts a datanode on each machine listed in the slaves file 3. Starts a secondary namenode on each machine listed in the masters file There is a similar script called start-mapred.sh, which starts all the MapReduce daemons in the cluster. More specifically, it: 1. Starts a jobtracker on the local machine 2. Starts a tasktracker on each machine listed in the slaves file Note that masters is not used by the MapReduce control scripts. Also provided are stop-dfs.sh and stop-mapred.sh scripts to stop the daemons started by the corresponding start script. These scripts start and stop Hadoop daemons using the hadoop-daemon.sh script. If you use the aforementioned scripts, you shouldn't call hadoop-daemon.sh directly. But if you need to control Hadoop daemons from another system or from your own scripts, then the hadoop-daemon.sh script is a good integration point. Likewise, hadoop-daemons.sh (with an "s") is handy for starting the same daemon on a set of hosts. Master node scenarios Depending on the size of the cluster, there are various configurations for running the master daemons: the namenode, secondary namenode, and jobtracker. On a small cluster (a few tens of nodes), it is convenient to put them on a single machine; however, as the cluster gets larger, there are good reasons to separate them. The namenode has high memory requirements, as it holds file and block metadata for the entire namespace in memory. The secondary namenode, while idle most of the time, has a comparable memory footprint to the primary when it creates a checkpoint. (This is explained in detail in "The filesystem image and edit log".) For filesystems with a large number of files, there may not be enough physical memory on one machine to run both the primary and secondary namenode. The secondary namenode keeps a copy of the latest checkpoint of the filesystem metadata that it creates.
Keeping this (stale) backup on a different node to the namenode allows recovery in the event of loss (or corruption) of all the namenode's metadata files. (This is discussed further in Chapter Administrating Hadoop.) On a busy cluster running lots of MapReduce jobs, the jobtracker uses considerable memory and CPU resources, so it should run on a dedicated node. Whether the master daemons run on one or more nodes, the following instructions apply: • Run the HDFS control scripts from the namenode machine. The masters file should contain the address of the secondary namenode. • Run the MapReduce control scripts from the jobtracker machine. When the namenode and jobtracker are on separate nodes, their slaves files need to be kept in sync, since each node in the cluster should run a datanode and a tasktracker. Environment Settings In this section, we consider how to set the variables in hadoop-env.sh. Memory By default, Hadoop allocates 1000 MB (1 GB) of memory to each daemon it runs. This is controlled by the HADOOP_HEAPSIZE setting in hadoop-env.sh. In addition, the tasktracker launches separate child JVMs to run map and reduce tasks in, so we need to factor these into the total memory footprint of a worker machine. The maximum number of map tasks that will be run on a tasktracker at one time is controlled by the mapred.tasktracker.map.tasks.maximum property, which defaults to two tasks. There is a corresponding property for reduce tasks, mapred.tasktracker.reduce.tasks.maximum, which also defaults to two tasks. The memory given to each of these child JVMs can be changed by setting the mapred.child.java.opts property. The default setting is -Xmx200m, which gives each task 200 MB of memory. (Incidentally, you can provide extra JVM options here, too. For example, you might enable verbose GC logging to debug GC.) The default configuration therefore uses 2,800 MB of memory for a worker machine (see the table).
The number of tasks that can be run simultaneously on a tasktracker is governed by the number of processors available on the machine. Because MapReduce jobs are normally I/O-bound, it makes sense to have more tasks than processors to get better utilization. The amount of oversubscription depends on the CPU utilization of the jobs you run, but a good rule of thumb is to have a factor of between one and two more tasks (counting both map and reduce tasks) than processors.

For example, if you had 8 processors and you wanted to run 2 processes on each processor, then you could set each of mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum to 7 (not 8, since the datanode and the tasktracker each take one slot). If you also increased the memory available to each child task to 400 MB, then the total memory usage would be 7,600 MB (see Table).

Whether this Java memory allocation will fit into 8 GB of physical memory depends on the other processes that are running on the machine. If you are running Streaming or Pipes programs, this allocation will probably be inappropriate (and the memory allocated to the child should be dialed down), since it doesn't allow enough memory for users' (Streaming or Pipes) processes to run. The thing to avoid is processes being swapped out, as this leads to severe performance degradation. The precise memory settings are necessarily very cluster-dependent and can be optimized over time with experience gained from monitoring the memory usage across the cluster. Tools like Ganglia (see "GangliaContext") are good for gathering this information.

Hadoop also provides settings to control how much memory is used for MapReduce operations. These can be set on a per-job basis and are covered in the section on "Shuffle and Sort".

For the master node, each of the namenode, secondary namenode, and jobtracker daemons uses 1,000 MB by default, a total of 3,000 MB.
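As a sketch, the 8-processor worked example above would correspond to mapred-site.xml entries along these lines (the property names are from the text; the values are the illustrative ones chosen above):

```xml
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>7</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>7</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx400m</value>
</property>
```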
A namenode can eat up memory, since a reference to every block of every file is maintained in memory. For example, 1,000 MB is enough for a few million files. You can increase the namenode's memory without changing the memory allocated to other Hadoop daemons by setting HADOOP_NAMENODE_OPTS in hadoop-env.sh to include a JVM option for setting the memory size. HADOOP_NAMENODE_OPTS allows you to pass extra options to the namenode's JVM. So, for example, if using a Sun JVM, -Xmx2000m would specify that 2000 MB of memory should be allocated to the namenode.

If you change the namenode's memory allocation, don't forget to do the same for the secondary namenode (using the HADOOP_SECONDARYNAMENODE_OPTS variable), since its memory requirements are comparable to the primary namenode's. You will probably also want to run the secondary namenode on a different machine, in that case. There are corresponding environment variables for the other Hadoop daemons, so you can customize their memory allocations, if desired. See hadoop-env.sh for details.

Java

The location of the Java implementation to use is determined by the JAVA_HOME setting in hadoop-env.sh or from the JAVA_HOME shell environment variable, if not set in hadoop-env.sh. It's a good idea to set the value in hadoop-env.sh, so that it is clearly defined in one place and to ensure that the whole cluster is using the same version of Java.

System logfiles

System logfiles produced by Hadoop are stored in $HADOOP_INSTALL/logs by default. This can be changed using the HADOOP_LOG_DIR setting in hadoop-env.sh. It's a good idea to change this so that logfiles are kept out of the directory that Hadoop is installed in, since this keeps the logfiles in one place even when an upgrade changes the installation directory.
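Collected in one place, the hadoop-env.sh memory and Java variables discussed above might look like the following sketch (the JAVA_HOME path is an illustrative assumption; the heap sizes are the example values from the text):

```shell
export JAVA_HOME=/usr/lib/jvm/java-6-sun          # illustrative path
export HADOOP_HEAPSIZE=1000                       # MB per daemon (the default)
export HADOOP_NAMENODE_OPTS="-Xmx2000m"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx2000m"
```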
A common choice is /var/log/hadoop, set by including the following line in hadoop-env.sh:

    export HADOOP_LOG_DIR=/var/log/hadoop

The log directory will be created if it doesn't already exist (if it isn't, confirm that the Hadoop user has permission to create it). Each Hadoop daemon running on a machine produces two logfiles. The first is the log output written via log4j. This file, which ends in .log, should be the first port of call when diagnosing problems, since most application log messages are written here. The standard Hadoop log4j configuration uses a Daily Rolling File Appender to rotate logfiles. Old logfiles are never deleted, so you should arrange for them to be periodically deleted or archived, so as not to run out of disk space on the local node.

The second logfile is the combined standard output and standard error log. This logfile, which ends in .out, usually contains little or no output, since Hadoop uses log4j for logging. It is rotated only when the daemon is restarted, and only the last five logs are retained. Old logfiles are suffixed with a number between 1 and 5, with 5 being the oldest file.

Logfile names (of both types) are a combination of the name of the user running the daemon, the daemon name, and the machine hostname. For example, hadoop-tom-datanode-sturges.local.log.2008-07-04 is the name of a logfile after it has been rotated. This naming structure makes it possible to archive logs from all machines in the cluster in a single directory, if needed, since the filenames are unique.

The username in the logfile name is actually the default for the HADOOP_IDENT_STRING setting in hadoop-env.sh. If you wish to give the Hadoop instance a different identity for the purposes of naming the logfiles, change HADOOP_IDENT_STRING to the identifier you want.

SSH settings

The control scripts allow you to run commands on (remote) worker nodes from the master node using SSH. It can be useful to customize the SSH settings, for various reasons.
For example, you may want to reduce the connection timeout (using the ConnectTimeout option) so the control scripts don't hang around waiting to see whether a dead node is going to respond. Obviously, this can be taken too far: if the timeout is too low, then busy nodes will be skipped, which is bad.

Another useful SSH setting is StrictHostKeyChecking, which can be set to no to automatically add new host keys to the known hosts file. The default, ask, prompts the user to confirm that they have verified the key fingerprint, which is not a suitable setting in a large cluster environment. To pass extra options to SSH, define the HADOOP_SSH_OPTS environment variable in hadoop-env.sh. See the ssh and ssh_config manual pages for more SSH settings.

The Hadoop control scripts can distribute configuration files to all nodes of the cluster using rsync. This is not enabled by default, but by defining the HADOOP_MASTER setting in hadoop-env.sh, worker daemons will rsync the tree rooted at HADOOP_MASTER to the local node's HADOOP_INSTALL whenever the daemon starts up.

What if you have two masters (a namenode and a jobtracker) on separate machines? You can pick one as the source, and the other can rsync from it, along with all the workers. In fact, you could use any machine, even one outside the Hadoop cluster, to rsync from.

Because HADOOP_MASTER is unset by default, there is a bootstrapping problem: how do we make sure hadoop-env.sh with HADOOP_MASTER set is present on worker nodes? For small clusters, it is easy to write a small script to copy hadoop-env.sh from the master to all of the worker nodes. For larger clusters, tools like dsh can do the copies in parallel. Alternatively, a suitable hadoop-env.sh can be created as part of an automated installation script (such as Kickstart).

When starting a large cluster with rsyncing enabled, the worker nodes can overwhelm the master node with rsync requests, since the workers all start at around the same time.
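Both the SSH options and the rsync source are configured in hadoop-env.sh. A sketch, where the timeout value and the master host/path are illustrative assumptions:

```shell
# Pass the SSH options discussed above to the control scripts
export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o StrictHostKeyChecking=no"

# Enable config rsyncing; host and path are hypothetical
export HADOOP_MASTER=master:/home/hadoop/hadoop
```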
To avoid this, set HADOOP_SLAVE_SLEEP to a small number of seconds, such as 0.1 (one-tenth of a second). When running commands on all nodes of the cluster, the master will sleep for this period between invoking the command on each worker machine in turn.

For more discussion of the security implications of SSH host keys, consult the article "SSH Host Key Protection" by Brian Hatch at http://www.securityfocus.com/infocus/1806.

Important Hadoop Daemon Properties

Hadoop has a bewildering number of configuration properties. In this section, we address the ones that you need to define (or at least understand why the default is appropriate) for any real-world working cluster. These properties are set in the Hadoop site files: core-site.xml, hdfs-site.xml, and mapred-site.xml. Example shows a typical set of files. Notice that most properties are marked as final, in order to prevent them from being overridden by job configurations. You can learn more about how to write Hadoop's configuration files in "The Configuration API".

Example. A typical set of site configuration files

HDFS

To run HDFS, you need to designate one machine as a namenode. In this case, the property fs.default.name is an HDFS filesystem URI whose host is the namenode's hostname or IP address, and whose port is the port that the namenode will listen on for RPCs. If no port is specified, the default of 8020 is used.

The masters file that is used by the control scripts is not used by the HDFS (or MapReduce) daemons to determine hostnames. In fact, because the masters file is only used by the scripts, you can ignore it if you don't use them.

The fs.default.name property also doubles as specifying the default filesystem. The default filesystem is used to resolve relative paths, which are handy to use since they save typing (and avoid hardcoding knowledge of a particular namenode's address).
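A minimal core-site.xml along these lines might look as follows (the hostname "namenode" is a placeholder for your namenode's address):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode/</value>
    <final>true</final>
  </property>
</configuration>
```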
For example, with the default filesystem defined as above, the relative URI /a/b is resolved to hdfs://namenode/a/b. If you are running HDFS, the fact that fs.default.name is used to specify both the HDFS namenode and the default filesystem means HDFS has to be the default filesystem in the server configuration. Bear in mind, however, that it is possible to specify a different filesystem as the default in the client configuration, for convenience. For example, if you use both HDFS and S3 filesystems, then you have a choice of specifying either as the default in the client configuration, which allows you to refer to the default with a relative URI and the other with an absolute URI.

There are a few other configuration properties you should set for HDFS: those that set the storage directories for the namenode and for datanodes. The property dfs.name.dir specifies a list of directories where the namenode stores persistent filesystem metadata (the edit log and the filesystem image). A copy of each metadata file is stored in each directory for redundancy. It's common to configure dfs.name.dir so that the namenode metadata is written to one or two local disks, plus a remote disk, such as an NFS-mounted directory. Such a setup guards against failure of a local disk and failure of the entire namenode, since in both cases the files can be recovered and used to start a new namenode. (The secondary namenode takes only periodic checkpoints of the namenode, so it does not provide an up-to-date backup of the namenode.)

You should also set the dfs.data.dir property, which specifies a list of directories for a datanode to store its blocks. Unlike the namenode, which uses multiple directories for redundancy, a datanode round-robins writes between its storage directories, so for performance you should specify a storage directory for each local disk.
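A sketch of the corresponding hdfs-site.xml entries (the directory paths, including the NFS-mounted /remote path, are illustrative assumptions):

```xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/disk1/hdfs/name,/remote/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/disk1/hdfs/data,/disk2/hdfs/data</value>
    <final>true</final>
  </property>
</configuration>
```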
Read performance also benefits from having multiple disks for storage, because blocks will be spread across them and concurrent reads for distinct blocks will be correspondingly spread across disks. For maximum performance, you should mount storage disks with the noatime option. This setting means that last-accessed-time information is not written on file reads, which gives significant performance gains.

Finally, you should configure where the secondary namenode stores its checkpoints of the filesystem. The fs.checkpoint.dir property specifies a list of directories where the checkpoints are kept. Like the storage directories for the namenode, which keep redundant copies of the namenode metadata, the checkpointed filesystem image is stored in each checkpoint directory for redundancy.

Table summarizes the important configuration properties for HDFS. Note that the storage directories for HDFS are under Hadoop's temporary directory by default (the hadoop.tmp.dir property, whose default is /tmp/hadoop-${user.name}). Therefore, it is critical that these properties are set so that data is not lost by the system clearing out temporary directories.

MapReduce

To run MapReduce, you need to designate one machine as a jobtracker, which on small clusters may be the same machine as the namenode. To do this, set the mapred.job.tracker property to the hostname or IP address and port that the jobtracker will listen on. Note that this property is not a URI, but a host-port pair, separated by a colon. The port number 8021 is a common choice.

During a MapReduce job, intermediate data and working files are written to temporary local files. Since this data includes the potentially very large output of map tasks, you need to ensure that the mapred.local.dir property, which controls the location of local temporary storage, is configured to use disk partitions that are large enough.
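A sketch of the matching mapred-site.xml entries (the hostname "jobtracker" and the directory paths are placeholders):

```xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker:8021</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/disk1/mapred/local,/disk2/mapred/local</value>
    <final>true</final>
  </property>
</configuration>
```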
The mapred.local.dir property takes a comma-separated list of directory names, and you should use all available local disks to spread disk I/O. Typically, you will use the same disks and partitions (but different directories) for MapReduce temporary data as you use for datanode block storage, as governed by the dfs.data.dir property discussed earlier.

MapReduce uses a distributed filesystem to share files (such as the job JAR file) with the tasktrackers that run the MapReduce tasks. The mapred.system.dir property is used to specify a directory where these files can be stored. This directory is resolved relative to the default filesystem (configured in fs.default.name), which is usually HDFS.

Finally, you should set the mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum properties to reflect the number of available cores on the tasktracker machines, and mapred.child.java.opts to reflect the amount of memory available for the tasktracker child JVMs. See the discussion in "Memory". Table summarizes the important configuration properties for MapReduce.

Hadoop daemons generally run both an RPC server (Table) for communication between daemons and an HTTP server to provide web pages for human consumption (Table). Each server is configured by setting the network address and port number to listen on. By specifying the network address as 0.0.0.0, Hadoop will bind to all addresses on the machine. Alternatively, you can specify a single address to bind to. A port number of 0 instructs the server to start on a free port; this is generally discouraged, since it is incompatible with setting cluster-wide firewall policies.

In addition to an RPC server, datanodes run a TCP/IP server for block transfers. The server address and port are set by the dfs.datanode.address property, which has a default value of 0.0.0.0:50010.
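For example, to bind the datanode's block-transfer server to a single interface rather than all of them, you could override the documented default in hdfs-site.xml (the address shown is hypothetical):

```xml
<property>
  <name>dfs.datanode.address</name>
  <value>10.0.0.5:50010</value>
</property>
```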
There are also settings for controlling which network interfaces the datanodes and tasktrackers report as their IP addresses (for HTTP and RPC servers). The relevant properties are dfs.datanode.dns.interface and mapred.tasktracker.dns.interface, both of which are set to default, which uses the default network interface. You can set this explicitly to report the address of a particular interface (eth0, for example).

Other Hadoop properties

This section discusses some other properties that you might consider setting.

Cluster membership

To aid the addition and removal of nodes in the future, you can specify a file containing a list of authorized machines that may join the cluster as datanodes or tasktrackers. The file is specified using the dfs.hosts (for datanodes) and mapred.hosts (for tasktrackers) properties, as are the corresponding dfs.hosts.exclude and mapred.hosts.exclude files used for decommissioning. See "Commissioning and Decommissioning Nodes" for further discussion.

Buffer size

Hadoop uses a buffer size of 4 KB (4,096 bytes) for its I/O operations. This is a conservative setting, and with modern hardware and operating systems, you will likely see performance benefits by increasing it; 64 KB (65,536 bytes) or 128 KB (131,072 bytes) are common choices. Set this using the io.file.buffer.size property in core-site.xml.

HDFS block size

The HDFS block size is 64 MB by default, but many clusters use 128 MB (134,217,728 bytes) or even 256 MB (268,435,456 bytes) to ease memory pressure on the namenode and to give mappers more data to work on. Set this using the dfs.block.size property in hdfs-site.xml.

Reserved storage space

By default, datanodes will try to use all of the space available in their storage directories. If you want to reserve some space on the storage volumes for non-HDFS use, then you can set dfs.datanode.du.reserved to the amount, in bytes, of space to reserve.
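Sketches of the three settings just described, using the common values named above (the reserved-space figure is an illustrative 10 GB):

```xml
<!-- core-site.xml -->
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value> <!-- 128 KB -->
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.block.size</name>
  <value>134217728</value> <!-- 128 MB -->
</property>
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- 10 GB kept free for non-HDFS use; illustrative -->
</property>
```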
Trash

Hadoop filesystems have a trash facility, in which deleted files are not actually deleted but rather are moved to a trash folder, where they remain for a minimum period before being permanently deleted by the system. The minimum period in minutes that a file will remain in the trash is set using the fs.trash.interval configuration property in core-site.xml. By default, the trash interval is zero, which disables trash.

Like in many operating systems, Hadoop's trash facility is a user-level feature, meaning that only files that are deleted using the filesystem shell are put in the trash. Files deleted programmatically are deleted immediately. It is possible to use the trash programmatically, however, by constructing a Trash instance, then calling its moveToTrash() method with the Path of the file intended for deletion. The method returns a value indicating success; a value of false means either that trash is not enabled or that the file is already in the trash.

When trash is enabled, each user has her own trash directory called .Trash in her home directory. File recovery is simple: you look for the file in a subdirectory of .Trash and move it out of the trash subtree.

HDFS will automatically delete files in trash folders, but other filesystems will not, so you have to arrange for this to be done periodically. You can expunge the trash, which will delete files that have been in the trash longer than their minimum period, using the filesystem shell:

    hadoop fs -expunge

The Trash class exposes an expunge() method that has the same effect.

On a shared cluster, it shouldn't be possible for one user's errant MapReduce program to bring down nodes in the cluster. This can happen if the map or reduce task has a memory leak, for example, because the machine on which the tasktracker is running will run out of memory and may affect the other running processes.
To prevent this situation, you can set mapred.child.ulimit, which sets a maximum limit on the virtual memory of the child process launched by the tasktracker. It is set in kilobytes and should be comfortably larger than the memory of the JVM set by mapred.child.java.opts; otherwise, the child JVM might not start. As an alternative, you can use limits.conf to set process limits at the operating system level.

Job scheduler

Particularly in a multiuser MapReduce setting, consider changing the default FIFO job scheduler to one of the more fully featured alternatives. See "Job Scheduling".

User Account Creation

Once you have a Hadoop cluster up and running, you need to give users access to it. This involves creating a home directory for each user and setting ownership permissions on it. This is also a good time to set space limits on the directory, such as a 1 TB limit on a given user's directory.
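Concretely, creating a user's home directory and setting a 1 TB quota on it can be sketched with the HDFS shell and dfsadmin tool ("username" is a placeholder):

```shell
hadoop fs -mkdir /user/username
hadoop fs -chown username:username /user/username

# Set a 1 TB space quota on the user's directory
hadoop dfsadmin -setSpaceQuota 1t /user/username
```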
https://datascience.stackexchange.com/questions/36445/fitting-an-arimax-model-on-out-of-sample-dataset
# Fitting an arimax model on out of sample dataset

I have built an arimax model where sales data across time is the response variable and price is one of the external variables. I used the code below to build a simple arimax model. I had data points from 1 to 24 and kept only points 1 to 20 in the training dataset:

    library(stats)
    fit <- arima(window(tssales, end = 20), order = c(0, 1, 1),
                 xreg = window(tsprice, end = 20))
    summary(fit)
    fcast <- forecast(fit, h = 5, xreg = window(tsprice, end = 20))
    plot(fcast)

Now, when I try to apply the model fitted on the training dataset to the out-of-sample dataset (the last 4 data points), I use the code below:

    library(stats)
    out_of_sample <- arima(window(tssales, start = 21),
                           xreg = window(tsprice, start = 21), model = fit)

I am getting the following error:

    Error in arima(window(tssales, start = 21), xreg = window(tsprice, start = 21), :
      unused argument (model = fit)

The arima function from the stats library does not take an argument called model, which is why you are receiving an error. Here is the function signature:

    arima(x, order = c(0L, 0L, 0L),
          seasonal = list(order = c(0L, 0L, 0L), period = NA),
          xreg = NULL, include.mean = TRUE, transform.pars = TRUE,
          fixed = NULL, init = NULL, method = c("CSS-ML", "ML", "CSS"),
          n.cond, SSinit = c("Gardner1980", "Rossignol2011"),
          optim.method = "BFGS", optim.control = list(), kappa = 1e6)

The returned object does contain the model; read the related documentation for more details.

I think your workflow is perhaps a little confused. You fit a model first on 20 data points, which is fine (more data would be nice!). You make some forecasts and plot them, which is also good: you can see whether the model learned much and whether there is some systematic error, e.g. simply predicting the previous time step rather than a more intelligent trend. The final step, however, should be to once again make predictions on your hold-out data, the last 4 data points. You should not fit another model to the hold-out data!
Just predict what your model would say for that data, which it has never seen before. The reason we work like this is so we can assess the model's performance independently of the data that was used to train it. We want to know what will happen in the future, when you get your 25th data point. Have a look at this nice tutorial, which explains the main concepts of ARIMA and has a working example. Here is a very similar tutorial, but as a video.

In the following, I will demonstrate an example of how you could fit an arimax model to your data in R using the auto.arima() function (the code is the same if you want to use arima). If you use the forecast package, the auto.arima() function will fit the "best ARIMA model according to either AIC, AICc or BIC value" to your data. Now, I assume your data has a length of 300. Let's see how accurate arima would be:

    library(forecast)
    train <- window(my_data, end = 250)
    test <- window(my_data, start = 251)

Since this is only an example to show an arimax model, I will generate monthly dummy variables to use as covariates. To generate the dummy monthly variables we can use the nnfor package:

    library(nnfor)
    dta <- seasdummy(350, 12)
    colnames(dta) <- c("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov")

We generate dummies for 350 months; later we will forecast the next 50 months (the "out of sample dataset").

    train_xr <- window(dta, end = 250)
    train_new_xr <- window(dta, start = 251, end = 300)

Let's train on our data:

    h1 <- nrow(train_new_xr)
    fit <- auto.arima(train, xreg = train_xr)
    fc <- forecast(fit, h = h1, xreg = train_new_xr)
    autoplot(fc$mean) + autolayer(test)  # to see how good the forecast was, or use the accuracy() function

Now the out-of-sample forecast:

    xreg <- window(dta, end = 300)
    new_xreg <- window(dta, start = 301)
    h <- nrow(new_xreg)
    fit1 <- auto.arima(my_data, xreg = xreg)
    fc1 <- forecast(fit1, h = h, xreg = new_xreg)
    autoplot(fc1)

When I use the stepwise = FALSE, approximation = FALSE arguments, the forecast gets more accurate, but the auto.arima() function gets very slow.
You could use it as:

    auto.arima(train, stepwise = FALSE, approximation = FALSE, xreg = train_xr)
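As a footnote to the original error: a model argument is accepted by Arima() from the forecast package (not by stats::arima), so applying an already-fitted model to the hold-out points without re-estimating the coefficients can be sketched as:

```r
library(forecast)

# Fit on the training window only
fit <- Arima(window(tssales, end = 20), order = c(0, 1, 1),
             xreg = window(tsprice, end = 20))

# Re-use the fitted coefficients on points 21-24 (no re-estimation)
refit <- Arima(window(tssales, start = 21),
               xreg = window(tsprice, start = 21), model = fit)
accuracy(refit)   # out-of-sample fit statistics
```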
https://socratic.org/questions/how-do-you-write-a-verbal-expression-for-the-algebraic-expression-23f
How do you write a verbal expression for the algebraic expression 23f? Since $23 f$ denotes the product $23 \times f$, a verbal expression is "twenty-three times f" (or "the product of twenty-three and f").
http://thegrantlab.org/bio3d/reference/angle.xyz.html
A function for basic bond angle determination. angle.xyz(xyz, atm.inc = 3) ## Arguments xyz: a numeric vector of Cartesian coordinates. atm.inc: a numeric value indicating the number of atoms to increment by between successive angle evaluations (see below). ## Value Returns a numeric vector of angles. ## References Grant, B.J. et al. (2006) Bioinformatics 22, 2695--2696. Barry Grant ## Note With atm.inc=1, angles are calculated for each set of three successive atoms contained in xyz (i.e., moving along one atom, or three elements of xyz, between successive evaluations). With atm.inc=3, angles are calculated for each set of three successive non-overlapping atoms contained in xyz (i.e., moving along three atoms, or nine elements of xyz, between successive evaluations). ## See also torsion.pdb, torsion.xyz, read.pdb, read.dcd. ## Examples ## Read a PDB file pdb <- read.pdb( system.file("examples/1hel.pdb", package="bio3d") ) ## Angle between N-CA-C atoms of residue four inds <- atom.select(pdb, resno=4, elety=c("N","CA","C")) angle.xyz(pdb$xyz[inds$xyz]) #> [1] 106.7501 ## Basic stats of all N-CA-C bond angles inds <- atom.select(pdb, elety=c("N","CA","C")) summary( angle.xyz(pdb$xyz[inds$xyz]) ) #> Min. 1st Qu. Median Mean 3rd Qu. Max. #> 105.9 109.9 112.0 112.1 113.8 122.2 #hist( angle.xyz(pdb$xyz[inds$xyz]), xlab="Angle" )
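For intuition, the angle returned for the residue-four example is just the arccosine of the normalized dot product of the two bond vectors meeting at CA. A base-R check along these lines (assuming the same pdb and inds objects as above, and that the selected coordinates come back in N, CA, C order):

```r
m  <- matrix(pdb$xyz[inds$xyz], nrow = 3)   # one column of x,y,z per atom
v1 <- m[, 1] - m[, 2]                        # vector CA -> N
v2 <- m[, 3] - m[, 2]                        # vector CA -> C
# Angle in degrees at the CA atom
acos(sum(v1 * v2) / (sqrt(sum(v1^2)) * sqrt(sum(v2^2)))) * 180 / pi
```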