Dataset columns: url (stringlengths 14 to 2.42k), text (stringlengths 100 to 1.02M), date (stringlengths 19 to 19), metadata (stringlengths 1.06k to 1.1k)
https://zbmath.org/?q=an%3A0838.14018
# zbMATH — the first resource for mathematics

Arithmetic and geometry of the curve $$y^3+1=x^4$$. (English) Zbl 0838.14018

We consider both the arithmetic and geometry of the curve in the title. Let two points of a curve be equivalent if the image of their difference in the Jacobian of the curve has finite order. An equivalence class is called a torsion packet. The Weierstrass points form a torsion packet, and they are exactly the $$\mathbb{Q}(\zeta_{12})$$-rational points on this curve. The latter result is obtained from the fact that the Mordell-Weil group of the Jacobian over the field $$\mathbb{Q}(\zeta_{12})$$ is finite. Since the Mordell-Weil group over the rationals is also finite, we can describe all solutions of the equation in fields of degree 3 or less over the rationals. In addition, we find bases for the 2- and 3-torsion of the Jacobian and describe an isogeny from the Jacobian to the product of three CM elliptic curves. The finiteness of the Mordell-Weil group was shown using a 3-descent on the Jacobian that did not make use of this isogeny.

##### MSC:
14G05 Rational points
14H40 Jacobians, Prym varieties
11D25 Cubic and quartic Diophantine equations
14H55 Riemann surfaces; Weierstrass points; gap sequences
2021-07-30 08:26:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7086154222488403, "perplexity": 286.64016311560783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153934.85/warc/CC-MAIN-20210730060435-20210730090435-00695.warc.gz"}
http://math.stackexchange.com/questions/267839/if-f-is-riemann-stieltjes-integrable-then-does-there-exist-a-partition-of-whi/267954
# If $f$ is Riemann-Stieltjes integrable, does there exist a partition whose subintervals all have the same length?

Let $\alpha$ be a monotonically increasing function and say $f\in\mathscr{R}(\alpha)$. Then for each $\epsilon>0$, does there exist a partition $P=\{x_0,\ldots,x_n\}$ with $$x_i=a+\frac{b-a}{n}i, \qquad i\in\{0,\ldots,n\},$$ such that $$U(P,f,\alpha)-L(P,f,\alpha)<\epsilon?$$

- Is this a proof of the right-hand rule introduced early in Calc II? – emka Dec 31 '12 at 0:26

## 1 Answer

This theorem is from the book Measure and Integral by Zygmund & Wheeden. According to it, given $\epsilon\gt 0$ there exists a $\delta\gt 0$ such that for any partition $\Gamma$, if $|\Gamma|\lt\delta$, then $$U_\Gamma-L_\Gamma\lt\epsilon.$$ So, if your $f$ is bounded (it must be, otherwise $U(P,f,\alpha)$ or $L(P,f,\alpha)$ might make no sense), given $\epsilon\gt 0$, in order to pick a uniform partition $$P=\{a=x_0\lt\cdots\lt x_n=b\}$$ such that $$U(P,f,\alpha)-L(P,f,\alpha)\lt\epsilon,$$ it is enough to choose $n$ large enough so that $$\frac{b-a}{n}\lt\delta.$$

- Would you please tell me how to prove the above theorem? – Katlus Dec 31 '12 at 6:25
- @Katlus I've added the proof from the book. I recommend you find the book at your library and read chapters (1 and) 2. – leo Dec 31 '12 at 21:09
- Thanks for the posting, but there is a problem. Note that Zygmund & Wheeden's definition of the Riemann-Stieltjes integral is 'different' from the definition using upper and lower sums. My definition of the Riemann-Stieltjes integral uses the upper and lower sums, so the proof above fails with respect to my definition. – Katlus Jan 1 '13 at 12:34
- Let $\epsilon>0$ be given. Note that $\inf_{P} U(P,f,\alpha)$ may differ from $\inf_{n\in\mathbb{N}} U(T_n,f,\alpha)$, where $T_n=\{a+\frac{b-a}{n}i\in [a,b] \mid 0\leq i\leq n\}$. Do you see why the above proof fails with respect to my definition? One should first prove that these two infima are the same, and this is actually what I'm asking. – Katlus Jan 1 '13 at 12:39
- I read Zygmund & Wheeden's book yesterday in the library, and you can see that they even say there are functions which are not integrable w.r.t. the definition in the book, but are integrable w.r.t. the definition by upper and lower sums. – Katlus Jan 1 '13 at 12:45
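To make the objects in this thread concrete, here is a small numerical sketch (not from the thread; the function names and the grid-sampling approximation of sup/inf are our own illustrative assumptions) of the upper and lower sums $U(P,f,\alpha)$ and $L(P,f,\alpha)$ over the uniform partition $x_i=a+\frac{b-a}{n}i$:

```python
# Illustrative sketch: upper/lower Riemann-Stieltjes sums on a uniform
# partition. The sup/inf over each subinterval is approximated by
# sampling on a fine grid, which is fine for a demonstration but is of
# course not a substitute for the actual sup/inf in a proof.
def uniform_sums(f, alpha, a, b, n, samples=200):
    U = L = 0.0
    for i in range(n):
        x0 = a + (b - a) * i / n          # left endpoint of subinterval i
        x1 = a + (b - a) * (i + 1) / n    # right endpoint
        vals = [f(x0 + (x1 - x0) * k / samples) for k in range(samples + 1)]
        dalpha = alpha(x1) - alpha(x0)    # nonnegative since alpha is increasing
        U += max(vals) * dalpha           # approximates (sup f) * delta-alpha
        L += min(vals) * dalpha           # approximates (inf f) * delta-alpha
    return U, L

# For f(x) = x^2 and alpha(x) = x on [0, 1], U - L shrinks as n grows,
# consistent with the answer above: any fine enough mesh works.
for n in (4, 16, 64, 256):
    U, L = uniform_sums(lambda x: x * x, lambda x: x, 0.0, 1.0, n)
    print(n, U - L)
```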
2016-04-29 14:53:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9327055811882019, "perplexity": 278.6190769174709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111365.36/warc/CC-MAIN-20160428161511-00181-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/electric-potential-difference-a-charge-20-c-placed-positive-plate-isolated-parallel-plate-capacitor-capacitance-10-f-calculate-potential-difference-developed-between-plates_68782
# A Charge of 20 µC is Placed on the Positive Plate of an Isolated Parallel-plate Capacitor of Capacitance 10 µF. Calculate the Potential Difference Developed Between the Plates. - Physics

Concept: Electric Potential Difference

#### Question

A charge of 20 µC is placed on the positive plate of an isolated parallel-plate capacitor of capacitance 10 µF. Calculate the potential difference developed between the plates.

#### Solution

Given:
Capacitance of the isolated capacitor, C = 10 µF
Charge placed on the positive plate = 20 µC; charge on the other plate = 0

When charges Q1 and Q2 sit on the two plates of a capacitor, the facing surfaces carry ±(Q1 − Q2)/2, so the effective charge on the capacitor is

Q = (20 − 0)/2 = 10 µC

The potential difference between the plates of the capacitor is given by V = Q/C, therefore

Potential difference = (10 µC)/(10 µF) = 1 V
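A quick numeric check of the arithmetic above (a sketch; the variable names are ours):

```python
# Numeric check of the worked solution above (illustrative sketch).
q_plus = 20e-6      # charge placed on the positive plate, in coulombs
q_minus = 0.0       # the isolated capacitor's other plate carries no net charge
q_eff = (q_plus - q_minus) / 2   # charge on the facing surfaces: (Q1 - Q2)/2
C = 10e-6           # capacitance in farads
print(q_eff / C)    # V = Q/C -> prints 1.0 (volt)
```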
2020-04-02 18:09:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5120786428451538, "perplexity": 2426.319434926644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370507738.45/warc/CC-MAIN-20200402173940-20200402203940-00094.warc.gz"}
https://www.gamedev.net/forums/topic/391913-find-the-point-of-a-line-x-away-from-end-point-of-line/
# Find the point of a line x away from end point of line

## Recommended Posts

I have a line in my little space game, described by the vertices startPosition and endPosition. My ship travels along the vector endPosition - startPosition, and it stops when it gets to endPosition. Now, my endPosition is the position of a planet (the center of it), and I don't want my ship to actually hit the planet and travel to the centre. Instead, I want it to travel as far as it can on this vector until it gets to planetRadius * 2 from the centre of the planet. In effect, if a planet has a radius of 5.0f, I want the ship to travel along the vector until it gets to 10.0f from the planet's position vert. Now, I am not a huge math wiz, and I am not sure how to calculate this. Could someone let me know, in layman's terms :), how I would get this point?

If your planet's position is vPlanet, your ship is at vShip, and the planet radius is X, the point you are looking for, vMysteryPoint, will be:

vDirection = (vShip - vPlanet).unit();     // vector from the center of the planet pointing
                                           // at your ship, normalised to unit length
vMysteryPoint = vPlanet + X * vDirection;  // take the starting point, the middle of the
                                           // planet, and move X units along that direction

That will give you the point X units from the center of your planet in the direction of the ship.

There's a couple of ways to accomplish this. If you want to be really simple about it, each time you update the ship's position you could just check the distance from its current coordinate to the planet center, and if it is less than that radius, stop. If you want the exact coordinate of that point, you're basically intersecting a line with a circle. This can be solved with a system of equations (I'm not going to do it here, but it's an approach you can work out on paper). Last, since you said that the vector endpoint is the center of the planet and the center of your stop-position radius: you know that the stop radius is the same as the distance remaining along the vector when the ship stops. By comparing that remaining distance to the total length of the vector, you can get a ratio by which you can scale the entire vector, so the new scaled vector will have an endpoint where the ship should stop:

length = length of yourvector
shortvector = yourvector * (length - stopradius) / length
startpoint + shortvector = endpoint

Not sure how you actually do your ship's motion, but from that you can get either the new vector it will travel, or the specific stopping coordinates, so it should be enough. EDIT: ok, CombatWombat got in first; actually, his solution is a lot more elegant.

Thanks guys. Yep, I used CombatWombat's method before I got to read your post, Hap. It works fine, and using the unit vector makes so much sense, I feel mad I didn't think of it. I was thinking it was so much more complicated. Thanks for the help.
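A self-contained version of CombatWombat's construction (a sketch; the function and variable names are ours, not from the thread):

```python
# Sketch of the construction above: move out from the planet's centre
# toward the ship by the stop distance.
import math

def stop_point(ship, planet, stop_dist):
    # direction from the planet's centre toward the ship, normalised to length 1
    dx, dy = ship[0] - planet[0], ship[1] - planet[1]
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length
    # walk stop_dist units out from the centre along that direction
    return (planet[0] + stop_dist * ux, planet[1] + stop_dist * uy)

# Planet of radius 5 at the origin, ship approaching from (30, 40):
# the ship should halt at planetRadius * 2 = 10 units from the centre.
print(stop_point((30.0, 40.0), (0.0, 0.0), 10.0))  # -> (6.0, 8.0)
```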
2018-09-21 05:50:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25642797350883484, "perplexity": 1281.7397460238137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156857.16/warc/CC-MAIN-20180921053049-20180921073449-00541.warc.gz"}
http://beastie.cs.ua.edu/proglan/dpl.html
CS403: Programming Languages

The Designer Programming Language

## Preliminary information

Your task is to build an interpreter for a general purpose programming language of your own design. Your language must support the following features:

• integers and strings
• dynamically typed (like Scheme and Python)
• dictionaries with $O(\log n)$ worst-case access time (use an AVL tree)
• arrays with O(1) access time
• conditionals
• recursion
• iteration
• convenient means to print to the console
• an adequate set of operators
• anonymous functions
• functions as first-class objects (i.e. functions can be manipulated as in Scheme - e.g. local functions)
• (graduate only) delayed evaluation of function arguments

The only basic types you need to provide are integer and string, and you do not need to provide methods for coercing one type to another (although you may find it convenient to do so). The efficiency of your interpreter is not particularly important, as long as you can solve the test problem in a reasonable amount of time. Your language also does not need to support reclamation of memory that is no longer needed. You are to write your program in a statically-typed, imperative language such as C, C++, or Java. Check with me first if you wish to use some other host language. Your dictionary code must be written in your designer language.

## Test Problem

In your DPL, write an RPN calculator, with operators: `+ * - / ^`

Ideally, your calculator should be able to read from a file or from stdin. However, you may hard-wire examples to show that your calculator works. Your RPN calculator must use a stack (a sketch of the core evaluation loop appears at the end of this page). If your calculator reads from stdin, the problem rule (see below) in your makefile should do something like:

```
cat testProblemInputFile | java MyLang rpn.mylang
```

You will receive a serious deduction if the `make problemx` rule in your makefile causes a pause for input.

• [100 points] everything works
• [51-99 points] functionality is missing/test program is missing
• [50 points] pretty printing
• [30 points] recognizing

Extra credit will be given to exceptional implementations. Your README file should give pertinent details.

For Undergraduates: if you do not, at least, implement a recognizer for your language, you will fail the course.

For Graduates: if you do not, at least, implement a pretty printer, you will fail the course.

## Submitting the assignment

To submit your designer programming language, place all your source code, sample programs, a README detailing how to run and write programs in your language, and a makefile for building your system into one directory. Name the README file README. Your makefile should respond to the command

```
make
```

which builds your processor, and to the following commands, each of which illustrates a feature of your language:

```
make error1
make error1x
make error2
make error2x
make error3
make error3x
make arrays
make arraysx
make conditionals
make conditionalsx
make recursion
make recursionx
make iteration
make iterationx
make functions
make functionsx    # shows that functions are 1st-class
make dictionary
make dictionaryx   # shows that you have a log(n) AVL-tree dictionary
```

The first rule in a pair of rules should print out the appropriate input program, while the x rules should execute that program. In particular, the error rules should show off your parser detecting three different kinds of syntax errors.
Your makefile should also respond to the commands:

```
make problem
make problemx
```

These commands display the test problem of your implementation and run the test problem, respectively. Finally, provide an executable shellscript named dpl that runs a program like so:

```
dpl testprogram1.mylang
```

Test programs can be named anything.

Note: in the case of recognizing only, only the error rules need be present in your makefile. In the case of pretty printing only, the run rules should run the input program through the pretty printer.

Finally, your makefile should respond to the command:

```
make clean
```

This command should remove all compilation artifacts (such as `.o` or `.pyc` files) so that a clean compile can be performed.

```
make grad
```

submit proglan lusth dpl
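As referenced in the Test Problem section, here is a minimal sketch of the stack-based RPN evaluation loop. It is written in Python purely for illustration; the assignment requires the calculator itself to be written in your designer language, and the choice of integer division for `/` is our assumption, since the language's only numeric basic type is the integer.

```python
# Minimal sketch of a stack-based RPN evaluator (illustrative only; the
# assignment requires this to be written in your designer language).
def rpn_eval(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a // b,  # integer division: assumption, ints only
        "^": lambda a, b: a ** b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()         # right operand sits on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))  # operand: push onto the stack
    return stack.pop()

# "3 4 + 2 ^" evaluates to (3 + 4)^2 = 49
print(rpn_eval("3 4 + 2 ^".split()))
```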
2017-04-23 21:37:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20110198855400085, "perplexity": 2999.969999657083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00189-ip-10-145-167-34.ec2.internal.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/dcdss.2010.3.129
# American Institute of Mathematical Sciences

March 2010, 3(1): 129-140. doi: 10.3934/dcdss.2010.3.129

## Two rolling disks or spheres

Received: July 2008. Revised: July 2009. Published: December 2009.

The mechanical system of two disks, moving freely in the plane while in contact and rolling against each other without slipping, may be written as a Lagrangian system with three degrees of freedom and one holonomic rolling constraint. We derive simple geometric criteria for the rotational relative equilibria and their stability. Extending to three dimensions, we derive the kinematics of the analogous system where two spheres replace two disks, and we verify that the rolling disk system occurs as a holonomic subsystem of the rolling sphere system.

Citation: George W. Patrick, Tyler Helmuth. Two rolling disks or spheres. Discrete and Continuous Dynamical Systems - S, 2010, 3 (1) : 129-140. doi: 10.3934/dcdss.2010.3.129
2022-05-29 02:36:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4964878559112549, "perplexity": 4083.584874987362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663035797.93/warc/CC-MAIN-20220529011010-20220529041010-00249.warc.gz"}
https://www.merry.io/dynamical-systems/31-persistence-of-hyperbolic-fixed-points/
In the last lecture we showed that if we make a $C^1$ perturbation of a map in a neighbourhood of a hyperbolic fixed point, then the new map has a unique fixed point in this neighbourhood. That is, the fixed point cannot “disappear” under perturbation. Today we upgrade this statement by showing that the new fixed point is also hyperbolic for the perturbed map. The Local Persistence Theorem: Let $f \colon\Omega \subseteq E \to E$ be a local differentiable dynamical system. Suppose $u$ is a hyperbolic fixed point of $f$. Then any nearby map $g$ has a unique fixed point near $u$. Moreover this fixed point is hyperbolic for $g$, and the hyperbolic splitting varies continuously with $g$. The only new statement is that the fixed point is hyperbolic, and that the hyperbolic splitting is continuous. To prove this we use a version of the Inverse Function Theorem for bi-Lipschitz maps, together with a parametric version of the Banach Fixed Point Theorem.
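For readers without the background at hand, here is the standard definition being invoked (an assumed textbook statement, not a quotation from the lecture): $u$ is a hyperbolic fixed point of $f$ when $Df(u)$ has no spectrum on the unit circle, which yields the stable/unstable splitting referred to above.

```latex
% Standard definition of a hyperbolic fixed point (assumed background,
% not quoted from the lecture): no spectrum on the unit circle, and an
% invariant splitting of E into stable and unstable subspaces.
f(u) = u, \qquad
\sigma\bigl(Df(u)\bigr) \cap \{\lambda \in \mathbb{C} : |\lambda| = 1\} = \emptyset,
\qquad E = E^s \oplus E^u .
```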
2020-09-22 06:39:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9277640581130981, "perplexity": 111.96688390298385}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400204410.37/warc/CC-MAIN-20200922063158-20200922093158-00559.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Apaicu.marius
# zbMATH — the first resource for mathematics ## Paicu, Marius Compute Distance To: Author ID: paicu.marius Published as: Paicu, M.; Paicu, Marius External Links: MGP Documents Indexed: 46 Publications since 2003 all top 5 #### Co-Authors 6 single-authored 12 Zhang, Ping 5 Danchin, Raphaël 4 Huang, Jingchi 4 Raugel, Geneviève 4 Zhang, Zhifei 3 Abidi, Hammadi 2 Chemin, Jean-Yves 2 Gallagher, Isabelle 2 Zarnescu, Arghir Dani 2 Zhu, Ning 1 Busuioc, Adriana Valentina 1 De Anna, Francesco 1 Del Santo, Daniele 1 Fanelli, Francesco 1 Hamza, Meer A. 1 Iftimie, Dragoş 1 Jäh, Christian P. 1 Liu, Yanlin 1 Majdoub, Mohamed 1 Marchand, Fabien 1 Rekalo, Andrey M. 1 Vicol, Vlad C. all top 5 #### Serials 3 Archive for Rational Mechanics and Analysis 3 Communications in Mathematical Physics 2 Nonlinearity 2 Journal of Differential Equations 2 Journal of Functional Analysis 2 Physica D 2 Communications in Partial Differential Equations 2 Journal de Mathématiques Pures et Appliquées. Neuvième Série 2 Discrete and Continuous Dynamical Systems 2 Analysis & PDE 1 Advances in Mathematics 1 Annales de l’Institut Fourier 1 Bulletin de la Société Mathématique de France 1 Osaka Journal of Mathematics 1 Proceedings of the American Mathematical Society 1 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 1 Revista Matemática Iberoamericana 1 Differential and Integral Equations 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 1 SIAM Journal on Mathematical Analysis 1 Journal of Dynamics and Differential Equations 1 Journal of Nonlinear Science 1 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings 1 Journal of Mathematical Fluid Mechanics 1 Annals of Mathematics. Second Series 1 Comptes Rendus. Mathématique. Académie des Sciences, Paris 1 Journal of the Institute of Mathematics of Jussieu 1 Science China. Mathematics all top 5 #### Fields 46 Partial differential equations (35-XX) 42 Fluid mechanics (76-XX) 7 Harmonic analysis on Euclidean spaces (42-XX) 3 Functional analysis (46-XX) 2 Geophysics (86-XX) 1 Functions of a complex variable (30-XX) 1 Dynamical systems and ergodic theory (37-XX) #### Citations contained in zbMATH 39 Publications have been cited 861 times in 489 Documents Cited by Year Global existence results for the anisotropic Boussinesq system in dimension two. Zbl 1223.35249 Danchin, Raphaël; Paicu, Marius 2011 Global well-posedness issues for the inviscid Boussinesq system with Yudovich’s type data. Zbl 1186.35157 Danchin, Raphaël; Paicu, Marius 2009 Global solutions to the 3-D incompressible inhomogeneous Navier-Stokes system. Zbl 1236.35112 Paicu, Marius; Zhang, Ping 2012 Anisotropic Navier-Stokes equation in critical spaces. Zbl 1110.35060 Paicu, Marius 2005 Global existence for an inhomogeneous fluid. Zbl 1122.35091 2007 The Leray and Fujita-Kato theorems for the Boussinesq system with partial viscosity. Zbl 1162.35063 Danchin, Raphaël; Paicu, Marius 2008 Global regularity for some classes of large solutions to the Navier-Stokes equations. Zbl 1229.35168 Chemin, Jean-Yves; Gallagher, Isabelle; Paicu, Marius 2011 Global existence for the magnetohydrodynamic system in critical spaces. Zbl 1148.35066 2008 Global unique solvability of inhomogeneous Navier-Stokes equations with bounded density. Zbl 1314.35086 Paicu, Marius; Zhang, Ping; Zhang, Zhifei 2013 Global solutions to the 3-D incompressible anisotropic Navier-Stokes system in the critical spaces. 
Zbl 1237.35129 Paicu, Marius; Zhang, Ping 2011 Energy dissipation and regularity for a coupled Navier-Stokes and Q-tensor system. Zbl 1315.76017 Paicu, Marius; Zarnescu, Arghir 2012 Global well-posedness of incompressible inhomogeneous fluid systems with bounded density or non-Lipschitz velocity. Zbl 1287.35055 Huang, Jingchi; Paicu, Marius; Zhang, Ping 2013 Existence and uniqueness results for the Boussinesq system with data in Lorentz spaces. Zbl 1143.76432 Danchin, Raphaël; Paicu, Marius 2008 Global existence and regularity for the full coupled Navier-Stokes and $$Q$$-tensor system. Zbl 1233.35160 Paicu, Marius; Zarnescu, Arghir 2011 Global solutions to 2-D inhomogeneous Navier-Stokes system with general velocity. Zbl 1290.35184 Huang, Jingchi; Paicu, Marius; Zhang, Ping 2013 Global large solutions to 3-D inhomogeneous Navier-Stokes system with one slow variable. Zbl 1317.35171 Chemin, Jean-Yves; Paicu, Marius; Zhang, Ping 2014 Periodic Navier-Stokes equation without viscosity in one direction. Zbl 1115.35101 Paicu, Marius 2005 Remarks on the blow-up of solutions to a toy model for the Navier-Stokes equations. Zbl 1162.76016 Gallagher, Isabelle; Paicu, Marius 2009 Regularity of the global attractor and finite-dimensional behavior for the second grade fluid equations. Zbl 1235.35225 Paicu, Marius; Raugel, Geneviève; Rekalo, Andrey 2012 Analyticity and gevrey-class regularity for the second-grade fluid equations. Zbl 1270.35370 2011 Global regularity for the Navier-Stokes equations with some classes of large initial data. Zbl 1242.35187 Paicu, Marius; Zhang, Zhifei 2011 A hyperbolic perturbation of the Navier-Stokes equations. Zbl 1221.35284 Paicu, Marius; Raugel, Geneviève 2007 Global existence and uniqueness result of a class of third-grade fluid equations. Zbl 1114.76004 Hamza, M.; Paicu, M. 2007 Asymptotic study of rapidly rotating anisotropic fluids in the periodic case. Zbl 1085.76071 Paicu, Marius 2004 Decay estimates of global solution to 2D incompressible. Zbl 1304.35492 Huang, J.; Paicu, Marius 2014 Global well-posedness for 3D Navier-Stokes equations with ill-prepared initial data. Zbl 1291.35191 Paicu, Marius; Zhang, Zhifei 2014 Busuioc, Adriana Valentina; Iftimie, Dragoş; Paicu, Marius 2008 Global solutions to the 3-D incompressible inhomogeneous Navier-Stokes system with rough density. Zbl 1273.35211 Huang, Jingchi; Paicu, Marius; Zhang, Ping 2013 Uniform local existence for inhomogeneous rotating fluid equations. Zbl 1160.76052 Majdoub, Mohamed; Paicu, Marius 2009 A well-posedness result for viscous compressible fluids with only bounded density. Zbl 1434.35046 Danchin, Raphaël; Fanelli, Francesco; Paicu, Marius 2020 Backward uniqueness for parabolic operators with non-Lipschitz coefficients. Zbl 1334.35074 Del Santo, Daniele; Jäh, Christian; Paicu, Marius 2015 On some large global solutions to 3-D density-dependent Navier-Stokes system with slow variable: well-prepared data. Zbl 1326.35247 Paicu, Marius; Zhang, Ping 2015 Dynamics of second grade fluids: the Lagrangian approach. Zbl 1318.35086 Paicu, M.; Raugel, G. 2013 Global existence in the energy space of the solutions of a non-Newtonian fluid. Zbl 1143.76359 Paicu, Marius 2008 Remarks on the uniqueness for the three-dimensional Navier-Stokes system. Zbl 1110.35059 Marchand, Fabien; Paicu, Marius 2007 Striated regularity of 2-d inhomogeneous incompressible Navier-Stokes system with variable viscosity. 
Zbl 1439.35375 Paicu, Marius; Zhang, Ping 2020 Global strong solutions to 3-D Navier-Stokes system with strong dissipation in one direction. Zbl 1431.35107 Paicu, Marius; Zhang, Ping 2019 On the global well-posedness of 3-D Navier-Stokes equations with vanishing horizontal viscosity. Zbl 1449.35347 2018 Anisotropic Navier-Stokes equations in a bounded cylindrical domain. Zbl 1185.35172 Paicu, Marius; Raugel, Geneviève 2009 A well-posedness result for viscous compressible fluids with only bounded density. Zbl 1434.35046 Danchin, Raphaël; Fanelli, Francesco; Paicu, Marius 2020 Striated regularity of 2-d inhomogeneous incompressible Navier-Stokes system with variable viscosity. Zbl 1439.35375 Paicu, Marius; Zhang, Ping 2020 Global strong solutions to 3-D Navier-Stokes system with strong dissipation in one direction. Zbl 1431.35107 Paicu, Marius; Zhang, Ping 2019 On the global well-posedness of 3-D Navier-Stokes equations with vanishing horizontal viscosity. Zbl 1449.35347 2018 Backward uniqueness for parabolic operators with non-Lipschitz coefficients. Zbl 1334.35074 Del Santo, Daniele; Jäh, Christian; Paicu, Marius 2015 On some large global solutions to 3-D density-dependent Navier-Stokes system with slow variable: well-prepared data. Zbl 1326.35247 Paicu, Marius; Zhang, Ping 2015 Global large solutions to 3-D inhomogeneous Navier-Stokes system with one slow variable. Zbl 1317.35171 Chemin, Jean-Yves; Paicu, Marius; Zhang, Ping 2014 Decay estimates of global solution to 2D incompressible. Zbl 1304.35492 Huang, J.; Paicu, Marius 2014 Global well-posedness for 3D Navier-Stokes equations with ill-prepared initial data. Zbl 1291.35191 Paicu, Marius; Zhang, Zhifei 2014 Global unique solvability of inhomogeneous Navier-Stokes equations with bounded density. Zbl 1314.35086 Paicu, Marius; Zhang, Ping; Zhang, Zhifei 2013 Global well-posedness of incompressible inhomogeneous fluid systems with bounded density or non-Lipschitz velocity. Zbl 1287.35055 Huang, Jingchi; Paicu, Marius; Zhang, Ping 2013 Global solutions to 2-D inhomogeneous Navier-Stokes system with general velocity. Zbl 1290.35184 Huang, Jingchi; Paicu, Marius; Zhang, Ping 2013 Global solutions to the 3-D incompressible inhomogeneous Navier-Stokes system with rough density. Zbl 1273.35211 Huang, Jingchi; Paicu, Marius; Zhang, Ping 2013 Dynamics of second grade fluids: the Lagrangian approach. Zbl 1318.35086 Paicu, M.; Raugel, G. 2013 Global solutions to the 3-D incompressible inhomogeneous Navier-Stokes system. Zbl 1236.35112 Paicu, Marius; Zhang, Ping 2012 Energy dissipation and regularity for a coupled Navier-Stokes and Q-tensor system. Zbl 1315.76017 Paicu, Marius; Zarnescu, Arghir 2012 Regularity of the global attractor and finite-dimensional behavior for the second grade fluid equations. Zbl 1235.35225 Paicu, Marius; Raugel, Geneviève; Rekalo, Andrey 2012 Global existence results for the anisotropic Boussinesq system in dimension two. Zbl 1223.35249 Danchin, Raphaël; Paicu, Marius 2011 Global regularity for some classes of large solutions to the Navier-Stokes equations. Zbl 1229.35168 Chemin, Jean-Yves; Gallagher, Isabelle; Paicu, Marius 2011 Global solutions to the 3-D incompressible anisotropic Navier-Stokes system in the critical spaces. Zbl 1237.35129 Paicu, Marius; Zhang, Ping 2011 Global existence and regularity for the full coupled Navier-Stokes and $$Q$$-tensor system. Zbl 1233.35160 Paicu, Marius; Zarnescu, Arghir 2011 Analyticity and gevrey-class regularity for the second-grade fluid equations. 
Zbl 1270.35370 2011 Global regularity for the Navier-Stokes equations with some classes of large initial data. Zbl 1242.35187 Paicu, Marius; Zhang, Zhifei 2011 Global well-posedness issues for the inviscid Boussinesq system with Yudovich’s type data. Zbl 1186.35157 Danchin, Raphaël; Paicu, Marius 2009 Remarks on the blow-up of solutions to a toy model for the Navier-Stokes equations. Zbl 1162.76016 Gallagher, Isabelle; Paicu, Marius 2009 Uniform local existence for inhomogeneous rotating fluid equations. Zbl 1160.76052 Majdoub, Mohamed; Paicu, Marius 2009 Anisotropic Navier-Stokes equations in a bounded cylindrical domain. Zbl 1185.35172 Paicu, Marius; Raugel, Geneviève 2009 The Leray and Fujita-Kato theorems for the Boussinesq system with partial viscosity. Zbl 1162.35063 Danchin, Raphaël; Paicu, Marius 2008 Global existence for the magnetohydrodynamic system in critical spaces. Zbl 1148.35066 2008 Existence and uniqueness results for the Boussinesq system with data in Lorentz spaces. Zbl 1143.76432 Danchin, Raphaël; Paicu, Marius 2008 Busuioc, Adriana Valentina; Iftimie, Dragoş; Paicu, Marius 2008 Global existence in the energy space of the solutions of a non-Newtonian fluid. Zbl 1143.76359 Paicu, Marius 2008 Global existence for an inhomogeneous fluid. Zbl 1122.35091 2007 A hyperbolic perturbation of the Navier-Stokes equations. Zbl 1221.35284 Paicu, Marius; Raugel, Geneviève 2007 Global existence and uniqueness result of a class of third-grade fluid equations. Zbl 1114.76004 Hamza, M.; Paicu, M. 2007 Remarks on the uniqueness for the three-dimensional Navier-Stokes system. Zbl 1110.35059 Marchand, Fabien; Paicu, Marius 2007 Anisotropic Navier-Stokes equation in critical spaces. Zbl 1110.35060 Paicu, Marius 2005 Periodic Navier-Stokes equation without viscosity in one direction. Zbl 1115.35101 Paicu, Marius 2005 Asymptotic study of rapidly rotating anisotropic fluids in the periodic case. Zbl 1085.76071 Paicu, Marius 2004 all top 5 #### Cited by 474 Authors 36 Zhang, Ping 24 Paicu, Marius 23 Wu, Jiahong 22 Ye, Zhuan 19 Zhai, Xiaoping 16 Zhang, Zhifei 15 Danchin, Raphaël 15 Zhang, Ting 13 Li, Yongsheng 13 Xu, Xiaojing 10 Abidi, Hammadi 10 Cao, Chongsheng 9 Fang, Daoyuan 8 Fan, Jishan 8 Gui, Guilong 8 Titi, Edriss Saleh 7 Chemin, Jean-Yves 7 Mucha, Piotr Bogusław 7 Xu, Xiang 7 Yan, Wei 6 De Anna, Francesco 6 Gallagher, Isabelle 6 Hmidi, Taoufik 6 Huang, Jingchi 6 Liu, Qiao 6 Liu, Yanlin 6 Miao, Changxing 6 Xue, Liutang 6 Yu, Haibo 6 Zarnescu, Arghir Dani 6 Zhang, Peixin 6 Zhong, Xin 5 Haspot, Boris 5 Jiu, Quansen 5 Kukavica, Igor 5 Qin, Yuming 5 Rousset, Frédéric 5 Tan, Zhong 5 Wang, Wei 5 Yin, Zhaoyang 5 Zheng, Xiaoxin 5 Zhou, Yi 4 Adhikari, Dhanapati 4 Bahouri, Hajer 4 Bian, Dongfen 4 Charve, Frédéric 4 Chen, Fei 4 Chen, Qionglei 4 Fanelli, Francesco 4 Gancedo, Francisco 4 Houamed, Haroune 4 Hu, Weiwei 4 Larios, Adam 4 Li, Fucai 4 Li, Jinkai 4 Lin, Fang Hua 4 Liu, Jitao 4 Ngo, Van-Sang 4 Rocca, Elisabetta 4 Scrobogna, Stefano 4 Vicol, Vlad C. 
4 Wu, Hao 4 Xu, Fuyi 4 Xu, Huan 4 Yang, Minghua 4 Ye, Zhuan 4 Yuan, Jia 4 Zerguine, Mohamed 4 Zhang, Jianlin 4 Zhao, Jihong 4 Zhou, Yong 3 Chikami, Noboru 3 Dai, Mimi 3 García-Juárez, Eduardo 3 Han, Bin 3 Liao, Xian 3 Lü, Boqiang 3 Ma, Liangliang 3 Masmoudi, Nader 3 Nakamura, Gen 3 Ogawa, Takayoshi 3 Ozawa, Tohru 3 Schonbek, Maria Elena 3 Shang, Haifeng 3 Su, Xing 3 Sun, Jinyi 3 Wang, Chao 3 Xu, Jiang 3 Yamazaki, Kazuo 3 Zhai, Cuili 3 Zhang, Qian 3 Zhang, Rongfang 3 Zhao, Kun 3 Zhou, Daoguo 3 Zhu, Mingxuan 3 Zhu, Ning 3 Ziane, Mohammed 2 Bedrossian, Jacob 2 Bessaih, Hakima 2 Bianchini, Roberta ...and 374 more Authors all top 5 #### Cited in 98 Serials 63 Journal of Differential Equations 31 Archive for Rational Mechanics and Analysis 24 Journal of Mathematical Analysis and Applications 20 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 20 Journal of Mathematical Fluid Mechanics 19 Discrete and Continuous Dynamical Systems 17 Journal of Functional Analysis 16 ZAMP. Zeitschrift für angewandte Mathematik und Physik 15 Mathematical Methods in the Applied Sciences 14 Journal of Mathematical Physics 13 Nonlinear Analysis. Real World Applications 11 Communications in Mathematical Physics 11 Acta Applicandae Mathematicae 10 Advances in Mathematics 10 SIAM Journal on Mathematical Analysis 8 Computers & Mathematics with Applications 8 Physica D 8 Journal de Mathématiques Pures et Appliquées. Neuvième Série 7 Applicable Analysis 7 Communications on Pure and Applied Mathematics 6 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 6 Communications on Pure and Applied Analysis 6 Analysis and Applications (Singapore) 6 Science China. Mathematics 6 Annals of PDE 5 Applied Mathematics Letters 5 Communications in Partial Differential Equations 4 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 4 Journal of Dynamics and Differential Equations 4 Journal of Nonlinear Science 4 Discrete and Continuous Dynamical Systems. Series B 4 Comptes Rendus. Mathématique. Académie des Sciences, Paris 4 Boundary Value Problems 3 Nonlinearity 3 Applied Mathematics and Computation 3 Chinese Annals of Mathematics. Series B 3 Revista Matemática Iberoamericana 3 Kinetic and Related Models 2 Annali di Matematica Pura ed Applicata. Serie Quarta 2 Mathematische Nachrichten 2 Proceedings of the American Mathematical Society 2 Zeitschrift für Analysis und ihre Anwendungen 2 Asymptotic Analysis 2 Calculus of Variations and Partial Differential Equations 2 Mathematical Physics, Analysis and Geometry 2 Philosophical Transactions of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 2 Communications in Contemporary Mathematics 2 Journal of Evolution Equations 2 Acta Mathematica Scientia. Series B. (English Edition) 2 Journal of the Institute of Mathematics of Jussieu 2 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 2 Discrete and Continuous Dynamical Systems. Series S 2 Journal de l’École Polytechnique – Mathématiques 2 Proceedings of the Royal Society of London. A. 
Mathematical, Physical and Engineering Sciences 1 Journal d’Analyse Mathématique 1 Annales de l’Institut Fourier 1 The Annals of Probability 1 Annales Polonici Mathematici 1 Publications Mathématiques 1 International Journal of Mathematics and Mathematical Sciences 1 Journal of Computational and Applied Mathematics 1 Mathematische Annalen 1 Numerische Mathematik 1 Osaka Journal of Mathematics 1 Pacific Journal of Mathematics 1 SIAM Journal on Control and Optimization 1 Transactions of the American Mathematical Society 1 Stochastic Analysis and Applications 1 Acta Mathematicae Applicatae Sinica. English Series 1 Journal of the American Mathematical Society 1 Journal of Scientific Computing 1 European Journal of Applied Mathematics 1 The Journal of Geometric Analysis 1 Applications of Mathematics 1 Geometric and Functional Analysis. GAFA 1 SIAM Journal on Applied Mathematics 1 SIAM Review 1 Applied Mathematics. Series B (English Edition) 1 NoDEA. Nonlinear Differential Equations and Applications 1 Opuscula Mathematica 1 Complexity 1 Discrete Dynamics in Nature and Society 1 Annals of Mathematics. Second Series 1 Acta Mathematica Sinica. English Series 1 Annales Henri Poincaré 1 Advanced Nonlinear Studies 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 Annali dell’Università di Ferrara. Sezione VII. Scienze Matematiche 1 Journal of Physics A: Mathematical and Theoretical 1 Analysis & PDE 1 Journal of Geometric Mechanics 1 Communications in Mathematics and Statistics 1 Evolution Equations and Control Theory 1 International Journal of Analysis 1 Journal of Function Spaces 1 International Journal of Applied and Computational Mathematics 1 Journal of Elliptic and Parabolic Equations 1 Tunisian Journal of Mathematics all top 5 #### Cited in 26 Fields 469 Partial differential equations (35-XX) 429 Fluid mechanics (76-XX) 47 Harmonic analysis on Euclidean spaces (42-XX) 22 Geophysics (86-XX) 12 Real functions (26-XX) 9 Classical thermodynamics, heat transfer (80-XX) 8 Statistical mechanics, structure of matter (82-XX) 7 Calculus of variations and optimal control; optimization (49-XX) 7 Probability theory and stochastic processes (60-XX) 7 Numerical analysis (65-XX) 6 Functional analysis (46-XX) 6 Biology and other natural sciences (92-XX) 5 Ordinary differential equations (34-XX) 5 Dynamical systems and ergodic theory (37-XX) 4 Functions of a complex variable (30-XX) 4 Global analysis, analysis on manifolds (58-XX) 3 Mechanics of deformable solids (74-XX) 3 Optics, electromagnetic theory (78-XX) 3 Systems theory; control (93-XX) 2 Operator theory (47-XX) 2 Astronomy and astrophysics (85-XX) 1 Topological groups, Lie groups (22-XX) 1 Potential theory (31-XX) 1 Abstract harmonic analysis (43-XX) 1 Integral equations (45-XX) 1 Differential geometry (53-XX)
2021-04-19 05:45:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5447962880134583, "perplexity": 5693.286246909882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038878326.67/warc/CC-MAIN-20210419045820-20210419075820-00438.warc.gz"}
http://www.numdam.org/item/ASNSP_2006_5_5_4_465_0/
The Cauchy problem for hyperbolic systems with Hölder continuous coefficients with respect to the time variable

Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Serie 5, Volume 5 (2006) no. 4, p. 465-482

We discuss the local existence and uniqueness of solutions of certain nonstrictly hyperbolic systems with Hölder continuous coefficients with respect to the time variable. We reduce the nonstrictly hyperbolic systems to parabolic ones and, by use of the Tanabe-Sobolevskii method and the Banach scale method, we construct a semi-group which gives a representation of the solution to the Cauchy problem.

Classification: 35L45, 35A08

@article{ASNSP_2006_5_5_4_465_0,
  author = {Kajitani, Kunihiko and Yuzawa, Yasuo},
  title = {The Cauchy problem for hyperbolic systems with H\"older continuous coefficients with respect to the time variable},
  journal = {Annali della Scuola Normale Superiore di Pisa - Classe di Scienze},
  publisher = {Scuola Normale Superiore, Pisa},
  volume = {Ser. 5, 5},
  number = {4},
  year = {2006},
  pages = {465-482},
  zbl = {1170.35474},
  mrnumber = {2297720},
  language = {en},
  url = {http://www.numdam.org/item/ASNSP_2006_5_5_4_465_0}
}

Kajitani, Kunihiko; Yuzawa, Yasuo. The Cauchy problem for hyperbolic systems with Hölder continuous coefficients with respect to the time variable. Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Serie 5, Volume 5 (2006) no. 4, pp. 465-482. http://www.numdam.org/item/ASNSP_2006_5_5_4_465_0/

[1] M. D. Bronshtein, Cauchy problem for hyperbolic operators with variable multiple characteristics (Russian), Trudy Moscow Math. 41 (1980), 83-99. | MR 611140 | Zbl 0468.35062
[2] F. Colombini, E. Jannelli and S. Spagnolo, Wellposedness in the Gevrey classes of the Cauchy problem for a non-strictly hyperbolic equation with coefficients depending on time, Ann. Scuola Norm. Sup. Pisa Cl. Sci. 10 (1983), 291-312. | Numdam | MR 728438 | Zbl 0543.35056
[3] P. D'Ancona, T. Kinoshita and S. Spagnolo, Weakly hyperbolic systems with Hölder continuous coefficients, J. Differential Equations 203 (2004), 64-81. | MR 2070386 | Zbl 1068.35065
[4] K. Kajitani, Local solution of Cauchy problem for nonlinear hyperbolic systems in Gevrey classes, Hokkaido Math. J. 23-3 (1983), 599-616. | MR 721386 | Zbl 0544.35063
[5] K. Kajitani, Cauchy problem for nonstrictly hyperbolic systems in Gevrey classes, J. Math. Kyoto. Univ. 12 (1983), 434-460. | Zbl 0544.35063
[6] K. Kajitani, The Cauchy Problem for Nonlinear Hyperbolic Systems, Bull. Sci. Math. 110 (1986), 3-48. | Zbl 0657.35082
[7] K. Kajitani, Wellposedness in Gevrey class of the Cauchy problem for hyperbolic operators, Bull. Sc. Math. 111 (1987), 415-438. | Zbl 0653.35051
[8] T. Nishitani, Sur les équations hyperboliques à coefficients hölderiens en $t$ et de classes de Gevrey en $x$, Bull. Sci. Math. 107 (1983), 113-138. | MR 704720 | Zbl 0536.35042
[9] Y. Ohya and S. Tarama, Le Problème de Cauchy à caractéristiques multiples dans la classe de Gevrey - coefficients hölderiens en t -, In: "Hyperbolic Equations and Related Topics", Proc. Taniguchi Internat. Symp. (1984), 273-302. | Zbl 0665.35045
[10] H. Tanabe, "Equations of Evolution", translated from the Japanese by N. Mugibayashi and H. Haneda. Monographs and Studies in Mathematics, Vol. 6, Pitman (Advanced Publishing Program), Boston, Mass.-London, 1979. | MR 533824 | Zbl 0417.35003
[11] Y. Yuzawa, The Cauchy problem for hyperbolic systems with Hölder continuous coefficients with respect to time, J. Differential Equations 219 (2005), 363-374. | MR 2183264 | Zbl 1087.35068
2019-12-11 13:00:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6626055836677551, "perplexity": 2427.0937703795184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00042.warc.gz"}
https://www.tutorialspoint.com/what-are-the-characteristics-of-lambda-expressions-in-java
# What are the characteristics of lambda expressions in Java?

Lambda expressions were introduced in Java 8 and facilitate functional programming. A lambda expression works only with a functional interface: an interface with exactly one abstract method. We cannot use a lambda expression with an interface that has more than one abstract method.

## Characteristics of Lambda Expression

• Optional Type Declaration − There is no need to declare the type of a parameter; the compiler infers it from the context.
• Optional Parenthesis around Parameter − There is no need to put a single parameter in parentheses. For multiple parameters, parentheses are required.
• Optional Curly Braces − There is no need to use curly braces in the expression body if the body contains a single statement.
• Optional Return Keyword − The compiler automatically returns the value if the body is a single expression. A body in curly braces requires an explicit return statement to return a value.

## Syntax

```
parameter -> expression body
(int a, int b) -> { return a + b; }
```

## Example

```java
@FunctionalInterface
interface TutorialsPoint {
   void message();
}

public class LambdaExpressionTest {
   public static void main(String args[]) {
      // Lambda expression implementing the single abstract method message()
      TutorialsPoint tp = () -> System.out.println("Welcome to TutorialsPoint");
      tp.message();
   }
}
```

## Output

Welcome to TutorialsPoint
2020-09-22 02:06:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2584764361381531, "perplexity": 2741.159308938913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202686.56/warc/CC-MAIN-20200922000730-20200922030730-00207.warc.gz"}
http://toomai.wordpress.com/category/topology/
# MATH with my KIDS

## An Evening of Math II

Tonight I gave the kids the choice of doing more partitions, or something else. They chose something else. In the book Math Tricks, Puzzles & Games by Raymond Blum I found this problem:

TUNNELS: Try to connect each rectangle with the triangle that has the same number. Lines cannot cross or go outside the diagram.

When I showed the problem to the kids, A was very upset and said that she didn't know how to do it because she had never learned. She threw a fit and started drawing a picture instead of working on it. B immediately started working on it. He worked for about 20 minutes, drawing lines, erasing, drawing more lines. He eventually got frustrated, saying that it was impossible. In spite of A's protests I brought her over next to B and had him explain what he had tried, and why it was impossible. B connected rectangle 1 to triangle 1, then connected rectangle 2 to triangle 2, but at that point rectangle 3 was completely cut off from triangle 3. Then A took a pencil and a copy of the puzzle and, without pausing, connected rectangle 1 to triangle 1 and rectangle 3 to triangle 3. Just as B was saying that she wouldn't be able to connect the last pair, she did!

Getting A to look at a problem is 95% of the battle. I reminded her that it was B's work on the problem that helped her to her solution.

## Math with Balloons

In demonstrating what a mathematical knot is, balloons are the best medium that I've found. They have enough stiffness to keep the knot from collapsing so that you can't see its structure (a string or rope would either be floppy and hard to work with, or you would have to pull the knot tight, which would make it hard to see its structure). They have enough give to allow knots to be tied in them. They are very visible, and the ends can be twisted together to complete the loop.

Here I am with not a knot, but a link: the Borromean rings. It has beautiful symmetries that have been recognized for centuries and is important to low dimensional topology (in fact, an early incarnation of my dissertation featured the Borromean rings).

## Knot Tying Schema

I've noticed that whenever my two-year-old wants to tie a knot, she takes the two strands that she's interested in and repeatedly twists them around each other. Of course this doesn't produce any kind of knot at all (or I should say it only produces the unknot). But this twisting appears to be the only basic move that she knows (and she learned it herself) for producing knots.

It made me wonder what the basic moves are that we use to tie knots. Every knot theorist knows that there are three basic moves for transforming one picture of a mathematical knot into another (possibly quite different looking) picture of the same knot. These moves are called the Reidemeister moves. Any two pictures of the same knot can be made to look like each other using just these moves (and "plane isotopies"). Similarly, it seems to me that mathematically you really only need two moves to tie any knot, or rather to go from a picture of a straight string to a picture of any knot. Here is an illustration of those two moves:

Of course after doing several of these moves, you will want to glue the two ends of the string together if you want to get a picture of a mathematical knot.

When people tie knots, it does seem like there is some finite set of moves they use, but the catalogue of moves seems to be bigger than the two moves I illustrated above.
For instance there are moves like "wrap one end around a loop" or "put one end through a hole" or "follow one strand through a whole series of moves." Of course each of these can be broken up into a sequence of the two basic moves, but it is not always useful to break things down into the simplest possible moves in practice. It makes me wonder what the catalogue of moves is that people use.

## More Experimental Topology and Experiments in Topology

I visited my friends Peter and Liz (I stayed a few nights) and came back with a laundry list of things to post about:

1) They have some really cool polyhedra and mathematical quilts, so sometime I am going to have to go over there with a camera and click some photos. Peter's latest quilt project is one tiled with spidrons, which is in the design phase now.

2) Peter was a little upset that I failed to credit him for pointing out to me originally that $7\times 3=21$ and $7$ expanded base $3$ is $21$ (see synchronicity), and asked if there were any other examples of this. Anyway, thanks, Peter, for pointing this out and inspiring so much recreational math.

3) I finally carried out that idea I had that I mentioned back in the post more experimental topology about making a five-pointed star. It took some trial and error (hey! that's what experimental topology is all about. Right?) but here is what I came up with: Start with ten strips of paper, with one end cut into points. The points should come to an angle of approximately $\frac{\pi}{5}$. Draw a line down the center of each and tape them together so that the tips are all touching: Now start taping opposite ends together, like so: When you have four of the five opposite pairs taped together, the last one needs one full twist. If you've been proceeding in a clockwise direction when taping opposite pairs together, you should make one full twist in the last pair by turning the strip closest to you in a counter-clockwise direction. Anyway, in the end you should get something like this: Now, cut along all of those center lines, and what do you get? A mess, but if you untangle the mess, you should get: a pair of stars that are linked! Topologically this is the Hopf link.

4) Peter also loaned me a couple of books: Experiments in Topology. I haven't read much yet, but it has some cool stuff, like if you have a strip of paper, one inch wide, what is the shortest Moebius strip that you could make? Also he lent me Goedel, Escher, Bach: an Eternal Golden Braid (gosh, that's a really small image of the book, but, oh well). I'm about a fifth of the way into this book. Excellent so far. Won the Pulitzer Prize! I'm learning lots about the works of Goedel, Escher and (believe it or not) Bach.

That's all for now, but soon I will have to blog about my visit to my kids' classes for career day, which was last week…. Stay tuned.

## More Experimental Topology

I was thinking that there might be a way to get a hexagon along the lines of the methods of the post Math with scissors. I told my kids about the idea and they were excited to do some math experiments. I started with three strips of paper. Taped them together like this: Then I wanted to tape the three ends up like this, but I knew there had to be some twisting of the strands involved. In the end we are going to cut each of the strands down the middle. So we experimented. I knew that at least two of the strands had to have some odd number of half twists in them to have any chance of getting a hexagon (can you see why?).
It took us several tries, but my son and I both came up with our own solutions for how to get a hexagon. Try it and see if you can get it. Here are our solutions:

My son's solution: Put a single half twist in each of the three strands, twisting two in one direction and the other one in the opposite direction.

My solution: Put a half twist in each of two of the strands, twisting them in opposite directions. Leave the third strand untwisted.

Of course, the experts will want to conjecture and prove necessary and sufficient conditions to get a nice flat hexagon. I also have an idea for making a five- (or more) pointed star along similar lines. I'll let you know what I come up with.

## Math with scissors

The other day I visited my son's class and did the following math demonstration. Take a strip of paper and draw a line down the middle. Our line is red. Tape one end to the other, but put a "half twist" in it, just like shown here. This is a Moebius strip. It only has one side to it. A pretty cool object in and of itself.

Now for some real fun. Take a pair of scissors and cut the Moebius strip in half along the red line, but before you do that, try to guess what you will end up with!!! I'm not going to show you. You have to try it yourself. Just make sure that you only cut along the red line!

There is more that we can do. Let's make a + out of paper with two red lines drawn as shown here. Now tape two opposite ends to each other with no twists. Like this. Tape the other two ends to each other, but put in a half twist. What you have here is actually a Klein bottle (whatever that is) with a hole in it. Now cut along the two red lines. Make sure not to make any other cuts except along the lines. Keep cutting… Keep cutting. In the end what do you get? (Make a guess first.) (!)

One more: Make a star shape just like this. Make sure that all of the strips are the same length and that the angles between the strips are the same. Draw three red lines, one down each of the strips. Here is what it should look like. Now tape two opposite ends together with no twist. Take another two ends… …and tape them, with no twist, just like this. Now for the tricky part. You have to do this part right to get nice shapes in the end. But, don't worry too much. If you do it "wrong" you'll just get something other than what we got. Flip your paper over. Tape the last two opposite ends with a full twist to the left. Just like shown here. Here is another view. Now cut along the red lines and see what you get. Bailey's class loved it.
2013-05-24 20:26:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5715845823287964, "perplexity": 723.6889575309043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705043997/warc/CC-MAIN-20130516115043-00038-ip-10-60-113-184.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/321276/lorentz-group-representations
# Lorentz Group Representations

Consider for example the familiar spin-1/2 representation of the $SU(2)$ group. This representation has dimension two, which is clear from a quantum mechanical perspective since we need to specify the two coefficients in $$|\psi\rangle=a_{+}|+\rangle+a_{-}|-\rangle$$ using the vector $(a_{+},a_{-})$. The relevant matrices are the Pauli matrices. Now consider the $(\frac{1}{2},\frac{1}{2})$ representation of the Lorentz group. This representation has dimension four (implying that we are dealing with four-component vectors), however the generators are still written in terms of the Pauli matrices and are thus $2\times 2$. Why is this ok?

The six generators are $S^{ab}\equiv i[\gamma^a,\gamma^b]/4$, where $(a,b)$ are the three boosts $(0,1), (0,2), (0,3)$, and the three spatial rotations $(1,2), (2,3), (3,1)$. The Dirac gamma matrices are $4$-by-$4$ matrices. It's true that you can write down the gammas in terms of two-by-two blocks of two-by-two Pauli sigmas, but that is still four-by-four matrices. The Lie algebra of $SO(3,1)$ is locally isomorphic to that of $SU(2)\times SU(2)$, which is what the $(1/2,1/2)$ representation refers to, but unless you are dealing with massless (Weyl) particles you need both $SU(2)$'s. We can take $$\gamma^0 = \begin{pmatrix}0&1\\ 1&0\end{pmatrix}, \quad \gamma^a= \begin{pmatrix}0&\sigma_a\\ -\sigma_a &0\end{pmatrix}.$$ We also have $$\gamma^5= \begin{pmatrix}1&0\\ 0&-1\end{pmatrix}.$$ Then setting $(\Sigma_1,\Sigma_2,\Sigma_3)=(S_{23},S_{31},S_{12})$, $$\Sigma_a= \frac 12 \begin{pmatrix}\sigma_a&0\\ 0&\sigma_a\end{pmatrix}$$ are the rotation generators, and $$K_a=\frac i2 \begin{pmatrix}\sigma_a&0\\ 0&-\sigma_a\end{pmatrix}$$ are the boost generators (I do not guarantee that the signs are correct). Note that the boost generators are not Hermitian. The non-compact Lorentz group $SO(1,3)$ has no finite dimensional unitary representation. My representation matrices are adapted to the decomposition of the algebra into the two mutually commuting $SU(2)$ subalgebras $\Sigma_a\pm iK_a$ of $SO(1,3)$. If I got my signs right, the matrices should obey the Lorentz algebra $$[\Sigma^a,\Sigma^b]= i \epsilon_{abc} \Sigma^c,\\ [\Sigma^a,K^b]= i \epsilon_{abc} K^c,\\ [K^a,K^b]= - i\epsilon_{abc} \Sigma^c.$$

• How do you know how to construct these generators given the 2x2 representation? – Watw Mar 25 '17 at 18:17
• You construct gamma matrices: I'll amend my original answer to describe. – mike stone Mar 25 '17 at 20:56
• I don't really think this explains how you combine the two $SU(2)$ algebras to get the Lorentz algebra... – Watw Mar 26 '17 at 10:11
• @Watw. Set $J^{(1)}_a = \Sigma_a+iK_a$ and $J^{(2)}_a= \Sigma_a-iK_a$, then $[J^{(1)}_a, J^{(1)}_b]=i\epsilon_{abc} J^{(1)}_c$. Same for $J^{(2)}_a$. And $[J^{(1)}_a, J^{(2)}_b]=0$. – mike stone Mar 26 '17 at 12:55
• @Mike Stone: Do you realize that the question was about the $D(\frac{1}{2},\frac{1}{2})$-representation, whereas you explained the $D(\frac{1}{2},0) \oplus D(0,\frac{1}{2})$-representation? – Frederic Thomas Dec 19 '17 at 13:44
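As a quick numerical cross-check of the commutation relations quoted in the answer, here is a short Python sketch (my addition, not part of the original thread); the block structure of $\Sigma_a$ and $K_a$ follows the matrices displayed above.

```python
# Numerically verify the so(3,1) relations for the block-diagonal
# generators Sigma_a = (1/2) diag(sigma_a, sigma_a) and
# K_a = (i/2) diag(sigma_a, -sigma_a) given in the answer.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)

Sigma = [0.5 * np.block([[s, Z], [Z, s]]) for s in sigma]    # rotations
K     = [0.5j * np.block([[s, Z], [Z, -s]]) for s in sigma]  # boosts

def comm(A, B):
    return A @ B - B @ A

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

for a in range(3):
    for b in range(3):
        S_c = sum(eps[a, b, c] * Sigma[c] for c in range(3))
        K_c = sum(eps[a, b, c] * K[c] for c in range(3))
        assert np.allclose(comm(Sigma[a], Sigma[b]), 1j * S_c)
        assert np.allclose(comm(Sigma[a], K[b]), 1j * K_c)
        assert np.allclose(comm(K[a], K[b]), -1j * S_c)
print("so(3,1) commutation relations verified")
```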
2019-08-19 19:31:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9421681761741638, "perplexity": 364.6710255416353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314904.26/warc/CC-MAIN-20190819180710-20190819202710-00339.warc.gz"}
http://openstudy.com/updates/4fb1dfb0e4b055653427e23a
1. anonymous
2. anonymous: To find the length of a vector, simply take the square root of the sum of the squares: $\sqrt{4+16+25}$. This is $\sqrt{45}$, or $3\sqrt{5}$. Does that make sense to you?
3. anonymous: So it would be $3\sqrt{5}$, right?
4. anonymous: Length of a vector: $\sqrt{x^2 + y^2 + z^2}$.
5. anonymous: Yes, $3\sqrt{5}$.
6. anonymous: Thank you!!
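A one-line numeric check (my addition; the components $(2, 4, 5)$ are inferred from the sum $4+16+25$ in the replies):

```python
# Length of the vector (2, 4, 5): sqrt(4 + 16 + 25) = sqrt(45) = 3*sqrt(5).
import math

v = (2.0, 4.0, 5.0)                          # assumed components
length = math.sqrt(sum(c * c for c in v))
assert math.isclose(length, 3 * math.sqrt(5))
print(length)                                # 6.7082...
```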
2016-10-01 03:17:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.986656665802002, "perplexity": 1638.8908037904928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662507.79/warc/CC-MAIN-20160924173742-00221-ip-10-143-35-109.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/205976/is-the-space-of-immersions-of-sn-into-mathbb-rn1-simply-connected
# Is the space of immersions of $S^n$ into $\mathbb R^{n+1}$ simply connected?

The title is the question. Sorry, this isn't quite research level. I imagine the answer is well-known, just not to me. Thanks for any help!

• some tag with "topology" would be natural (I'm not sure whether this is considered as algebraic or geometric topology, or both) – YCor May 7 '15 at 22:28
• btw there was no comment about the meaning of the question: does it mean continuous immersions modulo continuous isotopy, smooth immersions modulo smooth isotopy, and are these two points of view equivalent in any dimension? – YCor May 7 '15 at 22:30
• @YCor, I think "immersion" is inherently a smooth notion, so it means smooth immersions modulo smooth isotopy. But the Smale-Hirsch theorem turns it into a topological problem about continuous maps. – Dylan Thurston May 8 '15 at 0:40
• @DylanThurston: an immersion can be characterized as a map between manifolds $f:X\to Y$ such that for every $x\in X$ there are neighborhoods $V,V'$ of $x$ and $f(x)$ such that $f(V)\subset V'$ and the restriction to $V$ can be described in coordinates as the inclusion of a subspace of $\mathbf{R}^d$ into $\mathbf{R}^n$. This makes sense in the continuous setting. But OK, I understand there's consensus that the setting is the smooth setting. – YCor May 8 '15 at 8:01
• @YCor: this is smooth immersions up to 1-parameter families of immersions. Sometimes this is called "regular isotopy" to distinguish it from "isotopy". We're talking about the homotopy-type of the space of immersions. Immersions up to plain isotopy have an enormous number of components, primarily indexed by the image (as a stratified space). – Ryan Budney May 8 '15 at 15:13

I'm not a professional topologist by any means, but let me give this a shot. There's some discussion in the first few lectures of John Francis from this course. Please correct me if I've made mistakes below...

By Smale and Hirsch, the space of immersions of $S^n$ into $\mathbb{R}^{n+1}$ is homotopy equivalent to the space of unbased maps from $S^n$ into $V_{n}(\mathbb{R}^{n+1})$, $Map(S^n,V_n(\mathbb{R}^{n+1}))$. Here $V_n(\mathbb{R}^{n+1})$ is the Stiefel manifold of $n$-frames in $(n+1)$-space. $V_{n}(\mathbb{R}^{n+1})$ is homeomorphic to $SO(n+1)$. See Ryan's answer for some more details.

Your question is then basically equivalent to characterizing $\pi_1(Map(S^{n},SO(n+1)))$, the fundamental group of that mapping space. As B.S. points out in his answer, since $SO(n+1)$ is a group, $Map(S^n,SO(n+1))$ is homotopy equivalent to $SO(n+1)\times\Omega^n(SO(n+1))$, therefore the above fundamental group is $\pi_1(SO(n+1))\times\pi_1(\Omega^n(SO(n+1)))\cong\pi_1(SO(n+1))\times\pi_{n+1}(SO(n+1))$.

Though B.S. has pointed out that we can see that it's always nontrivial just from the first factor, we can in fact compute the group from known results on homotopy groups of $SO(n+1)$ (see e.g. this table compiled from the literature by Klaus Johannson). The result for all $n$ is given in the following table:

| $n$ | $\pi_1$ of the space of immersions |
|---|---|
| 1 | $\mathbb{Z}$ |
| 2 | $\mathbb{Z}_2+\mathbb{Z}$ |
| 3 | $\mathbb{Z}_2+\mathbb{Z}_2+\mathbb{Z}_2$ |
| 4 | $\mathbb{Z}_2+\mathbb{Z}_2$ |
| 5 | $\mathbb{Z}_2$ |
| 6 | $\mathbb{Z}_2+\mathbb{Z}$ |
| 8s-1 | $\mathbb{Z}_2+\mathbb{Z}_2+\mathbb{Z}_2+\mathbb{Z}_2$ |
| 8s | $\mathbb{Z}_2+\mathbb{Z}_2+\mathbb{Z}_2$ |
| 8s+1 | $\mathbb{Z}_2+\mathbb{Z}+\mathbb{Z}_2$ |
| 8s+2 | $\mathbb{Z}_2+\mathbb{Z}_2$ |
| 8s+3 | $\mathbb{Z}_2+\mathbb{Z}_2+\mathbb{Z}_2$ |
| 8s+4 | $\mathbb{Z}_2+\mathbb{Z}_2$ |
| 8s+5 | $\mathbb{Z}_2+\mathbb{Z}_4$ |
| 8s+6 | $\mathbb{Z}_2+\mathbb{Z}$ |

In the last rows, $s$ is any integer greater than or equal to 1, and the plus signs denote direct sum.
Apologies for the ugly formatting of the table.

• I think that is not what Hirsch-Smale says here. Hirsch-Smale says that the space of immersions is homotopy equivalent to the space of sections of the fiber bundle over $S^n$ with fiber $V_n(\mathbb{R}^{n+1})$ associated to the tangent bundle, which is trivializable only for $n = 1, 3, 7$. You also need to be more careful about the distinction between spaces of based vs. unbased maps; e.g. based maps from $S^1$ is the based loop space but free maps from $S^1$ is the free loop space, and these have different homotopy groups in general. – Qiaochu Yuan May 7 '15 at 18:10
• @Qiaochu: literally you're right but there is a reduction in this particular case. Smale-Hirsch says immersions (via the derivative) are homotopy-equivalent to the bundle monomorphisms $TS^n \to T \mathbb R^{n+1}$, but using the almost-trivializability of $TS^n$ you can check this mapping space has the same homotopy-type as $\Omega^n SO_{n+1}$. – Ryan Budney May 7 '15 at 18:14
• @Ryan: huh. Okay, I'm willing to believe that it has the same homotopy type as the space of maps from $S^n$ to $SO(n+1)$, but surely this doesn't in turn have the same homotopy type as the space of based maps...? – Qiaochu Yuan May 7 '15 at 18:16
• Oh, yes, you're right there. It's the unbased loop space. The based loop space would be immersions with a fixed behaviour at a point. I'll supply the details in a short partial-answer. – Ryan Budney May 7 '15 at 18:17
• I think there's a fibration $\Omega^n(SO(n+1))\rightarrow X \rightarrow SO(n+1)$, where $X$ is the space of unbased maps from $S^n$ to $SO(n+1)$. So you can look at the long exact sequence attached to this fibration to try to compute $\pi_1(X)$. Since $\pi_2(SO(n+1))$ vanishes, this only has five nontrivial terms: $\star \rightarrow \pi_{n+1}(SO(n+1)) \rightarrow \pi_1(X) \rightarrow \mathbb{Z}_2 \rightarrow \pi_n(SO(n+1)) \rightarrow \pi_0(X) \rightarrow \star$. By Kervaire's table for $\pi_{n+1}(SO(n+1))$ you basically have the answer up to knowing how $\pi_1$ acts on $\pi_n$ for $SO(n+1)$ – Noah Snyder May 7 '15 at 20:12

Your space is never simply connected (for $n\geq 1$). As already answered, it is (weakly) homotopy equivalent by the Smale-Hirsch h-principle to the space of unbased maps from $S^n$ to $SO(n+1)$, which is itself $SO(n+1)\times \Omega^n SO(n+1)$ ($\Omega^n$ = based maps from $S^n$). So $\pi_1$ is at least $\pi_1(SO(n+1))$ (and more in general). In fact it is not even connected as soon as $\pi_n SO(n+1)$ isn't trivial, which occurs for $n=1,3,4,5$ and many more (maybe all except $n=2$?). But it is connected for $n=2$, which awarded its celebrity to Smale (sphere eversion), even if his advisor Raoul Bott didn't believe it at first.

EDIT: according to j.c., $\pi_n SO(n+1)$ is trivial only for $n=2,6$. Quite an interesting fact!

• Your answer shows that for any choice of base-point there is a canonical surjective homomorphism from this fundamental group to the cyclic group of order two (induced by the projection to $SO(n+1)$). This should correspond to a natural 2-fold covering (of the space of immersions). What is it? – YCor May 7 '15 at 22:26
• Could you say a little more about why the space of unbased maps from $S^n$ to $SO(n+1)$ is the same as $SO(n+1)\times\Omega^nSO(n+1)$? By the way, $\pi_n SO(n+1)$ is given in the table cited in my answer; it's nontrivial except for $n=2$ and $n=6$. – j.c.
May 7 '15 at 22:31
• @j.c.: Since $SO(n+1)$ is a group, the space of maps from $S^n$ to $SO(n+1)$ is the same as the product of $SO(n+1)$ (image of the base point of $S^n$) and $\Omega^n SO(n+1)$ (based maps $\infty\mapsto I$). By the way, I didn't know of the case $n=6$. Thanks! – BS. May 7 '15 at 23:26

Let me just fill in the gap in j.c.'s exposition. Smale-Hirsch states that the derivative map from the space of immersions $Imm(S^n, \mathbb R^{n+1})$ to the space of bundle monomorphisms $Mono(TS^n, T\mathbb R^{n+1})$ is a homotopy-equivalence. There is a cute observation that allows you to nail down the space of fibrewise one-to-one maps $TS^n \to T\mathbb R^{n+1}$. Notice that $TS^n$ is stably trivial, i.e. $TS^n \oplus \epsilon^1 = S^n \times \mathbb R^{n+1}$, where $\epsilon^1$ is a line bundle over $S^n$. So the idea is to extend any bundle monomorphism to an orientation-preserving bundle monomorphism $S^n \times \mathbb R^{n+1} \to T \mathbb R^{n+1}$. You can do this continuously, over the entire space, using a cross-product type construction. This is a homotopy-equivalence between $Mono(TS^n, T\mathbb R^{n+1})$ and the $n$-fold free loop space of $SO_{n+1}$.

• The space of unbased maps from $S^n$ to a space is not the $n$-fold free loop space; that would be the space of unbased maps from $T^n$. – Qiaochu Yuan May 7 '15 at 18:45
• Qiaochu you're too much a grad student. That's needlessly pedantic. – Ryan Budney May 7 '15 at 19:38
• @QiaochuYuan If it helps, I suspect Ryan's comment lies in the space of unbased critiques from Professors to Grad Students. – Vidit Nanda May 7 '15 at 21:11
• Unbased, only up to a controlled homotopy. – Ryan Budney May 7 '15 at 22:05
• @RyanBudney Observe: the mapping space $\text{map}(S^n,G)$ of a Lie group $G$ splits as a product $\Omega^n G \times G$. – John Klein May 8 '15 at 2:02

For $n=1$, the space $Imm(S^1,\mathbb R^2)$ has $\mathbb Z$-many connected components, described by the rotation index. In each case the fundamental group is $\mathbb Z$. See Thm 2.10 of here for the components with rotation index $\ne 0$, and see this paper for rotation index 0.
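For the $n=1$ case mentioned in the last answer, the rotation index is easy to compute numerically. The following Python sketch (my addition, not from the thread) approximates the turning number of a sampled closed curve as the total winding of its tangent vector:

```python
# Rotation (turning) number of a closed plane curve, computed as the net
# change of the tangent angle divided by 2*pi.
import numpy as np

def rotation_index(x, y):
    dx, dy = np.gradient(x), np.gradient(y)   # tangent components
    theta = np.unwrap(np.arctan2(dy, dx))     # continuous tangent angle
    return (theta[-1] - theta[0]) / (2 * np.pi)

t = np.linspace(0.0, 2.0 * np.pi, 4001)
circle  = (np.cos(t), np.sin(t))              # immersed circle, index 1
figure8 = (np.sin(2 * t), np.sin(t))          # figure-eight, index 0
print(round(rotation_index(*circle)), round(rotation_index(*figure8)))
```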
2021-04-20 20:15:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8603425025939941, "perplexity": 435.8787926398734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039490226.78/warc/CC-MAIN-20210420183658-20210420213658-00541.warc.gz"}
https://mikesmathpage.wordpress.com/2017/08/26/
# A project for kids inspired by Nassim Taleb and Alexander Bogomolny

I woke up yesterday morning to see this problem posted on twitter by Alexander Bogomolny:

About two months ago we did a fun project inspired by a different problem Bogomolny posted: Working through an Alexander Bogomolny probability problem with kids. It seemed as though this one could be just as fun. I started by introducing the problem and then proposing that we explore a simplified (2d) version. I was excited to hear that the boys had some interesting ideas about the complicated problem.

Next we went down to the living room to explore the easier problem. The 2d version, $|x| + |y| \leq 1$, is an interesting way to talk about both absolute value and lines with kids.

Next we returned to the computer to view two of Nassim Taleb's ideas about the problem. I don't know why the tweets aren't embedding properly, so here are the screen shots of the two tweets we looked at in this video. They can be accessed via Alexander Bogomolny's tweet above (which is embedding just fine . . . .)

The first tweet reminded the boys of a different (and super fun) project about hypercubes inspired by a Kelsey Houston-Edwards video that we did over the summer: One more look at the Hypercube. The connection between these two projects is actually pretty interesting and maybe worth an entire project all by itself.

Next we returned to the living room and made a rhombic dodecahedron out of our zometool set. Having the zometool version helped the boys see the square in the middle of the shape that they were having trouble seeing on the screen. Seeing that square still proved to be tough for my younger son, but he did eventually see it. After we identified the middle square I had the boys show that there is also a cube hiding inside of the shape, and that this cube allows you to see surprisingly easily how to calculate the volume of a rhombic dodecahedron.

Finally, we wrapped up by using some 3d printed rhombic dodecahedrons to show that they tile 3d Euclidean space (sorry that this video is out of focus).

Definitely a fun project. I love showing the boys fun connections between algebra and geometry. It is also always tremendously satisfying to find really difficult problems that can be made accessible to kids. Thanks to Alexander Bogomolny and Nassim Taleb for the inspiration for this project.
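For readers who want to experiment further, here is a tiny Monte Carlo sketch (my addition, not part of the original post) estimating the area of the simplified 2d region $|x| + |y| \leq 1$ from the project; the exact answer is 2:

```python
# Monte Carlo estimate of the area of the "diamond" |x| + |y| <= 1,
# sampled inside the bounding square [-1, 1] x [-1, 1] of area 4.
import random

random.seed(0)
n = 100_000
hits = sum(1 for _ in range(n)
           if abs(random.uniform(-1, 1)) + abs(random.uniform(-1, 1)) <= 1)
print(4 * hits / n)   # roughly 2.0
```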
2023-03-28 06:26:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3917836844921112, "perplexity": 1017.8447886726038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00354.warc.gz"}
https://www.dcode.fr/equation-addition
Tool to solve an addition equation with one or more unknowns: a specialized equation solver for additions with one or more variables. Tag(s): Symbolic Computation

Solving domain set: real numbers (fractions, decimals, etc.) or integers.

How to solve an addition with dCode?

The addition solving uses the equation solver available on dCode. This is a special form of equation involving the + addition operator.

Example: Equation with 1 variable: $1+x=2 \iff x=1$

Example: Equation with 2 variables: $x+y=0 \iff x=-y$

Indicate the unknown with a letter (generally x). The solver will give the result of the addition as an integer if one exists, as a fraction, or in numerical form (a number with decimal places). For missing-number calculations like 1_2+4_=1_5 with blanks, use the missing digits calculation solver.

How to solve a system with multiple additions?

Indicate several lines of addition with the same variable(s). An alternative is to separate them with the AND logical operator: &&.

How to solve other types of equations?

Equations involving subtraction, multiplication, division, etc. can be solved via the dCode equation solver.
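The same kinds of additions can be solved offline with a symbolic library. Here is a hedged sketch (this is not dCode's actual engine, just an analogous computation with SymPy):

```python
# Solving additions with unknowns, mirroring the examples above.
from sympy import symbols, Eq, solve

x, y = symbols('x y')

print(solve(Eq(1 + x, 2), x))    # [1]   : 1 + x = 2  <=>  x = 1
print(solve(Eq(x + y, 0), y))    # [-x]  : x + y = 0  <=>  y = -x

# A small system of additions, analogous to joining lines with &&:
print(solve([Eq(x + y, 3), Eq(x + 2, 4)], [x, y]))   # {x: 2, y: 1}
```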
2021-07-27 08:36:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23249471187591553, "perplexity": 8076.3024644483585}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00718.warc.gz"}
http://www.scholarpedia.org/article/Talk:Novikov-Shifman-Vainshtein-Zakharov_beta_function
# Talk:Novikov-Shifman-Vainshtein-Zakharov beta function

## Reviewer B

I think that this "living review" would be a nice opportunity for a definitive discussion of the origin of the denominator of the NSVZ beta function, which gave rise to a lot of literature and debate. Moreover, related to this issue, I believe a remark on the relation between the "holomorphic" and "canonical" gauge coupling constants would be useful; this relation is the origin of the well known puzzle concerning the absence of evolution of the "canonical" gauge coupling constant below a certain scale. In this respect, indeed, the pole of the NSVZ beta function seems to be just a sign of the failure of $\alpha$ as a coupling constant at low energies. Finally, the Author uses both $\alpha$ and $g$ for the gauge coupling constant. Does he refer to the same coupling constant?

1) Referee A made two (almost identical) comments in the text. I do not think these comments are relevant, since referee A seemingly does not realize that at this point I discuss the zero modes in the instanton background, and only matter fermions (not bosons!) have these zero modes. Matter spin-zero fields have no zero modes in the instanton. To make this aspect absolutely clear I added "in the instanton background" in the text, and erased the comments of referee A.

2) As for referee B, I added the definition of $\alpha$ in terms of $g^2$. It seems to me that his other proposals go way beyond the scope of an encyclopedic article. He suggests a thorough discussion of the denominator of the NSVZ beta function since, allegedly, "the question is not yet settled in the literature". First, I do not think it is unsettled; second, Scholarpedia is definitely NOT an appropriate forum for discussing and debating nuances. The distinction between the canonical and holomorphic constants is mentioned at the end of the section "History and theoretical basis". I believe that the reference given there is sufficient for an encyclopedic article. After all, this is NOT a comprehensive review. The goals of the former and the latter are different.

### Reviewer B reply to Author's point 2 above

Contrary to the Author's opinion, I believe that an encyclopedic article, more than a regular journal article, should give as comprehensive a review as possible, which is definitely not the case here. But three times in his reply the Author explicitly says that Scholarpedia does not merit an inclusive study, so he is at least consistent in this respect. My opinion is that the Author's contribution does not fit the Scholarpedia "Aims and policy" which I read at http://www.scholarpedia.org/article/Scholarpedia#Aims_and_policy. In particular it does not appear to me a "useful encyclopedic reference for scholars of different levels".
2019-10-17 06:42:36
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8166040778160095, "perplexity": 761.9627792844939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672723.50/warc/CC-MAIN-20191017045957-20191017073457-00349.warc.gz"}
https://gateoverflow.in/tag/knowledge-representation
# Recent questions tagged knowledge-representation

1. Consider the results of a medical experiment that aims to predict whether someone is going to develop myopia based on some physical measurements and heredity. In this case, the input dataset consists of the person's medical characteristics and the target variable is ... develop myopia and $0$ for those who aren't. This can be best classified as: Regression; Decision Tree; Clustering; Association Rules

2.

3. '$R$ is a robot of $M$' means $R$ can perform some of the tasks that otherwise $M$ would do, and $R$ is unable to do anything else. Which of the following is the most appropriate representation to model this situation? None of these

4. Match the following:
LIST-I: a. Script; b. Conceptual Dependencies; c. Frames
LIST-II: i. Directed graph with labelled nodes for graphical representation of knowledge; ii. Knowledge about objects and events is stored in record-like structures consisting of slots and slot values; iii. Primitive concepts and ... roles, props and scenes
Codes (a, b, c, d): (iv, ii, i, iii); (iv, iii, ii, i); (ii, iii, iv, i); (i, iii, iv, ii)

5. High-level knowledge which relates to the use of sentences in different contexts and how the context affects the meaning of the sentences? Morphological; Syntactic; Semantic; Pragmatic

6. Which of the following is a knowledge representation technique used to represent knowledge about stereotyped situations? Semantic network; Frames; Scripts; Conceptual Dependency

7. Consider a fuzzy set Old as defined below: Old $=\left\{(20, 0.1), (30, 0.2), (40, 0.4), (50, 0.6), (60, 0.8), (70, 1), (80, 1)\right\}$. Then the alpha-cut for alpha $= 0.4$ for the set Old will be: $\left\{(40, 0.4)\right\}$; $\left\{50, 60, 70, 80\right\}$; $\left\{(20, 0.1), (30, 0.2)\right\}$; $\left\{(20, 0), (30, 0), (40, 1), (50, 1), (60, 1), (70, 1), (80, 1)\right\}$
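As an illustration of the alpha-cut in the last question, here is a small Python sketch (my addition). Note the convention issue: the weak alpha-cut uses $\mu(x) \ge \alpha$ and would include 40, while the strong cut uses strict inequality and yields the listed option $\{50, 60, 70, 80\}$:

```python
# Weak vs. strong alpha-cut of the fuzzy set Old at alpha = 0.4.
old = {20: 0.1, 30: 0.2, 40: 0.4, 50: 0.6, 60: 0.8, 70: 1.0, 80: 1.0}

def alpha_cut(fuzzy, alpha, strong=False):
    keep = (lambda mu: mu > alpha) if strong else (lambda mu: mu >= alpha)
    return sorted(x for x, mu in fuzzy.items() if keep(mu))

print(alpha_cut(old, 0.4))               # [40, 50, 60, 70, 80]
print(alpha_cut(old, 0.4, strong=True))  # [50, 60, 70, 80]
```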
2020-08-12 08:41:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23093321919441223, "perplexity": 1913.652183464795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738888.13/warc/CC-MAIN-20200812083025-20200812113025-00298.warc.gz"}
https://www.physicsforums.com/threads/a-question-of-weight.841680/
# A Question of Weight

1. Nov 6, 2015

### Dryson1

It's the age-old question: which weighs more, a ton of Styrofoam balls 1/4" in diameter or a ton of steel balls 1/4" in diameter? Of course the answer is that both weigh the same. Now, seeing as both weigh the same, which would cause more harm to you if you stood underneath them and let them drop on you from a height of 20 feet in the air? The steel balls, of course, because the Styrofoam balls falling from 20 feet would be influenced by the air, causing some of the Styrofoam balls to not land on you. So is the age-old answer, that a ton of Styrofoam balls is the same weight as a ton of steel balls, still a valid answer?

2. Nov 6, 2015

### Buzz Bloom

Hi Dryson: I don't get why you think the added context of falling would possibly affect the answer to the question. Can you elaborate on that? Do you have in mind that the concept of "weight" changes when an object falls? BTW, even if you were on the moon (in a space suit), the Styrofoam would do less damage even though the momentum when each of the balls hits your head would be the same. Regards, Buzz

3. Nov 6, 2015

### Staff: Mentor

The age-old question as I heard it was "Which is heavier, a pound of silver or a pound of lead?" The reason is that silver, gold, and precious stones are measured in Troy units (troy lb and troy oz), while other materials are measured in avoirdupois units. See https://en.wikipedia.org/wiki/Troy_weight 1 troy pound $\approx$ 373.24 g. 1 avoirdupois pound $\approx$ 454 g.

4. Nov 6, 2015

### HallsofIvy

Staff Emeritus

I doubt that "which weighs more, a ton of Styrofoam balls 1/4" in diameter or a ton of steel balls 1/4" in diameter?" really is an "age old" question, since Styrofoam is not itself "age old"! I don't see what the fact that the steel balls hurt more than the Styrofoam balls has to do with the question of weight, so I don't understand your question.
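To put a rough number on why the steel balls hurt more, here is a back-of-envelope Python sketch (my addition; the densities and drag coefficient are assumed textbook values, not from the thread) comparing terminal velocities of the two kinds of 1/4" balls:

```python
# Terminal velocity v_t = sqrt(2*m*g / (rho_air * Cd * A)) for a sphere,
# assuming Cd ~ 0.47, air density 1.225 kg/m^3, and rough densities of
# steel (7850 kg/m^3) and Styrofoam (50 kg/m^3).
import math

g, rho_air, Cd = 9.81, 1.225, 0.47
r = (0.25 * 0.0254) / 2                 # 1/4-inch diameter -> radius in metres
A = math.pi * r ** 2                    # cross-sectional area
V = (4.0 / 3.0) * math.pi * r ** 3      # volume

for name, rho in (("steel", 7850.0), ("Styrofoam", 50.0)):
    m = rho * V
    v_t = math.sqrt(2 * m * g / (rho_air * Cd * A))
    print(f"{name}: terminal velocity ~ {v_t:.1f} m/s")
# Steel comes out roughly an order of magnitude faster, which is why air
# matters for the Styrofoam balls even though a ton of each weighs the same.
```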
2017-08-16 23:52:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5783100724220276, "perplexity": 1753.7245357149689}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102757.45/warc/CC-MAIN-20170816231829-20170817011829-00283.warc.gz"}
https://mathoverflow.net/questions/300704/the-quotient-of-a-superspecial-abelian-surface-by-the-involution
# The quotient of a superspecial abelian surface by the involution

Let $E_i\!: y_i^2 = f(x_i)$ be two copies of a supersingular elliptic curve over a field of odd characteristic. Consider the involution $$i\!: E_1\times E_2 \to E_1\times E_2,\qquad (x_1, y_1, x_2, y_2) \mapsto (x_2, -y_2, x_1, -y_1)$$ and the quotient $S := E_1\times E_2/i$. Is it a K3 surface? What is (are) its defining equation(s)?
2019-04-23 06:50:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9447041153907776, "perplexity": 261.5165139340599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578593360.66/warc/CC-MAIN-20190423054942-20190423080942-00378.warc.gz"}
https://eprint.iacr.org/2019/195
## Cryptology ePrint Archive: Report 2019/195

Algorithms for CRT-variant of Approximate Greatest Common Divisor Problem

Jung Hee Cheon and Wonhee Cho and Minki Hhan and Minsik Kang and Jiseung Kim and Changmin Lee

Abstract: The approximate greatest common divisor problem (ACD) and its variants have been used to construct many cryptographic primitives. In particular, variants of the ACD problem based on the Chinese remainder theorem (CRT) are exploited in the constructions of a batch fully homomorphic encryption to encrypt multiple messages in one ciphertext. Despite the utility of the CRT-variant scheme, the algorithms for solving its security foundation have not been studied as well as those for the original ACD based scheme. In this paper, we propose two algorithms for solving the CCK-ACD problem, which is used to construct a batch fully homomorphic encryption over the integers. To achieve this goal, we revisit the orthogonal lattice attack and the simultaneous Diophantine approximation algorithm. Both algorithms take the same time complexity $2^{\tilde{O}(\frac{\gamma}{(\eta-\rho)^2})}$ up to a polynomial factor to solve the CCK-ACD problem for bit size of samples $\gamma$, secret primes $\eta$, and error bound $\rho$. Compared to Chen and Nguyen's algorithm in Eurocrypt '12, which takes $\tilde{O}(2^{\rho/2})$ complexity, our algorithms give the first parameter condition related to the sizes of $\eta$ and $\gamma$. We also report experimental results for our attack upon several parameters. From the results, we can see that our algorithms work well in both theoretical and experimental terms.

Category / Keywords: CCK-ACD; Lattice; orthogonal lattice attack; SDA

Date: received 19 Feb 2019, last revised 26 Feb 2019

Contact author: changmin lee at ens-lyon fr, tory154@snu ac kr, wony0404@snu ac kr, hhan_@snu ac kr

Available format(s): PDF | BibTeX Citation

Short URL: ia.cr/2019/195

[ Cryptology ePrint archive ]
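To make the problem statement concrete, here is a toy Python sketch (my construction; the parameter sizes are tiny and purely illustrative, nowhere near cryptographic, and the sample form is simplified relative to the actual CCK-ACD definition) generating CRT-style ACD samples that reduce to small errors modulo every secret prime:

```python
# Toy CCK-ACD-style samples: each x satisfies x mod p_i = r_i with small r_i,
# built via the Chinese remainder theorem over secret primes p_1..p_k.
import random
from sympy import randprime
from sympy.ntheory.modular import crt

random.seed(1)
eta, rho, k = 16, 4, 3                 # toy bit sizes: primes, errors, #primes

p = []                                 # k distinct secret primes
while len(p) < k:
    q = randprime(2 ** (eta - 1), 2 ** eta)
    if q not in p:
        p.append(q)

def sample():
    r = [random.randrange(0, 2 ** rho) for _ in range(k)]   # small errors
    x, _ = crt(p, r)                   # CRT-combine the residues
    return int(x), r

x, r = sample()
print([x % pi for pi in p])            # recovers the small errors r_i
print(r)                               # matches the list above
```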
2019-07-19 22:46:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8147209882736206, "perplexity": 2278.2419180634192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526386.37/warc/CC-MAIN-20190719223744-20190720005744-00048.warc.gz"}
http://math.stackexchange.com/questions/233246/periodic-orbits
# Periodic orbits

Let $x\in\mathbb R$ be a periodic point of length 2 of the recursion $x_{n+1}=f(x_n)$. My book about dynamical systems says that this recursion now has a fixed point, because a periodic point of length 2 exists. Could you help me prove this statement?

- What does length 2 mean? – Amr Nov 9 '12 at 0:49
- Does it mean that there exists $n$ such that $f(f(x_n))=x_n$? – Amr Nov 9 '12 at 0:50
- Yes, exactly. $f(f(x_2))=x_2$ – Montaigne Nov 9 '12 at 0:51

## 1 Answer

In general, if $f$ is continuous and there exist an integer $k>1$ and a real number $a$ such that $f^k(a)=a$, then there exists a real number $r$ such that $f(r)=r$.

Proof: Let $g(x)=f(x)-x$. Since the sum telescopes, $g(a)+g(f(a))+g(f^2(a))+\cdots+g(f^{k-1}(a))=f^k(a)-a=0$; therefore either one of the numbers $g(a),g(f(a)),g(f^2(a)),\ldots,g(f^{k-1}(a))$ is zero (in this case we are done) or one of these numbers is positive and another is negative. Assume WLOG that the second case holds. Let $g(f^i(a))g(f^j(a))<0$ (meaning that they have different signs). Since $g$ has a sign change in the interval $[\min(f^i(a),f^j(a)),\max(f^i(a),f^j(a))]$, by the intermediate value theorem $g$ has a root there.
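The argument in the answer is constructive enough to run numerically. Here is a Python sketch (my addition) using the logistic map $f(x) = a x (1-x)$ with $a = 3.2$, which has a 2-cycle; bisection on $g(x) = f(x) - x$ between the two cycle points lands on the fixed point $1 - 1/a = 0.6875$:

```python
# Bisection on g(x) = f(x) - x between the points of a 2-cycle of the
# logistic map, illustrating the sign-change argument from the proof.
def f(x, a=3.2):
    return a * x * (1.0 - x)

def g(x):
    return f(x) - x

lo, hi = 0.5130, 0.7995        # approximate 2-cycle points of f for a = 3.2
assert g(lo) * g(hi) < 0       # opposite signs, as the proof guarantees

for _ in range(60):            # plain bisection
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(0.5 * (lo + hi))         # ~ 0.6875 = 1 - 1/3.2
```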
2013-05-19 17:14:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.930656373500824, "perplexity": 212.7177342334113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697843948/warc/CC-MAIN-20130516095043-00010-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/584844/dual-vector-spaces-with-orthonormal-basis
# Dual Vector Spaces with Orthonormal Basis

I'm really stuck on the following question. Let $U$ and $V$ be finite dimensional vector spaces over the complex numbers with bases $e_1,\ldots,e_n$ of $U$ and $f_1,\ldots,f_m$ of $V$. They also have dual spaces $U^*$ and $V^*$ with bases $e^i$ and $f^i$ respectively. Then assume that both spaces are Hermitian. Let $T_U:U\to U^*$ be defined by $T_U(w)(u) = \langle w,u\rangle$ for all $w,u\in U$. Prove that if the basis $e_i$ is orthonormal then $T_U(e_i) = e^i$.

I have tried showing that $T_U(e_i)(u) = \langle e_i, u\rangle$ $= \langle e_i,x_1e_1+\cdots+x_ne_n\rangle$ $= \langle e_i, x_1e_1\rangle + \langle e_i, x_2e_2\rangle +\cdots +\langle e_i, x_ne_n\rangle$ $=x_1\langle e_i,e_1\rangle + x_2\langle e_i,e_2\rangle +\cdots +x_n\langle e_i,e_n\rangle$. So we are left with $x_i\langle e_i,e_i \rangle$, since the other terms vanish because the basis vectors are mutually orthogonal. I'm having trouble understanding why this implies the desired result.

• Please use \langle and \rangle to get $\langle\cdots\rangle$. (I fixed it for you this time.) Oh, and what role is $V$ playing here? – Harald Hanche-Olsen Nov 28 '13 at 18:09

## 3 Answers

To see that $T_U(e_i) = e^i, \tag{1}$ first compute $T_U(e_i)(e_j) = \langle e_i, e_j \rangle = \delta_{ij} \tag{2}$ by orthonormality of the $e_i$. Then note that $e^i(e_j) = \delta_{ij} \tag{3}$ as well, this time by the duality of the bases $e_i$, $e^j$. Then since $T_U(e_i)$ and $e^i$ agree on the basis elements $e_j$ of $U$, it follows by linearity that they agree on every $u = \sum u_j e_j \in U$: $T_U(e_i)(\sum u_j e_j) = \sum u_j T_U(e_i)(e_j) = \sum u_j e^i(e_j) = e^i(\sum u_j e_j); \tag{4}$ thus $T_U(e_i) = e^i, \tag{5}$ and we are done! QED

Hope this helps. Holiday Cheers, and as always, Fiat Lux!!!

• Thanks, I almost had it. I think my troubles stemmed from not entirely understanding the definition of the map. I got a bit worried because I had $T_U$ defined as $T_U(w)(u)$ but I had to prove something about $T_U(w)$. I didn't realise that this meant I had to prove it for all $u$. Thanks everyone – benjiebob Nov 28 '13 at 18:30
• @ benjiebob: Yup, you sure did! Glad to help out on this bee-a-ooo-ti-ful Turkey Day here in Beserkeley, California; and thanks for the "acceptance"! – Robert Lewis Nov 28 '13 at 18:34

The property of the dual basis is that $e^{i}(e_{j}) = \delta_{i,j}$. So, why don't you consider how $T_{U}(e_{i})$ and $e^{i}$ act on the basis of $U$?

Hint: First of all you don't need $V$ for your question, right? So let's first talk about your Hermitian structure. If $U$ is a Hermitian space, then there is a Hermitian form $$\langle, \rangle: U \times U \longrightarrow \Bbb C .$$ Now your map $T_U$ is not defined in a clear way. It should go from $U \to U^*$, so you have to define $$T_U: U \to U^*, u \mapsto \langle -,u \rangle$$ and then $T_U(u) \in U^*$ for $u \in U$. So to prove that $T_U(e_i) = e^i$, you just have to know how the $e^i$ is defined, namely via the Kronecker delta: $$e^i (e_j) = \delta_{ij} .$$ So start with $u = \sum_{i=1}^{n} x_i e_i$ and show what happens if you let $T_U$ act on $u$.
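A quick numerical confirmation (my addition, not part of the thread) that $T_U(e_i)$ acts like $e^i$ for a random orthonormal basis of $\Bbb C^n$:

```python
# Check T_U(e_i)(e_j) = <e_i, e_j> = delta_ij for an orthonormal basis,
# i.e. T_U(e_i) agrees with the dual basis functional e^i.
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Columns of Q form a random orthonormal basis of C^n (complex QR).
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

def inner(w, u):
    # Hermitian form, conjugate-linear in the first slot, matching
    # T_U(w)(u) = <w, u> in the question.
    return np.vdot(w, u)

T = np.array([[inner(Q[:, i], Q[:, j]) for j in range(n)] for i in range(n)])
assert np.allclose(T, np.eye(n))
print("T_U(e_i) agrees with e^i on the basis vectors")
```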
2020-09-29 04:04:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9606373310089111, "perplexity": 246.97846482531796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401624636.80/warc/CC-MAIN-20200929025239-20200929055239-00113.warc.gz"}
https://gyires.inf.unideb.hu/KMITT/a52/ch10s05.html
22.5 Translation, distortion, geometric transformations

Objects in the virtual world may move, get distorted, grow or shrink, that is, their equations may also depend on time. To describe dynamic geometry, we usually apply two functions. The first function selects those points of space which belong to the object in its reference state. The second function maps these points onto points defining the object in an arbitrary time instance.

Functions mapping the space onto itself are called transformations. A transformation maps point $\vec{r}$ to point $\vec{r}\,' = T(\vec{r})$. If the transformation is invertible, we can also find the original for some transformed point $\vec{r}\,'$ using the inverse transformation $T^{-1}(\vec{r}\,')$.

If the object is defined in its reference state by inequality $f(\vec{r}) \ge 0$, then the points of the transformed object are

$$\{ \vec{r}\,' : f(T^{-1}(\vec{r}\,')) \ge 0 \},$$

since the originals belong to the set of points of the reference state.

Parametric equations define the Cartesian coordinates of the points directly. Thus the transformation of parametric surface $\vec{r} = \vec{r}(u, v)$ requires the transformations of its points:

$$\vec{r}\,'(u, v) = T(\vec{r}(u, v)).$$

Similarly, the transformation of curve $\vec{r} = \vec{r}(t)$ is:

$$\vec{r}\,'(t) = T(\vec{r}(t)).$$

Transformation may change the type of object in the general case. It can happen, for example, that a simple triangle or a sphere becomes a complicated shape which is hard to describe and handle. Thus it is worth limiting the set of allowed transformations. Transformations mapping planes onto planes, lines onto lines and points onto points are particularly important. In the next subsection we consider the class of homogeneous linear transformations, which meet this requirement.

22.5.1 Projective geometry and homogeneous coordinates

So far the construction of the virtual world has been discussed using the means of Euclidean geometry, which gave us many important concepts such as distance, parallelism, angle, etc. However, when the transformations are discussed in detail, many of these concepts are unimportant and can cause confusion. For example, parallelism is a relationship of two lines which can lead to singularities when the intersection of two lines is considered. Therefore, transformations are discussed in the context of another framework, called projective geometry.

The axioms of projective geometry get around the problem of parallel lines by ignoring the concept of parallelism altogether, and state that two different lines always have an intersection. To cope with this requirement, every line is extended by a "point at infinity" such that two lines have the same extra point if and only if the two lines are parallel. The extra point is called the ideal point. The projective space contains the points of the Euclidean space (these are the so-called affine points) and the ideal points. An ideal point "glues" the "ends" of a Euclidean line, making it topologically similar to a circle.

Projective geometry preserves that axiom of Euclidean geometry which states that two points define a line. In order to make it valid for ideal points as well, the set of lines of the Euclidean space is extended by a new line containing the ideal points. This new line is called the ideal line. Since the ideal points of two lines are the same if and only if the two lines are parallel, the ideal lines of two planes are the same if and only if the two planes are parallel. Ideal lines are on the ideal plane, which is added to the set of planes of the Euclidean space. Having made these extensions, no distinction is needed between the affine and ideal points. They are equal members of the projective space.
Introducing analytic geometry we noted that everything should be described by numbers in computer graphics. Cartesian coordinates used so far are in one-to-one relationship with the points of Euclidean space, thus they are inappropriate to describe the points of the projective space. For the projective plane and space, we need a different algebraic base.

22.5.1.1 Projective plane.

Let us consider first the projective plane and find a method to describe its points by numbers. To start, a Cartesian coordinate system $x, y$ is set up in this plane. Simultaneously, another Cartesian system $x, y, h$ is established in the three-dimensional space embedding the plane in a way that its $x, y$ axes are parallel to the axes of the plane, the plane is perpendicular to axis $h$, the origin of the Cartesian system of the plane is in point $(0, 0, 1)$ of the three-dimensional space, and the points of the plane satisfy equation $h = 1$. The projective plane is thus embedded into a three-dimensional Euclidean space where points are defined by Descartes coordinates $(x, y, h)$ (Figure 22.29).

To describe a point of the projective plane by numbers, a correspondence is found between the points of the projective plane and the points of the embedding Euclidean space. An appropriate correspondence assigns to either affine or ideal point $P$ of the projective plane that line of the Euclidean space which is defined by the origin of the coordinate system of the space and point $P$.

Points of a Euclidean line that crosses the origin can be defined by parametric equation

$$(x(t), y(t), h(t)) = t \cdot (X_h, Y_h, h),$$

where $t$ is a free real parameter. If point $P$ is an affine point of the projective plane, then the corresponding line is not parallel with plane $h = 1$ (i.e. its third coordinate $h$ is not zero). Such a line intersects the plane of equation $h = 1$ at point $(X_h/h, Y_h/h, 1)$, thus the Cartesian coordinates of point $P$ in the planar coordinate system are $(X_h/h, Y_h/h)$. On the other hand, if point $P$ is ideal, then the corresponding line is parallel to the plane of equation $h = 1$ (i.e. $h = 0$). The direction of the ideal point is given by vector $(X_h, Y_h)$.

The presented approach assigns three-dimensional lines crossing the origin, and eventually triplets $[X_h, Y_h, h]$, to both the affine and the ideal points of the projective plane. These triplets are called the homogeneous coordinates of a point in the projective plane. Homogeneous coordinates are enclosed by brackets to distinguish them from Cartesian coordinates.

A three-dimensional line crossing the origin and describing a point of the projective plane can be defined by any of its points except the origin. Consequently, all three homogeneous coordinates cannot be simultaneously zero, and homogeneous coordinates can be freely multiplied by the same non-zero scalar without changing the described point. This property justifies the name "homogeneous".

It is often convenient to select that triplet from the homogeneous coordinates of an affine point where the third homogeneous coordinate is 1, since in this case the first two homogeneous coordinates are identical to the Cartesian coordinates:

$$[x, y, 1].$$

From another point of view, Cartesian coordinates of an affine point can be converted to homogeneous coordinates by extending the pair by a third element of value 1.

The embedded model also provides means to define the equations of the lines and line segments of the projective space. Let us select two different points on the projective plane and specify their homogeneous coordinates $[X_h^1, Y_h^1, h^1]$ and $[X_h^2, Y_h^2, h^2]$. The two points are different if the homogeneous coordinates of the first point cannot be obtained as a scalar multiple of the homogeneous coordinates of the other point.
In the embedding space, triplet $[X_h, Y_h, h]$ can be regarded as Cartesian coordinates, thus the equation of the line fitted to points $[X_h^1, Y_h^1, h^1]$ and $[X_h^2, Y_h^2, h^2]$ is:

$$[X_h(t), Y_h(t), h(t)] = [X_h^1, Y_h^1, h^1] \cdot (1 - t) + [X_h^2, Y_h^2, h^2] \cdot t.$$

If $h(t) \neq 0$, then the affine points of the projective plane can be obtained by projecting the three-dimensional space onto the plane of equation $h = 1$. Requiring the two points to be different, we excluded the case when the line would be projected to a single point. Hence projection maps lines to lines. Thus the presented equation really identifies the homogeneous coordinates defining the points of the line. If $h(t) = 0$, then the equation expresses the ideal point of the line.

If parameter $t$ has an arbitrary real value, then the points of a line are defined. If parameter $t$ is restricted to interval $[0, 1]$, then we obtain the line segment defined by the two endpoints.

22.5.1.2 Projective space.

We could apply the same method to introduce homogeneous coordinates of the projective space as we used to define the homogeneous coordinates of the projective plane, but this approach would require the embedding of the three-dimensional projective space into a four-dimensional Euclidean space, which is not intuitive. We would rather discuss another construction, which works in arbitrary dimensions. In this construction, a point is described as the centre of mass of a mechanical system. To identify a point, let us place weight $X_h$ at reference point $\vec{p}_1$, weight $Y_h$ at reference point $\vec{p}_2$, weight $Z_h$ at reference point $\vec{p}_3$, and weight $w$ at reference point $\vec{p}_4$. The centre of mass of this mechanical system is:

$$\vec{r} = \frac{X_h \cdot \vec{p}_1 + Y_h \cdot \vec{p}_2 + Z_h \cdot \vec{p}_3 + w \cdot \vec{p}_4}{X_h + Y_h + Z_h + w}.$$

Let us denote the total weight by $h = X_h + Y_h + Z_h + w$. By definition, elements of quadruple $[X_h, Y_h, Z_h, h]$ are the homogeneous coordinates of the centre of mass.

To find the correspondence between homogeneous and Cartesian coordinates, the relationship of the two coordinate systems (the relationship of the basis vectors and the origin of the Cartesian coordinate system and of the reference points of the homogeneous coordinate system) must be established. Let us assume, for example, that the reference points of the homogeneous coordinate system are in points (1,0,0), (0,1,0), (0,0,1), and (0,0,0) of the Cartesian coordinate system. The centre of mass (assuming that total weight $h$ is not zero) is expressed in Cartesian coordinates as follows:

$$\vec{r} = \frac{1}{h}\left(X_h \cdot (1,0,0) + Y_h \cdot (0,1,0) + Z_h \cdot (0,0,1) + w \cdot (0,0,0)\right) = \left(\frac{X_h}{h}, \frac{Y_h}{h}, \frac{Z_h}{h}\right).$$

Hence the correspondence between homogeneous coordinates $[X_h, Y_h, Z_h, h]$ and Cartesian coordinates $(x, y, z)$ is ($h \neq 0$):

$$x = \frac{X_h}{h}, \qquad y = \frac{Y_h}{h}, \qquad z = \frac{Z_h}{h}. \qquad (22.17)$$

The equations of lines in the projective space can be obtained either by deriving them from the embedding four-dimensional Cartesian space, or by using the centre of mass analogy:

$$[X_h(t), Y_h(t), Z_h(t), h(t)] = [X_h^1, Y_h^1, Z_h^1, h^1] \cdot (1 - t) + [X_h^2, Y_h^2, Z_h^2, h^2] \cdot t.$$

If parameter $t$ is restricted to interval $[0, 1]$, then we obtain the equation of the projective line segment.

To find the equation of the projective plane, the equation of the Euclidean plane is considered (equation 22.1). The Cartesian coordinates of the points on a Euclidean plane satisfy the following implicit equation:

$$n_x \cdot x + n_y \cdot y + n_z \cdot z + d = 0.$$

Using the correspondence between the Cartesian and homogeneous coordinates (equation 22.17) we still describe the points of the Euclidean plane, but now with homogeneous coordinates:

$$n_x \cdot \frac{X_h}{h} + n_y \cdot \frac{Y_h}{h} + n_z \cdot \frac{Z_h}{h} + d = 0.$$

Let us multiply both sides of this equation by $h$, and add those points to the plane which have coordinate $h = 0$ and satisfy this equation. With this step the set of points of the Euclidean plane is extended with the ideal points, that is, we obtain the set of points belonging to the projective plane. Hence the equation of the projective plane is a homogeneous linear equation:

$$n_x \cdot X_h + n_y \cdot Y_h + n_z \cdot Z_h + d \cdot h = 0,$$

or in matrix form:

$$[X_h, Y_h, Z_h, h] \cdot \begin{bmatrix} n_x \\ n_y \\ n_z \\ d \end{bmatrix} = 0.$$

Note that points and planes are described by row and column vectors, respectively.
Both the quadruples of points and the quadruples of planes have the homogeneous property, that is, they can be multiplied by non-zero scalars without altering the solutions of the equation.

22.5.2 Homogeneous linear transformations

Transformations defined as the multiplication of the homogeneous coordinate vector of a point by a constant $4 \times 4$ matrix $\mathbf{T}$ are called homogeneous linear transformations:

$$[X_h', Y_h', Z_h', h'] = [X_h, Y_h, Z_h, h] \cdot \mathbf{T}.$$

Theorem 22.12 Homogeneous linear transformations map points to points.

Proof. A point can be defined by homogeneous coordinates in form $\lambda \cdot [X_h, Y_h, Z_h, h]$, where $\lambda$ is an arbitrary, non-zero constant. When a point is transformed, the transformation results in $\lambda \cdot [X_h, Y_h, Z_h, h] \cdot \mathbf{T} = \lambda \cdot ([X_h, Y_h, Z_h, h] \cdot \mathbf{T})$, which are the $\lambda$-multiples of the same vector, thus the result is a single point in homogeneous coordinates.

Note that due to the homogeneous property, the homogeneous transformation matrix is not unambiguous, but can be freely multiplied by non-zero scalars without modifying the realized mapping.

Theorem 22.13 Invertible homogeneous linear transformations map lines to lines.

Proof. Let us consider the parametric equation of a line, denoting the endpoint quadruples by $\vec{v}^1$ and $\vec{v}^2$:

$$\vec{v}(t) = \vec{v}^1 \cdot (1 - t) + \vec{v}^2 \cdot t,$$

and transform the points of this line by multiplying the quadruples with the transformation matrix:

$$\vec{v}(t) \cdot \mathbf{T} = (\vec{v}^1 \mathbf{T}) \cdot (1 - t) + (\vec{v}^2 \mathbf{T}) \cdot t,$$

where $\vec{v}^1 \mathbf{T}$ and $\vec{v}^2 \mathbf{T}$ are the transformations of $\vec{v}^1$ and $\vec{v}^2$, respectively. Since the transformation is invertible, the two points are different. The resulting equation is the equation of a line fitted to the transformed points.

We note that if we had not required the invertibility of the transformation, then it could have happened that the transformation would have mapped the two points to the same point, thus the line would have degenerated into a single point. If parameter $t$ is limited to interval $[0, 1]$, then we obtain the equation of the projective line segment, thus we can also state that a homogeneous linear transformation maps a line segment to a line segment. Even more generally, a homogeneous linear transformation maps convex combinations to convex combinations. For example, triangles are also mapped to triangles.

However, we have to be careful when we try to apply this theorem in the Euclidean plane or space. Let us consider a line segment as an example. If coordinate $h$ has different sign at the two endpoints, then the line segment contains an ideal point. Such a projective line segment can be intuitively imagined as two half lines and an ideal point sticking the "endpoints" of these half lines together at infinity, that is, such a line segment is the complement of the line segment we are accustomed to. It may happen that before the transformation, $h$ coordinates of the endpoints have similar sign, that is, the line segment meets our intuitive image about Euclidean line segments, but after the transformation, $h$ coordinates of the endpoints will have different sign. Thus the transformation wraps around our line segment.

Theorem 22.14 Invertible homogeneous linear transformations map planes to planes.

Proof. The originals of transformed points, defined by $[X_h, Y_h, Z_h, h] = [X_h', Y_h', Z_h', h'] \cdot \mathbf{T}^{-1}$, are on a plane, thus satisfy the original equation of the plane:

$$[X_h, Y_h, Z_h, h] \cdot \begin{bmatrix} n_x \\ n_y \\ n_z \\ d \end{bmatrix} = [X_h', Y_h', Z_h', h'] \cdot \mathbf{T}^{-1} \cdot \begin{bmatrix} n_x \\ n_y \\ n_z \\ d \end{bmatrix} = 0.$$

Due to the associativity of matrix multiplication, the transformed points also satisfy equation

$$[X_h', Y_h', Z_h', h'] \cdot \left(\mathbf{T}^{-1} \cdot \begin{bmatrix} n_x \\ n_y \\ n_z \\ d \end{bmatrix}\right) = 0,$$

which is also a plane equation, where the parameters of the new plane are obtained by multiplying the original column vector by $\mathbf{T}^{-1}$. This result can be used to obtain the normal vector of a transformed plane.

An important subclass of homogeneous linear transformations is the set of affine transformations, where the Cartesian coordinates of the transformed point are linear functions of the original Cartesian coordinates:

$$\vec{r}\,' = \vec{r} \cdot \mathbf{A} + \vec{p}, \qquad (22.22)$$

where vector $\vec{p}$ describes translation, $\mathbf{A}$ is a matrix of size $3 \times 3$ and expresses rotation, scaling, mirroring, etc., and their arbitrary combination.
For example, the rotation around unit axis $\vec{w} = (w_x, w_y, w_z)$ ($|\vec{w}| = 1$) by angle $\phi$ can be expressed by the following formula, which maps point $\vec{r}$ to

$$\vec{r}\,' = \vec{r} \cdot \cos\phi + \vec{w} \cdot (\vec{w} \cdot \vec{r}) \cdot (1 - \cos\phi) + (\vec{w} \times \vec{r}) \cdot \sin\phi.$$

This expression is known as the Rodrigues formula.

Affine transformations map the Euclidean space onto itself, and transform parallel lines to parallel lines. Affine transformations are also homogeneous linear transformations, since equation (22.22) can also be given as a matrix operation, having changed the Cartesian coordinates to homogeneous coordinates by adding a fourth coordinate of value 1:

$$[x', y', z', 1] = [x, y, z, 1] \cdot \begin{bmatrix} A_{11} & A_{12} & A_{13} & 0 \\ A_{21} & A_{22} & A_{23} & 0 \\ A_{31} & A_{32} & A_{33} & 0 \\ p_x & p_y & p_z & 1 \end{bmatrix}.$$

A further specialization of affine transformations is the set of congruence transformations (isometries), which are distance and angle preserving.

Theorem 22.15 In a congruence transformation the rows of matrix $\mathbf{A}$ have unit length and are orthogonal to each other.

Proof. Let us use the property that a congruence is distance and angle preserving for the case when the origin and the basis vectors of the Cartesian system are transformed. The transformation assigns point $\vec{p}$ to the origin, and points $\vec{a}_1 + \vec{p}$, $\vec{a}_2 + \vec{p}$, and $\vec{a}_3 + \vec{p}$ to points $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$, respectively, where $\vec{a}_1$, $\vec{a}_2$, $\vec{a}_3$ denote the rows of matrix $\mathbf{A}$. Because the distance is preserved, the distances between the new points and the new origin are still 1, thus $|\vec{a}_1| = 1$, $|\vec{a}_2| = 1$, and $|\vec{a}_3| = 1$. On the other hand, because the angle is also preserved, vectors $\vec{a}_1$, $\vec{a}_2$, and $\vec{a}_3$ are also perpendicular to each other.

Exercises

22.5-1 Using the Cartesian coordinate system as an algebraic basis, prove the axioms of Euclidean geometry, for example, that two points define a line, and that two different lines may intersect each other at most at one point.

22.5-2 Using the homogeneous coordinates as an algebraic basis, prove an axiom of projective geometry stating that two different lines intersect each other in exactly one point.

22.5-3 Prove that homogeneous linear transformations map line segments to line segments using the centre of mass analogy.

22.5-4 How does an affine transformation modify the volume of an object?

22.5-5 Give the matrix of that homogeneous linear transformation which translates by vector $\vec{p}$.

22.5-6 Prove the Rodrigues formula.

22.5-7 A solid defined by inequality $f(\vec{r}) \ge 0$ at time $t = 0$ moves with uniform constant velocity $\vec{v}$. Let us find the inequality of the solid at an arbitrary time instance $t$.

22.5-8 Prove that if the rows of matrix $\mathbf{A}$ are of unit length and are perpendicular to each other, then the affine transformation is a congruence. Show that for such matrices $\mathbf{A}^{-1} = \mathbf{A}^T$.

22.5-9 Give that homogeneous linear transformation which projects the space from point $\vec{c}$ onto a plane of normal $\vec{n}$ and place vector $\vec{r}_0$.

22.5-10 Show that five point correspondences unambiguously identify a homogeneous linear transformation if no four points are co-planar.
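As a concrete illustration of the $4 \times 4$ embedding above, here is a short Python sketch (assuming numpy; not part of the book) that builds a translation as a single homogeneous matrix and applies it to a point in the row-vector convention used in this chapter.

# Sketch: an affine translation written as a single 4x4 homogeneous matrix,
# using the row-vector convention of the text: [x y z 1] . T
import numpy as np

def translation_matrix(p):
    T = np.eye(4)
    T[3, :3] = p          # translation vector goes into the bottom row
    return T

point = np.array([1.0, 2.0, 3.0, 1.0])   # Cartesian (1, 2, 3) in homogeneous form
T = translation_matrix([10.0, 0.0, 0.0])
print(point.dot(T))                       # -> [11.  2.  3.  1.]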
2020-02-24 03:33:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9000799655914307, "perplexity": 233.7046278331828}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145869.83/warc/CC-MAIN-20200224010150-20200224040150-00181.warc.gz"}
http://mathhelpforum.com/algebra/32250-help.html
# Math Help - help 1. ## help Prove or Disprove that there exists a real number x such that x^2+x+1 is less than or equal to zero. 2. Do you know the intermediate value theorem? 3. yes 4. Why not just use the quadratic equation? $ x = \frac {-b\pm \sqrt{b^2 - 4ac}}{2a}$ 5. Yes that gives me a solution but then how do i prove that the equation works? Thats where i am now stuck. 6. From the quadratic equation, you know there are no real roots, meaning the function is never 0 (either always negative, or always positive). With that, you can note that letting x=0 gives you a value of 1, so the function is always positive. Not quite the most rigorous proof and probably not what your professor is looking for, but it is valid. Edit: Also, I forgot to add that this works because the function is continuous over all real numbers. 7. Originally Posted by natewalker205 Yes that gives me a solution but then how do i prove that the equation works? Thats where i am now stuck. Well the quadratic formula tells you where you have a value of x such that $x^2 + x + 1 = 0$ which is one of the criterion of your proof. So the existence of x proves that the function can be less than or equal to 0. Likely the best method is the intermediate value theorem, if you can do it that way. -Dan
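For reference, completing the square settles the question directly:

$$x^2 + x + 1 = \left(x + \tfrac{1}{2}\right)^2 + \tfrac{3}{4} \ge \tfrac{3}{4} > 0 \quad \text{for all real } x,$$

and the discriminant confirms there are no real roots: $b^2 - 4ac = 1 - 4 = -3 < 0$. Hence no real number $x$ satisfies $x^2 + x + 1 \le 0$, so the statement is disproved.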
2014-09-02 22:03:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7916073799133301, "perplexity": 222.35846460190834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535922871.14/warc/CC-MAIN-20140901014522-00093-ip-10-180-136-8.ec2.internal.warc.gz"}
https://doubtnut.com/ncert-solutions/class-8-maths-chapter-1-rational-numbers-1
# Rational Numbers NCERT Solutions : Class 8 Maths

## NCERT Solutions for Class 8 Maths : Rational Numbers

### NCERT Class 8 | RATIONAL NUMBERS | Exercise 02 | Question No. 05

Find five rational numbers between: (i) 2/3 and 4/5 (ii) (-3)/2 and 5/3 (iii) 1/4 and 1/2

### NCERT Class 8 | RATIONAL NUMBERS | Exercise 02 | Question No. 01

Represent these numbers on the number line. (i) 7/4 (ii) (−5)/6

Doubtnut, one of the best online education platforms, provides free NCERT Solutions of Maths for Class 8 Rational Numbers, which are solved by our maths experts as per the NCERT (CBSE) guidelines. We provide the solutions as video solutions in which concepts are also explained along with the answers, which will help students while learning, preparing for exams and doing homework. These solutions will help students revise the complete syllabus and score more marks in examinations.

Get here free, accurate and comprehensive NCERT Solutions for Class 8 Maths Rational Numbers, which have been reviewed by our maths counsellors as per the latest edition following the CBSE guidelines. We provide video solutions in which solutions to all the questions of the NCERT Class 8 Maths textbook are explained in a step-by-step and detailed way.

## The Topics of the Chapter Rational Numbers are :

INTRODUCTION, RECAPITULATION, REPRESENTATION OF RATIONAL NUMBERS ON THE NUMBER LINE, ADDITION OF RATIONAL NUMBERS, PROPERTIES OF ADDITION OF RATIONAL NUMBERS, SUBTRACTION OF RATIONAL NUMBERS, MULTIPLICATION OF RATIONAL NUMBERS, PROPERTIES OF MULTIPLICATION OF RATIONAL NUMBERS, RATIONAL NUMBERS BETWEEN TWO RATIONAL NUMBERS, ALGEBRAIC EXPRESSIONS

It contains these exercises along with solved examples. All the exercises are solved in the video. Select the exercise to view the solutions exercise-wise of the Chapter Rational Numbers.

## NCERT Solutions Class 8 Maths Chapter Rational Numbers Exercises:

We have covered all the exercises and also solved examples in the videos. Along with the practice exercises, students should also practise solved examples to clear the concepts of Rational Numbers. If you have any doubt, you can watch the solutions for the given questions, in which the steps to solve the questions are explained in the video along with the answers.

CBSE board is one of the most renowned education boards in the country. The board makes use of NCERT books, which are included in the syllabus. But students do not always find them sufficient for clearing their concepts. Math is a subject that requires a lot of practice, and this is possible only when the students are able to get their concepts cleared. So, in this case, they look to different reference books in the hope of scoring good marks in the examinations. They should get a detailed theory that would help them attempt all types of tricky questions in the exams. Students of Class 8 have to deal with all types of algebraic expressions, where they need to get rid of any sort of doubts as well. So, in order to provide the best lessons, Doubtnut has come up with the best video tutorials where students are able to get clear knowledge of the different chapters. The intuitive study materials help a lot to clear all their confusions, thereby making it possible to achieve success. Students do not have to remain ignorant once they follow all the useful and important video tutorials prepared by the experts of Doubtnut.
NCERT Solutions for Class 8 Maths Chapter 1 focuses on Rational Numbers. The experts have designed the course in a very simple manner where students are able to get the right idea of different concepts. Students do not find any sort of problem in learning algebraic expressions, recapitulation, proper and improper fractions, etc. It also provides the best support to the students so that they do not find it difficult to understand all the concepts. This saves a lot of time for the students, who do not have to move out of their place to get a perfect education.

## Topics and Subtopics of NCERT Class 8 Maths Solutions Chapter 1

### Addition of Rational Numbers

Here the students are provided with the right lessons on addition of Rational Numbers. It is to be noted that addition of rational numbers is done in the same way as addition of fractions. If there is a need to add two rational numbers, it is important to convert each of them into a rational number with a positive denominator. Such additions fall into the following two categories:

### When given numbers have the same denominator

Here, in this case, we define (a/b + c/b) = (a + c)/b

For example:

(i) Add 3/7 and 56/7

Solution: 3/7 + 56/7 = (3 + 56)/7 = 59/7, [Since, 3 + 56 = 59]

Therefore, 3/7 + 56/7 = 59/7

(ii) Add 3/13 and -5/13

Solution: 3/13 + (-5/13) = [3 + (-5)]/13 = (3 - 5)/13 = -2/13, [Since, 3 - 5 = -2]

Therefore, 3/13 + (-5/13) = -2/13.

• When Denominators of Given Numbers are Unequal: Here, we need to take the LCM of their denominators and then express each of the given numbers as shown below:

1. Add 5/6 and 7/9

Solution: Clearly, denominators of the given numbers are positive. The LCM of the denominators 6 and 9 is 18. Now, we express 5/6 and 7/9 in forms in which both of them have the same denominator 18. We have, 5/6 = (5 × 3)/(6 × 3) = 15/18 and 7/9 = (7 × 2)/(9 × 2) = 14/18. Therefore, 5/6 + 7/9 = 15/18 + 14/18 = (15 + 14)/18 = 29/18

2. Add 5/6 and -3/7

Solution: The denominators of the given rational numbers are 6 and 7 respectively. The LCM of 6 and 7 is 42. Now, we rewrite the given rational numbers in forms in which both of them have the same denominator: 5/6 = (5 × 7)/(6 × 7) = 35/42 and -3/7 = (-3 × 6)/(7 × 6) = -18/42. Therefore, 5/6 + (-3/7) = 35/42 + (-18/42) = (35 - 18)/42 = 17/42

### Representation of Rational Numbers on the Number Line

Here, in this chapter, students are provided with a perfect understanding of the representation of Rational Numbers. We need to check for the negative or positive sign of the rational number. There are two forms of rational numbers to represent on the number line: proper fractions and improper fractions.

### Recapitulation

Here the method of Recapitulation is provided to the students, and that too with examples, in order to get their doubts cleared in the perfect manner. This helps the students get the proper idea of how to work on different solutions without any problem at all.

### Multiplication of Rational Numbers

In this chapter, students are able to get the right understanding of the multiplication of Rational Numbers. For example, if a/b and c/d are any two rational numbers, then a/b × c/d = (a × c)/(b × d). This rule is followed for the product of rational numbers.

### Properties of Addition of Rational Numbers

Here, it is possible to find the right concepts on the properties of addition of Rational Numbers.
Concepts like the associative property, commutative property, closure property, the existence of additive identity, etc. are introduced to the students so that they get the right idea about them.

### Algebraic Expressions

In this chapter, students get their concepts about algebraic expressions cleared. For example, 2x – 3y + 9z is an algebraic expression. With lots of exercises and examples, students are able to grasp the concept in a perfect manner.

### Subtraction of Rational Numbers

It is very important for the students to get the best idea of the subtraction of Rational Numbers. Here, the concept gets cleared with the help of the best video tutorials, which help the students get the right idea of how to work on it and find the accurate result.

### Properties of Multiplication of Rational Numbers

Here, the students are provided with the right solution to prove that the product of two rational numbers is always a rational number. So, it becomes quite possible to get the right idea of how to reach the correct solutions in a perfect manner.

### Rational Numbers Between Two Rational Numbers

It is possible to insert infinitely many rational numbers between any two rational numbers. With solved examples of Rational Numbers, it becomes possible for the students to get the right idea about it.

Thus, it is quite important for the students to learn the right skills for working with Rational Numbers. With the help of the best video tutorials provided by the experts of Doubtnut, the students get the right understanding of the mathematical concepts. All doubts can be cleared in the comfort of your home. Watching the video lessons proves to be the best way to learn different concepts without any problem at all. With the perfect Class 8 Maths NCERT Solutions, it becomes possible to get the ultimate idea by clearing all the doubts. Students can really expect to score good marks in the exams with the help of the engaging study material provided by Doubtnut. Moreover, the NCERT solutions can be downloaded in PDF format for easy accessibility. The solutions are provided free of cost. Students do not have to pay any amount to learn via the Doubtnut website or app.
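As a quick cross-check of the worked sums above, here is a short Python sketch (not part of the NCERT material) using the standard fractions module for exact rational arithmetic:

# Verifying the worked examples with Python's exact rational arithmetic.
from fractions import Fraction

print(Fraction(3, 7) + Fraction(56, 7))    # -> 59/7
print(Fraction(3, 13) + Fraction(-5, 13))  # -> -2/13
print(Fraction(5, 6) + Fraction(7, 9))     # -> 29/18
print(Fraction(5, 6) + Fraction(-3, 7))    # -> 17/42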
2021-04-12 07:43:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4666326344013214, "perplexity": 616.9453937229839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066613.21/warc/CC-MAIN-20210412053559-20210412083559-00578.warc.gz"}
http://kitchingroup.cheme.cmu.edu/blog/2016/03/16/Getting-graphical-feedback-as-tooltips-in-Emacs/
## Getting graphical feedback as tooltips in Emacs

| categories: emacs | tags: |

In a continued exploration of Emacs as a user interface, today we consider how to look at alternative representations of text. The long term idea is you might have a link to a datafile, say a Raman spectrum. You could click on the link to open it, perhaps even in analysis software. But it might be nice to have a thumbnail type image that shows the data in graphical form. That might be sufficient for some purposes to identify which file to open.

You need to see the video to see the tooltips actually working in Emacs: https://www.youtube.com/watch?v=uX_hAPb9NOc

To illustrate the idea, here we will have Emacs display an image when you mouse over some words that represent fruit, specifically grapes, kiwi and strawberry. We have in this directory images of those fruit:

ls
grapes.png  image-tooltips.org  kiwi.png  strawberry.png

We will use font-lock to add a tooltip to those words that displays the image in the minibuffer. I thought the image would show in the tooltip here, but for some reason it doesn't. Maybe that is ok, since it doesn't clutter the text with big images. Font lock also makes the words stand out a bit so you know there is something there. Here is our tooltip code, and the font-lock-add-keywords that activates it.

(defun image-tooltip (window object position)
  (save-excursion
    (goto-char position)
    (let* ((img-file (format "%s.png" (thing-at-point 'word)))
           (s (propertize "Look in the minibuffer"
                          'display (create-image (expand-file-name img-file)))))
      (message "%s" s))))

(font-lock-add-keywords
 nil
 '(("\\<kiwi\\>\\|\\<grapes\\>\\|strawberry" 0
    '(face font-lock-keyword-face help-echo image-tooltip))))

Some examples of fruit are the kiwi, the strawberry and grapes. That is a little example to illustrate the concept. Now, imagine something more sophisticated, e.g. a link to a molecular simulation that generates a thumbnail of the atomic geometry, and a summary of the energy. Or a Raman spectrum that shows a thumbnail of the spectrum.
2017-10-19 19:54:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47221508622169495, "perplexity": 2659.167496373082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823462.26/warc/CC-MAIN-20171019194011-20171019214011-00484.warc.gz"}
http://nrich.maths.org/2274/note
### Forgotten Number I have forgotten the number of the combination of the lock on my briefcase. I did have a method for remembering it... ### Man Food Sam displays cans in 3 triangular stacks. With the same number he could make one large triangular stack or stack them all in a square based pyramid. How many cans are there how were they arranged? ### Sam Again Here is a collection of puzzles about Sam's shop sent in by club members. Perhaps you can make up more puzzles, find formulas or find general methods. # Picturing Triangle Numbers ### Why do this problem? This problem offers students an opportunity to relate numerical ideas to spatial representation, and vice versa. The interactivity allows students to explore large triangle numbers. Visualising the combination of two identical triangle numbers can lead to an understanding of the general formula for triangle numbers. ### Possible approach This problem works very well in conjunction with Mystic Rose and Handshakes. The whole class could work on all three problems together, or small groups could be allocated one of the three problems to work on, and then report back to the rest of the class. Write the sequence $1, 3, 6, 10, 15 ...$ on the board and ask the students to work out what's going on. Can they continue the sequence? "Can you work out the tenth number in this sequence? What about the 20th?" Suggest to students that it would be very helpful to have a quicker method for working out numbers at any point in this sequence. Introduce the terminology "triangle numbers" if it has not been met before, and show the pictorial representation of the fifth triangle number. Ask students to visualise how they could fit together two copies of the same triangle number to make a rectangle. Give students time to work together in pairs to explain what happens when they combine other pairs of identical triangle numbers, keeping a record of what they do. Then bring the class together and write on the board the dimensions of the rectangles they found for different triangle numbers, using the interactivity to check. "What is special about the dimensions of the rectangles? Why?" "Can you write down the dimensions of the rectangle made from two copies of the 250th triangle number?" "Can you use this to work out the 250th triangle number?" Ask students to explain a method for finding any triangle number. This may be in words, using diagrams, or algebraically. "Can we use our insights to help us to determine whether a number is a triangle number?" Give the class a selection of numbers such as the ones suggested in the problem, and allow some time for them to work in pairs to determine which are triangle numbers. ### Key questions What is special about the dimensions of the rectangles made from two identical triangle numbers? Can you devise a method for working out any triangle number? ### Possible extension Students could write their method for calculating triangle numbers using algebra. This formula could then be used to gain insight into facts about triangle numbers such as: $T_{n+1} - T_{n} = n+1$ $T_{n} + T_{n-1} = n^2$ Why can triangle numbers end with the digit $8$ but never with the digit $9$? What other digits can never appear in the units column of a triangle number? Will there ever be two consecutive triangle numbers ending with the digits 000? What about 0000? ### Possible support Make multilink cubes available and encourage use of diagrams to aid with visualisation of the triangle numbers. 
For another problem that uses a similar idea go to Picturing Square Numbers
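For reference, the rectangle argument sketched above yields the usual closed form; a brief derivation in the notation of the extension questions:

$$2\,T_n = n(n+1) \quad\Longrightarrow\quad T_n = \frac{n(n+1)}{2},$$

since two copies of the $n$-th triangle number fit together into an $n \times (n+1)$ rectangle. For example, the 250th triangle number is $T_{250} = \frac{250 \cdot 251}{2} = 31375$. This also gives a test for whether a number $m$ is a triangle number: solving $n(n+1)/2 = m$ shows that $m$ is triangular exactly when $8m + 1$ is a perfect square.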
2013-12-05 22:24:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22866716980934143, "perplexity": 684.3455066453434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163048362/warc/CC-MAIN-20131204131728-00005-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-6-section-6-7-formulas-and-applications-of-rational-equations-exercise-set-page-477/52
## Intermediate Algebra for College Students (7th Edition)

$1$ or $-\displaystyle \frac{2}{3}$

$\left[\begin{array}{lll} \text{2 times the reciprocal of a number } & \rightarrow & 2\cdot\frac{1}{x}\\ \text{ is subtracted from... } & \rightarrow & ... - 2\cdot\frac{1}{x}\\ \text{... 3 times the number } & \rightarrow & 3x - 2\cdot\frac{1}{x}\\ \text{... the difference is 1 } & \rightarrow & 3x - 2\cdot\frac{1}{x}=1 \end{array}\right]$

$3x - 2\displaystyle \cdot\frac{1}{x}=1\qquad.../\times x$

$3x^{2}-2=x$

$3x^{2}-x-2=0$

... Trying with synthetic division, we find:

$$\begin{array}{r|rrr} 1 & 3 & -1 & -2 \\ & & 3 & 2 \\ \hline & 3 & 2 & 0 \end{array}$$

$0=(x-1)(3x+2)$

$x=1$ or $x=-\displaystyle \frac{2}{3}$
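A quick substitution check (not part of the printed solution) confirms both roots of the original equation $3x - \frac{2}{x} = 1$:

$$3(1) - \frac{2}{1} = 3 - 2 = 1, \qquad 3\left(-\tfrac{2}{3}\right) - \frac{2}{-2/3} = -2 + 3 = 1.$$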
2019-12-09 20:44:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4664662182331085, "perplexity": 949.8139631026144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540523790.58/warc/CC-MAIN-20191209201914-20191209225914-00367.warc.gz"}
https://stats.stackexchange.com/questions/179557/resampling-under-the-null-versus-the-alternative-hypothesis
# Resampling under the null versus the alternative hypothesis

I'm looking at a community of organisms using simultaneous GLMs via the mvabund package. The manyglm function from this package fits a GLM for every species, each with a common set of predictors. The sum of the likelihood ratios of the models (one per species) is the test statistic, and resampling of observations is done to compute the significance. Once the manyglm function is run, you can analyze the models with anova.manyglm or summary.manyglm. The difference between these two is that anova.manyglm resamples under the null hypothesis while summary.manyglm resamples under the alternative hypothesis. In this package, the resampling scheme is of residuals from the model. I'm making my way through "Bootstrap Methods and their Application" (Davison and Hinkley, 1997), but even after reading the chapters on bootstrapping for a GLM, I'm still having trouble with the distinction. So my question is: what exactly is the difference between resampling under the null versus the alternative hypothesis?
2021-08-04 19:25:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.557029128074646, "perplexity": 1305.785919586009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154897.82/warc/CC-MAIN-20210804174229-20210804204229-00444.warc.gz"}
https://www.cs.swarthmore.edu/~richardw/classes/cs21/f15/w02_loops_nums_str.php
# Loops, Numbers, and Strings

In class exercises

The goal for this week is to become comfortable with four basic data types in python: integers, floats, strings, and lists. By now you should be familiar with some basic linux (cd, ls); the emacs editor; editing, saving, and running python files in your cs21 directory; recognizing the linux shell ($) and the python shell (>>>); and running basic python commands in the python shell. On the python side of things we talked about print(), raw_input(), saving data in variables, and the basic structure of a program (descriptive comment, definition of function, calling the function). If you have any questions about these topics, please let me know. We will be building on them this week.

Numbers

Two types: integers or whole numbers, and floating point or decimal numbers. Built-in operations: + (addition), - (subtraction), * (multiplication), / (division), ** (exponentiation), % (remainder or mod), abs() (absolute value). One tricky thing about python is that if a mathematical expression involves only integers, the result must be an integer. If an expression has at least one floating point number, the result is a float. Floating point numbers are approximations to real numbers. Usually this approximation is good enough (unless you are flying a spacecraft to Mars or trying to forecast the path of a hurricane), but the answers may surprise you.

You may want to convert an int to a float. This can be done via casting using e.g., float(3). Alternatively, if we remember that an operation involving a float and an int returns a float, we can convert a possible integer variable val to a float using val=1.0*val, or val=1.*val.

What is the output of the following expressions? Are any expressions invalid? Are any results surprising?

int("3")
int("3.2")
float("3")
float("3.2")
str(3)
str(3.2)
3/2
float(3/2)
float(3)/2
3.0/2
3*1./2
5%4
6%4
8%4
7.2%4

You can import additional math functions from the math library.

>>> from math import *
>>> pi
3.1415926535897931
>>> sqrt(2)
1.4142135623730951
>>> sin(pi/4)
0.70710678118654746
>>> sin(pi/2)
1.0
>>> import math
>>> help(math)  # displays all functions available to you

If you need to import additional features, use the from <library> import * at the top of your program. You only need to import a library once. If you want to get help on a library, start a python shell, run import <library> followed by help(<library>).

Lists

We now introduce a new data type, the list, which can store multiple elements. Run the following range commands in the python shell. Remember to start python by typing python from the linux prompt.

cumin[~]$ python
Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
>>> range(5)
[0, 1, 2, 3, 4]
>>> # List types. A list of what?

range(5)
range(1,5)
range(1,5,2)

Think about the following questions and discuss them with a neighbor:

• What is the syntax for a python list? What symbols mark the beginning and end of a list, and what symbol separates items in a list?
• The range function can be called at least three different ways (one, two, or three arguments). What are the semantics for these three versions of range? Are there more ways to call range? What type of data can you supply as arguments to range?
• Try to create calls to range that generate the following lists. It may not be possible to generate all lists shown below.
[2, 4, 6, 8]
[-1, 0, 1, 2, 3]
[0, 3, 6, 9, 12]
[1, 1.5, 2, 2.5]
[4, 3, 2, 1, 0]
[1, 2, 4, 8, 16]

Loops

Just generating lists can be pretty boring, but we can loop over a list to have python execute code multiple times. This construct is called a for loop and it has the following syntax:

for <var> in <sequence>:
    <body>

Whitespace is significant. The loop body starts when indentation starts. The loop body ends when indentation ends.

for i in range(1,5,2):
    print(i)

for i in range(3):
    print("Hello there")
    print("i=%d" % (i))

Tracing. Loop semantics.

Strings

The string data type represents text. A string can be thought of as a list of characters, though there is a slight difference between a string and a list of characters. More on this later. Python supports a few standard operations on strings including + (concatenation), * (duplication), and % (string formatting, more later).

List indexing and slicing

We can loop over any list, and since strings are almost like a list of characters, we can loop over strings:

>>> s="Swarthmore"
>>> for ch in s:
...     print(ch)
...
S
w
a
r
t
h
m
o
r
e

For any list, we can also use the function len() to determine the number of items in the list.

>>> len(s)
10
>>> values=range(3)
>>> len(values)
3

We can also select parts of a list using the indexing operator. Try the following statements in the python shell. What are the semantics of ls[0], ls[-1], ls[2:4], ls[:-1], and ls[2:]? Try some more examples to experiment.

ls=range(1,11)
s="Swarthmore"
ls[0]
ls[-1]
ls[2:4]
ls[:-1]
ls[2:]
s[0]
s[-1]
s[-4:]
s[:3]+s[4]

The primary difference between lists and strings is that lists are mutable while strings are not. Thus, we can change elements in a list, but not in a string.

print(ls)
ls[0]=2009
s[3]='a'

The accumulator pattern

A design pattern is a generic method for solving a class of problems. The standard algorithm might be described as follows:

get input
process input and do computation
display output

Almost any computational problem can be set up in this very general way, but step two is very vague. Let's look at another common pattern, the accumulator:

initialize accumulator variable(s)
loop until done:
    update accumulator variable(s)
display output

Many useful computational problems fit this pattern. Examples include computing a sum, average, or standard deviation of a list of numbers, reversing a string, or counting the number of times a particular value occurs in a list.

Let's try to compute the average of a list of numbers entered by the user. Prompt the user to first enter the number of values he/she wishes to average and then prompt for each number. Finally display the average. Start with pseudocode, a written idea that organizes your thought process. Then write your solution in python and test. (One possible solution sketch appears at the end of these notes.)

String Formatting

We can get more control over how data values are displayed using string formatting.

myInt = 86
myFloat = 1./3
print("The value %d is an integer but %0.2f is not" % (myInt, myFloat))

Using this new style requires a few steps. First, set up a string that contains one or more formatting tags. All tags have the form %<width><.><precision><type-char>. The type-char is s for strings, d for integers (don't ask why), and f for floats. Do not put any variable names in the format string, just tags which serve as placeholders for data that will come later. After the format string, put a single % symbol after the close quote. Finally, place the data elements that you wish to substitute for the tags, separated by commas and enclosed in parentheses.
Let's step through a few examples in the python shell.

>>> month="Sep"
>>> day=8
>>> year=2015
>>> print("Today is %d %s %d" % (day, month, year))
Today is 8 Sep 2015
>>> print("Tomorrow is %d/%d/%d" % (9, day+1, year))
Tomorrow is 9/9/2015
>>> for val in range(1,200,20):
...     print("%7.2f" % (val*6.582118))
...
   6.58
 138.22
 269.87
 401.51
 533.15
 664.79
 796.44
 928.08
1059.72
1191.36
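Here is one possible solution sketch for the averaging exercise above, following the accumulator pattern (the variable names are our own choices):

# One possible accumulator-pattern solution to the averaging exercise.
def main():
    n = int(raw_input("How many values will you enter? "))
    total = 0.0                                   # initialize the accumulator
    for i in range(n):
        value = float(raw_input("Enter value %d: " % (i+1)))
        total = total + value                     # update the accumulator
    print("The average is %0.2f" % (total/n))

main()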
2018-04-21 17:34:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21353624761104584, "perplexity": 2159.987455978509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945272.41/warc/CC-MAIN-20180421164646-20180421184646-00120.warc.gz"}
http://boundedtheoretics.blogspot.ca/2010/07/
Thursday, July 29, 2010 Feeling charitable toward Baylor’s IDC cubs The reason I come off as a nasty bastard on this blog is that I harbor quite a bit of anger toward the creationist bastards who duped me as a teenager. The earliest stage of overcoming my upbringing was the worst time of my life. I wanted to die. Consequently, I am deadly serious in my opposition to “science-done-right proves the Bible true” mythology. William A. Dembski provokes me especially with his prevarication and manipulation. He evidently believes that such behavior is moral if it serves higher ends in the “culture war.” My take is, shall we say, more traditional. When Robert J. Marks II, Distinguished Professor of Engineering at Baylor University, and Fellow of the Institute of Electrical and Electronics Engineers (IEEE), began collaborating with Dembski, I did not rush to the conclusion that he was like Dembski. But it has become apparent that he is willing to play the system. For instance, Dembski was miraculously elevated to the rank of Senior Member of the IEEE, which only 5% of members ever reach, in the very year that he joined the organization. To be considered for elevation, a member must be nominated by a fellow. Although Marks was the founding president of the progenitor of the IEEE Computational Intelligence Society, which addresses evolutionary computation (EC), he and his IDCist collaborators go to the IEEE Systems, Man, and Cybernetics Society for publication. He is fully aware that reviewers there are unlikely to know much about EC, and are likely to give the benefit of the doubt to a paper bearing his name. I would love to see him impugn the integrity of his and my colleagues in the Computational Intelligence Society by claiming that they don’t review controversial work fairly. But it ain’t gonna happen. I’ve come to see Marks as the quintessential late-career jerk, altogether too ready to claim expertise in an area he has never engaged vigorously. He is so cocksure as to publish a work of apologetics with the title Evolutionary Computation: A Perpetual Motion Machine for Design Information? (Chap. 17 of Evidence for God, M. Licona and W. A. Dembski, eds.). He states outright some misapprehensions that are implicit in his technical publications. Here’s the whopper: “A common structure in evolutionary search is an imposed fitness function, wherein the merit of a design for each set of parameters is assigned a number.” Who are you, Bob Marks, to say what is common and what is not in a literature you do not follow? Having scrutinized over a thousand papers in EC, and perused many more, I say that you are flat-out wrong. There’s usually a natural, not imposed, sense in which some solutions are better than others. Put up the references, Distinguished Professor Expert, or shut up. Marks and coauthors cagily avoid scrutiny of their (few) EC sources by dumping on the reviewers references to entire books, i.e., with no mention of specific pages or chapters. This is because their EC veneer will not withstand a scratch. The chapter I just linked to may seem to contradict that, given its references to early work in EC by Barricelli (1962), Crosby (1967), and Bremmerman [sic] et al. (1966). [That's Hans-Joachim Bremermann.] First, note the superficiality of the references. Marks did not survey the literature to come by them. The papers appear in a collection of reprints edited by David Fogel, Evolutionary Computation: The Fossil Record (IEEE Press, 1998). 
Marks served as a technical editor of the volume, just as I did, and he should have cited it. Although Marks is an electrical engineer, he has been working with two of Baylor’s graduate students in computer science, Winston Ewert and George Montañez. I would hazard a guess that there is some arrangement for the students to turn their research with Marks into masters’ theses. I’ve been sitting on some errors in their most recent publication, Efficient Per Query Information Extraction from a Hamming Oracle, thinking that the IDC cubs would get what they deserved if they included the errors in their theses. Well, I’ve got a soft spot for students, and I’m feeling charitable today. But there’s no free lunch for Marks. He has no business directing research in EC, his reputation in computational intelligence notwithstanding, and I hope that the CS faculty at Baylor catch on to the fact. On first reading the paper, I was deeply annoyed by the combination of a Chatty-Cathy, self-reference-laden introduction focusing on “oracles,” irrelevant to the majority of the paper, with a non-survey of the relevant literature in the theory of EC. Ewert et al. dump in three references to books, without discussion of their content, at the beginning of their 4-1/2 page section giving Markov-chain analyses of evolutionary algorithms. It turns out that one of the books does not treat EC at all — I contacted the author to make sure. As I have discussed here and here, two of the algorithms they analyze are abstracted from defective Weasel programs that Dawkins supposedly used in the mid-1980's. It offends me to see these whirlygigs passed off as objects worthy of analysis in the engineering literature. Yet again, they express the so-called average active information per query as $$I_\oplus = {{I_\Omega} \over Q} = \frac{\log N^L}{Q} = {{L \log N} \over Q},$$ where Q is not the simple random variable it appears to be, but is instead the expected number of trials (“queries”) a procedure requires to maximize the number of characters in a “test” string that match a “target” string. Strings are over an alphabet of size N, and are of length L. Unless you have something to hide, you write $$I_\oplus ={{L \log N} \over {E[T]}},$$ where T is the random number of trials required to obtain a perfect match of the target. This is a strange idea of an average, and it appears that a reviewer said as much. Rather than acknowledge the weirdness overtly, Ewert et al. added a cute “yeah, we know, but we do it consistently” footnote. Anyone without a prior commitment to advancing “intelligence creates active information” ideology would simply flip the fraction over to get the average number of trials per bit of endogenous information IΩ, $$\frac{1}{I_\oplus} = E\left[{T \over {I_\Omega}}\right] = {{E[T]} \over {L \log N}}.$$ This has a clear interpretation as expected performance normalized by a measure of problem hardness. But when it’s “active information or bust,” you’re not free to go in any sensible direction available to you. I have to add that I can’t make a sensible connection between average active information per query and active information. Given a bound K on the number of trials to match the target string, the active information is $$I_+ = \log \Pr\{T \leq K\} + {L \log N}.$$ Do you see a relationship between I+ and I that I’m missing? By the way, I happened upon prior work regarding the amount of information required to solve a problem. The scholarly lassitude of the IDC “maverick geniuses” glares out yet again. 
On second reading, I bothered to do sanity checking of the plots. I saw immediately that the surfaces in Fig. 2 were falling off in the wrong directions. For fixed alphabet size N, the plots show the average active information per query increasing as the string length L increases, when it obviously should decrease. The problem is harder, not easier, when the target string is longer. Comparing Fig. 5 to Figs. 3 and 4, it’s easy to see that the subscripts for N and L are reversed somewhere. But what makes Fig. 3 cattywampus is not so simple. Ewert et al. plot $$I_\oplus(L, N) = \frac{L \log N}{E[T_{N,L}]}$$ instead of $$I_\oplus(L, N) = \frac{L \log N}{E[T_{L,N}]}.$$ That is, the matrix of expected numbers of trials to match the target string is transposed, but the matrix of endogenous information values is not. The embarrassment here is not that the cubs got confused about indexing of square matrices of values, but that a team of four, including Marks and Dembski, shipped out the paper for review, and then submitted the final copy for publication, with nary a sanity check of the plots. From where I sit, it appears that Ewert and Montañez are getting more in the way of indoctrination than advisement from Marks and Dembski. Considering that various folks have pointed out errors in every paper that Marks and Dembski have coauthored, you’d think the two would give their new papers thorough goings-over. It is sad that Ewert and Montañez probably know more about analysis of algorithms than Marks and Dembski do, and evidently are forgetting it. The fact is that $$E[T_{L,N}] = \Theta(N L \log L)$$ for all three of the evolutionary algorithms they consider, provided that parameters are set appropriately. It follows that $$I_\oplus = \Theta\left(\frac{L \log N}{N L \log L}\right) = \Theta\left(\frac{\log N}{N \log L}\right).$$ In the case of (C), the (1, λ) evolutionary algorithm, setting the mutation rate to 1 / L and the number of offspring λ to N ln L does the trick. (Do a lit review, cubs — Marks and Dembski will not.) From the perspective of a computer scientist, the differences in expected numbers of trials for the algorithms are not worth detailed consideration. This is yet another reason why the study is silly. Methinks it is like the OneMax problem The optimization (not search) problem addressed by Ewert et al. (and the Weasel program) is a straightforward generalization of a problem that has been studied heavily by theorists in evolutionary computation, OneMax. In the OneMax problem, the alphabet is {0, 1}, and the fitness function is the number of 1's in the string. In other words, the target string is 11…1. If the cubs poke around in the literature, they’ll find that Dembski and Marks reinvented the wheel with some of their analysis. That’s the charitable conclusion, anyway. Winston Ewert and George Montañez, don’t say the big, bad evilutionist never gave you anything. Wednesday, July 28, 2010 Creeping elegance, or shameless hacking? In my previous post, I did not feel great about handling processes in the following code for selection, but I did not see a good way around it. 
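To make the connection concrete, here is a small Python sketch (mine, not from the papers under discussion) of the two fitness functions: OneMax counts the 1's in a bit string, and the Weasel-style generalization counts the positions at which a candidate matches a fixed target string over an alphabet of size N.

# OneMax and its string-matching generalization, as discussed above.
def onemax(bits):
    # fitness of a bit string is simply its number of 1's
    return sum(bits)

def matches(candidate, target):
    # generalized fitness: count of positions matching the target string
    return sum(1 for c, t in zip(candidate, target) if c == t)

print(onemax([1, 0, 1, 1]))            # -> 3
print(matches("WEAKEN", "WEASEL"))     # -> 4 (hypothetical strings)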
from heapq import heapreplace def selected(population, popSize, nSelect, best = None): if best == None: best = nSelect * [(None, None)] threshold = best[0][0] nSent = 0 for process in processes: if nSent == popSize: break process.submit(population[nSent], threshold) nSent += 1 for process, x, score in scored(popSize): if score > threshold: heapreplace(best, (score, x)) threshold = best[0][0] if nSent < popSize: process.submit(population[nSent], threshold) nSent += 1 return best What I really want is for selected to know nothing about parallel processing, and for the generator of scored individuals to know nothing about selection. The problem is that threshold changes dynamically, and the generator needs a reference to it. As best I can tell, there are no scalar references in Python. Having taught LISP a gazillion times, I should have realized immediately that I could exploit the lexical scoping of Python, and pass to the generator a threshold-returning function defined within the scope of selected. def selected(self, population, nSelect, best = None): if best is None: best = nSelect * [(None, None)] threshold = lambda: best[0][0] for x, score in evaluator.eval(population, threshold): if score > best[0][0]: heapreplace(best, (score, x)) return best Perhaps I should not be blogging about my first Python program. Then again, I’m not the worst programmer on the planet, and some folks may learn from my discussion of code improvement. This go around, I need to show you the generator. def eval(self, population, threshold): popSize = len(population) nSent = 0 for process in self.processes: if nSent == popSize: break process.submit(population[nSent], threshold()) nSent += 1 for unused in population: yield process.result() if nSent < popSize: process.submit(population[nSent], threshold()) nSent += 1 This is a method in class Evaluator, which I plan to release. No knowledge of parallel processing is required to use Evaluator objects. The __init__ method starts up the indexed collection of processes, each of which knows its own index. It also opens a Pipe through which processes send their indexes when they have computed the fitness of individuals submitted to them. The Evaluator object’s Connection to the pipe is named isReady. The first for loop comes from the original version of selected. Iteration over population in the second for loop is just a convenient way of making sure that a result is generated for each individual. In the first line of the loop body, a ready process is identified by receiving its index through the isReady connection. Then the generator yields the result of a fitness evaluation. The flow of control stops flowing at this point, and resumes only when selected returns to the beginning of its for loop and requests the next result from the eval generator. When execution of the generator resumes, the next unevaluated individual in the population, if any, is submitted to the ready process, along with the value of a call to the threshold function. The call gives the current value of best[0][0], the selection threshold. By the way, the Pipe should be a Queue, because only the “producer” processes, and not the “consumer” process, send messages through it. But Queue is presently not functioning correctly under the operating system I use, Mac OS X. 
Monday, July 26, 2010

Efficient selection with fitness thresholds, heaps, and parallel processing — easier done than said

The obvious approach to selection in an evolutionary algorithm is to preserve the better individuals in the population and cull the others. This is known as truncation selection. The term hints at sorting a list of individuals in descending order of fitness, and then truncating it to length nSelect. But that is really not the way to do things. And doing selection well is really not that hard. After providing a gentle review of the considerations, I'll prove my point with 18 lines of code.

A principle of computational problem solving is not to waste time determining just how bad a bad solution is. Suppose we're selecting the 3 fittest of a population of 10 individuals, and that the first 3 fitness scores we obtain are 90, 97, and 93. This means that we're no longer interested in individuals with fitness of 90 or lower. If it becomes clear in the course of evaluating the fourth individual that its fitness does not exceed the threshold of 90, we can immediately assign it fitness of, say, 0 and move on to the next individual.

Use of the threshold need not be so simple. For some fitness functions, a high threshold reduces work for all evaluations. An example is fitness based on the Levenshtein distance of a string of characters t from a reference string s. This distance is the minimum number of insertions, deletions, and substitutions of single characters required to make the strings identical. Fitness is inversely related to distance. Increasing the threshold reduces the number of possible alignments of the strings that must be considered in computing the distance. In limited experiments with an evolutionary computation involving the Levenshtein distance, I've halved the execution time by exploiting thresholds.
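To make that concrete, here is one standard way to cut the work: a dynamic-programming Levenshtein distance that abandons a candidate as soon as the distance provably exceeds a bound. This sketch is mine, not the code from my experiments; bound would come from the current selection threshold.

def levenshtein_within(s, t, bound):
    """Return the edit distance between s and t if it is <= bound, else None."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # delete cs
                            curr[j - 1] + 1,             # insert ct
                            prev[j - 1] + (cs != ct)))   # substitute
        if min(curr) > bound:
            return None  # every alignment through this row already exceeds the bound
        prev = curr
    return prev[-1] if prev[-1] <= bound else None

The early exit is sound because the optimal alignment passes through every row of the table, so the row minimum is a lower bound on the final distance.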
A natural choice of data structure for keeping track of the nSelect fittest individuals is a min heap. All you need to know about the heap is that it is stored in an indexed data structure, and that the least element has the least index. That is, the threshold element is always heap[0] when indexing is zero-based. The heap is initialized to contain nSelect dummy individuals of infinitely poor fitness. When an individual has super-threshold fitness, it replaces the threshold element, and the heap is readjusted.

Evolutionary computations cry out for parallel processing. It is cruel and immoral to run them sequentially on computers with multiple processors (cores). But I have made it seem as though providing the fitness function with the selection threshold depends upon sequential evaluation of individuals. There are important cases in which it does not. If parents compete with offspring for survival, then the heap is initialized at the beginning of the run, and is reinitialized only when the fitness function changes — never, in most applications. Also, if the number of fitness evaluations per generation exceeds the number of processors, as is common with present technology, then there remains a sequential component in processing.

The way I've approached parallel processing is to maintain throughout the evolutionary run a collection of processes dedicated to fitness evaluation. The processes exist when fitness evaluation cum selection begins. First an individual is submitted, along with the threshold, to each process. Then fitness scores are received one by one. For each score received, the heap and threshold are updated if necessary, and an unevaluated individual is submitted, along with the threshold, to the process that provided the score. In the experiments I mentioned above, I've nearly halved the execution time by running two cores instead of one. That is, the combined use of thresholds and two fitness-evaluation processes gives almost a factor-of-4 speedup.

The Python function

I'm going to provide an explanation that any programmer should be able to follow. But first look the code over, considering what I've said thus far. The heap, named best, is an optional parameter. The variable nSent registers the number of individuals that have been submitted to evaluation processes. It steps from 0 to popSize, the size of the population.

from heapq import heapreplace

def selected(population, popSize, nSelect, best = None):
    if best == None:
        best = nSelect * [(None, None)]
    threshold = best[0][0]
    nSent = 0
    for process in processes:
        if nSent == popSize:
            break
        process.submit(population[nSent], threshold)
        nSent += 1
    for process, x, score in scored(popSize):
        if score > threshold:
            heapreplace(best, (score, x))
            threshold = best[0][0]
        if nSent < popSize:
            process.submit(population[nSent], threshold)
            nSent += 1
    return best

If no heap is supplied, best is set to an indexed collection of nSelect (unfit, dummy) pairs represented as (None, None). This works because any (fitness, individual) pair is greater than (None, None). The expression best[0][0] yields the fitness of the least fit individual in the heap, i.e., the threshold fitness for selection.

The first for loop submits to each of the waiting processes an individual in population to evaluate, along with threshold. [My next post greatly improves selected by eliminating the direct manipulation of processes.] The loop exits early if there is a surplus of processes. The processes are instances of a subclass of multiprocessing.Process that I have defined, but am "hiding" from you. I am illustrating how to keep the logic of parallel processing simple through object-oriented design. You don't need to see the code to understand perfectly well that process.submit() communicates the arguments to process.

The second for loop iterates popSize times, processing triples obtained from scored. Despite appearances, scored is not a function, but a generator. It does not return a collection of all of the triples. In each iteration, it yields just one (process, x, score) to indicate the process that most recently communicated an evaluation (x, score). This indicates not only that the fitness of individual x is score, but that process is waiting to evaluate another individual. If the new score exceeds the selection threshold, then (score, x) goes into the best heap, and threshold is updated. And then the next unevaluated individual in the population, if any, is submitted along with the threshold to the ready process. When the loop is exited, each individual has had its chance to get into the best heap, which is returned to the caller.

By the way, there's an argument to be made that when the best heap is supplied to the function, an individual with fitness equal to that of the worst in the heap should replace the worst. Presumably the heap contains parents that are competing with offspring for survival. Replacing parents with offspring when they are no better than the offspring can enhance escape from fitness plateaus.
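One caveat about the (None, None) sentinel: it relies on Python 2's willingness to order None against numbers. Under Python 3 that comparison raises a TypeError, so a numeric sentinel does the same job. A small self-contained sketch of the heap mechanics, portable to either version:

from heapq import heapify, heapreplace

nSelect = 3
best = nSelect * [(float('-inf'), None)]  # a sentinel that orders below any real score
heapify(best)

for x, score in [('u', 90), ('v', 97), ('w', 93), ('y', 85), ('z', 99)]:
    if score > best[0][0]:
        heapreplace(best, (score, x))

print(sorted(best))  # [(93, 'w'), (97, 'v'), (99, 'z')]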
Tuesday, July 13, 2010

Sure mutation in Python

In Python, "lazy" mutation goes something like this:

from random import random, choice

for i in range(len(offspring)):
    if random() < mutation_rate:
        offspring[i] = choice(alphabet)

The random() value is uniform on [0, 1), and the choice function returns a character drawn uniformly at random from alphabet. It follows from my last post that this can be made right within the implementation of an evolutionary algorithm by defining

adjusted_rate = \
    float(len(alphabet)) / (len(alphabet) - 1) * mutation_rate  # float() avoids Python 2 integer division

and using it in place of mutation_rate. "And that's all I have to say about that."

If you want a mutation operator that surely mutates, the following code performs well:

from random import randint

alphabet = 'abcdefghijklmnopqrstuvwxyz '
alphaSize = len(alphabet)
alphaIndex = \
    dict([(alphabet[i], i) for i in range(alphaSize)])

def mutate(c):
    i = randint(0, alphaSize - 2)
    if i >= alphaIndex[c]:
        i += 1
    return alphabet[i]

Here alphaIndex is a dictionary associating each character in the alphabet with its index in the string alphabet. The first character of a string is indexed 0. Thus the expressions alphaIndex['a'] and alphaIndex['d'] evaluate to 0 and 3, respectively. For all characters c in alphabet, alphaIndex[c] == alphabet.index(c). Looking up an index in the dictionary alphaIndex is slightly faster than calling the function alphabet.index, which searches alphabet sequentially to locate the character. The performance advantage for the dictionary would be greater if the alphabet were larger.

The function mutate randomly selects an index other than that of character c, and returns the character in alphabet with the selected index. It starts by calling randint to get a random index i between "least index" (0) and "maximum index minus 1." The trick is that if i is greater than or equal to the index of the character c that we want to exclude from selection, then it is bumped up by 1. This puts it in the range alphaIndex[c] + 1, …, alphaSize - 1 (the maximum index). All indexes other than that of c are equally likely to be selected.

Monday, July 12, 2010

The roly poly and the cockroach

You may have noted in my post on Dembski's Weasel-whipping that I gave .0096 as the mutation rate of the Dobzhansky program, a conventional (1, 200) evolutionary algorithm (EA), while Yarus indicates that "1 in 100 characters in each generation" are mutated. Well, the slight discrepancy is due to the fact that the program uses the "lazy" mutation operator that I slammed as a bug in the algorithms analyzed by Ewert, Montañez, Dembski, and Marks [here]. I should explain that what is a roly poly in one context is a big, fat, nasty cockroach in another.

To mutate is to cause or to undergo change. That is, mutation is actual change, not an attempt at change. The lazy mutation operator simply overwrites a character in a phrase with a character drawn randomly from the alphabet, and sometimes fails to change the character. For an alphabet of size N, the mutation rate is (N − 1) / N times the probability that the operator is invoked. For the Dobzhansky program, N = 27, and 26 / 27 × .01 ≈ .0096. The difference between .01 and .0096 is irrelevant to what Yarus writes about the program.

Correcting an EA implementation that uses a lazy mutation operator is trivial. Set the rate at which the operation is performed to N / (N − 1) times the desired mutation rate.
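A quick empirical check of that correction (my sketch, not code from the Dobzhansky program): invoke the lazy operator at the adjusted rate, and measure the realized rate of actual change.

from random import random, choice

alphabet = 'abcdefghijklmnopqrstuvwxyz '
desired = 0.01
adjusted = float(len(alphabet)) / (len(alphabet) - 1) * desired

trials = 10 ** 6
changed = 0
for _ in range(trials):
    c = 'e'
    mutant = choice(alphabet) if random() < adjusted else c
    changed += (mutant != c)

print(changed / float(trials))  # hovers around the desired rate, 0.01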
Goodbye, roly poly. No such trick exterminates the cucaracha of Ewert et al.

Their algorithms (A) and (B), abstracted from the apocryphal Weasel programs, apply the lazy mutation operator to exactly one randomly selected character in each offspring phrase. The alphabet size ranges from 1 to 100, so the probability that an offspring is a mutant ranges from 0 / 1 to 99 / 100. As I explained in my previous post, it is utterly bizarre for the alphabet size to govern mutation in this manner. The algorithms are whirlygigs, of no interest to biologists and engineers.

Ewert et al. also address as algorithm (C) what would be a (1, 200) EA, were its "mutation" always mutation. The rate of application of the lazy mutation operator is fixed at 1 / 20. It is important to know that the near-optimal mutation rate for phrases of length L is 1 / L. With 100 characters in the alphabet, the effective mutation rate is almost 1 / 20, and the algorithm is implicitly tuned to handle phrases of length 20. For a binary alphabet, the effective mutation rate is just 1 / 40, and the algorithm is implicitly tuned to handle phrases of length 40. This should give you a strong sense of why the mutation rate should be explicit in analysis — as it always is in the evolutionary computation literature that the "maverick geniuses" do not bother to survey.

Regarding the number of mutants

I may have seemed to criticize the Dobzhansky program for generating many non-mutant offspring. That was not my intent. I think it's interesting that the program performs so well, behaving as it does. With the near-optimal mutation rate of 1 / L, the probability of generating a copy of the parent, (1 − 1 / L)^L, converges rapidly on e^−1 ≈ 0.368. Even for L as low as 25, an average of 72 in 200 offspring are exact copies of the parent.

It would not have been appropriate for Yarus to tune the mutation rate to the length of the Dobzhansky quote. That's the sort of thing we do in computational problem solving. It's not how nature works. I don't make much of the fact that Yarus had an expected 109, rather than 73, non-mutant offspring per generation.

Edit: Yarus possibly changed the parameter settings of the program from those I've seen. I really don't care if he did. I'm trying to share some fundamentals of how (not) to analyze evolutionary algorithms.
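A few lines of Python confirm the copy-probability figures above (an illustrative check, not part of the post):

from math import exp

L = 25
p_copy = (1 - 1.0 / L) ** L
print(p_copy, exp(-1))       # ~0.3604 vs. 1/e ~0.3679
print(round(200 * p_copy))   # ~72 exact copies expected among 200 offspring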
2017-06-24 13:47:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3906829357147217, "perplexity": 1588.508693405683}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320263.78/warc/CC-MAIN-20170624133941-20170624153941-00265.warc.gz"}
https://www.gamedev.net/forums/topic/193268-dev-c-includes/
# Dev-C++ includes

I am using Dev-C++. I made a project with three files: main.cpp, definitions.h, and graphics.cpp. I #included the other two in main. When I tried to compile, it got tons of errors. I removed the other two from the project, but still #included them in main, and it worked. I think the compiler tried to compile each file in the project separately. How can I make it only compile the one "main" file in a project? I don't want to have to dig through the project folder (eventually, when it is much bigger) to look at some snippet of code.

##### Share on other sites

NEVER include a cpp file... They are automatically thrown in by the compiler...

##### Share on other sites

No, I believe what happened was that not only did you include them, but the compiler included them as well because they were part of the project. This would lead to a lot of duplicate definition errors. I'm not an expert, so I may be wrong, but I think this is what happened.

(Stolen from Programmer One) UNIX is an operating system, OS/2 is half an operating system, Windows is a shell, and DOS is a boot partition virus

##### Share on other sites

That is exactly the opposite of what you want to do. You're right, Dev-C++ builds each cpp file individually - that's the whole point of C's compiler model. Each .cpp is compiled (by gcc in this case) into a .o (object) file, which the linker (ld in this case) then takes and links with all of the other .o files generated from all the other .cpp files. Mix in any included libraries, shake, and voilà - program compiled. Putting everything in one file is Evil™. Just don't #include the graphics.cpp file in main.cpp, and let Dev-C++ do what it's supposed to do.

##### Share on other sites

If I just let the compiler do its thing, it works almost perfectly. The huge amounts of errors are gone. The only problem is that main doesn't recognise the class defined in character. How can I tell the compiler that main depends on the external class 'Character'? I tried extern, but it might have been bad syntax.

##### Share on other sites

quote: Original post by nagromo: The only problem is that main doesn't recognise the class defined in character. How can I tell the compiler that main depends on the external class 'Character'?

In which file is the Character class defined? If it's in a header (.h) file, go ahead and include it and it should work. If it's in a .cpp file, then you need to strip the class's declaration out of that file (so it only contains the class implementation), put it in a header file, and include the header file where necessary. Don't forget the #ifndef/#define...#endif in the header file, or you're gonna have more problems.

yckx

##### Share on other sites

I always use the #ifndef stuff in .h files. Here is how I have it now:

Character.h: class definition
Character.cpp: class member functions
main.cpp: main file

Where should I #include Character.h? If I include it in main, the Character.cpp file doesn't recognize the member functions:

Character::Character(u16 x,...) { [syntax error before '::' token]

but if I include it in Character, main still can't use the class. I know my code is OK, because if I copy and paste it all into main it works, but as the project grows that would be a horror.
##### Share on other sites

// main.h
//   #include everything that all files need
//   declare all datatypes
//   declare all your functions from every file

***

// main.cpp
#include "main.h"
#include <something ONLY main needs>

***

// character.cpp
#include "main.h"
#include <something ONLY character.cpp needs>

##### Share on other sites

I did that and it finally works! Wouldn't that lead to repetitive includes, though? As I get more cpp files, I don't want to include my entire API and definitions in each file. Another thing that might work is to say that main depends on definitions in character. Is there a way to tell it what file depends on other files without using makefiles?

[edited by - nagromo on November 27, 2003 12:21:55 PM]

##### Share on other sites

That's why modularization is good. If your API is split in several .h files, each of which makes sense on its own, then you don't have to include the headers for the ENTIRE API in each other CPP file; only the headers for the modules you actually need functions from. However, "unnecessary includes" don't really cost anything; they're already in the file cache; parsing them should take a few milliseconds (literally). If you program for Windows, you probably #include <windows.h> which includes like 80% of the Win32 API (not very modularized :-) and that doesn't hurt too much. For large projects, there's also pre-compiled headers on many compilers.

Anyway, the model that's defined by the compiler and linker is that each CPP file is compiled as its own thing, generating a O or OBJ file. The compiler or development environment do not keep any state between each successive compile. Thus, if you want to use a function in two separate CPP files, you need to include the prototype of the function (from a header) in each of the CPP files. The compiler is a separate program that's started for each CPP file by the "make" utility or development environment, and which terminates when it's done compiling a single CPP file.

Once you have all CPP files turned into O/OBJ files, your make or build environment will probably call the linker to compose all those O/OBJ files, together with libraries that you specify, or which are default for your environment, into an executable. Here is where you get "missing symbol" or "duplicate definition" errors. I find that it's quite important to understand exactly how symbols and references work, are generated, and are resolved, because otherwise you will be forever confused about the exact roles of your different tools, which leads to pain.

##### Share on other sites

There was a post recently about C's #include system sucking. This is why. For each class, have one .cpp file and one .h (or .hpp) file. Put the class definition in the .h and the implementation in the .cpp. Then, only include the .h files where they are needed. Don't put everything in one header file. There are some things I like about Java, and some things I hate, but the modules are what I like most.

// 3DObject definitions
#ifndef OBJECT3D_H
#define OBJECT3D_H
//--------------------------------------
#include "CObject.h"
#include "CSprite.h"
#include "C3DSModel.h"
#include "CMD2Model.h"
#include "CMD3Model.h"
// etc...
//--------------------------------------
#endif
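Pulling the thread's advice together, here is a minimal sketch of the three-file layout being described. The file names come from the thread; the u16 typedef and the class members are invented for illustration.

// Character.h -- class declaration only, guarded against double inclusion
#ifndef CHARACTER_H
#define CHARACTER_H

typedef unsigned short u16;  // the thread's constructor takes u16 arguments

class Character {
public:
    Character(u16 x, u16 y);
    u16 getX() const;
private:
    u16 x, y;
};

#endif // CHARACTER_H

// Character.cpp -- member function definitions; includes its own header
#include "Character.h"

Character::Character(u16 x, u16 y) : x(x), y(y) {}
u16 Character::getX() const { return x; }

// main.cpp -- includes only the header, never the .cpp
#include "Character.h"

int main() {
    Character hero(3, 4);
    return hero.getX();
}

Each .cpp compiles to its own object file, and the linker joins them; no .cpp ever #includes another.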
2017-12-18 17:27:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3747597634792328, "perplexity": 4146.044886003806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948618633.95/warc/CC-MAIN-20171218161254-20171218183254-00151.warc.gz"}
https://socratic.org/questions/a-sample-of-hydrogen-gas-occupies-14-1-l-at-stp-how-many-moles-of-the-gas-are-pr
# A sample of hydrogen gas occupies 14.1 L at STP. How many moles of the gas are present?

$0.63\ \text{mol}$

At STP, one mole of an ideal gas occupies 22.4 L, so

$14.1\ \cancel{\text{L}} \cdot \frac{1\ \text{mol}}{22.4\ \cancel{\text{L}}} = 0.6294642857143\ \text{mol} \approx 0.63\ \text{mol}$
2019-08-18 21:06:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6984453797340393, "perplexity": 1132.0916704386086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314130.7/warc/CC-MAIN-20190818205919-20190818231919-00509.warc.gz"}
http://www.maths.bath.ac.uk/~masmdp/ab/spread.html
# SELF-AVOIDING WALKS AND TREES IN SPREAD-OUT LATTICES

## by Mathew D. Penrose

Let $G_R$ be the graph obtained by joining all sites of $\mathbb{Z}^d$ which are separated by a distance of at most $R$. Let $\mu(G_R)$ denote the connective constant for counting the self-avoiding walks in this graph. Let $\lambda(G_R)$ denote the corresponding constant for counting the trees embedded in $G_R$. Then as $R$ goes to infinity, $\mu(G_R)$ is asymptotic to the co-ordination number $k_R$ of $G_R$, while $\lambda(G_R)$ is asymptotic to $e k_R$. However, if $d$ is 1 or 2, then $\mu(G_R) - k_R$ diverges to $-\infty$.

Journal of Statistical Physics 77, 3-15 (1994).
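For readers outside the area, the connective constant referenced above is the standard growth-rate limit (this definition is supplied here for context and is not part of the abstract):

$\mu(G) = \lim_{n \to \infty} c_n(G)^{1/n},$

where $c_n(G)$ is the number of $n$-step self-avoiding walks in $G$ starting from a fixed origin.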
2018-12-13 13:15:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9765937328338623, "perplexity": 143.92972054949428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824822.41/warc/CC-MAIN-20181213123823-20181213145323-00268.warc.gz"}
https://hal-udl.archives-ouvertes.fr/hal-01464788v1
# On the fourth derivative test for exponential sums

Abstract: We give an upper bound for the exponential sum $\sum_{m=1}^{M} \exp(2i\pi f(m))$ where $f$ is a real-valued function whose fourth derivative has order of magnitude $\lambda > 0$, with $\lambda$ small. Van der Corput's classical bound, in terms of $M$ and $\lambda$ only, involves the exponent $1/14$. We show how this exponent may be replaced by any $\theta < 1/12$ without further hypotheses. The proof uses a recent result by Wooley on the cubic Vinogradov system.

Citation: Olivier Robert. On the fourth derivative test for exponential sums. Forum Mathematicum, De Gruyter, 2016, 28 (2), pp. 403-404. doi:10.1515/forum-2014-0216. hal-01464788.
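For context (not from the abstract, and stated here in the form commonly quoted in the literature, with the precise hypotheses omitted): van der Corput's $k$-th derivative test asserts that if $f^{(k)} \asymp \lambda$ then

$\sum_{m=1}^{M} e(f(m)) \ll M \lambda^{1/(2^k - 2)} + M^{1 - 2^{2-k}} \lambda^{-1/(2^k - 2)}.$

Taking $k = 4$ gives $2^k - 2 = 14$, the source of the exponent $1/14$ mentioned above.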
2019-02-21 01:24:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.517573893070221, "perplexity": 2356.866206724718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247497858.46/warc/CC-MAIN-20190221010932-20190221032932-00406.warc.gz"}
https://physics.stackexchange.com/questions/666714/why-are-there-no-known-white-dwarfs-between-1-35-to-1-44-solar-masses
# Why are there no known white dwarfs between 1.35 and 1.44 solar masses?

The Chandrasekhar limit for white dwarfs is 1.44 solar masses; however, the heaviest known white dwarf is only 1.35 solar masses. https://earthsky.org/space/smallest-most-massive-white-dwarf/

What's the cause of this gap in mass?

The "Chandrasekhar mass" of 1.44 solar masses is based on a pair of unrealistic assumptions, that are not met in practice, which means the true mass limit is more like 1.37 or 1.38 solar masses. The two assumptions are:

(I) that the white dwarf is supported by ideal electron degeneracy pressure, i.e., point-like, non-interacting fermions.

(II) that the structure of the star is governed by Newtonian gravity.

The first assumption fails because the electrons and ions do have Coulomb interactions that make the material more compressible. More importantly, at high densities (and the density increases with mass), the electron Fermi energy eventually becomes high enough to initiate electron capture to make more neutron-rich nuclei. Since the electrons are ultra-relativistic, the star is already marginally stable at this stage, and the removal of electrons causes instability and collapse.

The second assumption fails because more massive white dwarfs are smaller and General Relativity must be used. The General Relativistic formulation of the equation of hydrostatic equilibrium features pressure on the RHS. So the higher the pressure, the steeper the required pressure gradient. Ultimately, this also leads to an instability at a finite size and density that occurs at masses lower than the canonical Chandrasekhar mass.

For typical C/O white dwarfs, both of the instabilities discussed above occur when the white dwarf is at about 1.38 solar masses.

Note that white dwarfs of more than about 1.2 solar masses are not expected to arise from the evolution of a single star. If the C/O core of a star is more massive than this, then it will also become hot enough to ignite these elements. More massive white dwarfs will need to have been produced by accretion in a binary system or by a merger. Then, another factor comes into play, which is the possible detonation of the entire white dwarf, which may also occur at about 1.37 to 1.38 solar masses, possibly ignited by the fusion of helium from the accreted material or by pycnonuclear reactions in the dense C/O core.

Postscript - there actually are some white dwarfs with estimated masses of $$1.35-1.37M_\odot$$ in classical novae binary systems (e.g. Hachisu & Kato 2001). These may be systems that are about to go "bang".

• This also means the accreting versions tend to explode at a very consistent mass and result in consistent energy release; thus type Ia supernovae are used as standard candles. Sep 19 '21 at 6:10

• It seems a really odd coincidence that two effects of such fundamentally different nature both kick in within 90% of the non-interacting/Newtonian Chandrasekhar limit. Is there any known reason why it plays out this way? Sep 20 '21 at 8:38

• @leftaroundabout - The reason is a survivorship bias. The Chandrasekhar limit was derived using some simplifying assumptions, like every result in physics. It would have achieved little fame had it left a gap of orders of magnitude between the heaviest observed white dwarfs and the theoretical limit. That great fit left a thinner mass gap for the inevitable corrections to either the observational or theoretical side of the equation.
Sep 20 '21 at 9:52 • @leftaroundabout the reason is that the dependence of density on mass becomes very steep as the canonical Chandrasekhar limit is approached. It is like $\rho \propto M^2$ at lower masses, but becomes much steeper as the electrons become ultrarelativistic. This means that the central density increases by orders of magnitude as you increase the mass up towards that last 10% before the Chandrasekhar mass. And it is density that is really the important parameter here. Sep 20 '21 at 10:05 • +1 for an excellent answer covering most white dwarfs, but what about the unusual ones (which is what we might be looking for at the limit of stability)? For example, does the rotation of a white dwarf affect its ability to remain stable at more than 1.38 solar masses? What if the rotation isn't uniform? What about the composition: some white dwarfs are ONeMg & others are He. I'm guessing that greater or lesser mass in the nucleus will change the constraints on relativistic uncertainty? Dec 13 '21 at 0:19
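For orientation, the ideal Newtonian limit discussed in the answer is commonly written as (a standard textbook expression, added here for reference and not part of the answer):

$$M_{\rm Ch} \approx \frac{5.83}{\mu_e^{2}}\, M_\odot \approx 1.46\, M_\odot \quad (\mu_e = 2),$$

where $\mu_e$ is the mean molecular weight per electron, equal to 2 for C/O material; the familiar 1.44 figure corresponds to a slightly different choice of constants. Either way, the corrections described above pull the practical limit down to about 1.37 to 1.38 solar masses.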
2022-01-19 01:47:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6996709704399109, "perplexity": 372.28474366857813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301217.83/warc/CC-MAIN-20220119003144-20220119033144-00637.warc.gz"}
https://www.physicsforums.com/threads/find-the-frequency-and-wavelength-of-a-100-mev-gamma-ray-photon.216430/
# Find the frequency and wavelength of a 100 MeV gamma ray photon

## Homework Statement

Find the frequency and wavelength of a 100 MeV gamma ray photon

## Homework Equations

100 MeV = 1.602 × 10^-11 J

## The Attempt at a Solution

I do not know how to do this. I am in a class for elementary teachers and we have to solve this problem.

## Answers and Replies

Do you have any equations that relate wavelength to frequency?

I do not know any.

Well, you should certainly have an equation relating wavelength and frequency; the less obvious thing is relating the energy of a photon to the frequency/wavelength. You should also have an equation for that; it involves Planck's constant, h.

Wavelength equals frequency times 340 meters per second...

quote: Wavelength equals frequency times 340 meters per second...

Unfortunately that's the specific equation for sound waves, pretty much at sea level. The topic creator is dealing with electromagnetic waves.

mgb_phys, Homework Helper:

The equation for frequency and wavelength is simply speed = wavelength * frequency. A gamma ray photon is a kind of light, so you need the speed of light. There is also an equation relating its energy, energy = h * frequency, where 'h' is a constant = 6.6 × 10^-34 J s. Remember that all your values must be in the same units (metres, seconds, Joules) for these constants to work.

We're doing this in my A-level at school. First find the frequency using

Energy (J) = Planck constant (J s) × Frequency (Hz)

Or, E = hf. The Planck constant is 6.6 × 10^-34.

So: (1.602 × 10^-11) J = (6.6 × 10^-34) J s × f

Or f = (1.602 × 10^-11) J / (6.6 × 10^-34) J s

So the frequency is about 2.43 × 10^22 Hz.

Speed (m/s) = Frequency (Hz) × Wavelength (m)

So: Wavelength = (3.0 × 10^8) m/s / (2.43 × 10^22) Hz ≈ 1.24 × 10^-14 m
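Collecting the thread's method into one place (rounding in h is what moves the last digit around):

f = E/h = (1.602 × 10^-11 J) / (6.63 × 10^-34 J s) ≈ 2.4 × 10^22 Hz
λ = c/f = hc/E = (3.0 × 10^8 m/s) / (2.4 × 10^22 Hz) ≈ 1.2 × 10^-14 m

Note the shortcut λ = hc/E, which gives the wavelength without computing the frequency at all.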
2021-11-28 11:43:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8116888403892517, "perplexity": 2509.2012218100353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358520.50/warc/CC-MAIN-20211128103924-20211128133924-00274.warc.gz"}
https://getrevising.co.uk/revision-notes/the-periodic-table
# The Periodic Table

# Electronic structure and the periodic table

As you have seen, there is a link between an atom's electronic structure and its position in the periodic table. You can work out an atom's electronic structure from its place in the periodic table.

Periodic table related to electronic structure

The diagram shows a section of the periodic table, with the elements arranged as usual in the order of their atomic number, from 2 to 20. The red numbers below each chemical symbol show its electronic structure. Moving across each period, you can see that the number of occupied energy levels is the same as the period number. As you go across each period from left to right, an energy level gradually becomes filled with electrons. The highest occupied energy level contains just one electron on the left-hand side of the table. It is filled by the time you get to the right-hand side. For example, sodium (atomic number 11) has the electronic structure 2,8,1: three occupied energy levels put it in period 3, and the single electron in its highest level puts it in group 1. Moving down each group, you can see that the…
2016-10-25 03:42:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369707703590393, "perplexity": 318.38022894157046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719877.27/warc/CC-MAIN-20161020183839-00486-ip-10-171-6-4.ec2.internal.warc.gz"}
http://openstudy.com/updates/4d8eb63e1a428b0b238a792d
## anonymous 5 years ago

Find the oblique (or slant) asymptote of y = (5x^3 + 6x^2) / (5x^2 + 4x + 5) and express your answer in the form y = mx + b, where m and b are constants.

I did oblique asymptotes years ago. Maybe I can help you out; there are several steps that you must take.

1) Check whether your function has an oblique asymptote by finding the limit as x → infinity.
2) If you end up with infinity, then you have an oblique asymptote.
3) After that, divide your function by x to find m: since y = mx + b, we have mx ≈ y for large x, so m = lim y/x. Then simplify your expression.
4) After finding m (the slope), compute b = lim (y − mx), where mx uses the slope you have just found.
5) Finally, you have the slope and b, so you can then write the equation of the OA.

Those were general steps that can help you solve any problem that has an OA. As for your question:

$y = \frac{5x^3 + 6x^2}{5x^2 + 4x + 5}$

Step (1): divide it by x and you'll get

$\frac{y}{x} = \frac{5x^3 + 6x^2}{x(5x^2 + 4x + 5)}$

Multiply through by x using the distributive law and you will have:

$\frac{y}{x} = \frac{5x^3 + 6x^2}{5x^3 + 4x^2 + 5x}$

Now find the limit of this as x goes to infinity:

$m = \lim_{x \rightarrow \infty} \frac{5x^3}{5x^3} = 1$

After that we need to find b. We know that b = lim (y − mx), and we have found m = 1, so mx = x. Now subtract x from y (the original function):

$\frac{5x^3 + 6x^2}{5x^2 + 4x + 5} - x$

Do the simplification and you'll end up with:

$\frac{5x^3 + 6x^2 - 5x^3 - 4x^2 - 5x}{5x^2 + 4x + 5} = \frac{2x^2 - 5x}{5x^2 + 4x + 5}$

Then find the limit of that as x goes to infinity:

$b = \lim_{x \rightarrow \infty} \frac{2x^2}{5x^2} = \frac{2}{5}$

Finally, write your equation in a more decent form:

m = 1 and b = 2/5, so y = x + 2/5. Done :)
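The same answer drops out of a single polynomial long division (a cross-check added here, not part of the original reply):

$\frac{5x^3 + 6x^2}{5x^2 + 4x + 5} = x + \frac{2}{5} - \frac{\frac{33}{5}x + 2}{5x^2 + 4x + 5}.$

The remainder term tends to 0 as $x \to \pm\infty$, so the slant asymptote is $y = x + \frac{2}{5}$.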
2017-01-18 00:03:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8377771973609924, "perplexity": 373.046112150919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00144-ip-10-171-10-70.ec2.internal.warc.gz"}
https://bioinformatics.stackexchange.com/questions/10442/putting-labels-of-different-sizes-on-one-pymol-object/10514
# Putting labels of different sizes on one PyMOL object

I'm new to PyMOL (and Stack Exchange!) and working on my first project. I have the structure of a protein as an object, called PolyA-M, and the idea is that its residues are shown as spheres of differing sizes based on a calculated conservation value. I want to label each sphere with its amino acid, but to have the label size correspond to sphere size. Here is a sample of the code that might be used to label one amino acid.

alter ( resid 138 ), resn = "Q135"
alter ( name CB and resid 138 ), vdw = vdw * 0.8 * 0.679934640522875
set_color col138, [ 1.0, 0.181, 0.181 ]
show spheres, name CB and resi 138
show sticks, ( name CA or name CB ) and resid 138
set label_size, 14, PolyA-M
label PolyA-M and name CB and resid 138, "Q"
color col138, name CB and resi 138

In this case, residue 138 is a medium-sized sphere with a size 14 label Q that fits well. However, if later on in the code another residue is labelled like so:

alter ( resid 198 ), resn = "A208"
alter ( name CB and resid 198 ), vdw = vdw * 0.8 * 0.359803921568627
set_color col198, [ 1.0, 0.601, 0.601 ]
show spheres, name CB and resi 198
show sticks, ( name CA or name CB ) and resid 198
set label_size, 8, PolyA-M
label PolyA-M and name CB and resid 198, "A"
color col198, name CB and resi 198

this will label residue 198 with a small A to fit the small sphere. However, this will also change the earlier label to size 8 as well, making it too small for the sphere. Is there any way for me to prevent this, such that each residue keeps its own label size?

I know that one way is to create multiple identical objects and keep all labels of the same size restricted to the same object (for example, PolyA-M_10 contains all residues with label size 10, PolyA-M_14 all residues with label size 14), but I was wondering if there is a more efficient way?

Thank you for any help! This is my first Stack Exchange question so any feedback would be appreciated :)

'label_size' is an object-state-level setting, which means that you will have to rely on the 'create' command to create a new object for every different label size. It can certainly be automated, and for this I'd recommend getting into Python scripting for PyMOL.

• Ah, I see what you mean! The way I've been doing it with multiple objects is to just create multiple copies in the directory (so PolyA-M_1 to PolyA-M_10) such that it reads those in too, and then writes labels of different sizes onto them. But if there isn't a command which is able to change label size at a residue level, then this definitely sounds more efficient than having loads of different files. It would also be good for me to learn more scripting in general. Thank you! Sep 20, 2019 at 10:32

I ended up fixing this by just putting this into the boilerplate code:

copy PolyA-MC_4, PolyA-MC
copy PolyA-MC_6, PolyA-MC
copy PolyA-MC_8, PolyA-MC
copy PolyA-MC_10, PolyA-MC
copy PolyA-MC_12, PolyA-MC
copy PolyA-MC_14, PolyA-MC
copy PolyA-MC_16, PolyA-MC
copy PolyA-MC_18, PolyA-MC
copy PolyA-MC_20, PolyA-MC

This copied my object such that I had one copy per size of label text, and then each label was placed on the matching object. As all labels on a given object are the same size, it worked! E.g., for one residue the labelling is now:

set label_size, 4, PolyA-MC_4

Effectively, it's the same as what I was already doing, but instead of having ten copy files in the directory only one is needed. Much easier for sending to people, and more efficient!
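The copy-per-size approach is easy to automate from a PyMOL Python script, as the answer suggests. A minimal sketch (the residue numbers, letters, and sizes below are illustrative, not taken from the question):

from pymol import cmd

# residue -> (one-letter code, label size); values invented for illustration
labels = {138: ("Q", 14), 198: ("A", 8)}

# make one copy of the source object per distinct label size
for size in sorted(set(s for _, s in labels.values())):
    obj = "PolyA-MC_%d" % size
    cmd.copy(obj, "PolyA-MC")
    cmd.set("label_size", size, obj)

# put each label on the copy whose label_size matches
for resi, (aa, size) in labels.items():
    sel = "PolyA-MC_%d and name CB and resi %d" % (size, resi)
    cmd.label(sel, '"%s"' % aa)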
2022-05-22 10:01:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6980955004692078, "perplexity": 2515.4410912536996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00715.warc.gz"}
https://discuss.codechef.com/questions/67500/cseq-april15-long-time-limit-verification
# CSEQ - April15 Long (Time Limit Verification)

Hi, I would like to know the time constraints for this problem. I have an algorithm which takes O(20·N) time, where N is 10^6 + 3. Subtask 3 is getting TLE. Does the solution need more improvement than that? I'm really exhausted from optimizing it. I may have reached the atomic state of optimization. Please let me know the time constraints in O() notation.

@admin: In case any verification of code is needed, please see this submission. http://www.codechef.com/viewsolution/6692845

Thanks.

It is against CodeChef rules to discuss solutions to any problem during an ongoing contest. Also, your solution is not visible to anyone for an ongoing contest. The time limit is mentioned in the problem; there may be a different approach. :) The editorials will be put up after the contest, you can verify there. Happy Coding! :D

Hi @sanchitkum, I don't want to discuss the actual solution/approach to the problem. The time limit mentioned is 1 second. I wanted to know it in big-O notation, so that I can validate it against my algorithm's time complexity. I know the code won't be visible to any users; however, the admin may see it. Hence, the tag. :) I'm gonna try to debug further. Thanks. (07 Apr '15, 13:23)

Look at the constraints and they say a lot about O()... (07 Apr '15, 23:43)
2019-02-22 12:50:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7099820375442505, "perplexity": 5617.958780059436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247517815.83/warc/CC-MAIN-20190222114817-20190222140817-00520.warc.gz"}
https://www.csdn.net/tags/MtjaQgysOTc5MDgtYmxvZwO0O0OO0O0O.html
• Summarizes the problems facing multi-dimensional fuzzing as three issues (combinatorial explosion, covering vulnerable statements, and triggering latent vulnerabilities), studies and compares the various existing multi-dimensional fuzzing techniques, and distills three basic steps of multi-dimensional fuzzing: locating vulnerable statements, finding the inputs that influence vulnerable statements...
• Bonsai Fuzzing is a fuzz-testing algorithm that generates concise test cases for software such as compilers. More details are described in: Vasudev Vikram, Rohan Padhye, and Koushik Sen. Growing a Test Corpus with Bonsai Fuzzing. To appear in the 43rd...
• All of the following instructions assume a Linux operating system with sudo privileges on the (possibly virtual) machine running the Docker containers, a Docker installation, and a local copy of this repository, url-fuzzing-docker. Note: Vagrantfiles can be used to build machines that meet the build requirements (January 2021)...
• This paper describes the characteristics of industrial control network protocols and the difficulty of fuzz testing them, discusses and compares the strengths and weaknesses of existing fuzzing techniques as applied to industrial control network protocols, proposes design principles for a dedicated fuzzing tool for industrial control network protocols, and finally gives an outlook on fuzzing of industrial control network protocols...
• by Michael Sutton • Why fuzzing simplifies test design and catches flaws other methods miss • The fuzzing process: from identifying inputs to assessing "exploitability" • Understanding the requirements for effective fuzzing • Comparing mutation-based and generation-based fuzzers ...
• Web software security vulnerabilities emerge one after another, and attack techniques keep changing. To analyze existing Web control vulnerability detection methods, an improved Web control vulnerability detection model based on fuzz testing is proposed. The system is designed and implemented as five functional modules, and combines static analysis with dynamic analysis to detect...
• FuzzIL: Coverage Guided Fuzzing for JavaScript Engines
• By analyzing and comparing multiple definitions of fuzzing, together with the knowledge and methods underlying its current development, a new definition of fuzzing is given; current fuzzing techniques are then summarized chiefly along four dimensions: differences from black-box testing, test targets, architecture, and test-data generation mechanisms...
• dockerized_fuzzing: run fuzz testing in Docker. At present, we have integrated 37 usable fuzzing tools. This is a part of it. The corresponding paper will appear at USENIX Security 2021. Cite this paper: @inproceedings{unifuzz-li, title={{UNIFUZZ}: A ...
• About this book: Welcome to The Fuzzing Book! Software has bugs, and catching bugs can involve lots of effort. This book addresses this problem by automating software testing, specifically by generating tests automatically. Recent years have seen the development of new techniques that have led to test generation and ...
• Fuzzing-Dicts-master
• ContractFuzzer generates fuzzing inputs based on the ABI specifications of smart contracts, defines test oracles to detect security vulnerabilities, instruments the EVM to log smart contracts' runtime behaviors, and analyzes these logs to report security vulnerabilities. Our fuzzing of 6991 smart contracts has flagged more than 459 vulnerabilities with high precision.

## Recent Fuzzing Papers

Recent Papers Related To Fuzzing. The original is kept up to date on GitHub: https://github.com/wcventure/FuzzingPaper

All Papers

Interesting Fuzzing
- DifFuzz: Differential Fuzzing for Side-Channel Analysis (ICSE 2019)
- REST-ler: Stateful REST API Fuzzing (ICSE 2019)
- Life after Speech Recognition: Fuzzing Semantic Misinterpretation for Voice Assistant Applications (NDSS 2019)
- ContractFuzzer: Fuzzing Smart Contracts for Vulnerability Detection (ASE 2018)
- IoTFuzzer: Discovering Memory Corruptions in IoT Through App-based Fuzzing (NDSS 2018)
- What You Corrupt Is Not What You Crash: Challenges in Fuzzing Embedded Devices (NDSS 2018)
- MoonShine: Optimizing OS Fuzzer Seed Selection with Trace Distillation (USENIX Security 2018)
- Singularity: Pattern Fuzzing for Worst Case Complexity (FSE 2018)
- NEZHA: Efficient Domain-Independent Differential Testing (S&P 2017)

Evaluate Fuzzing
- Evaluating Fuzz Testing (CCS 2018)

Kernel Fuzzing
- PeriScope: An Effective Probing and Fuzzing Framework for the Hardware-OS Boundary (NDSS 2019)
- Fuzzing File Systems via Two-Dimensional Input Space Exploration (S&P 2019)
- Razzer: Finding Kernel Race Bugs through Fuzzing (S&P 2019)
- kAFL: Hardware-Assisted Feedback Fuzzing for OS Kernels (USENIX Security 2017)

Hybrid Fuzzing
- Send Hardest Problems My Way: Probabilistic Path Prioritization for Hybrid Fuzzing (NDSS 2019)
- QSYM: A Practical Concolic Execution Engine Tailored for Hybrid Fuzzing (USENIX Security 2018)
- Angora: Efficient Fuzzing by Principled Search (S&P 2018)
- Driller: Augmenting Fuzzing Through Selective Symbolic Execution (NDSS 2016)

Addressing Magic bytes \ checksum
- REDQUEEN: Fuzzing with Input-to-State Correspondence (NDSS 2019)
- T-Fuzz: fuzzing by program transformation (S&P 2018)
- FairFuzz: A Targeted Mutation Strategy for Increasing Greybox Fuzz Testing Coverage (ASE 2018)
- VUzzer: Application-aware Evolutionary Fuzzing (NDSS 2017)

Inputs-aware Fuzzing
- SLF: Fuzzing without Valid Seed Inputs (ICSE 2019)
- Superion: Grammar-Aware Greybox Fuzzing (ICSE 2019)
- ProFuzzer: On-the-fly Input Type Probing for Better Zero-day Vulnerability Discovery (S&P 2019)

Directed Fuzzing
- Directed Greybox Fuzzing (CCS 2017)
- Hawkeye: Towards a Desired Directed Grey-box Fuzzer (CCS 2018)

Addressing Collision
- CollAFL: Path Sensitive Fuzzing (S&P 2018)

Fuzzing Overhead & Performance
- Full-speed Fuzzing: Reducing Fuzzing Overhead through Coverage-guided Tracing (S&P 2019)
- Designing New Operating Primitives to Improve Fuzzing Performance (CCS 2017)

Enhancing Memory Error
- Enhancing Memory Error Detection for Large-Scale Applications and Fuzz Testing (NDSS 2018)

Power Schedule
- Coverage-based Greybox Fuzzing as Markov Chain (CCS 2016)

Learning-based Fuzzing
- NEUZZ: Efficient Fuzzing with Neural Program Smoothing (S&P 2019)

Fuzzing Machine Learning Model
- TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing (2018)
- Coverage-Guided Fuzzing for Deep Neural Networks (2018)

DifFuzz (Side-channel analysis) | ProFuzzer (Inference input structure) | FairFuzz (Target rare branches) | FairFuzz & ProFuzzer | Enhancing Memory Error Detection | NEZHA (Differential testing) | REDQUEEN

Interesting Fuzzing

DifFuzz: Differential Fuzzing for Side-Channel Analysis (ICSE 2019)

Abstract: Side-channel attacks allow an adversary to uncover secret program data by observing the behavior of a program with respect to a resource, such as execution time, consumed memory or response size. Side-channel vulnerabilities are difficult to reason about as they involve analyzing the correlations between resource usage over multiple program paths. We present DifFuzz, a fuzzing-based approach for detecting side-channel vulnerabilities related to time and space. DifFuzz automatically detects these vulnerabilities by analyzing two versions of the program and using resource-guided heuristics to find inputs that maximize the difference in resource consumption between secret-dependent paths. The methodology of DifFuzz is general and can be applied to programs written in any language. For this paper, we present an implementation that targets analysis of Java programs, and uses and extends the Kelinci and AFL fuzzers. We evaluate DifFuzz on a large number of Java programs and demonstrate that it can reveal unknown side-channel vulnerabilities in popular applications. We also show that DifFuzz compares favorably against Blazer and Themis, two state-of-the-art analysis tools for finding side-channels in Java programs.

REST-ler: Stateful REST API Fuzzing (ICSE 2019)

Paper

Abstract: This paper introduces REST-ler, the first stateful REST API fuzzer. REST-ler analyzes the API specification of a cloud service and generates sequences of requests that automatically test the service through its API. REST-ler generates test sequences by (1) inferring producer-consumer dependencies among request types declared in the specification (eg inferring that "a request B should be executed after request A" because B takes as an input a resource-id x produced by A) and by (2) analyzing dynamic feedback from responses observed during prior test executions in order to generate new tests (eg learning that "a request C after a request sequence A;B is refused by the service" and therefore avoiding this combination in the future). We present experimental results showing that these two techniques are necessary to thoroughly exercise a service under test while pruning the large search space of possible request sequences.
We used REST-ler to test GitLab, a large open-source self-hosted Git service, as well as several Microsoft Azure and Office365 cloud services. REST-ler found 28 bugs in GitLab and several bugs in each of the Azure and Office365 cloud services tested so far. These bugs have been confirmed by the service owners, and are either in the process of being fixed or have already been fixed.

Life after Speech Recognition: Fuzzing Semantic Misinterpretation for Voice Assistant Applications (NDSS 2019)

Paper

Abstract: Popular Voice Assistant (VA) services such as Amazon Alexa and Google Assistant are now rapidly appifying their platforms to allow more flexible and diverse voice-controlled service experience. However, the ubiquitous deployment of VA devices and the increasing number of third-party applications have raised security and privacy concerns. While previous works such as hidden voice attacks mostly examine the problems of VA services' default Automatic Speech Recognition (ASR) component, our work analyzes and evaluates the security of the succeeding component after ASR, i.e., Natural Language Understanding (NLU), which performs semantic interpretation (i.e., text-to-intent) after ASR's acoustic-to-text processing. In particular, we focus on NLU's Intent Classifier which is used in customizing machine understanding for third-party VA Applications (or vApps). We find that the semantic inconsistency caused by the improper semantic interpretation of an Intent Classifier can create the opportunity of breaching the integrity of vApp processing when attackers delicately leverage some common spoken errors. In this paper, we design the first linguistic-model-guided fuzzing tool, named LipFuzzer, to assess the security of Intent Classifier and systematically discover potential misinterpretation-prone spoken errors based on vApps' voice command templates. To guide the fuzzing, we construct adversarial linguistic models with the help of Statistical Relational Learning (SRL) and emerging Natural Language Processing (NLP) techniques. In evaluation, we have successfully verified the effectiveness and accuracy of LipFuzzer. We also use LipFuzzer to evaluate both Amazon Alexa and Google Assistant vApp platforms. We have identified that a large portion of real-world vApps are vulnerable based on our fuzzing result.

ContractFuzzer: Fuzzing Smart Contracts for Vulnerability Detection (ASE 2018)

Abstract: Decentralized cryptocurrencies feature the use of blockchain to transfer values among peers on networks without central agency. Smart contracts are programs running on top of the blockchain consensus protocol to enable people make agreements while minimizing trusts. Millions of smart contracts have been deployed in various decentralized applications. The security vulnerabilities within those smart contracts pose significant threats to their applications. Indeed, many critical security vulnerabilities within smart contracts on Ethereum platform have caused huge financial losses to their users. In this work, we present ContractFuzzer, a novel fuzzer to test Ethereum smart contracts for security vulnerabilities. ContractFuzzer generates fuzzing inputs based on the ABI specifications of smart contracts, defines test oracles to detect security vulnerabilities, instruments the EVM to log smart contracts runtime behaviors, and analyzes these logs to report security vulnerabilities. Our fuzzing of 6991 smart contracts has flagged more than 459 vulnerabilities with high precision.
In particular, our fuzzing tool successfully detects the vulnerability of the DAO contract that leads to USD 60 million loss and the vulnerabilities of Parity Wallet that have led to the loss of USD 30 million and the freezing of USD 150 million worth of Ether.

IoTFuzzer: Discovering Memory Corruptions in IoT Through App-based Fuzzing (NDSS 2018)
Abstract: With more IoT devices entering the consumer market, it becomes imperative to detect their security vulnerabilities before an attacker does. Existing binary analysis based approaches only work on firmware, which is less accessible except for those equipped with special tools for extracting the code from the device. To address this challenge in IoT security analysis, we present in this paper a novel automatic fuzzing framework, called IOTFUZZER, which aims at finding memory corruption vulnerabilities in IoT devices without access to their firmware images. The key idea is based upon the observation that most IoT devices are controlled through their official mobile apps, and such an app often contains rich information about the protocol it uses to communicate with its device. Therefore, by identifying and reusing program-specific logic (e.g., encryption) to mutate the test case (particularly message fields), we are able to effectively probe IoT targets without relying on any knowledge about its protocol specifications. In our research, we implemented IOTFUZZER and evaluated 17 real-world IoT devices running on different protocols, and our approach successfully identified 15 memory corruption vulnerabilities (including 8 previously unknown ones).

What You Corrupt Is Not What You Crash: Challenges in Fuzzing Embedded Devices (NDSS 2018)
Abstract: As networked embedded systems are becoming more ubiquitous, their security is becoming critical to our daily life. While manual or automated large scale analysis of those systems regularly uncovers new vulnerabilities, the way those systems are analyzed often follows the same approaches used on desktop systems. More specifically, traditional testing approaches rely on observable crashes of a program, and binary instrumentation techniques are used to improve the detection of those faulty states. In this paper, we demonstrate that memory corruptions, a common class of security vulnerabilities, often result in different behavior on embedded devices than on desktop systems. In particular, on embedded devices, effects of memory corruption are often less visible. This reduces significantly the effectiveness of traditional dynamic testing techniques in general, and fuzzing in particular. Additionally, we analyze those differences in several categories of embedded devices and show the resulting impact on firmware analysis. We further describe and evaluate relatively simple heuristics which can be applied at run time (on an execution trace or in an emulator), during the analysis of an embedded device to detect previously undetected memory corruptions.

MoonShine: Optimizing OS Fuzzer Seed Selection with Trace Distillation (USENIX Security 2018)
Abstract: OS fuzzers primarily test the system call interface between the OS kernel and user-level applications for security vulnerabilities. The effectiveness of evolutionary OS fuzzers depends heavily on the quality and diversity of their seed system call sequences. However, generating good seeds for OS fuzzing is a hard problem as the behavior of each system call depends heavily on the OS kernel state created by the previously executed system calls.
Therefore, popular evolutionary OS fuzzers often rely on hand-coded rules for generating valid seed sequences of system calls that can bootstrap the fuzzing process. Unfortunately, this approach severely restricts the diversity of the seed system call sequences and therefore limits the effectiveness of the fuzzers. In this paper, we develop MoonShine, a novel strategy for distilling seeds for OS fuzzers from system call traces of real-world programs while still maintaining the dependencies across the system calls. MoonShine leverages light-weight static analysis for efficiently detecting dependencies across different system calls. We designed and implemented MoonShine as an extension to Syzkaller, a state-of-the-art evolutionary fuzzer for the Linux kernel. Starting from traces containing 2.8 million system calls gathered from 3,220 real-world programs, MoonShine distilled down to just over 14,000 calls while preserving 86% of the original code coverage. Using these distilled seed system call sequences, MoonShine was able to improve Syzkaller's achieved code coverage for the Linux kernel by 13% on average. MoonShine also found 14 new vulnerabilities in the Linux kernel that were not found by Syzkaller.

Singularity: Pattern Fuzzing for Worst Case Complexity (FSE 2018)
Abstract: We describe a new blackbox complexity testing technique for determining the worst-case asymptotic complexity of a given application. The key idea is to look for an input pattern —rather than a concrete input— that maximizes the asymptotic resource usage of the program. Because input patterns can be described concisely as programs in a restricted language, our method transforms the complexity testing problem to optimal program synthesis. In particular, we express these input patterns using a new model of computation called Recurrent Computation Graph (RCG) and solve the optimal synthesis problem by developing a genetic programming algorithm that operates on RCGs. We have implemented the proposed ideas in a tool called Singularity and evaluate it on a diverse set of benchmarks. Our evaluation shows that Singularity can effectively discover the worst-case complexity of various algorithms and that it is more scalable compared to existing state-of-the-art techniques. Furthermore, our experiments also corroborate that Singularity can discover previously unknown performance bugs and availability vulnerabilities in real-world applications such as Google Guava and JGraphT.

NEZHA: Efficient Domain-Independent Differential Testing (S&P 2017)
Abstract: Differential testing uses similar programs as cross-referencing oracles to find semantic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Unfortunately, existing differential testing tools are domain-specific and inefficient, requiring large numbers of test inputs to find a single bug. In this paper, we address these issues by designing and implementing NEZHA, an efficient input-format-agnostic differential testing framework. The key insight behind NEZHA's design is that current tools generate inputs by simply borrowing techniques designed for finding crash or memory corruption bugs in individual programs (e.g., maximizing code coverage). By contrast, NEZHA exploits the behavioral asymmetries between multiple test programs to focus on inputs that are more likely to trigger semantic bugs. We introduce the notion of δ-diversity, which summarizes the observed asymmetries between the behaviors of multiple test applications.
Based on δ-diversity, we design two efficient domain-independent input generation mechanisms for differential testing, one gray-box and one black-box. We demonstrate that both of these input generation schemes are significantly more efficient than existing tools at finding semantic bugs in real-world, complex software.

Evaluate Fuzzing
Evaluating Fuzz Testing (CCS 2018)
Abstract: Fuzz testing has enjoyed great success at discovering security critical bugs in real software. Recently, researchers have devoted significant effort to devising new fuzzing techniques, strategies, and algorithms. Such new ideas are primarily evaluated experimentally so an important question is: What experimental setup is needed to produce trustworthy results? We surveyed the recent research literature and assessed the experimental evaluations carried out by 32 fuzzing papers. We found problems in every evaluation we considered. We then performed our own extensive experimental evaluation using an existing fuzzer. Our results showed that the general problems we found in existing experimental evaluations can indeed translate to actual wrong or misleading assessments. We conclude with some guidelines that we hope will help improve experimental evaluations of fuzz testing algorithms, making reported results more robust.

Kernel Fuzzing
PeriScope: An Effective Probing and Fuzzing Framework for the Hardware-OS Boundary (NDSS 2019)
Abstract: The OS kernel is an attractive target for remote attackers. If compromised, the kernel gives adversaries full system access, including the ability to install rootkits, extract sensitive information, and perform other malicious actions, all while evading detection. Most of the kernel's attack surface is situated along the system call boundary. Ongoing kernel protection efforts have focused primarily on securing this boundary; several capable analysis and fuzzing frameworks have been developed for this purpose. However, there are additional paths to kernel compromise that do not involve system calls, as demonstrated by several recent exploits. For example, by compromising the firmware of a peripheral device such as a Wi-Fi chipset and subsequently sending malicious inputs from the Wi-Fi chipset to the Wi-Fi driver, adversaries have been able to gain control over the kernel without invoking a single system call. Unfortunately, there are currently no practical probing and fuzzing frameworks that can help developers find and fix such vulnerabilities occurring along the hardware-OS boundary. We present PeriScope, a Linux kernel based probing framework that enables fine-grained analysis of device-driver interactions. PeriScope hooks into the kernel's page fault handling mechanism to either passively monitor and log traffic between device drivers and their corresponding hardware, or mutate the data stream on-the-fly using a fuzzing component, PeriFuzz, thus mimicking an active adversarial attack. PeriFuzz accurately models the capabilities of an attacker on peripheral devices, to expose different classes of bugs including, but not limited to, memory corruption bugs and double-fetch bugs. To demonstrate the risk that peripheral devices pose, as well as the value of our framework, we have evaluated PeriFuzz on the Wi-Fi drivers of two popular chipset vendors, where we discovered 15 unique vulnerabilities, 9 of which were previously unknown.
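As a concrete reading of NEZHA's δ-diversity idea above, the sketch below tracks tuples of per-program behaviors and keeps an input whenever it produces a previously unseen tuple. It is an illustrative approximation, not NEZHA's code: parse_a/parse_b are toy stand-ins for the two test programs, and their return codes stand in for whatever behavioral signature (error class, output, coverage) one chooses.

#include <stdio.h>

/* Toy stand-ins for the two programs under test: each returns an
 * "observed behavior" code for the input. */
static int parse_a(const unsigned char *buf, size_t len) {
    return len == 0 ? -1 : buf[0] % 3;
}
static int parse_b(const unsigned char *buf, size_t len) {
    return len == 0 ? -1 : (buf[0] % 3) + (len > 8);
}

#define MAX_TUPLES 4096
static struct { int ra, rb; } seen[MAX_TUPLES];
static int n_seen = 0;

/* Record the behavior tuple; return 1 iff it is new delta-diversity. */
static int new_tuple(int ra, int rb) {
    for (int i = 0; i < n_seen; i++)
        if (seen[i].ra == ra && seen[i].rb == rb) return 0;
    if (n_seen < MAX_TUPLES) { seen[n_seen].ra = ra; seen[n_seen].rb = rb; n_seen++; }
    return 1;
}

/* Keep an input for further mutation iff it widens the observed
 * asymmetries; report when the two programs disagree. */
int keep_input(const unsigned char *buf, size_t len) {
    int ra = parse_a(buf, len), rb = parse_b(buf, len);
    int novel = new_tuple(ra, rb);
    if (novel && ra != rb)
        fprintf(stderr, "possible semantic bug: a=%d b=%d\n", ra, rb);
    return novel;
}

int main(void) {
    const unsigned char in1[] = "A", in2[] = "AAAAAAAAAA";
    printf("%d %d\n", keep_input(in1, 1), keep_input(in2, 10));
    return 0;
}

The point of the tuple set is that an input is interesting even when neither program crashes, as long as the pair of behaviors has not been seen before.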
Fuzzing File Systems via Two-Dimensional Input Space Exploration (S&P 2019)
Abstract: File systems, a basic building block of an OS, are too big and too complex to be bug free. Nevertheless, file systems rely on regular stress-testing tools and formal checkers to find bugs, which are limited due to the ever-increasing complexity of both file systems and OSes. Thus, fuzzing, proven to be an effective and a practical approach, becomes a preferable choice, as it does not need much knowledge about a target. However, three main challenges exist in fuzzing file systems: mutating a large image blob that degrades overall performance, generating image-dependent file operations, and reproducing found bugs, which is difficult for existing OS fuzzers. Hence, we present JANUS, the first feedback-driven fuzzer that explores the two-dimensional input space of a file system, i.e., mutating metadata on a large image, while emitting image-directed file operations. In addition, JANUS relies on a library OS rather than on traditional VMs for fuzzing, which enables JANUS to load a fresh copy of the OS, thereby leading to better reproducibility of bugs. We evaluate JANUS on eight file systems and found 90 bugs in the upstream Linux kernel, 62 of which have been acknowledged. Forty-three bugs have been fixed with 32 CVEs assigned. In addition, JANUS achieves higher code coverage on all the file systems after fuzzing 12 hours, when compared with the state-of-the-art fuzzer Syzkaller for fuzzing file systems. JANUS visits 4.19x and 2.01x more code paths in Btrfs and ext4, respectively. Moreover, JANUS is able to reproduce 88-100% of the crashes, while Syzkaller fails on all of them.

Razzer: Finding Kernel Race Bugs through Fuzzing (S&P 2019)
Abstract: A data race in a kernel is an important class of bugs, critically impacting the reliability and security of the associated system. As a result of a race, the kernel may become unresponsive. Even worse, an attacker may launch a privilege escalation attack to acquire root privileges. In this paper, we propose Razzer, a tool to find race bugs in kernels. The core of Razzer is in guiding fuzz testing towards potential data race spots in the kernel. Razzer employs two techniques to find races efficiently: a static analysis and a deterministic thread interleaving technique. Using a static analysis, Razzer identifies over-approximated potential data race spots, guiding the fuzzer to search for data races in the kernel more efficiently. Using the deterministic thread interleaving technique implemented at the hypervisor, Razzer tames the non-deterministic behavior of the kernel such that it can deterministically trigger a race. We implemented a prototype of Razzer and ran the latest Linux kernel (from v4.16-rc3 to v4.18-rc3) using Razzer. As a result, Razzer discovered 30 new races in the kernel, with 16 subsequently confirmed and accordingly patched by kernel developers after they were reported.

kAFL: Hardware-Assisted Feedback Fuzzing for OS Kernels (USENIX Security 2017)
Abstract: Many kinds of memory safety vulnerabilities have been endangering software systems for decades. Amongst other approaches, fuzzing is a promising technique to unveil various software faults. Recently, feedback-guided fuzzing demonstrated its power, producing a steady stream of security-critical software bugs.
Most fuzzing efforts—especially feedback fuzzing—are limited to user space components of an operating system (OS), although bugs in kernel components are more severe, because they allow an attacker to gain access to a system with full privileges. Unfortunately, kernel components are difficult to fuzz as feedback mechanisms (i.e., guided code coverage) cannot be easily applied. Additionally, non-determinism due to interrupts, kernel threads, statefulness, and similar mechanisms poses problems. Furthermore, if a process fuzzes its own kernel, a kernel crash highly impacts the performance of the fuzzer as the OS needs to reboot. In this paper, we approach the problem of coverage-guided kernel fuzzing in an OS-independent and hardware-assisted way: We utilize a hypervisor and Intel's Processor Trace (PT) technology. This allows us to remain independent of the target OS as we just require a small user space component that interacts with the targeted OS. As a result, our approach introduces almost no performance overhead, even in cases where the OS crashes, and performs up to 17,000 executions per second on an off-the-shelf laptop. We developed a framework called kernel-AFL (kAFL) to assess the security of Linux, macOS, and Windows kernel components. Among many crashes, we uncovered several flaws in the ext4 driver for Linux, the HFS and APFS file system of macOS, and the NTFS driver of Windows.

Hybrid Fuzzing
Send Hardest Problems My Way: Probabilistic Path Prioritization for Hybrid Fuzzing (NDSS 2019)
Abstract: Hybrid fuzzing which combines fuzzing and concolic execution has become an advanced technique for software vulnerability detection. Based on the observation that fuzzing and concolic execution are complementary in nature, the state-of-the-art hybrid fuzzing systems deploy "demand launch" and "optimal switch" strategies. Although these ideas sound intriguing, we point out several fundamental limitations in them, due to oversimplified assumptions. We then propose a novel "discriminative dispatch" strategy to better utilize the capability of concolic execution. We design a novel Monte Carlo based probabilistic path prioritization model to quantify each path's difficulty and prioritize them for concolic execution. This model treats fuzzing as a random sampling process. It calculates each path's probability based on the sampling information. Finally, our model prioritizes and assigns the most difficult paths to concolic execution. We implement a prototype system DigFuzz and evaluate our system with two representative datasets. Results show that the concolic execution in DigFuzz outperforms that in a state-of-the-art hybrid fuzzing system Driller in every major aspect. In particular, the concolic execution in DigFuzz contributes to discovering more vulnerabilities (12 vs. 5) and producing more code coverage (18.9% vs. 3.8%) on the CQE dataset than the concolic execution in Driller.

QSYM: A Practical Concolic Execution Engine Tailored for Hybrid Fuzzing (USENIX Security 2018)
Abstract: Recently, hybrid fuzzing has been proposed to address the limitations of fuzzing and concolic execution by combining both approaches. The hybrid approach has shown its effectiveness in various synthetic benchmarks such as DARPA Cyber Grand Challenge (CGC) binaries, but it still suffers from scaling to find bugs in complex, real-world software. We observed that the performance bottleneck of the existing concolic executor is the main limiting factor for its adoption beyond a small-scale study.
To overcome this problem, we design a fast concolic execution engine, called QSYM, to support hybrid fuzzing. The key idea is to tightly integrate the symbolic emulation with the native execution using dynamic binary translation, making it possible to implement more fine-grained, so faster, instruction-level symbolic emulation. Additionally, QSYM loosens the strict soundness requirements of conventional concolic executors for better performance, yet takes advantage of a faster fuzzer for validation, providing unprecedented opportunities for performance optimizations, e.g., optimistically solving constraints and pruning uninteresting basic blocks. Our evaluation shows that QSYM does not just outperform state-of-the-art fuzzers (i.e., found 14× more bugs than VUzzer in the LAVA-M dataset, and outperformed Driller in 104 binaries out of 126), but also found 13 previously unknown security bugs in eight real-world programs like Dropbox Lepton, ffmpeg, and OpenJPEG, which have already been intensively tested by the state-of-the-art fuzzers, AFL and OSS-Fuzz.

Angora: Efficient Fuzzing by Principled Search (S&P 2018)
Abstract: Fuzzing is a popular technique for finding software bugs. However, the performance of the state-of-the-art fuzzers leaves a lot to be desired. Fuzzers based on symbolic execution produce quality inputs but run slow, while fuzzers based on random mutation run fast but have difficulty producing quality inputs. We propose Angora, a new mutation-based fuzzer that outperforms the state-of-the-art fuzzers by a wide margin. The main goal of Angora is to increase branch coverage by solving path constraints without symbolic execution. To solve path constraints efficiently, we introduce several key techniques: scalable byte-level taint tracking, context-sensitive branch count, search based on gradient descent, and input length exploration. On the LAVA-M data set, Angora found almost all the injected bugs, found more bugs than any other fuzzer that we compared with, and found eight times as many bugs as the second-best fuzzer in the program who. Angora also found 103 bugs that the LAVA authors injected but could not trigger. We also tested Angora on eight popular, mature open source programs. Angora found 6, 52, 29, 40 and 48 new bugs in file, jhead, nm, objdump and size, respectively. We measured the coverage of Angora and evaluated how its key techniques contribute to its impressive performance.

Driller: Augmenting Fuzzing Through Selective Symbolic Execution (NDSS 2016)
Abstract: Memory corruption vulnerabilities are an ever-present risk in software, which attackers can exploit to obtain unauthorized access to confidential information. As products with access to sensitive data are becoming more prevalent, the number of potentially exploitable systems is also increasing, resulting in a greater need for automated software vetting tools. DARPA recently funded a competition, with millions of dollars in prize money, to further research focusing on automated vulnerability finding and patching, showing the importance of research in this area. Current techniques for finding potential bugs include static, dynamic, and concolic analysis systems, which each have their own advantages and disadvantages. A common limitation of systems designed to create inputs which trigger vulnerabilities is that they only find shallow bugs and struggle to exercise deeper paths in executables.
We present Driller, a hybrid vulnerability excavation tool which leverages fuzzing and selective concolic execution in a complementary manner, to find deeper bugs. Inexpensive fuzzing is used to exercise compartments of an application, while concolic execution is used to generate inputs which satisfy the complex checks separating the compartments. By combining the strengths of the two techniques, we mitigate their weaknesses, avoiding the path explosion inherent in concolic analysis and the incompleteness of fuzzing. Driller uses selective concolic execution to explore only the paths deemed interesting by the fuzzer and to generate inputs for conditions that the fuzzer cannot satisfy. We evaluate Driller on 126 applications released in the qualifying event of the DARPA Cyber Grand Challenge and show its efficacy by identifying the same number of vulnerabilities, in the same time, as the top-scoring team of the qualifying event.

Note: As we all know, fuzzing easily gets past loose constraints (e.g., x > 0) by mutating inputs into something that satisfies the condition, whereas symbolic execution is very good at solving for magic values (e.g., x == 0xdeadbeef). This is a classic paper combining concolic execution with fuzzing. The core idea: first let a fuzzer such as AFL mutate the seeds and test the program; when the generated inputs keep exercising the same paths and no new path is discovered, the fuzzer is "stuck". At that point, concolic execution is used to produce inputs guaranteed to reach new branches. Concolic execution thus assists the fuzzing.

REDQUEEN: Fuzzing with Input-to-State Correspondence (NDSS 2019)
Abstract: Automated software testing based on fuzzing has experienced a revival in recent years. Especially feedback-driven fuzzing has become well-known for its ability to efficiently perform randomized testing with limited input corpora. Despite a lot of progress, two common problems are magic numbers and (nested) checksums. Computationally expensive methods such as taint tracking and symbolic execution are typically used to overcome such roadblocks. Unfortunately, such methods often require access to source code, a rather precise description of the environment (e.g., behavior of library calls or the underlying OS), or the exact semantics of the platform's instruction set. In this paper, we introduce a lightweight, yet very effective alternative to taint tracking and symbolic execution to facilitate and optimize state-of-the-art feedback fuzzing that easily scales to large binary applications and unknown environments. We observe that during the execution of a given program, parts of the input often end up directly (i.e., nearly unmodified) in the program state. This input-to-state correspondence can be exploited to create a robust method to overcome common fuzzing roadblocks in a highly effective and efficient manner. Our prototype implementation, called REDQUEEN, is able to solve magic bytes and (nested) checksum tests automatically for a given binary executable. Additionally, we show that our techniques outperform various state-of-the-art tools on a wide variety of targets across different privilege levels (kernel-space and userland) with no platform-specific code. REDQUEEN is the first method to find more than 100% of the bugs planted in LAVA-M across all targets. Furthermore, we were able to discover 65 new bugs and obtained 16 CVEs in multiple programs and OS kernel drivers. Finally, our evaluation demonstrates that REDQUEEN is fast, widely applicable and outperforms concurrent approaches by up to three orders of magnitude.

T-Fuzz: fuzzing by program transformation (S&P 2018)
Abstract: Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs.
However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, LAVA-M dataset and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, T-Fuzz found 3 new bugs in previously-fuzzed programs and libraries.

FairFuzz: A Targeted Mutation Strategy for Increasing Greybox Fuzz Testing Coverage (ASE 2018)
Abstract: In recent years, fuzz testing has proven itself to be one of the most effective techniques for finding correctness bugs and security vulnerabilities in practice. One particular fuzz testing tool, American Fuzzy Lop (AFL), has become popular thanks to its ease-of-use and bug-finding power. However, AFL remains limited in the bugs it can find since it simply does not cover large regions of code. If it does not cover parts of the code, it will not find bugs there. We propose a two-pronged approach to increase the coverage achieved by AFL. First, the approach automatically identifies branches exercised by few AFL-produced inputs (rare branches), which often guard code that is empirically hard to cover by naively mutating inputs. The second part of the approach is a novel mutation mask creation algorithm, which allows mutations to be biased towards producing inputs hitting a given rare branch. This mask is dynamically computed during fuzz testing and can be adapted to other testing targets. We implement this approach on top of AFL in a tool named FairFuzz. We conduct evaluation on real-world programs against state-of-the-art versions of AFL. We find that on these programs FairFuzz achieves high branch coverage at a faster rate than state-of-the-art versions of AFL. In addition, on programs with nested conditional structure, it achieves sustained increases in branch coverage after 24 hours (average 10.6% increase). In qualitative analysis, we find that FairFuzz has an increased capacity to automatically discover keywords.
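To make FairFuzz's two-pronged approach above concrete, here is a hedged sketch of rare-branch selection and mutation-mask computation. It illustrates the idea only and is not FairFuzz's implementation: branch_hits, run_and_check and the all-bits trial flip are assumptions of this sketch.

#include <stdint.h>
#include <stddef.h>

#define NUM_BRANCHES 65536
static uint32_t branch_hits[NUM_BRANCHES];   /* global per-branch hit counts */

/* Stub: run the target on `in` and report whether `branch` was hit.
 * A real fuzzer would consult its coverage bitmap here. */
static int run_and_check(const uint8_t *in, size_t len, int branch) {
    (void)in; (void)len; (void)branch;
    return 1;
}

/* Pick the rarest branch among those this seed exercises (n >= 1). */
int rarest_branch(const int *branches_hit, int n) {
    int best = branches_hit[0];
    for (int i = 1; i < n; i++)
        if (branch_hits[branches_hit[i]] < branch_hits[best])
            best = branches_hit[i];
    return best;
}

/* Byte-level mutation mask: byte i may be mutated freely iff changing it
 * does not make the target rare branch disappear from the trace. */
void compute_mask(uint8_t *seed, size_t len, int target, uint8_t *mask) {
    for (size_t i = 0; i < len; i++) {
        seed[i] ^= 0xFF;                              /* trial mutation */
        mask[i] = (uint8_t)run_and_check(seed, len, target);
        seed[i] ^= 0xFF;                              /* restore seed   */
    }
}

Later mutations are then restricted to positions where mask[i] is set, which biases the search toward inputs that keep hitting the chosen rare branch.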
VUzzer: Application-aware Evolutionary Fuzzing (NDSS 2017)
Abstract: Fuzzing is an effective software testing technique to find bugs. Given the size and complexity of real-world applications, modern fuzzers tend to be either scalable, but not effective in exploring bugs that lie deeper in the execution, or capable of penetrating deeper in the application, but not scalable. In this paper, we present an application-aware evolutionary fuzzing strategy that does not require any prior knowledge of the application or input format. In order to maximize coverage and explore deeper paths, we leverage control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables much faster generation of interesting inputs compared to an application-agnostic approach. We implement our fuzzing strategy in VUzzer and evaluate it on three different datasets: DARPA Grand Challenge binaries (CGC), a set of real-world applications (binary input parsers), and the recently released LAVA dataset. On all of these datasets, VUzzer yields significantly better results than state-of-the-art fuzzers, by quickly finding several existing and new bugs.

Inputs-aware Fuzzing
SLF: Fuzzing without Valid Seed Inputs (ICSE 2019)
Abstract: Fuzzing is an important technique to detect software bugs and vulnerabilities. It works by mutating a small set of seed inputs to generate a large number of new inputs. Fuzzers' performance often substantially degrades when valid seed inputs are not available. Although existing techniques such as symbolic execution can generate seed inputs from scratch, they have various limitations hindering their applications in real-world complex software without source code. In this paper, we propose a novel fuzzing technique that features the capability of generating valid seed inputs. It piggy-backs on AFL to identify input validity checks and the input fields that have impact on such checks. It further classifies these checks according to their relations to the input. Such classes include arithmetic relation, object offset, data structure length and so on. A multi-goal search algorithm is developed to apply class specific mutations in order to satisfy inter-dependent checks all together. We evaluate our technique on 20 popular benchmark programs collected from other fuzzing projects and the Google fuzzer test suite, and compare it with existing fuzzers AFL and AFLFast, symbolic execution engines KLEE and S2E, and a hybrid tool Driller that combines fuzzing with symbolic execution. The results show that our technique is highly effective and efficient, out-performing the other tools.

Superion: Grammar-Aware Greybox Fuzzing (ICSE 2019)
Abstract: In recent years, coverage-based greybox fuzzing has proven itself to be one of the most effective techniques for finding security bugs in practice. Particularly, American Fuzzy Lop (AFL for short) is deemed to be a great success in fuzzing relatively simple test inputs. Unfortunately, when it meets structured test inputs such as XML and JavaScript, those grammar-blind trimming and mutation strategies in AFL hinder the effectiveness and efficiency. To this end, we propose a grammar-aware coverage-based grey-box fuzzing approach to fuzz programs that process structured inputs. Given the grammar (which is often publicly available) of test inputs, we introduce a grammar-aware trimming strategy to trim test inputs at the tree level using the abstract syntax trees (ASTs) of parsed test inputs.
Further, we introduce two grammar-aware mutation strategies (i.e., enhanced dictionary-based mutation and tree-based mutation). Specifically, tree-based mutation works via replacing subtrees using the ASTs of parsed test inputs. Equipped with grammar-awareness, our approach can carry the fuzzing exploration into width and depth. We implemented our approach as an extension to AFL, named Superion; and evaluated the effectiveness of Superion on real-life large-scale programs (an XML engine, libplist, and three JavaScript engines, WebKit, Jerryscript and ChakraCore). Our results have demonstrated that Superion can improve the code coverage (i.e., 16.7% and 8.8% in line and function coverage) and bug-finding capability (i.e., 30 new bugs, among which we discovered 21 new vulnerabilities with 16 CVEs assigned and 3.2K USD bug bounty rewards received) over AFL and jsfunfuzz.

ProFuzzer: On-the-fly Input Type Probing for Better Zero-day Vulnerability Discovery (S&P 2019)
Abstract: Existing mutation based fuzzers tend to randomly mutate the input of a program without understanding its underlying syntax and semantics. In this paper, we propose a novel on-the-fly probing technique (called ProFuzzer) that automatically recovers and understands input fields of critical importance to vulnerability discovery during a fuzzing process and intelligently adapts the mutation strategy to enhance the chance of hitting zero-day targets. Since such probing is transparently piggybacked to the regular fuzzing, no prior knowledge of the input specification is needed. During fuzzing, individual bytes are first mutated and their fuzzing results are automatically analyzed to link those related together and identify the type for the field connecting them; these bytes are further mutated together following type-specific strategies, which substantially prunes the search space. We define the probe types generally across all applications, thereby making our technique application agnostic. Our experiments on standard benchmarks and real-world applications show that ProFuzzer substantially outperforms AFL and its optimized version AFLFast, as well as other state-of-art fuzzers including VUzzer, Driller and QSYM. Within two months, it exposed 42 zero-days in 10 intensively tested programs, generating 30 CVEs.

Directed Fuzzing
Directed Greybox Fuzzing (CCS 2017)
Abstract: Existing Greybox Fuzzers (GF) cannot be effectively directed, for instance, towards problematic changes or patches, towards critical system calls or dangerous locations, or towards functions in the stack-trace of a reported vulnerability that we wish to reproduce. In this paper, we introduce Directed Greybox Fuzzing (DGF) which generates inputs with the objective of reaching a given set of target program locations efficiently. We develop and evaluate a simulated annealing-based power schedule that gradually assigns more energy to seeds that are closer to the target locations while reducing energy for seeds that are further away. Experiments with our implementation AFLGo demonstrate that DGF outperforms both directed symbolic-execution-based whitebox fuzzing and undirected greybox fuzzing. We show applications of DGF to patch testing and crash reproduction, and discuss the integration of AFLGo into Google's continuous fuzzing platform OSS-Fuzz. Due to its directedness, AFLGo could find 39 bugs in several well-fuzzed, security-critical projects like LibXML2. 17 CVEs were assigned.
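The annealing-based power schedule above can be pictured with a small sketch. The decay base and the 0.5·T mixing term follow my reading of the AFLGo paper; treat the exact constants as assumptions of this sketch rather than AFLGo's verbatim formula.

#include <math.h>

/* d_norm:    seed-to-target distance, normalized into [0, 1]
 * t_min:     minutes spent fuzzing so far
 * t_exploit: time at which the schedule should be fully exploitative */
double aflgo_energy(double d_norm, double t_min, double t_exploit) {
    /* Temperature cools from 1 toward 0, moving the fuzzer from pure
     * exploration to distance-driven exploitation. */
    double T = pow(20.0, -t_min / t_exploit);
    /* Seeds closer to the targets (small d_norm) receive more energy
     * as the temperature drops. */
    return (1.0 - d_norm) * (1.0 - T) + 0.5 * T;
}

Early on (T near 1) every seed gets roughly the same energy regardless of distance; late in the run (T near 0) the energy is dominated by 1 - d_norm, so seeds close to the targets are fuzzed far more often.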
Hawkeye: Towards a Desired Directed Grey-box Fuzzer (CCS 2018)
Abstract: Grey-box fuzzing is a practically effective approach to test real-world programs. However, most existing grey-box fuzzers lack directedness, i.e. the capability of executing towards user-specified target sites in the program. To emphasize existing challenges in directed fuzzing, we propose Hawkeye to feature four desired properties of directed grey-box fuzzers. Owing to a novel static analysis on the program under test and the target sites, Hawkeye precisely collects the information such as the call graph, function and basic block level distances to the targets. During fuzzing, Hawkeye evaluates exercised seeds based on both static information and the execution traces to generate the dynamic metrics, which are then used for seed prioritization, power scheduling and adaptive mutating. These strategies help Hawkeye to achieve better directedness and gravitate towards the target sites. We implemented Hawkeye as a fuzzing framework and evaluated it on various real-world programs under different scenarios. The experimental results showed that Hawkeye can reach the target sites and reproduce the crashes much faster than state-of-the-art grey-box fuzzers such as AFL and AFLGo. Specifically, Hawkeye can reduce the time to exposure for certain vulnerabilities from about 3.5 hours to 0.5 hour. By now, Hawkeye has detected more than 41 previously unknown crashes in projects such as Oniguruma, MJS with the target sites provided by vulnerability prediction tools; all these crashes are confirmed and 15 of them have been assigned CVE IDs.

CollAFL: Path Sensitive Fuzzing (S&P 2018)
Abstract: Coverage-guided fuzzing is a widely used and effective solution to find software vulnerabilities. Tracking code coverage and utilizing it to guide fuzzing are crucial to coverage-guided fuzzers. However, tracking full and accurate path coverage is infeasible in practice due to the high instrumentation overhead. Popular fuzzers (e.g., AFL) often use coarse coverage information, e.g., edge hit counts stored in a compact bitmap, to achieve highly efficient greybox testing. Such inaccuracy and incompleteness in coverage introduce serious limitations to fuzzers. First, it causes path collisions, which prevent fuzzers from discovering potential paths that lead to new crashes. More importantly, it prevents fuzzers from making wise decisions on fuzzing strategies. In this paper, we propose a coverage sensitive fuzzing solution CollAFL. It mitigates path collisions by providing more accurate coverage information, while still preserving low instrumentation overhead. It also utilizes the coverage information to apply three new fuzzing strategies, promoting the speed of discovering new paths and vulnerabilities. We implemented a prototype of CollAFL based on the popular fuzzer AFL and evaluated it on 24 popular applications. The results showed that path collisions are common, i.e., up to 75% of edges could collide with others in some applications, and CollAFL could reduce the edge collision ratio to nearly zero. Moreover, armed with the three fuzzing strategies, CollAFL outperforms AFL in terms of both code coverage and vulnerability discovery. On average, CollAFL covered 20% more program paths, found 320% more unique crashes and 260% more bugs than AFL in 200 hours. In total, CollAFL found 157 new security bugs with 95 new CVEs assigned.
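For reference, the collision problem above comes from AFL's edge encoding, shown below next to a CollAFL-style parameterized variant. AFL's formula is the real one from its instrumentation; the (x, y, z) scheme follows the paper's description, with the concrete parameter values left as placeholders that CollAFL would solve for offline.

#include <stdint.h>

#define MAP_SIZE (1u << 16)

/* AFL: each basic block gets a random id; an edge updates
 * bitmap[cur ^ (prev >> 1)]. Distinct edges can hash to the same
 * bucket, which is exactly the collision CollAFL measures. */
uint32_t afl_edge_index(uint32_t cur, uint32_t prev) {
    return (cur ^ (prev >> 1)) % MAP_SIZE;
}

/* CollAFL-style: h = ((cur >> x) ^ (prev >> y)) + z, with per-block
 * parameters x, y, z chosen at instrumentation time so that the
 * resulting edge hashes no longer collide. */
uint32_t collafl_edge_index(uint32_t cur, uint32_t prev,
                            uint32_t x, uint32_t y, uint32_t z) {
    return (((cur >> x) ^ (prev >> y)) + z) % MAP_SIZE;
}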
Full-speed Fuzzing: Reducing Fuzzing Overhead through Coverage-guided Tracing (S&P 2019)
Abstract: Coverage-guided fuzzing is one of the most successful approaches for discovering software bugs and security vulnerabilities. Of its three main components: (1) test case generation, (2) code coverage tracing, and (3) crash triage, code coverage tracing is a dominant source of overhead. Coverage-guided fuzzers trace every test case's code coverage through either static or dynamic binary instrumentation, or more recently, using hardware support. Unfortunately, tracing all test cases incurs significant performance penalties, even when the overwhelming majority of test cases and their coverage information are discarded because they do not increase code coverage. To eliminate needless tracing by coverage-guided fuzzers, we introduce the notion of coverage-guided tracing. Coverage-guided tracing leverages two observations: (1) only a fraction of generated test cases increase coverage, and thus require tracing; and (2) coverage-increasing test cases become less frequent over time. Coverage-guided tracing encodes the current frontier of coverage in the target binary so that it self-reports when a test case produces new coverage, without tracing. This acts as a filter for tracing; restricting the expense of tracing to only coverage-increasing test cases. Thus, coverage-guided tracing trades increased time handling coverage-increasing test cases for decreased time handling non-coverage-increasing test cases. To show the potential of coverage-guided tracing, we create an implementation based on the static binary instrumentor Dyninst called UnTracer. We evaluate UnTracer using eight real-world binaries commonly used by the fuzzing community. Experiments show that after only an hour of fuzzing, UnTracer's average overhead is below 1%, and after 24-hours of fuzzing, UnTracer approaches 0% overhead, while tracing every test case with popular white- and black-box-binary tracers AFL-Clang, AFL-QEMU, and AFL-Dyninst incurs overheads of 36%, 612%, and 518%, respectively. We further integrate UnTracer with the state-of-the-art hybrid fuzzer QSYM and show that in 24-hours of fuzzing, QSYM-UnTracer executes 79% and 616% more test cases than QSYM-Clang and QSYM-QEMU, respectively.

Designing New Operating Primitives to Improve Fuzzing Performance (CCS 2017)
Abstract: Fuzzing is a software testing technique that finds bugs by repeatedly injecting mutated inputs to a target program. Known to be a highly practical approach, fuzzing is gaining more popularity than ever before. Current research on fuzzing has focused on producing an input that is more likely to trigger a vulnerability. In this paper, we tackle another way to improve the performance of fuzzing, which is to shorten the execution time of each iteration. We observe that AFL, a state-of-the-art fuzzer, slows down by 24x because of file system contention and the scalability of the fork() system call when it runs on 120 cores in parallel. Other fuzzers are expected to suffer from the same scalability bottlenecks in that they follow a similar design pattern. To improve the fuzzing performance, we design and implement three new operating primitives specialized for fuzzing that solve these performance bottlenecks and achieve scalable performance on multi-core machines.
Our experiment shows that the proposed primitives speed up AFL and LibFuzzer by 6.1 to 28.9x and 1.1 to 735.7x, respectively, on the overall number of executions per second when targeting Google's fuzzer test suite with 120 cores. In addition, the primitives improve AFL's throughput up to 7.7x with 30 cores, which is a more common setting in data centers. Our fuzzer-agnostic primitives can be easily applied to any fuzzer with fundamental performance improvement and directly benefit large-scale fuzzing and cloud-based fuzzing services.

Enhancing Memory Error
Enhancing Memory Error Detection for Large-Scale Applications and Fuzz Testing (NDSS 2018)
Abstract: Memory errors are one of the most common vulnerabilities due to the popularity of memory-unsafe languages including C and C++. Once exploited, they can easily lead to a system crash (i.e., denial-of-service attacks) or allow adversaries to fully compromise the victim system. This paper proposes MEDS, a practical memory error detector. MEDS significantly enhances its detection capability by approximating two ideal properties, called an infinite gap and an infinite heap. The approximated infinite gap of MEDS sets up a large inaccessible memory region between objects (i.e., 4 MB), and the approximated infinite heap allows MEDS to fully utilize the virtual address space (i.e., 45-bits memory space). The key idea of MEDS in achieving these properties is a novel user-space memory allocation mechanism, MEDSALLOC. MEDSALLOC leverages a page aliasing mechanism, which allows MEDS to maximize the virtual memory space utilization but minimize the physical memory uses. To highlight the detection capability and practical impacts of MEDS, we evaluated and then compared it to Google's state-of-the-art detection tool, AddressSanitizer. MEDS showed three times better detection rates on four real-world vulnerabilities in Chrome and Firefox. More importantly, when used for fuzz testing, MEDS was able to identify 68.3% more memory errors than AddressSanitizer for the same amount of testing time, highlighting its practical aspects in the software testing area. In terms of performance overhead, MEDS slowed down 108% and 86% compared to native execution and AddressSanitizer, respectively, on real-world applications including Chrome, Firefox, Apache, Nginx, and OpenSSL.

Power Schedule
Coverage-based Greybox Fuzzing as Markov Chain (CCS 2016)
Abstract: Coverage-based Greybox Fuzzing (CGF) is a random testing approach that requires no program analysis. A new test is generated by slightly mutating a seed input. If the test exercises a new and interesting path, it is added to the set of seeds; otherwise, it is discarded. We observe that most tests exercise the same few "high-frequency" paths and develop strategies to explore significantly more paths with the same number of tests by gravitating towards low-frequency paths. We explain the challenges and opportunities of CGF using a Markov chain model which specifies the probability that fuzzing the seed that exercises path i generates an input that exercises path j. Each state (i.e., seed) has an energy that specifies the number of inputs to be generated from that seed. We show that CGF is considerably more efficient if energy is inversely proportional to the density of the stationary distribution and increases monotonically every time that seed is chosen. Energy is controlled with a power schedule. We implemented the exponential schedule by extending AFL.
In 24 hours, AFLFAST exposes 3 previously unreported CVEs that are not exposed by AFL and exposes 6 previously unreported CVEs 7x faster than AFL. AFLFAST produces at least an order of magnitude more unique crashes than AFL.

Learning-based Fuzzing
NEUZZ: Efficient Fuzzing with Neural Program Smoothing (S&P 2019)
Abstract: Fuzzing has become the de facto standard technique for finding software vulnerabilities. However, even state-of-the-art fuzzers are not very efficient at finding hard-to-trigger software bugs. Most popular fuzzers use evolutionary guidance to generate inputs that can trigger different bugs. Such evolutionary algorithms, while fast and simple to implement, often get stuck in fruitless sequences of random mutations. Gradient-guided optimization presents a promising alternative to evolutionary guidance. Gradient-guided techniques have been shown to significantly outperform evolutionary algorithms at solving high-dimensional structured optimization problems in domains like machine learning by efficiently utilizing gradients or higher-order derivatives of the underlying function. However, gradient-guided approaches are not directly applicable to fuzzing as real-world program behaviors contain many discontinuities, plateaus, and ridges where the gradient-based methods often get stuck. We observe that this problem can be addressed by creating a smooth surrogate function approximating the target program's discrete branching behavior. In this paper, we propose a novel program smoothing technique using surrogate neural network models that can incrementally learn smooth approximations of a complex, real-world program's branching behaviors. We further demonstrate that such neural network models can be used together with gradient-guided input generation schemes to significantly increase the efficiency of the fuzzing process. Our extensive evaluations demonstrate that NEUZZ significantly outperforms 10 state-of-the-art graybox fuzzers on 10 popular real-world programs both at finding new bugs and achieving higher edge coverage. NEUZZ found 31 previously unknown bugs (including two CVEs) that other fuzzers failed to find in 10 real-world programs and achieved 3X more edge coverage than all of the tested graybox fuzzers over 24 hour runs. Furthermore, NEUZZ also outperformed existing fuzzers on both LAVA-M and DARPA CGC bug datasets.

Fuzzing Machine Learning Model
TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing (2018)
Abstract: Machine learning models are notoriously difficult to interpret and debug. This is particularly true of neural networks. In this work, we introduce automated software testing techniques for neural networks that are well-suited to discovering errors which occur only for rare inputs. Specifically, we develop coverage-guided fuzzing (CGF) methods for neural networks. In CGF, random mutations of inputs to a neural network are guided by a coverage metric toward the goal of satisfying user-specified constraints. We describe how fast approximate nearest neighbor algorithms can provide this coverage metric. We then discuss the application of CGF to the following goals: finding numerical errors in trained neural networks, generating disagreements between neural networks and quantized versions of those networks, and surfacing undesirable behavior in character level language models. Finally, we release an open source library called TensorFuzz that implements the described techniques.
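Returning to the AFLFast power schedule above: its "exponential schedule" can be sketched as below. The shape (energy doubling with selections, divided by the path's frequency, capped at M) follows the paper's description; the variable names and exact damping are assumptions of this sketch, not AFLFast's code.

#include <math.h>
#include <stdint.h>

/* alpha: AFL's baseline energy for the seed
 * beta:  constant damping factor (> 1)
 * s:     how many times this seed has been selected so far
 * f:     how many generated inputs exercised this seed's path
 * M:     cap on the number of inputs generated per selection */
uint64_t fast_energy(double alpha, double beta,
                     unsigned s, uint64_t f, uint64_t M) {
    double e = (alpha / beta) * pow(2.0, (double)s) / (double)(f ? f : 1);
    return (uint64_t)(e < (double)M ? e : (double)M);
}

Seeds on low-frequency paths (small f) receive exponentially more fuzzing effort each time they are re-selected, which is the "gravitating towards low-frequency paths" behavior the abstract describes.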
Coverage-Guided Fuzzing for Deep Neural Networks (2018)
Abstract: In company with the data explosion over the past decade, deep neural network (DNN) based software has experienced unprecedented leap and is becoming the key driving force of many novel industrial applications, including many safety-critical scenarios such as autonomous driving. Despite great success achieved in various human intelligence tasks, similar to traditional software, DNNs could also exhibit incorrect behaviors caused by hidden defects causing severe accidents and losses. In this paper, we propose DeepHunter, an automated fuzz testing framework for hunting potential defects of general-purpose DNNs. DeepHunter performs metamorphic mutation to generate new semantically preserved tests, and leverages multiple plugable coverage criteria as feedback to guide the test generation from different perspectives. To be scalable towards practical-sized DNNs, DeepHunter maintains multiple tests in a batch, and prioritizes the tests selection based on active feedback. The effectiveness of DeepHunter is extensively investigated on 3 popular datasets (MNIST, CIFAR-10, ImageNet) and 7 DNNs with diverse complexities, under large set of 6 coverage criteria as feedback. The large-scale experiments demonstrate that DeepHunter can (1) significantly boost the coverage with guidance; (2) generate useful tests to detect erroneous behaviors and facilitate the DNN model quality evaluation; (3) accurately capture potential defects during DNN quantization for platform migration.

Foreword

This post documents my walkthrough of Exercise 1 in Fuzzing101; the testing tools and the target used in the post are introduced below. The post demonstrates the following:
- instrumenting the target application
- using afl-fuzz
- verifying the fuzzing result with GDB

The target is Xpdf 3.02, which contains an uncontrolled-recursion vulnerability, CVE-2019-13288. A crafted file causes Parser::getObj() in Parser.cc to be called over and over; since every call allocates a new region on the stack (a stack frame), a function recursed too many times exhausts the stack memory and crashes the program, so a remote attacker can use this bug for a DoS attack.

Earlier posts in this fuzzing series:
- AFL source analysis: afl-fuzz.c annotated (part 2): the fuzzing execution flow
- AFL source analysis: afl-fuzz.c annotated (part 1): initial configuration
- AFL source analysis: afl-gcc.c annotated
- AFL source analysis: afl-as.c annotated
Free resource: AFL-2.57b.zip (the version used in the source-analysis posts)

Project overview

Three projects are used in this post, briefly introduced here: Fuzzing101, AFL, and Xpdf.

Fuzzing101
Project page: https://github.com/antonio-morales/Fuzzing101
Fuzzing101 is an open-source GitHub project that collects ten real-world vulnerable targets (eight published so far). It is aimed at people who want to learn fuzzing and try it on real programs. I picked Fuzzing101 for this series because its exercises are easy to get started with and, step by step, expose different ways of fuzzing.

AFL
A classic security-oriented fuzzer that uses a novel compile-time instrumentation scheme and a genetic algorithm to automatically discover test cases that exercise new internal states of the target binary. See the earlier source-analysis posts for details. This exercise uses AFL rather than the AFL++ recommended by Fuzzing101, to put the recent source-code reading into practice.

Xpdf
Xpdf is a free, open-source PDF viewer and toolkit that includes a text extractor, an image converter, and an HTML converter. It is the target of this exercise.

Environment setup

sudo apt-get install gcc git make wget build-essential

Installing AFL

hollk@ubuntu:~$ cd $HOME
hollk@ubuntu:~$ git clone https://github.com/google/AFL.git && cd AFL/

Note that part of afl-clang-fast.c must be removed, or compilation will fail:

hollk@ubuntu:~/AFL$ vim ./llvm_mode/afl-clang-fast.c
--------------- lines 131-134 -------------
#ifndef __ANDROID__
cc_params[cc_par_cnt++] = "-mllvm";
cc_params[cc_par_cnt++] = "-sanitizer-coverage-block-threshold=0";
#endif
-------------------------------------------

Delete this block, save, and exit. Then build the AFL source:

hollk@ubuntu:~/AFL$ make AFL_TRACE_PC=1
hollk@ubuntu:~/AFL$ make install

Check that AFL compiled successfully.

Instrumented build of Xpdf

Next, install and instrument the target. First create a new directory for the fuzzing project:

hollk@ubuntu:~/AFL$ cd $HOME
hollk@ubuntu:~$ mkdir fuzzing_xpdf && cd fuzzing_xpdf

Download Xpdf 3.02:

hollk@ubuntu:~/fuzzing_xpdf$ wget
https://dl.xpdfreader.com/old/xpdf-3.02.tar.gz
hollk@ubuntu:~/fuzzing_xpdf$ tar -xvzf xpdf-3.02.tar.gz
hollk@ubuntu:~/fuzzing_xpdf$ cd xpdf-3.02/

In the xpdf-3.02/ directory there is a file called configure; opening it in an editor shows it is a script that generates the Makefile. By default it compiles with gcc; since we want instrumentation at compile time, point the compiler variables at afl-clang-fast:

hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ export CC=/home/hollk/AFL/afl-clang-fast
hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ export CXX=/home/hollk/AFL/afl-clang-fast++

Build the program, installing the result into $HOME/fuzzing_xpdf/install/:

hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ ./configure --prefix="$HOME/fuzzing_xpdf/install/"
hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ make
hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ make install

After building, confirm the instrumentation took effect by entering the install directory; since afl-clang-fast did the instrumenting, grep the binary for the marker __sanitizer_cov:

hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ cd $HOME/fuzzing_xpdf/install/
hollk@ubuntu:~/fuzzing_xpdf/install/bin$ strings ./pdftotext | grep __sanitizer_cov

Fuzzing stage

The environment is set up; now it is time to fuzz.

Preparation

Before fuzzing proper, prepare some seeds, i.e. the test cases to feed in, and create a directory for them:

hollk@ubuntu:~/fuzzing_xpdf/install/bin$ cd $HOME/fuzzing_xpdf
hollk@ubuntu:~/fuzzing_xpdf$ mkdir pdf_examples && cd pdf_examples
hollk@ubuntu:~/fuzzing_xpdf/pdf_examples$ wget https://github.com/mozilla/pdf.js-sample-files/raw/master/helloworld.pdf
hollk@ubuntu:~/fuzzing_xpdf/pdf_examples$ wget http://www.africau.edu/images/default/sample.pdf
hollk@ubuntu:~/fuzzing_xpdf/pdf_examples$ wget https://www.melbpc.org.au/wp-content/uploads/2017/10/small-example-pdf-file.pdf

Note that the project that hosted helloworld.pdf is gone, so fetch it from https://github.com/mozilla/pdf.js-sample-files and copy it into the pdf_examples directory.

Check that the downloaded files are usable, e.g. by running the pdfinfo tool from fuzzing_xpdf/install/bin to show the basic information of helloworld.pdf:

$HOME/fuzzing_xpdf/install/bin/pdfinfo -box -meta $HOME/fuzzing_xpdf/pdf_examples/helloworld.pdf

At the same time, disable core dumps, so that a crash during fuzzing does not abort the run:

sudo su
echo core >/proc/sys/kernel/core_pattern
exit

Starting the fuzz run

Everything is in place; start fuzzing:

$HOME/AFL/afl-fuzz -i $HOME/fuzzing_xpdf/pdf_examples/ -o $HOME/fuzzing_xpdf/out/ -M fuzzer1 -- $HOME/fuzzing_xpdf/install/bin/pdftotext @@ $HOME/fuzzing_xpdf/output

The options used:
-i: input directory, holding the prepared seeds
-o: output directory, storing the queue, crashes, hangs, etc. produced while fuzzing
-M: run this instance as the master of a master/slave multi-fuzzer setup (start the other fuzzers with -S, and note that they must share the same output directory)
--: separator, followed by the target under test
@@: placeholder for the input file; without @@, input comes from stdin

With -S, several fuzzers can run at once. Since the VM was assigned only four cores, I ran four fuzzers; the number depends on the cores you assign, and htop shows the current resource usage: all four cores are busy.

Reading the results

First, the fields of the AFL status screen:

process timing
- run time: total running time
- last new path: time since a new path was last found
- last uniq crash: time since the last unique crash
- last uniq hang: time since the last unique hang

overall results
- cycles done: number of completed queue cycles
- total paths: total number of paths
- uniq crashes: number of unique crashes in the run
- uniq hangs: number of unique hangs in the run

cycle progress
- now processing: number of the test case currently being processed
- paths timed out: inputs abandoned because of timeouts

map coverage
- map density: ratio of branch tuples hit to what the bitmap can hold (current input / whole corpus)
- count coverage: variability of tuple hit counts across the binary

stage progress
- now trying: the mutation strategy currently being executed
- stage execs: progress within the current stage
- total execs: total number of executions
- exec speed: execution speed

findings in depth
- favored paths: favored paths under the minimization algorithm
- new edges on: inputs that discovered new edges
- total crashes: total number of crashes
- total tmouts: total number of hangs

fuzzing strategy yields: tracks, per mutation strategy, the ratio of paths found to executions attempted, to gauge each strategy's effectiveness

path geometry
- levels: path depth reached during fuzzing; user-supplied test cases are level 1, increasing with successive mutation stages
- pending: how many inputs have not been through any testing yet
- pend fav: the entries the fuzzer really wants to reach in this queue cycle
- own finds: number of new paths found
- imported: number of paths imported from other fuzzers
- stability: consistency of the observed traces

With the fields explained, two deserve the most attention:
- last new path: the time since the last new path was found; if it grows too long, the input seeds may be problematic, and it can pay to pause and distill the corpus further.
- uniq crashes: the number of crashes in the run; self-explanatory, since a crash means some input brought the program down, quite possibly through an overflow.

Here is the crash that fuzzer2 produced during my run; you can see that uniq
crashes already shows 1 crash. At this point you can pause fuzzing and inspect the crash output under the out directory. Since this run used four fuzzers, there are four folders under ~/fuzzing_xpdf/out. The crash came from fuzzer2, so go straight into ~/fuzzing_xpdf/out/fuzzer2/crashes, where you will find a file named roughly as follows (the name varies from run to run; go by whatever your own run produced).

Verifying the fuzz result with GDB

Delete the contents of the install directory built with afl-clang-fast instrumentation and rebuild with gcc:

hollk@ubuntu:~$ rm -r $HOME/fuzzing_xpdf/install
hollk@ubuntu:~$ cd $HOME/fuzzing_xpdf/xpdf-3.02/
hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ make clean
hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ CFLAGS="-g -O0" CXXFLAGS="-g -O0" ./configure --prefix="$HOME/fuzzing_xpdf/install/"
hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ make
hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ make install

Now run the pdftotext binary under gdb on the crashing result file:

hollk@ubuntu:~/fuzzing_xpdf/xpdf-3.02$ cd $HOME/fuzzing_xpdf/install/bin
hollk@ubuntu:~/fuzzing_xpdf/install/bin$ gdb --args ./pdftotext $HOME/fuzzing_xpdf/out/fuzzer2/crashes/id:000000,sig:06,src:001043,op:havoc,rep:32 /home/hollk/fuzzing_xpdf/output

Inside gdb, issue the run command: after a stream of errors the program stops inside __vfprintf_internal. Issue bt to backtrace the executed functions: Parser::getObj is being called recursively over and over, which confirms the description of CVE-2019-13288.

From the 知柯信息安全团队 (ZhiKe InfoSec) of 狩猎者网络安全 (Hunter Network Security).

Is finding vulnerabilities real security?

"The best alternative to defense mechanisms is to find and fix the bugs."

grsecurity's position is mitigation instead: "ACTUAL effective improvements to security come from building mitigations to kill entire classes of vulns, not bug hunting."

Maybe we need secure languages, libraries and tools: security by design.

What is fuzzing?

Fuzzing is a method of discovering software vulnerabilities by feeding unexpected inputs to a target system and monitoring for abnormal results. [Baidu Baike]

Fuzz testing (fuzzing) is a software testing technique. The core idea is to feed automatically or semi-automatically generated random data into a program while monitoring it for exceptions, such as crashes or assertion failures, in order to find possible program errors, for example memory leaks. Fuzzing is often used to detect security vulnerabilities in software or computer systems. It was first proposed in 1988 by Barton Miller of the University of Wisconsin. That work not only used random, unstructured test data; it also systematically applied a set of tools to analyze all kinds of software across platforms, and systematically analyzed the errors the tests exposed. The source code, test procedures and raw result data were also published. Fuzzing tools fall into two main classes, mutation-based and generation-based, and fuzzing can be practiced as white-box, grey-box or black-box testing. File formats and network protocols are the most common targets, but any program input can be tested: environment variables, mouse and keyboard events, API call sequences, even things not normally considered input, such as database contents or shared memory. For security-related testing, the data that crosses a trust boundary is the most interesting. For instance, fuzzing code that handles arbitrary user-uploaded files matters more than fuzzing the parser of a server configuration file, since the configuration file can usually only be modified by users with a certain level of privilege. [Wikipedia]

The history of fuzzing:

"Generates a stream of random characters to be consumed by a target program" – Miller et al.

In 1988, Barton Miller of the University of Wisconsin first proposed fuzz testing in a course assignment: build a basic command-line fuzzer to test Unix programs by bombarding them with random data until they crashed. Similar experiments were repeated in 1995, extended to GUI programs, network protocols and system API libraries; later work tested command-line and GUI programs on Mac and Windows systems.

Techniques: fuzzers are usually divided into two classes. Mutation-based fuzzers derive test data by altering existing samples; generation-based fuzzers model the program's input and generate fresh test data from the model.

Definition:

When we design a program, beyond its intended functionality, can situations arise that the programmer never considered, such as security problems?

"Fuzzing is the execution of the PUT using input(s) sampled from an input space (the 'fuzz input space') that protrudes the expected input space of the PUT. Fuzz testing is the use of fuzzing to test if a PUT violates a security policy."

Optimization problem: the fuzzing model is an optimization problem.

Process: the returned B is the bug found, and C is the program under test. The core of the whole fuzzing loop is the Schedule, which generates the input sets.

Components:
1. Corpus
2. Generator
3. Mutator
4. Input
5. Stage
6. Executor
7. Observer
8. Feedback

The Generator and Mutator produce the corpus (the Mutator by altering it); Input is the generated corpus; Stage and Executor drive the program execution; the Observer collects execution information; and Feedback judges the corpus.

An entry-level tutorial:
1. Pick a target
2. Analyze the code
3. Write the harness
4. Prepare the prelude
5. Dynamic tuning
6. When to stop
7. Triage

Picking a target:
- Untrusted input: device modules and migration modules
- Non-interactive, stateless: libpng vs. smtpd
- Unsafe languages: C vs. Rust
- Old and little-trodden: GNU coreutils vs. OpenSSL
- Single process: libxml2 vs. ftpd

Analyse the code
- What is our input type? Argv, stdin, env, shm, etc.
- Where is it? Main, routine, lib, etc.
- Where is the program entry?
- In-memory snapshot & copy
- Can we reset the state (manually)?
- Persistent mode with fewer forks
- Should we patch and trace?
Socket, checksum, timer, random

Write the harness (partial)

Below is part of the "write the harness" source code for the AFL fuzzer, shown here as a compilable AFL++ persistent-mode skeleton (the __AFL_* macro names are the real AFL++ API):

/* AFL++ persistent-mode harness skeleton; the __AFL_* macros are provided
   by afl-clang-fast / afl-clang-lto. */
#include <string.h>

__AFL_FUZZ_INIT();

int main(int argc, char **argv) {
    const char program_name[] = "program name";
    static char name_buf[sizeof(program_name)];
    static char stdin_read[1024 * 16] = { '\0' };   /* buffer for stdin-style input */
    static char *ret[1024 * 4] = { NULL };          /* fake argv array */

    /* fake argv: argv[0] is the program name, the rest starts empty */
    for (size_t i = 0; i < sizeof(program_name); i++)
        name_buf[i] = program_name[i];
    ret[0] = name_buf;
    argc = 1;
    argv = &ret[0];

    /* API init */
#ifdef __AFL_HAVE_MANUAL_CONTROL
    __AFL_INIT();
#endif

    /* must be after __AFL_INIT and before __AFL_LOOP */
    unsigned char *fuzz_buf = __AFL_FUZZ_TESTCASE_BUF;

    while (__AFL_LOOP(10000)) {
        /* reset the fake argv between iterations; don't use the macros
           directly inside a call! */
        memset(&ret[1], 0, sizeof(ret) - sizeof(ret[0]));
        /* ... parse fuzz_buf / stdin_read and invoke the target here ... */
    }
    return 1;
}

Prepare the prelude:

Dict: man 1 expr → "\x20", "|", "&", "!", "=", ">", "<", "/", "\", ":", "*", etc.
Seeds: "1 + 3", "length 1x3", "10 / 2", "1234 : 23", "sad % 3", etc.

The source code of the program to fuzz is needed.

Scripts:

$ afl-system-config
$ CC=afl-clang-lto CXX=afl-clang-lto++ RANLIB=llvm-ranlib AR=llvm-ar ./configure
$ AFL_USE_UBSAN=1/AFL_USE_ASAN=1/AFL_HARDEN=1 make -j$(nproc)
$ afl-fuzz -i seed -o out -x expr.dict -m none -M main0 ./expr_asan
$ AFL_IMPORT_FIRST=1 AFL_NO_UI=1 afl-fuzz -i- -o out -L 0 -x expr.dict -S slaveX ./expr
$ # etc.

Dynamic tune:
- Coverage: Gcov, llvm-cov, etc.
- Performance: Linux perf, gperftools, etc.

When to stop?

Triage:

We want the PoC to be as small as possible.
- Minimization: afl-tmin, afl-extras, afl-ddmin-mod, abstract, etc.
- Deduplication: afl-cmin, Stack Backtrace Hashing, Semantics-aware, etc.
- Exploitation: GDB extension 'exploitable', etc.
- Understandability: afl-analyze, etc.

What did we get?

CWE-125, Out-of-bounds Read: $ expr 0 : "\(0*\)*0*\\1"
CWE-787, Out-of-bounds Write: $ expr 0 : "\('\)*"

Fuzzing devices in QEMU

QEMU Device Fuzzer. Concretely, we fuzz the I/O devices in QEMU.

QTest is QEMU's device testing framework.

CCS '17 - Designing New Operating Primitives to Improve Fuzzing Performance

Fuzz the E1000 Network Interface Card – 4 hours

We observed two problems.

1. The rate of valid opcodes is too low, which hurts both mutation and execution speed.

We found that the write operations (MMIO write and PCI config write) have the largest impact. We assign different weights to different opcodes; negative values remove invalid opcodes and operations that need to be suppressed. Before fork(2), we compute the total weight of the input to decide whether it is worth forking or simply returning to the fuzzer.

Struct aware: libprotobuf-mutator, AFLSmart, NAUTILUS (Sweat and blood)

USENIX Security '19 - MOPT: Optimized Mutation Scheduling for Fuzzers

Could we generate inputs directly from a grammar description?

Google runs program tests on fuzzing clusters and finally notifies the developers.

Vulnerabilities and bugs:
[Patch] SPICE/libspice-server: Fix nullptr dereference in red-parse-qxl.cpp
[Vulnerability Disclosure] FFmpeg/libavcodec: Double free hevc context
[Vulnerability Disclosure] GNOME/libgxps: Mishandle NULL pointer in the converter
[Bug Report] qemu-system virtio-mouse: Assertion in address_space_lduw_le_cached failed
[Bug Report] GNU Coreutils: Heap underflow when expr mishandles unmatched (…) in regex
[Vulnerability Disclosure] QEMU/Slirp: OOB access while processing ARP/NCSI packets

The future of fuzzing:

The biggest problem now is that fuzzing finds a great many vulnerabilities. Companies like Google have fairly complete fuzzing pipelines and can find the vulnerabilities in their own products, but no developer pays attention. A few days later a white hat submits a report saying "we found a vulnerability", and by convention a certificate and a reward should be issued; if every program's bugs are found this way, the cost to SRC platforms and companies grows considerably. The question now is how to get developers to take security testing seriously. How should we design a fuzzing campaign? Can we pick a strategy by static analysis? Can humans join the dynamic-coverage loop, making it easier for people to add to the corpus?

References for this article:
TSE 2019: The Art, Science, and Engineering of Fuzzing: A Survey
https://media.ccc.de/v/rc3-699526-fuzzers_like_lego
https://zh.wikipedia.org/wiki/%E6%A8%A1%E7%B3%8A%E6%B5%8B%E8%AF%95
Bilibili: "Fuzzing入门-原理与实践" (Introduction to fuzzing: theory and practice)

- A survey of fuzzing techniques for industrial control system network protocols.
- Hackfest advanced fuzzing workshop. Start here -> previous editions. EkoParty => Requirements: for the workshop you need a Telegram account (you will need it to send me your questions/solutions) and a running Linux system with an Internet connection ...
- At present, the detection of buffer-overflow vulnerabilities in software is limited to manual analysis, binary patch diffing, fuzzing, and similar techniques; these either depend heavily on manual analysis or are far too blind, making vulnerability discovery extremely inefficient. Combining fuzzing, dynamic data-flow analysis, and automatic anomaly ...
- Fuzzing has proven very effective at finding web-browser vulnerabilities. With the growth of browser vendors' bug-bounty programs and the 0day trading market, more researchers are joining the ranks of browser vulnerability hunting. The way to outdo these vulnerability-hunting giants is to use intelligent ...
- Discovering Vulnerabilities in COTS IoT Devices through Blackbox Fuzzing Web Management Interface
- Efficient SIP fuzz testing. This is the source code for the paper "An Efficient Fuzzing Test Method For SIP Server".
- Ari Takanen, Jared D. DeMott, Charles Miller - Fuzzing for Software Security Testing and Quality Assurance (2018, Artech House).pdf
- Reinforcement Learning-based Hierarchical Seed Scheduling for Greybox Fuzzing

Contents: Overview · Paper content · Multi-level coverage metrics (for seed clustering) · Hierarchical seed scheduling strategy · Experiments

Paper title: Reinforcement Learning-based Hierarchical Seed Scheduling for Greybox Fuzzing
Tool names: AFL-HIER, AFL++-HIER
Venue: NDSS 2021
First author: Jianhan Wang (University of California, Riverside)
Link: https://www.ndss-symposium.org/ndss-paper/reinforcement-learning-based-hierarchical-seed-scheduling-for-greybox-fuzzing/

Overview

The grey-box fuzzing loop can be described as follows: the fuzzer generates new test cases by mutation and splicing, then uses a fitness mechanism to select the fittest of the newly generated inputs and put them into the seed pool for later mutation. Unlike natural evolution, only some of the seeds in the pool are ever selected for mutation and related operations to generate new test cases.

AFL measures a seed's fitness with edge coverage, i.e. whether the seed covers a new branch, so as to cover more branches. An important property of a fitness function is its ability to preserve intermediate waypoints. My reading of this: while exploring uncovered paths, the already-covered key paths must be retained. The example from the paper: suppose there is a check $a = \texttt{0xdeadbeef}$. If we only consider edge coverage, the probability of mutating $a$ into this value is on the order of $2^{-32}$. But if the important waypoints are preserved, splitting the 32 bits into four 8-bit pieces and mutating through 0xef, 0xbeef, 0xadbeef, 0xdeadbeef, the chance of hitting the correct value becomes much larger.

The paper "Be sensitive and collaborative: Analyzing impact of coverage metrics in greybox fuzzing" preserves waypoints by measuring code coverage at a finer granularity. This method keeps more valuable seeds and increases the number of seeds, but it burdens the fuzzer's seed scheduling: some seeds may never be selected. In this situation a more reasonable seed-scheduling strategy is needed.

This paper solves the seed-explosion problem with a hierarchical scheduler, in two parts:

1. Cluster the seeds using code-coverage metrics of different sensitivity levels.
2. Design the seed-selection strategy with the UCB1 algorithm.

Fuzzing is viewed as a multi-armed bandit problem, balancing exploitation and exploration. The observation is: when a coverage metric $c_j$ is more sensitive than $c_i$, $c_j$ can be used to preserve waypoints while $c_i$ clusters the seeds into a node. The nodes are organized in a tree structure; the closer to a leaf a seed sits, the more sensitive its coverage metric.

Seed scheduling starts from the root and, following the UCB1 algorithm, computes a score for each node, descending until a leaf node is selected; one seed is then chosen from the many seeds that leaf contains. After every fuzzing round, the values in the tree structure are updated.

Paper content

Explanation of "waypoint": if a test case triggers a bug, we view it as the end of a chain whose starting point is the corresponding initial seed. The intermediate test cases generated by mutation, from the initial seed up to the final test case that triggers the bug, all progressively narrow the search space of the bug; these seeds are called waypoints.

Multi-level coverage metrics (for seed clustering)

Using a more sensitive code-coverage metric improves the fuzzer's ability to discover more program states. However, the number of seeds grows as well, and with it the load on the fuzzer's seed scheduling. The similarities and differences between seeds affect the fuzzer's exploration and exploitation, so a clustering method groups similar seeds; seeds in the same cluster are identical at the given coverage granularity. Clustering uses the multi-level coverage metrics.

As the figure above shows, seeds are clustered with multi-level coverage metrics of different granularities. When a test case covers a new function, basic block, or edge, it is kept as a seed. Clustering then proceeds from the root: coverage metrics of different sensitivities decide whether any child node's coverage matches the new seed's; if it matches, clustering continues downward, and if not, a new node is created. There are three metrics here: function, basic block, and edge. The smaller the depth in the tree, the less sensitive the metric.

Hierarchical seed scheduling strategy

Picking a seed is the process of searching from the root down to a leaf: exploration vs. exploitation. On the one hand, newly generated seeds that have barely been fuzzed may bring new code coverage; on the other hand, seeds that have been fuzzed and brought new code coverage have a larger chance of being selected. This paper treats seed scheduling as a multi-armed bandit problem and uses the existing MAB algorithm UCB1 to balance exploitation and exploration. Starting from the root, the node with the highest score under the coverage metric is selected at each level, until a leaf node is reached. At the end of each fuzzing round, every node along the seed-selection path receives an updated score. A seed's score takes three things into account:

- the rarity of the seed
- how easily the seed mutates into new, interesting test cases
- uncertainty

The hit count of a feature $F \in \tau_l$ at level $l$ is the number of test cases covering that feature. With $P$ the program under test and $\mathcal{I}$ all inputs generated so far:

$num\_hits[F] = |\{ I \in \mathcal{I} : F \in C_l(P, I) \}|$

As many papers point out, the rarer something is, the higher its probability of being selected should be. Therefore

$rareness[F] = \frac{1}{num\_hits[F]}$

Suppose we select a seed $s$ for fuzzing in round $t$, let $\mathcal{I}_{s,t}$ be the set of test cases generated in this round, and consider coverage level $C_l$, $l \in \{1, \dots, n\}$:

$fcov[s,l,t] = \{ F : F \in C_l(P, I)\ \ \forall I \in \mathcal{I}_{s,t} \}$

Next, how should a seed's reward be computed after a round of fuzzing? If the number of newly covered features alone were the reward, then as fuzzing proceeds the chance of covering new features keeps shrinking, so a seed's average reward could quickly decay to zero. When seeds are numerous and the variance is close to zero, the UCB algorithm cannot prioritize seeds well. This paper therefore takes the rareness value of the rarest covered feature as the seed's reward.

In plain terms: say this level uses the function-level coverage metric. First count, over all generated test cases, how many times each covered function was hit; find the smallest hit count; one divided by it is $SeedReward(s,l,t)$:

$SeedReward(s,l,t) = \max\limits_{F \in fcov[s,l,t]} (rareness[F])$

During back-propagation, how is the reward of nodes at different levels computed? The node selected at each level forms a sequence $\langle a^1, \dots, a^n, a^{n+1} \rangle$. When computing the reward, considering that $a^l$ influences the subsequent seed scheduling, and that its feedback is composed of the coverage levels below it, the reward is the geometric mean

$Reward(a^l, t) = \sqrt[n-l+1]{\prod_{l \le k \le n} SeedReward(s,k,t)}$

The reward fixes how scores are passed bottom-up; next comes how seeds are selected top-down. Based on UCB1, the paper computes the expected performance of fuzzing a node:

$FuzzPerf(a) = Q(a) + U(a)$

$Q(a)$ is the average of the rewards node $a$ has obtained so far, and $U(a)$ is the radius of the upper confidence interval.

$Q$ is computed with a weighted average; the more times a seed has been mutated, the lower its scarcity:

$Q(a,t) = \dfrac{Reward(a,t) + w \cdot Q(a,t') \cdot \sum_{p=0}^{N[a,t]-1} w^p}{1 + w \cdot \sum_{p=0}^{N[a,t]-1} w^p}$

$N[a,t]$ denotes how many times this node has been selected by the end of round $t$, and $t'$ denotes the last round in which $a$ was selected. The weight $w$ is tested in the experiments section; its value is 0.5.

For $U(a)$: the more seeds a node contains, the higher its chance of being selected should be:

$U(a) = C \times \sqrt{\dfrac{Y[a]}{Y[a']}} \times \sqrt{\dfrac{\log N[a']}{N[a]}}$

$Y[a]$ is the number of seeds in the node, $a'$ is its parent, and $C$ is a parameter set to 1.4 (MCTS usually sets $C$ to $\sqrt{2}$, i.e. about 1.4).

The formulas above can only be computed from existing data. A seed that has never been selected has no selection count and only a single seed, so the corresponding values cannot be computed. For such seeds, which do have covered feature paths, the computation is based on the features they cover:

$SeedRareness(s,l) = \sqrt{\dfrac{\sum_{F \in C_l(P,s)} rareness^2[F]}{|\{ F : F \in C_l(P,s) \}|}}$

The square root here keeps the value smaller and preserves more of the differences. The rareness of a node is then

$Rareness(a^l) = SeedRareness(s,l)$

that is, the rareness of an upper-level node is the same as its child's. In every round, the rareness is back-propagated along with the reward.

Combining all the formulas above, the score for selecting a node is computed as

$Score(a) = Rareness(a) \times FuzzPerf(a)$

(A minimal code sketch of this node scoring is given at the end of this record, after the excerpts below.)

Experiments

Two prototypes were implemented, based on AFL and AFL++: AFL-HIER and AFL++-HIER.

CGC: the average, minimum, and maximum numbers of crashed CGC programs, and the time to the first crash. Code coverage: the number of programs on which AFL-HIER performs better than the others; the time AFL-HIER needs to reach the code coverage other fuzzers reach in 2 hours; how long AFL-HIER takes to overtake the other fuzzers.

FuzzBench: AFL++-HIER is compared with AFL++ and AFL++-FLAT; the average code coverage over 6 hours; the number of unique edges.

The paper discusses the experimental results: compared with the results on the CGC benchmark, on most FuzzBench benchmarks the performance is not clearly better than afl++. The suspected reason is that the UCB1-based scheduler and the hyperparameters used in the evaluation favor exploitation over exploration. Therefore, when the programs under test are relatively small (e.g., the CGC benchmarks), the scheduler can find more bugs without sacrificing much overall coverage. But in the FuzzBench programs, breaking through some unique edges (Table II) may be overshadowed by failing to explore other, easier-to-cover edges.

Throughput: the time the tools spend on seed scheduling, measured on CGC.

Analysis of the effectiveness of the multi-level seed-scheduling strategy: AFL-HIER is compared against AFL, and AFL-FLAT against AFLFast; the number of seeds each fuzzer generates is counted, as well as the number of AFL++-HIER nodes at each level.

Analysis of the influence of the parameters in the formulas.

Support for other coverage metrics.

Related excerpts:

## Fuzzing技术简介 (2018-12-25 11:37:42)

I. What is fuzzing? "Fuzz" originally means fluff or fine hair, or to blur; it later entered the software-testing field, where in Chinese it generally means 模糊测试, and in English it is called either "Fuzzing" or "Fuzz Testing". This article uses "fuzzing" for 模糊测试. Fuzzing ...

## Fuzzing (2014-10-29 16:49:06)

Some outstanding hacker or hackers invented the fuzzing technique while researching vulnerability discovery. It can be called a very fast and effective discovery technique. The idea of fuzzing is to use "brute force" to test the target program automatically and then monitor and check the final result; if ...

- SmartSeed: Smart Seed Generation for Efficient Fuzzing. Abstract: fuzz testing is a method for automatically detecting application vulnerabilities. In genetic-algorithm-based fuzzing, user-provided seed files can be mutated to obtain a large number of inputs, which are then used to test the target ...
- A demo of my talk on fuzzing native code in web browsers with WASM. This repo contains the samples and demos I used for fuzzing C/C++ programs in the browser with libFuzzer, as well as some tools to help users ...
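The node scoring described in the AFL-HIER summary above is compact enough to sketch. The following Python fragment is a hypothetical illustration, not the authors' implementation: the tree construction, feature accounting, and full reward back-propagation are stubbed out, and only the Score(a) = Rareness(a) × (Q(a) + U(a)) selection walk and the weighted-average Q update are shown.

import math

C = 1.4  # exploration constant; the paper sets it to 1.4 (about sqrt(2))
W = 0.5  # decay weight w in the moving-average reward; the paper uses 0.5

class Node:
    def __init__(self, rareness=1.0):
        self.children = []        # empty for leaf nodes (seed clusters)
        self.q = 0.0              # Q(a): discounted average reward
        self.n_sel = 0            # N[a]: times this node was selected
        self.n_seeds = 1          # Y[a]: number of seeds under this node
        self.w_sum = 0.0          # running sum of w^p in the weighted average
        self.rareness = rareness  # Rareness(a), propagated up from the seeds

    def update(self, reward):
        # Q(a,t) = (Reward + w*Q(a,t')*sum w^p) / (1 + w*sum w^p)
        self.q = (reward + W * self.q * self.w_sum) / (1.0 + W * self.w_sum)
        self.w_sum = 1.0 + W * self.w_sum
        self.n_sel += 1

def seed_reward(new_features, num_hits):
    # SeedReward: rareness of the rarest feature covered in this round
    return max(1.0 / num_hits[f] for f in new_features)

def score(node, parent):
    # Score(a) = Rareness(a) * (Q(a) + U(a))
    if node.n_sel == 0:
        return float("inf")       # never-selected children are tried first
    u = C * math.sqrt(node.n_seeds / parent.n_seeds) \
          * math.sqrt(math.log(max(parent.n_sel, 2)) / node.n_sel)  # guard log(0)
    return node.rareness * (node.q + u)

def select_leaf(root):
    # Walk from the root to a leaf, always taking the highest-scoring child.
    node = root
    while node.children:
        node = max(node.children, key=lambda c: score(c, node))
    return node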
- Hardware Fuzzing (Fuzzer): the code for hardware fuzz testing, all bundled inside.
- winafl, a fork of AFL for fuzzing Windows binaries. WinAFL Original AFL code written by Michal Zalewski <lcamtuf@google.com> Windows fork written and maintai ...
2022-01-24 10:55:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3101836144924164, "perplexity": 6737.446141789757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00570.warc.gz"}
http://www.numericalmethod.com/javadoc/suanshu/com/numericalmethod/suanshu/optimization/multivariate/constrained/convex/sdp/socp/problem/portfoliooptimization/PortfolioRiskExactSigma.html
# SuanShu, a Java numerical and statistical library

com.numericalmethod.suanshu.optimization.multivariate.constrained.convex.sdp.socp.problem.portfoliooptimization

## Class PortfolioRiskExactSigma

- All Implemented Interfaces: Function<Vector,Double>, RealScalarFunction, Iterable<SOCPGeneralConstraints>

public class PortfolioRiskExactSigma extends SOCPRiskConstraint

Constructs the constraint coefficient arrays of the portfolio risk term in the compact form. The constraints are generated during the transformation of the objective function. The portfolio risk in the objective function is transformed into the following constraint:

$(x+w^{0})^{\top}\Sigma(x+w^{0})\leq t_1.$

By letting $$y=x+w^{0}$$, it can be written as:

$y^{\top}\Sigma\;y\leq t_1$

When the exact covariance matrix $$\Sigma$$ is used, the portfolio risk constraint is equivalent to:

$y^{\top}\Sigma\;y\leq t_1 \Longleftrightarrow y^{\top}\Sigma\;y+(\frac{t_{1}-1}{2})^{2}\leq(\frac{t_{1}+1}{2})^{2}\Longleftrightarrow ||\left(\begin{array}{c}\Sigma^{\frac{1}{2}}y\\\frac{t_{1}-1}{2}\end{array}\right)||_{2}\leq \frac{t_{1}+1}{2}.$

And the standard SOCP form of the portfolio risk constraint in this case is:

$||\left(\begin{array}{c}\Sigma^{\frac{1}{2}}y\\\frac{t_{1}-1}{2}\end{array}\right)||_{2}\leq \frac{t_{1}+1}{2}\Longleftrightarrow ||A_{1}^{\top}z+C_{1}||_{2}\leq b^{\top}_{1}z+d_{1}\\ A_{1}^{\top}=\left(\begin{array}{cc}\Sigma^{\frac{1}{2}} & 0_{n\times 1}\\0_{1\times n} & 1/2\end{array}\right),\; C_{1}=\left(\begin{array}{c}0_{n\times 1}\\-1/2\end{array}\right),\; b_{1}=\left(\begin{array}{c}0_{n\times 1}\\1/2\end{array}\right),\; d_{1}=\frac{1}{2},\; z=\left(\begin{array}{c}y\\t_{1}\end{array}\right).$

(A small numeric sketch of this transformation follows the method details below.)

- ### Nested Class Summary

Nested Classes (Modifier and Type / Class and Description):
static class PortfolioRiskExactSigma.DefaultRoot
Computes the matrix root by Cholesky and on failure by MatrixRootByDiagonalization.
static class PortfolioRiskExactSigma.Diagonalization
Computes the matrix root by MatrixRootByDiagonalization.
static interface PortfolioRiskExactSigma.MatrixRoot
Specifies the method to compute the root of a matrix.

- ### Nested classes/interfaces inherited from class com.numericalmethod.suanshu.optimization.multivariate.constrained.convex.sdp.socp.problem.portfoliooptimization.SOCPPortfolioConstraint
SOCPPortfolioConstraint.ConstraintViolationException, SOCPPortfolioConstraint.Variable

- ### Nested classes/interfaces inherited from interface com.numericalmethod.suanshu.analysis.function.Function
Function.EvaluationException

- ### Constructor Summary

Constructors (Constructor and Description):
PortfolioRiskExactSigma(Matrix Sigma)
Transforms the portfolio risk term, $$y^{\top}\Sigma\;y\leq t_1$$, into the standard SOCP form when the exact covariance matrix is used.
PortfolioRiskExactSigma(Matrix Sigma, PortfolioRiskExactSigma.MatrixRoot root)
Transforms the portfolio risk term, $$y^{\top}\Sigma\;y\leq t_1$$, into the standard SOCP form when the exact covariance matrix is used.

- ### Method Summary

All Methods (Modifier and Type / Method and Description):
boolean areAllConstraintsSatisfied(Vector x)
Checks whether all SOCP constraints represented by this portfolio constraint are satisfied.
int dimensionOfDomain()
Get the number of variables the function has.
int dimensionOfRange()
Get the dimension of the range space of the function.
Double evaluate(Vector y)
Evaluate the function f at x, where x is from the domain.
Matrix Sigma()

- ### Methods inherited from class com.numericalmethod.suanshu.optimization.multivariate.constrained.convex.sdp.socp.problem.portfoliooptimization.SOCPPortfolioConstraint
getVariables, iterator, newSOCPGeneralConstraints

- ### Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

- ### Methods inherited from interface java.lang.Iterable
forEach, spliterator

- ### Constructor Detail

#### PortfolioRiskExactSigma

public PortfolioRiskExactSigma(Matrix Sigma, PortfolioRiskExactSigma.MatrixRoot root)
Transforms the portfolio risk term, $$y^{\top}\Sigma\;y\leq t_1$$, into the standard SOCP form when the exact covariance matrix is used.
Parameters:
Sigma - the covariance matrix
root - the method to compute the root of a matrix

#### PortfolioRiskExactSigma

public PortfolioRiskExactSigma(Matrix Sigma)
Transforms the portfolio risk term, $$y^{\top}\Sigma\;y\leq t_1$$, into the standard SOCP form when the exact covariance matrix is used.
Parameters:
Sigma - the covariance matrix

- ### Method Detail

#### Sigma

public Matrix Sigma()
Specified by: Sigma in class SOCPRiskConstraint

#### areAllConstraintsSatisfied

public boolean areAllConstraintsSatisfied(Vector x)
Checks whether all SOCP constraints represented by this portfolio constraint are satisfied. This constraint is generated from the objective function to find the optimal solution, so it cannot be "violated".
Specified by: areAllConstraintsSatisfied in class SOCPPortfolioConstraint
Parameters: x - a portfolio solution or allocation; the asset weights
Returns: true

#### evaluate

public Double evaluate(Vector y)
Description copied from interface: Function
Evaluate the function f at x, where x is from the domain.
Parameters: y - x
Returns: f(x)

#### dimensionOfDomain

public int dimensionOfDomain()
Description copied from interface: Function
Get the number of variables the function has. For example, for a univariate function, the domain dimension is 1; for a bivariate function, the domain dimension is 2.
Returns: the number of variables

#### dimensionOfRange

public int dimensionOfRange()
Description copied from interface: Function
Get the dimension of the range space of the function. For example, for a Rn->Rm function, the dimension of the range is m.
Returns: the dimension of the range
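The small numeric sketch promised above: it is not part of the SuanShu API, just a Python/numpy illustration of how the SOCP data A1, C1, b1, d1 from the class description could be assembled, assuming Sigma is symmetric positive semidefinite so that a matrix square root exists. The root is taken by diagonalization, echoing the PortfolioRiskExactSigma.Diagonalization option.

import numpy as np

def socp_risk_data(Sigma):
    # Sigma = V diag(w) V^T  =>  Sigma^(1/2) = V diag(sqrt(w)) V^T
    w, V = np.linalg.eigh(Sigma)
    root = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    n = Sigma.shape[0]
    A1T = np.block([[root, np.zeros((n, 1))],
                    [np.zeros((1, n)), np.full((1, 1), 0.5)]])
    C1 = np.concatenate([np.zeros(n), [-0.5]])
    b1 = np.concatenate([np.zeros(n), [0.5]])
    d1 = 0.5
    return A1T, C1, b1, d1

# Check the stated equivalence on a random point z = (y, t1):
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
A1T, C1, b1, d1 = socp_risk_data(Sigma)
y, t1 = np.array([0.3, 0.7]), 1.5
z = np.concatenate([y, [t1]])
lhs = np.linalg.norm(A1T @ z + C1)   # ||A1^T z + C1||_2
rhs = b1 @ z + d1                    # b1^T z + d1
# lhs^2 - rhs^2 equals y' Sigma y - t1, so lhs <= rhs iff y' Sigma y <= t1
assert np.isclose(lhs**2 - rhs**2, y @ Sigma @ y - t1)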
2017-07-22 20:56:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.289511501789093, "perplexity": 2489.5459638056186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424148.78/warc/CC-MAIN-20170722202635-20170722222635-00023.warc.gz"}
https://fkhi19.kattis.com/problems/fkhi19.dulmal
# Dulmál

This time around you have been tasked with the job of creating a cipher for secret transmissions. Why? I suppose I could tell you, but you have to promise you'll tell no one! Beneath The University of Iceland is a secret society of cave dwellers that have true control over the university. A secret council that stays hidden so no one can disturb their schemes. They recently built a supercomputer out of several hundred GPUs that they somehow acquired. But now comes the problem. Now they have to transmit data from this supercomputer through The University of Iceland's wifi, and the lizard people dwelling in the caves want to be completely sure that no one can read the data even if they were to somehow intercept it, no one but themselves of course.

To solve this problem, one of the lizards suggested that they agree on an alphabet along with one secret number and then communicate via a series of numbers. A number would then be decoded into a letter by raising the secret number to that power and then taking the remainder when the result is divided by the number of letters in the alphabet plus one. If the remainder were $r$, the resulting letter would be the $r$-th letter in the alphabet (counting from one).

The other lizards quite liked this suggestion but weren't convinced it would work. They weren't sure if all the letters could be encoded such that this decoding method would work. The lizard suggesting this cipher wasn't able to prove that all letters could be encoded in this fashion. That's where you come in! You have to deduce if a given secret number is valid. The lizards are very computer-science oriented, so of course they don't just want a one-off answer but a program that can solve the problem in general; that way they don't need to ask for help again later.

## Input

The only line in the input contains two integers $2 \leq n \leq 10^9$ and $1 \leq k \leq n$, where $n$ is the number of letters in the alphabet and $k$ is the chosen secret number.

## Output

One line saying 'Gild leynitala!' ('Valid secret number' in Icelandic) if the secret number is valid, 'Ogild leynitala!' ('Invalid secret number' in Icelandic) otherwise.

Sample Input 1:
4 3
Sample Output 1:
Gild leynitala!

Sample Input 2:
5 3
Sample Output 2:
Ogild leynitala!

Sample Input 3:
6 2
Sample Output 3:
Ogild leynitala!

CPU Time limit: 2 seconds
Memory limit: 1024 MB
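The statement leaves the number theory implicit. One consistent reading, which matches all three samples, is that $k$ is valid exactly when the powers of $k$ modulo $n+1$ produce every letter index $1..n$, i.e. when $n+1$ is prime and $k$ is a primitive root modulo $n+1$. The following Python sketch works under that assumption (the bases used make Miller-Rabin deterministic far beyond $10^9$):

import sys

def is_prime(m):
    if m < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if m % p == 0:
            return m == p
    d, s = m - 1, 0
    while d % 2 == 0:
        d //= 2; s += 1
    for a in small:                      # deterministic for m < 3.3e24
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = x * x % m
            if x == m - 1:
                break
        else:
            return False
    return True

def prime_factors(m):
    fs, p = set(), 2
    while p * p <= m:
        while m % p == 0:
            fs.add(p); m //= p
        p += 1
    if m > 1:
        fs.add(m)
    return fs

def is_valid(n, k):
    m = n + 1
    if not is_prime(m):
        return False
    # k is a primitive root mod m iff k^(n/p) != 1 (mod m) for every prime p | n
    return all(pow(k, n // p, m) != 1 for p in prime_factors(n))

n, k = map(int, sys.stdin.read().split())
print("Gild leynitala!" if is_valid(n, k) else "Ogild leynitala!")

As a check: for n=4, k=3, n+1=5 is prime and 3 generates 3, 4, 2, 1 modulo 5 (valid); for n=5, n+1=6 is not prime (invalid); for n=6, k=2, the powers of 2 modulo 7 cycle through only 2, 4, 1 (invalid), matching the samples.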
2022-06-29 04:02:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46166011691093445, "perplexity": 983.025242301607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103620968.33/warc/CC-MAIN-20220629024217-20220629054217-00750.warc.gz"}
https://www.overleaf.com/learn/latex/Questions/Including_displayed_mathematics_(equations_and_formulae)
## Including displayed mathematics (equations and formulae)

This is the 11th video in a series of 21 by Dr Vincent Knight of Cardiff University. In addition to including mathematics within a sentence, we can display equations and formulae on their own. In this tutorial we see how to write simple equations, and introduce the commands \frac for fractions and \sum for summations. To try this for yourself, click here to open the 'Displayed mathematics' example.
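As a quick illustration (not taken from the video itself), a displayed equation that uses both commands looks like this in LaTeX:

\documentclass{article}
\begin{document}
The mean of $n$ values can be displayed on its own line:
\[
  \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i
\]
\end{document}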
2018-10-23 01:30:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8575480580329895, "perplexity": 1829.3584128678624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515564.94/warc/CC-MAIN-20181023002817-20181023024317-00048.warc.gz"}
https://testbook.com/objective-questions/mcq-on-bar-graph--5eea6a1639140f30f369f55b
# Bar Graph MCQ Quiz - Objective Question with Answer for Bar Graph - Download Free PDF

## Question 1

### Comprehension:
Direction: The following bar graph shows the number of articles (in thousands) sold in different years. Study this graph and answer the questions.

# If the number of articles sold by Company B in 2015 is 78% more than the difference between the articles sold by Company A and Company B in 2014, then the approximate increase or decrease in the articles sold by Company B in 2015 over the previous year is (approx.):

1. 77% Increases 2. 87% Decreases 3. 77% Decreases 4. 12% Increases

Option 2 : 87% Decreases

## Detailed Solution

Calculations:
The articles sold by company A in 2014 = 1,59,000
The articles sold by company B in 2014 = 1,48,000
Difference between the articles sold by company A and company B in 2014 = 1,59,000 – 1,48,000 = 11,000
According to the question, the number of articles sold by company B in 2015 = 11,000 × 178/100 = 19,580
Decrease in numbers = 1,48,000 – 19,580 = 1,28,420
Decrease % = (1,28,420/1,48,000) × 100 = 86.77% ≈ 87%
∴ The articles sold by company B in 2015 decreased by approximately 87% from the previous year. (A quick script check of this computation appears at the end of this quiz.)

## Question 2

### Comprehension:
Directions: Look at the following figure and answer the questions given below. The bar graph below shows the results of a survey on five different food items in the hotel liked by males and females.

# Females who like burgers are approximately what percentage more than males who like burgers?

1. 73% 2. 35% 3. 100% 4. 62%

Option 4 : 62%

## Detailed Solution

Given:
Females who like burgers = 2100
Males who like burgers = 1300
Formula used: Percentage change = {(Final value – Initial value) × 100}/Initial value
Calculation:
The difference between females and males who like burgers = 2100 – 1300 = 800
Percentage = (800/1300) × 100 = 61.5% ≈ 62%
∴ Females who like burgers are about 62% more than males who like burgers.

## Question 3

### Comprehension:
Directions: Look at the following figure and answer the questions given below. The bar graph below shows the results of a survey on five different food items in the hotel liked by males and females.

# Find the ratio of people who like sandwiches to those who like hotdogs.

1. 117 ∶ 23 2. 27 ∶ 59 3. 4 ∶ 5 4. 139 ∶ 63

Option 3 : 4 ∶ 5

## Detailed Solution

Calculation:
Total number of people who like sandwiches = 1500 + 1300 = 2800
Total number of people who like hotdogs = 1000 + 2500 = 3500
Ratio = 2800/3500 = 4 : 5
∴ The ratio of people who like sandwiches to those who like hotdogs is 4 : 5.

# The first graph shows the number of students (boys and girls, in thousands) in Grade 1 to Grade 5. And the bar graph below shows the percentage share of five schools in the total students studying in that class. Based on the information, if the boys-to-girls ratio in school D is 3 : 1, then the number of boys studying in School D is:

1. 25550 2. 26250 3. 28750 4. 22450

Option 2 : 26250

## Detailed Solution

Given:
Total number of students (boys and girls, in thousands) in Grade 1 to Grade 5 = 20 + 15 + 12 + 22 + 8 + 18 + 14 + 9 + 10 + 12 = 140 thousand, or 1,40,000
Percentage of students in school D = 25%
Ratio of boys to girls in school D = 3 : 1
Formula: x% of any number = Actual number × (x/100)
Calculation:
Number of students in school D = 1,40,000 × (25/100) = 35,000
Number of boys in school D = 35,000 × (3/4) = 26,250

Shortcut Trick
The number of boys in school D is 3x according to the given ratio. That means the number must be a multiple of 3, i.e. divisible by 3. By the divisibility rule of 3, only option 2 is a possible solution. Check: the digit sum of 26250 is 2 + 6 + 2 + 5 + 0 = 15, which is divisible by 3.
∴ The number of boys in school D is 26250.

## Question 5

### Comprehension:
Directions: The bar graph shows the number of employees working in the different departments of a company. Study the diagram and answer the following questions.

# The number of employees of department G is greater than that of department C by ___________.

1. 42.8% 2. 75% 3. 150% 4. 84.2%

Option 2 : 75%

## Detailed Solution

Number of employees of department G = 350
Number of employees in department C = 200
Difference between the number of employees of departments G and C = (350 - 200) = 150
Required percentage = (Difference in the number of employees of departments G and C)/(Number of employees of department C) × 100%
⇒ (150/200) × 100% = (3/4) × 100% = 75%
∴ Required percentage = 75%

## Question 6

### Comprehension:
Directions: The bar graph shows the number of employees working in the different departments of a company. Study the diagram and answer the following questions.

# If the average compensation of an employee of department A is Rs 40,000 per month, then what is the total compensation (in Rs lakhs) of all employees of department A per month?

1. 800 2. 40 3. 80 4. 400

Option 3 : 80

## Detailed Solution

Total compensation of all the employees of Department A = (Average compensation of an employee of Department A) × (Total number of employees of Department A)
⇒ Rs. 40000 × 200 = Rs. 8000000 = Rs. 80 lakhs
∴ Total compensation of all the employees of Department A = Rs. 80 lakhs

## Question 7

### Comprehension:
The bar chart represents the number of fiction and non-fiction books in four libraries L1, L2, L3 and L4. Consider the bar chart and answer the questions based on it.

# What is the percentage difference between the total number of fiction books in libraries L3 and L4 and the non-fiction books in L3 and L4?

1. 13.33% 2. 11.76% 3. 6.67% 4. 0%

Option 1 : 13.33%

## Detailed Solution

Total number of fiction books in libraries L3 and L4 = 500 + 350 = 850
Total number of non-fiction books in libraries L3 and L4 = 450 + 300 = 750
∴ Percentage difference = [(850 - 750)/750] × 100 = 13.33%

# The following graph shows the expenditure on the education sector by the Indian government for the years 2014-15 to 2019-20. If the government plans to increase the expenditure by 30% over the average of the expenditures in 2016-17, 2017-18 and 2018-19, then the approximate amount (in billions of rupees) to be spent in 2019-20 is:

1. Rs. 1,162.6 2. Rs. 1,129.5 3. Rs. 1087.3 4. Rs. 1,033.5

Option 4 : Rs. 1,033.5

## Detailed Solution

Given:
Expenditure in 2016-2017 = 720
Expenditure in 2017-2018 = 820
Expenditure in 2018-2019 = 845
Formula: Average = Sum of all observations/Total number of observations
Calculation:
Average expenditure of all three years = (720 + 820 + 845)/3 = 2385/3 = 795
∴ Expenditure in 2019-2020 = 795 × (130/100) = 1033.5

# The bar chart given below shows the total exports (in ₹1000 crores) of a country for 7 consecutive years Y1, Y2, Y3, Y4, Y5, Y6 and Y7. What is the average of the exports from years Y1 to Y7?

1. 931.45 2. 886.19 3. 964.28 4. 847.87

Option 3 : 964.28

## Detailed Solution

CALCULATION:
Average exports = (500 + 750 + 1000 + 1200 + 1100 + 900 + 1300)/7
∴ Average exports from years Y1 to Y7 = 6750/7 = 964.28

## Question 10

### Comprehension:
The bar chart represents the number of fiction and non-fiction books in four libraries L1, L2, L3 and L4. Consider the bar chart and answer the questions based on it.

# The ratio of the total number of non-fiction to fiction books in all libraries is

1. 7/6 2. 6/7 3. 15/17 4. 17/15

Option 2 : 6/7

## Detailed Solution

From the given data,
Total number of fiction books in L1, L2, L3 and L4 = (500 + 400 + 500 + 350) = 1750
Total number of non-fiction books in L1, L2, L3 and L4 = (350 + 400 + 450 + 300) = 1500
∴ Ratio of the total number of non-fiction to fiction books in all libraries = 1500 : 1750 = 6 : 7

# The following graph shows the production of paddy on the agricultural land available at the 6 places S1, S2, S3, S4, S5, S6 (area in hundreds of hectares and production in thousands of tonnes). Based on the information, the production ratio (production to land) is highest and lowest, respectively, in:

1. S5, S3 2. S2, S1 3. S2, S4 4. S3, S6

Option 3 : S2, S4

## Detailed Solution

Calculation:
The production ratio of S1 = 765/415 = 1.84
The production ratio of S2 = 856/390 = 2.19
The production ratio of S3 = 729/356 = 2.04
The production ratio of S4 = 964/550 = 1.75
The production ratio of S5 = 580/280 = 2.07
The production ratio of S6 = 865/440 = 1.96
So the production ratio (production to land) is highest for S2 and lowest for S4.

## Question 12

### Comprehension:
Directions: Look at the following figure and answer the questions given below.

# What is the percentage of the average sales of Munch compared to the average sales of Snickers over both years?

1. 94.46% 2. 95.7% 3. 92.25% 4. 122.5%

Option 4 : 122.5%

## Detailed Solution

Given:
Total sales of Munch = (30 + 19) lakh = 49 lakh
Total sales of Snickers = (18 + 22) lakh = 40 lakh
Formula used:
Percentage = (Average sales of Munch/Average sales of Snickers) × 100
Average sales = Total sales/Number of years
Calculations:
Average sales of Munch = 49/2 = 24.5 lakh
Average sales of Snickers = 40/2 = 20 lakh
Percentage = (24.5/20) × 100 ⇒ 2450/20 ⇒ 122.5%
The percentage of the average sales of Munch compared to the average sales of Snickers is 122.5%.

# The given Bar Graph presents the number of different types of vehicles (in lakhs) exported by a company during 2014 and 2015. The average number of type A, B and D vehicles exported in 2015 was x% less than the number of type E vehicles exported in 2014. What is the value of x?

1. 25 2. 20 3. 18 4. 24

Option 3 : 18

## Detailed Solution

The average number of type A, B and D vehicles exported in 2015 = (35 + 33 + 55)/3 = 41 lakh
The number of type E vehicles exported in 2014 = 50
According to the question, x% = (50 – 41)/50 × 100 = 18%

# Study the given histogram that shows the marks obtained by students in an examination and answer the question that follows. If the total marks obtained by students are represented as a pie chart, then the central angle of the sector representing marks 200 or more but less than 300 is: (correct to the nearest degree)

1. 154° 2. 128° 3. 68° 4. 88°

Option 1 : 154°

## Detailed Solution

Formula used: Central angle of a pie chart sector = (given frequency/total frequency) × 360°
Calculation:
Number of students who scored marks between 200 and 300 = 45 + 60 = 105
Total students = 30 + 45 + 60 + 35 + 40 + 35 = 245
Central angle = (105/245) × 360° = 154°
∴ The central angle of the sector representing marks 200 or more but less than 300 is 154°.

## Question 15

### Comprehension:
Directions: Study the following graph carefully and answer the question that follows.

# The area cultivated in West Bengal is what percentage more than that cultivated in Tamil Nadu?

1. 37.5% 2. 25% 3. 50% 4. 30%

Option 1 : 37.5%

## Detailed Solution

⇒ Area cultivated in West Bengal = 22%
⇒ Area cultivated in Tamil Nadu = 16%
⇒ Required % = [(22 – 16)%/16%] × 100 = 37.5%

## Question 16

### Comprehension:
Direction: The bar graph shows the population of different countries. Study the diagram and answer the following questions.

# If country D spends $120 per person on health annually, then how much does it spend (in $ millions) on health for its entire population?

1. 6000 2. 600 3. 2400 4. 240

Option 1 : 6000

## Detailed Solution

Population of country D = 50 million
If $120 is spent per person in country D, then the total amount spent = 50 million × 120 = $6000 million
∴ Required answer = $6000 million

## Question 17

### Comprehension:
The bar chart represents the number of fiction and non-fiction books in four libraries L1, L2, L3 and L4. Consider the bar chart and answer the questions based on it.

# The ratio of the total books of libraries L1 and L3 together to L2 and L4 together is

1. 29 : 36 2. 33 : 32 3. 36 : 29 4. 32 : 33

Option 3 : 36 : 29

## Detailed Solution

From the given data,
Total books of libraries L1 and L3 = (500 + 350) + (500 + 450) = 850 + 950 = 1800
Total books of libraries L2 and L4 = (400 + 400) + (350 + 300) = 800 + 650 = 1450
∴ Ratio of the total books of libraries L1 and L3 to L2 and L4 = 1800 : 1450 = 36 : 29

## Question 18

### Comprehension:
Directions: The bar graph shows the number of employees working in the different departments of a company. Study the diagram and answer the following questions.

# What is the ratio of the number of employees of department A to that of department F?

1. 7 : 4 2. 5 : 7 3. 7 : 5 4. 4 : 7

Option 4 : 4 : 7

## Detailed Solution

Number of employees in department A = 200
Number of employees in department F = 350
Ratio of the number of employees of department A to that of department F = 200 : 350 = 4 : 7
∴ Required ratio = 4 : 7

# The following graph gives the annual profit earned by a company during the period 1996-2001. Study the graph carefully and answer the questions that follow. Profit = Income - Expenditure. The expenditure of the company during the year 1996 was Rs. 30 lakhs. The income of the company in that year was:

1. 45 2. 65 3. 55 4. 75

Option 4 : 75

## Detailed Solution

Given:
Expenditure of the company during the year 1996 = Rs. 30 lakhs
Profit of the company during the year 1996 = 45 lakhs
Formula: Profit = Income - Expenditure
Calculation:
According to the given formula, 45 = Income - 30
⇒ Income = 45 + 30
∴ Income = Rs. 75 lakhs

## Question 20

### Comprehension:
Directions: The bar graph shows the number of employees working in the different departments of a company. Study the diagram and answer the following questions.

1. C 2. D 3. A 4. B
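The quick script check promised under Question 1: many of the Detailed Solutions above reuse the same percentage-change formula, and a few lines of Python (a hypothetical helper, not part of the quiz) reproduce the ≈87% decrease:

def pct_change(initial, final):
    # Percentage change = (final - initial) / initial * 100
    return (final - initial) / initial * 100

articles_2014 = 148_000                            # Company B, 2014
articles_2015 = (159_000 - 148_000) * 178 / 100    # 78% more than the difference
print(round(pct_change(articles_2014, articles_2015)))  # -87, i.e. ~87% decrease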
2021-09-24 11:36:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2567737400531769, "perplexity": 2813.599494474359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00611.warc.gz"}
https://dsp.stackexchange.com/questions/25948/adjust-mean-of-signal-using-exponential
# adjust mean of signal using exponential I have discrete signals whose values are between 0 and 1. I wish to post-process such a signal such that the mean of its values equals 0.5, yet keeping maximum and minimum values 0/1 intact. Intuitively, I think an 'element-wise' exponential would be the right approach, yet I can't seem to come up with the method to determine the value of the appropriate exponential. Fictive matlab example to illustrate: >> s=[0.2 0.7 1 0.5 0.4 0.2 0 0.3]; >> mu=mean(s) mu = 0.41250 What is the best method to determine the 0.69998 exponential ? • This is not an exponential but a power law. – Yves Daoust Sep 21 '15 at 8:26 • @YvesDaoust: This is a subtlety that I don’t master. Can you please elaborate? – Stevo Sep 21 '15 at 12:44 • Subtle. You are performing an exponentiation of each of your samples $s$, with exponent or power $a$. It is defined as $s^a = \exp{a \log{s}}$ for $s>0$, a sort of exponential of a $\log{s}$, called a power-law. An "exponential" of your data could be thought to take a form like $\exp{a s}$. – Laurent Duval Sep 22 '15 at 17:31 You need to do it numerically, as a related toy problem $2^a + 3^a = 4$ appears to have no symbolic solution. Bisection method (binary search) is probably the easiest solver to implement: minexponent = 0; maxexponent = 2; targetsum = 10; precision = 32; loop precision times { exponent = (minexponent + maxexponent)*0.5; if (sum(s.^exponent) < targetsum) { maxexponent = exponent; } else { minexponent = exponent; } } return exponent; The smallest possible exponent value is 0 for your problem. If you are unsure about the largest possible exponent value, you can test the easy-to-check exponents 1, 2, 4, 8, 16 etc. (as many as needed before the sum goes below the target) before the above code, exponentiating into an auxiliary array by multiplying each element by itself. Note that if all of the elements are zero, or if all of the elements are one, then you cannot get any other mean than that by exponentiation. The plot below shows 1000 runs of the algorithm, and the value of exponent at each iteration. • Thanks for your answer. It makes it evident that I indeed would need to go down a numerical root-finding approach. The bisection method is probably good-enough for my purpose and indeed simplest (vs other root finding methods I've just read about) – Stevo Sep 21 '15 at 12:56
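For completeness, here is a runnable Python transcription of the accepted answer's bisection (my own sketch, not the answerer's code). It assumes the signal has values in [0, 1] with at least one value strictly between 0 and 1, so that mean(s**a) decreases monotonically in the exponent a:

import numpy as np

def find_exponent(s, target_mean=0.5, iters=60):
    # Bisection on a so that mean(s**a) == target_mean.
    s = np.asarray(s, dtype=float)
    lo, hi = 0.0, 1.0
    while np.mean(s**hi) > target_mean:   # grow the bracket if needed
        hi *= 2.0
    for _ in range(iters):
        a = 0.5 * (lo + hi)
        if np.mean(s**a) < target_mean:
            hi = a
        else:
            lo = a
    return a

s = [0.2, 0.7, 1, 0.5, 0.4, 0.2, 0, 0.3]
print(find_exponent(s))   # ~0.69998, matching the value in the question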
2021-01-21 01:46:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6793827414512634, "perplexity": 773.8629059714365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522150.18/warc/CC-MAIN-20210121004224-20210121034224-00542.warc.gz"}
https://www.scipedia.com/public/Bouziane_Bouzerd_2020b
## Abstract

The interfacial crack in bimaterials is a very interesting problem for composite materials, and it has received particular attention from several researchers. In this study, we propose a numerical model of the interfacial crack between two orthotropic materials using a special mixed finite element. For the calculation of the energy release rate, a technique based on the association of the present mixed finite element with the virtual crack extension method was used. The numerical model proposed in this work was used to study a problem of interfacial cracking in bimaterials. Two cases were treated: isotropic and orthotropic bimaterials. The results obtained using the present element were compared with the values of the analytical solution and of other numerical models found in the literature.

Keywords: Interfacial crack, mixed finite element, virtual crack extension method, energy release rate, orthotropic bimaterials

## 1. Introduction

Interfacial fracture is a complex phenomenon that is still poorly understood, which alone would be enough to justify its study. Indeed, the interface located between two dissimilar materials is, on the mechanical level, a weak point: when these materials are subjected to stresses, of thermal origin for example, fracture of the interface is a commonly observed failure mode. Moreover, little is known about the mechanical conditions that lead to this fracture. An understanding of interfacial fracture thus represents a significant stake in the field of composite materials.

The problem of the interfacial crack in isotropic bimaterials has been treated by many researchers. We can cite, for example, the work of Williams [1], Erdogan [2,3], England [4], Rice and Sih [5], Hutchinson et al. [6], Rice [7] and Suo and Hutchinson [8]. Cracks along the interface between two anisotropic plates were first treated by Gotoh [9]. The plane-strain case of an interfacial crack between two anisotropic materials was studied by Clements [10], Willis [11], Qu and Bassani [12], Suo [13] and Ni and Nemat-Nasser [14]. Bassani and Qu [15] explicitly resolved the special case of Griffith's problem, and the solution of the general problem was found by Suo [13] and Qu and Li [16]. The crack path in an anisotropic medium was studied theoretically and numerically by Gao et al. [17]; a weak-plane model was adopted to characterize the anisotropic fracture toughness, and the maximum energy release rate criterion was chosen to predict the crack path. The problem of interfacial cracks in anisotropic bimaterials was also treated by Wang et al. [18] and Juan and Dingreville [19]. Based on anisotropic elasticity, Tanaka et al. [20] evaluate the energy release rate by the modified crack closure integral of the finite element method, and convert it to the stress intensity factor for the cases of cracks on elastic symmetry planes. Two approaches have been described by Banks-Sills and Ikeda [21] for considering an interface crack between two anisotropic materials; both approaches have been used for orthotropic and monoclinic materials. The problem of a cracked orthotropic bimaterial was also studied by Bouchemella et al. [22]. Fracture analysis of orthotropic cracked media was investigated by applying the recently developed Extended IsoGeometric Analysis (XIGA) [23] using the T-spline basis functions [24]. The same XIGA method was used by Habib et al. [25] for the analysis of the static fracture behaviour of a crack in orthotropic materials.
Khatir and Wahab [26] used an inverse algorithm based on Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) for single and multiple crack identification in plate structures. The inverse analyses combine experimental fracture mechanics tests with numerical models based on the XIGA method. The eXtended IsoGeometric Analysis combined with Particle Swarm Optimization (PSO) has been used for crack identification in two-dimensional linear elastic problems (plates) based on an inverse problem [27].

In this paper, a numerical model is proposed to study the interfacial crack between two orthotropic materials. This model uses a two-dimensional mixed finite element developed in a natural plane. It is an element with 7 nodes: 5 displacement nodes and 2 stress nodes. The proposed model was used to calculate the energy release rate in a cracked orthotropic bimaterial using a technique that combines the present element with the virtual crack extension method. In this work, two cases of interfacial cracks were treated: an isotropic bimaterial and an orthotropic bimaterial. The results obtained using the present mixed finite element were compared with the values of the analytical solution and of other numerical models found in the literature.

## 2. Numerical modelling of interfacial crack

The bimaterial has been discretized using a special mixed finite element RMQ-7 (Reissner Modified Quadrilateral), as shown in Figure 1(a). The present mixed finite element used in this study is a two-dimensional element with seven nodes: five displacement nodes and two stress nodes, as shown in Figure 1(b). Node 5 coincides with the crack tip. This element was developed by Bouzerd [28] in the physical (${\displaystyle x,y}$) plane, and was reformulated and validated by Bouziane et al. [29] in a natural (${\displaystyle \xi ,\eta }$) plane.

(a) Discretization of bimaterial (b) RMQ-7 element

Figure 1. Discretization of bimaterial and RMQ-7 element

Displacement for the present mixed finite element can be given by

${\displaystyle u=\sum _{i=1}^{5}{N}_{i}{u}_{i}}$ (1)

where ${\textstyle {N}_{i}}$ are the shape functions and ${\textstyle {u}_{i}}$ is the nodal displacement corresponding to node ${\displaystyle i}$. For the present element, the shape functions are given as follows:

${\displaystyle {N}_{1}=-{\frac {1}{4}}\left(1-\xi \right)\left(1-\eta \right)\xi ,\quad {N}_{2}={\frac {1}{4}}\left(1+\xi \right)\left(1-\eta \right)\xi ,\quad {N}_{3}={\frac {1}{4}}\left(1+\xi \right)\left(1+\eta \right),\quad {N}_{4}={\frac {1}{4}}\left(1-\xi \right)\left(1+\eta \right),\quad {N}_{5}={\frac {1}{2}}\left(1-{\xi }^{2}\right)\left(1-\eta \right)}$ (2)

The element stress component is approximated by

${\textstyle \left\{\sigma \right\}=\left[M\right]\left\{\tau \right\}}$ (3)

where ${\displaystyle [M]}$ is the matrix of interpolation functions for stresses and ${\displaystyle \{\tau \}}$ is the vector of nodal stresses.
For the RMQ-7 element (Figure 1(b)), the shape functions ${\displaystyle M_{i2}}$, used to evaluate ${\displaystyle \sigma _{12}}$ and ${\displaystyle \sigma _{22}}$ [29], are obtained for nodes 6 and 7 by

${\displaystyle {M}_{i2}^{6}={\frac {1}{6}}\left(1-2\xi \right)\left(1-2\eta \right),\quad {M}_{i2}^{7}={\frac {1}{6}}\left(1+2\xi \right)\left(1-2\eta \right),\quad i=1,2}$ (4)

The element stiffness matrix [Ke] is given by the following expression:

${\textstyle \left[{K}_{e}\right]=\left[{\begin{matrix}\left[{K}_{\sigma \sigma }\right]&\left[{K}_{\sigma u}\right]\\{\left[{K}_{\sigma u}\right]}^{T}&\left[0\right]\end{matrix}}\right]}$ (5)

where the sub-matrices ${\textstyle \left[{K}_{\sigma \sigma }\right]}$ and ${\textstyle \left[{K}_{\sigma u}\right]}$ are given by the following relations:

${\displaystyle \left[{K}_{\sigma \sigma }\right]=-t\int _{{A}_{e}}{\left[M\right]}^{T}\left[S\right]\left[M\right]d{A}^{e}}$, ${\displaystyle \left[{K}_{\sigma u}\right]=t\int _{{A}_{e}}{\left[M\right]}^{T}\left[B\right]d{A}^{e}}$ (6)

where ${\displaystyle [S]}$ is the compliance matrix, ${\displaystyle [M]}$ is the matrix of interpolation functions for stresses, ${\displaystyle [B]}$ is the strain-displacement matrix of shape function derivatives, ${\displaystyle t}$ is the thickness, ${\displaystyle A^{e}}$ is the element area and ${\displaystyle T}$ indicates the matrix transpose.

## 3. Computation of energy release rate

The virtual crack extension method, associated with the mixed finite element RMQ-7, is used to calculate the energy release rate ${\displaystyle G}$ [28]. In this technique, a first calculation of the deformation energy ${\textstyle {\Pi }_{1}}$ is carried out in the initial configuration of the crack. The crack is then moved an infinitesimal distance ${\textstyle \delta a}$ in the direction of its axis. The deformation energy ${\textstyle {\Pi }_{2}}$ is evaluated again in the second configuration; the energy released during this crack length variation is

${\displaystyle \delta \Pi ={\Pi }_{2}-\,{\Pi }_{1}}$ (7)

The energy release rate ${\displaystyle G}$ will then be evaluated starting from the relation

${\displaystyle G={\frac {\delta \Pi }{\delta a}}}$ (8)

Calculation by the virtual crack extension method requires two finite element analyses. The use of the RMQ-7 element makes it possible to use a single mesh for the calculation of the energy release rate, which represents a considerable saving in computing time and data preparation compared to the traditional techniques, which use two meshes [28]. Indeed, the intermediate displacement node of the RMQ-7 element is associated with the crack tip; consequently, the crack length ${\displaystyle a}$ can be increased by a quantity ${\textstyle \delta a}$ by acting strictly inside the crack element, translating the crack-tip node without disturbing the remainder of the mesh.

With the assumptions on materials and displacements (linear elastic behaviour and small displacements), the solutions ${\textstyle u(a)}$ and ${\textstyle u(a+\delta a)}$ obtained in the structure with a crack length ${\displaystyle a}$ and in the same structure with a crack length ${\textstyle a+\delta a}$ are as close as the disturbance ${\textstyle \delta a}$ is small compared to the dimensions of the crack element.
We can thus write, with a good approximation,

${\displaystyle u\left(a\right)=u(a+\delta a)}$ (9)

Several calculations on simple examples enabled us to confirm the relation in Equation (9), which is theoretically coherent and physically acceptable, considering the assumptions used. If we consider that the external loading does not vary during the increase ${\textstyle \delta a}$, the energy release rate is calculated as follows:

${\displaystyle G=-{\frac {\Pi \left(a+\delta a\right)-\Pi (a)}{\delta a}}}$ (10)

where ${\textstyle \Pi \left(a+\delta a\right)}$ and ${\textstyle \Pi (a)}$ represent respectively the deformation energy of the cracked structure in the configurations ${\textstyle a+\delta a}$ and ${\displaystyle a}$. In its discretized form, the deformation energy is written

${\displaystyle \Pi ={\frac {1}{2}}\sum _{i=1}^{ne}{\left\{u\right\}}_{i}^{T}{\left[K\right]}_{i}{\left\{u\right\}}_{i}}$ (11)

where ${\displaystyle ne}$ is the total number of elements in the discretized structure, ${\textstyle {\left\{u\right\}}_{i}}$ the column vector containing the nodal values of element ${\displaystyle i}$, ${\textstyle {\left[K\right]}_{i}}$ the elementary matrix of element ${\displaystyle i}$, and the exponent ${\displaystyle T}$ indicates the transposed vector. By substitution of Equation (11) in Equation (10), the expression of the energy release rate ${\displaystyle G}$ becomes

${\displaystyle G=-{\frac {1}{2\delta a}}\left[\sum _{i=1}^{ne}{\left\{u(a+\delta a)\right\}}_{i}^{T}{\left[K(a+\delta a)\right]}_{i}{\left\{u(a+\delta a)\right\}}_{i}-\sum _{i=1}^{ne}{\left\{u(a)\right\}}_{i}^{T}{\left[K(a)\right]}_{i}{\left\{u(a)\right\}}_{i}\right]}$ (12)

Taking account of Equation (9), the expression in Equation (12) can be written in the following form:

${\displaystyle G=-{\frac {1}{2\delta a}}\sum _{i=1}^{ne}{\left\{u(a+\delta a)\right\}}_{i}^{T}\left[{\left[K(a+\delta a)\right]}_{i}-{\left[K(a)\right]}_{i}\right]{\left\{u(a+\delta a)\right\}}_{i}}$ (13)

and as only the crack element is disturbed, ${\displaystyle G}$ results more simply in the relation

${\displaystyle G=-{\frac {1}{2\delta a}}{\left\{u(a+\delta a)\right\}}_{f}^{T}\left[{\left[K(a+\delta a)\right]}_{f}-{\left[K(a)\right]}_{f}\right]{\left\{u(a+\delta a)\right\}}_{f}}$ (14)

where the index ${\displaystyle f}$ indicates that the matrix and vector used are those of the crack element. The expression in Equation (14) shows that only the crack element is concerned; consequently, it is enough to place in the mesh another RMQ-7 element equivalent to that placed on the crack, in other words an element which has the same geometry and is made up of the same material. The energy release rate is calculated according to the relation in Equation (14) with only one discretization, starting from the difference of the elementary matrices of the element containing the crack, representing the state ${\textstyle a+\delta a}$, and its equivalent element, representing the state ${\displaystyle a}$.
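In computational form, Equation (14) amounts to a single quadratic form. The following Python fragment is a minimal sketch (not from the original paper); the two crack-element stiffness matrices and the nodal vector are assumed to come from the finite element solver:

import numpy as np

def energy_release_rate(u_f, K_a, K_a_da, delta_a):
    # Equation (14): G from the difference of the crack element's
    # stiffness matrices in configurations a and a + delta_a.
    dK = K_a_da - K_a                 # [K(a+da)]_f - [K(a)]_f
    return -0.5 * (u_f @ dK @ u_f) / delta_a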
The expression in Equation (14) can be written differently as follows:

${\displaystyle G=-{\frac {1}{2}}{\left\{u\right\}}_{f}^{T}{\frac {{\left[\delta K\right]}_{f}}{\delta a}}{\left\{u\right\}}_{f}}$ (15)

In practice, we carry out the discretization of the cracked structure in the configuration ${\textstyle a+\Delta a}$, and we locate the element containing the crack as well as its equivalent element representing the configuration ${\displaystyle a}$, in order to save their elementary matrices during the assembly operation and before the application of the boundary conditions. After the resolution phase, the nodal values of the crack element are extracted, and a special module is used to evaluate the energy release rate according to the following formula:

${\displaystyle G=-{\frac {1}{2}}{\left\{u\right\}}_{f}^{T}{\frac {{\left[\Delta K\right]}_{f}}{\Delta a}}{\left\{u\right\}}_{f}}$ (16)

## 4. Numerical examples

### 4.1 Presentation of the example

The example treated in this study is a centered interfacial crack in a bimaterial plate. This example was studied by Chow et al. [30] under plane-strain conditions. This rectangular bimaterial is made of materials #1 and #2 and subjected to a tension ${\displaystyle \sigma _{22}^{0}=1}$ MPa. As shown in Figure 2, the dimensions of the bimaterial are the half crack length a = 1 mm, the width w = 20a and the height h = 20a. Two cases are treated in this example. In the first case it is assumed that materials #1 and #2 are isotropic, and in the second case the materials are considered to be orthotropic (carbon composites: AS4/3501-6) with lay-up angles of 0 and 90 degrees. The properties of the materials used are defined in Table 1.

Figure 2. Bimaterial plate

A stress ${\displaystyle \sigma _{11}^{0}}$ is applied to the side of material #2. In the case of plane strain, this stress is expressed by

${\displaystyle {\sigma }_{11}^{0}=\left[{\frac {{\nu }_{12\#2}+{\nu }_{13\#2}{\nu }_{32\#2}}{1+{\nu }_{13\#2}{\nu }_{31\#2}}}-{\frac {{\nu }_{12\#1}+{\nu }_{13\#1}{\nu }_{32\#1}}{1+{\nu }_{13\#2}{\nu }_{31\#2}}}\left({\frac {{E}_{1\#2}}{{E}_{1\#1}}}\right)\right]{\sigma }_{22}^{0}}$ (17)

where ${\displaystyle E}$ is the Young's modulus and ${\displaystyle \nu }$ is the Poisson's ratio of the material.

Table 1. Material properties

| Isotropic | Orthotropic (0 degree) | Orthotropic (90 degree) |
|---|---|---|
| ${\displaystyle G_{\#1}=1}$ GPa | ${\displaystyle E_{3}=142}$ GPa | ${\displaystyle E_{1}=142}$ GPa |
| ${\displaystyle \nu _{\#1}=\nu _{\#2}=0.3}$ | ${\displaystyle E_{1}/E_{3}=E_{2}/E_{3}=6.91\times 10^{-2}}$ | ${\displaystyle E_{2}/E_{1}=E_{3}/E_{1}=6.91\times 10^{-2}}$ |
| | ${\displaystyle G_{12}/E_{3}=2.68\times 10^{-2}}$ | ${\displaystyle G_{23}/E_{1}=2.68\times 10^{-2}}$ |
| | ${\displaystyle G_{13}/E_{3}=G_{23}/E_{3}=4.23\times 10^{-2}}$ | ${\displaystyle G_{13}/E_{1}=G_{12}/E_{1}=4.23\times 10^{-2}}$ |
| | ${\displaystyle \nu _{31}=\nu _{32}=\nu _{12}=0.3}$ | ${\displaystyle \nu _{12}=\nu _{13}=\nu _{23}=0.3}$ |

In the example above, the authors (Chow et al. 1995) calculate and compare the stress intensity factors ${\displaystyle K_{1}}$ and ${\displaystyle K_{2}}$; the energy release rate is calculated from ${\displaystyle K_{1}}$ and ${\displaystyle K_{2}}$ by the expression given by Qu and Bassani [31]. The results are summarized in Table 2 for the two materials (isotropic and orthotropic).
Table 2. Energy release rate in the numerical example

| Material | Case | Exact solution | Hybrid element (205 nodes) | Mutual integral (679 nodes) | Mutual integral (237 nodes) | Extrapolation (679 nodes) | Extrapolation (237 nodes) |
|---|---|---|---|---|---|---|---|
| Isotropic | ${\displaystyle G_{\#2}/G_{\#1}=1}$ | 10.988E-04 | 11.290E-04 | 11.302E-04 | 11.253E-04 | 13.132E-04 | 12.554E-04 |
| | ${\displaystyle G_{\#2}/G_{\#1}=5}$ | 06.453E-04 | 06.606E-04 | 06.614E-04 | 06.592E-04 | 07.649E-04 | 07.326E-04 |
| | ${\displaystyle G_{\#2}/G_{\#1}=50}$ | 05.353E-04 | 05.460E-04 | 05.461E-04 | 05.444E-04 | 06.287E-04 | 06.026E-04 |
| Orthotropic | [0/0] | 03.170E-04 | 03.257E-04 | 03.262E-04 | 03.247E-04 | 03.793E-04 | 03.540E-04 |
| | [90/90] | 02.200E-04 | 02.221E-04 | 02.216E-04 | 02.221E-04 | 02.549E-04 | 02.480E-04 |
| | [0/90] | 02.640E-04 | 02.685E-04 | 02.679E-04 | 02.675E-04 | 03.094E-04 | 03.021E-04 |

### 4.2 Results and discussions

The mixed finite element RMQ-7 is now used to calculate the energy release rate of the cracked bimaterial plate. For this purpose three meshes (207, 237 and 677 nodes) are used, in order to be able to compare the results of the RMQ-7 element with the results of the other elements using approximately the same number of nodes. The results obtained are summarized in Table 3.

Table 3. Energy release rate obtained using the RMQ-7 element

| Material | Case | RMQ-7 (207 nodes) | RMQ-7 (237 nodes) | RMQ-7 (677 nodes) |
|---|---|---|---|---|
| Isotropic | ${\displaystyle G_{\#2}/G_{\#1}=1}$ | 11.272E-04 | 11.205E-04 | 11.126E-04 |
| | ${\displaystyle G_{\#2}/G_{\#1}=5}$ | 06.393E-04 | 06.486E-04 | 06.438E-04 |
| | ${\displaystyle G_{\#2}/G_{\#1}=50}$ | 05.274E-04 | 05.278E-04 | 05.297E-04 |
| Orthotropic | [0/0] | 03.225E-04 | 03.237E-04 | 03.167E-04 |
| | [90/90] | 02.260E-04 | 02.293E-04 | 02.168E-04 |
| | [0/90] | 02.691E-04 | 02.764E-04 | 02.617E-04 |

According to the number of nodes, the numerical results of the energy release rate for the different methods are listed in Tables 4, 5 and 6 for both the isotropic and the anisotropic bimaterial. The differences from the exact solution for the different methods are calculated and reported in Tables 4, 5 and 6. This difference is expressed by the Error (%), calculated as follows:

${\displaystyle Error\,(\%)={\frac {G-{G}_{exact}}{{G}_{exact}}}\times 100\,\%}$ (18)

Compared to the exact solution, the numerical results show the accuracy and efficiency of the RMQ-7 element. The differences between the values of the exact solution and those of the mixed finite element vary between -1.48% and 4.70%.

Table 4. Energy release rate for crack along bimaterial interface, Mesh 1: 207 nodes

| Material | Case | Exact solution | RMQ-7 (207 nodes) | Error % | Hybrid element (205 nodes) | Error % |
|---|---|---|---|---|---|---|
| Isotropic | ${\displaystyle G_{\#2}/G_{\#1}=1}$ | 10.988E-04 | 11.272E-04 | 2.58 | 11.290E-04 | 2.75 |
| | ${\displaystyle G_{\#2}/G_{\#1}=5}$ | 06.453E-04 | 06.393E-04 | -0.93 | 06.606E-04 | 2.37 |
| | ${\displaystyle G_{\#2}/G_{\#1}=50}$ | 05.353E-04 | 05.274E-04 | -1.48 | 05.460E-04 | 2.00 |
| Orthotropic | [0/0] | 03.170E-04 | 03.225E-04 | 1.74 | 03.257E-04 | 2.74 |
| | [90/90] | 02.200E-04 | 02.260E-04 | 2.73 | 02.221E-04 | 0.95 |
| | [0/90] | 02.640E-04 | 02.691E-04 | 1.93 | 02.685E-04 | 1.70 |

For isotropic bimaterials, the RMQ-7 element, for the same number of nodes, shows a clear superiority over the eight-node isoparametric displacement finite element (extrapolation technique), and gives more accurate results than the mutual integral method. For example, with the RMQ-7 element the Error ranges from -1.48% to 2.58% with 207 nodes, whereas it varies from 2.00% to 2.75% using the hybrid element with 205 nodes. For orthotropic bimaterials, the RMQ-7 element again demonstrates its performance compared to the classical displacement element. It still gives results clearly closer to the exact solution.
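As a quick verification of the Error column, the following sketch applies Equation (18) to the first two entries of Table 4:

```python
def error_percent(G, G_exact):
    """Relative difference from the exact solution, Eq. (18)."""
    return (G - G_exact) / G_exact * 100.0

G_exact = 10.988e-4  # exact solution, isotropic case G2/G1 = 1
print(round(error_percent(11.272e-4, G_exact), 2))  # RMQ-7, 207 nodes -> 2.58
print(round(error_percent(11.290e-4, G_exact), 2))  # hybrid, 205 nodes -> 2.75
```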
Compared to the mutual integral method, the RMQ-7 element gives very satisfactory results. Using the RMQ-7 element with 677 nodes, the difference varies between -1.45% and 1.26%, whereas it is between 0.73% and 2.90% using the mutual integral method with 679 nodes.

Table 5. Energy release rate for crack along bimaterial interface, Mesh 2: 237 nodes

| Material | Case | Exact solution | RMQ-7 (237 nodes) | Error % | Mutual integral (237 nodes) | Error % | Extrapolation (237 nodes) | Error % |
|---|---|---|---|---|---|---|---|---|
| Isotropic | ${\displaystyle G_{\#2}/G_{\#1}=1}$ | 10.988E-04 | 11.205E-04 | 1.98 | 11.253E-04 | 2.41 | 12.554E-04 | 14.25 |
| | ${\displaystyle G_{\#2}/G_{\#1}=5}$ | 06.453E-04 | 06.486E-04 | 0.51 | 06.592E-04 | 2.15 | 07.326E-04 | 13.53 |
| | ${\displaystyle G_{\#2}/G_{\#1}=50}$ | 05.353E-04 | 05.278E-04 | -1.40 | 05.444E-04 | 1.70 | 06.026E-04 | 12.57 |
| Orthotropic | [0/0] | 03.170E-04 | 03.237E-04 | 2.11 | 03.247E-04 | 2.43 | 03.540E-04 | 11.67 |
| | [90/90] | 02.200E-04 | 02.293E-04 | 4.23 | 02.221E-04 | 0.95 | 02.480E-04 | 12.73 |
| | [0/90] | 02.640E-04 | 02.764E-04 | 4.70 | 02.675E-04 | 1.33 | 03.021E-04 | 14.43 |

The results obtained using the present mixed finite element show the efficiency and accuracy of the proposed numerical model, which can give an acceptable solution with few degrees of freedom from a single mesh. It should be noted that, during the numerical calculation, the choice of the crack length variation ${\textstyle \Delta a}$ is very significant. Indeed, this variation must be small enough that approximation (9) is justified, but not so small as to run into machine-precision problems. The results also show that current finite element analysis techniques make it possible to obtain effective, high-precision numerical solutions to fracture mechanics problems.

Table 6. Energy release rate for crack along bimaterial interface, Mesh 3: 677 nodes

| Material | Case | Exact solution | RMQ-7 (677 nodes) | Error % | Mutual integral (679 nodes) | Error % | Extrapolation (679 nodes) | Error % |
|---|---|---|---|---|---|---|---|---|
| Isotropic | ${\displaystyle G_{\#2}/G_{\#1}=1}$ | 10.988E-04 | 11.126E-04 | 1.26 | 11.302E-04 | 2.86 | 13.132E-04 | 19.51 |
| | ${\displaystyle G_{\#2}/G_{\#1}=5}$ | 06.453E-04 | 06.438E-04 | -0.23 | 06.614E-04 | 2.49 | 07.649E-04 | 18.53 |
| | ${\displaystyle G_{\#2}/G_{\#1}=50}$ | 05.353E-04 | 05.297E-04 | -1.05 | 05.461E-04 | 2.02 | 06.287E-04 | 17.45 |
| Orthotropic | [0/0] | 03.170E-04 | 03.167E-04 | -0.10 | 03.262E-04 | 2.90 | 03.793E-04 | 19.65 |
| | [90/90] | 02.200E-04 | 02.168E-04 | -1.45 | 02.216E-04 | 0.73 | 02.549E-04 | 15.86 |
| | [0/90] | 02.640E-04 | 02.617E-04 | -0.87 | 02.679E-04 | 1.48 | 03.094E-04 | 17.20 |

## 5. Conclusion

In this paper, a numerical model has been proposed to study the interfacial crack between two orthotropic materials. This model uses a special mixed finite element developed in a natural plane. It is a two-dimensional element with seven nodes: five displacement nodes and two stress nodes. The proposed numerical model was used to calculate the energy release rate in a cracked orthotropic bimaterial using a technique that combines the present element with the virtual crack extension method. Two cases were treated: isotropic and orthotropic bimaterials. The accuracy and efficiency of the proposed model have been evaluated by comparing the numerical solution with an available analytical solution or with numerical solutions obtained from other methods. Comparisons with existing analytical or numerical solutions for interfacial cracks in orthotropic bimaterials showed that the proposed model provides good accuracy and efficiency.

## References
[1] Williams M.L. The stresses around a fault or crack in dissimilar media. Bulletin of the Seismological Society of America, 49:199-204, 1959.
[2] Erdogan F. Stress distribution in nonhomogeneous elastic plane with cracks. J. Appl. Mech., 30:232-236, 1963.
[3] Erdogan F. Stress distribution in bonded dissimilar materials with cracks. J. Appl. Mech., 32:403-409, 1965.
[4] England A.H. A crack between dissimilar media. J. Appl. Mech., 32:400-402, 1965.
[5] Rice J.R., Sih G.C. Plane problems of cracks in dissimilar media. J. Appl. Mech., 32:418-423, 1965.
[6] Hutchinson J.W., Mear M., Rice J.R. Crack paralleling an interface between dissimilar materials. ASME Journal of Applied Mechanics, 54:828-832, 1987.
[7] Rice J.R. Elastic fracture mechanics concepts for interfacial cracks. ASME Journal of Applied Mechanics, 55:98-103, 1988.
[8] Suo Z., Hutchinson J.W. Interface crack between two elastic layers. Int. J. Fract., 43:1-18, 1990.
[9] Gotoh M. Some problems of bonded anisotropic plates with cracks along the bond. Int. J. Fract. Mech., 3:253-265, 1967.
[10] Clements D.L. A crack between dissimilar anisotropic media. Int. J. Engng. Sci., 9:257-265, 1971.
[11] Willis J.R. Fracture mechanics of interfacial cracks. J. Mech. Phys. Solids, 19:353-368, 1971.
[12] Qu J., Bassani J.L. Cracks on bimaterial and bicrystal interfaces. J. Mech. Phys. Solids, 37(4):417-433, 1989.
[13] Suo Z. Singularities, interfaces and cracks in dissimilar anisotropic media. Proc. R. Soc. Lond. A, 427:331-358, 1990.
[14] Ni L., Nemat-Nasser S. Interface crack in anisotropic dissimilar materials: An analytic solution. J. Mech. Phys. Solids, 39(1):113-144, 1991.
[15] Bassani J.L., Qu J. Finite crack on bimaterial and bicrystal interfaces. J. Mech. Phys. Solids, 37(4):435-453, 1989.
[16] Qu J., Li Q. Interfacial dislocation and its applications to interface cracks in anisotropic bimaterials. J. Elasticity, 26:169-195, 1991.
[17] Gao Y., Liu Z., Zeng Q., Wang T., Zhuang Z., Hwang K-C. Theoretical and numerical prediction of crack path in the material with anisotropic fracture toughness. Engineering Fracture Mechanics, 180:330-347, 2017.
[18] Wang X., Zhou K., Wu M.S. Interface cracks with surface elasticity in anisotropic bimaterials. Int. J. of Solids and Structures, 59:110-120, 2015.
[19] Juan P.-A., Dingreville R. Mechanics of finite cracks in dissimilar anisotropic elastic media considering interfacial elasticity. J. of the Mechanics and Physics of Solids, 99:1-18, 2017.
[20] Tanaka K., Oharada K., Yamada D., Shimizu K. Fatigue crack propagation in short-carbon-fiber reinforced plastics evaluated based on anisotropic fracture mechanics. Int. Journal of Fatigue, 92:415-425, 2016.
[21] Banks-Sills L., Ikeda T. Stress intensity factors for interface cracks between orthotropic and monoclinic material. Int. J. Fract., 167(1):47-56, 2011.
[22] Bouchemella S., Bouzerd H., Khaldi N. Modélisation des interfaces fissurées des bimatériaux orthotropes. XIXème Congrès Français de Mécanique, Marseille, France, 2009.
[23] Ghorashi S.Sh., Valizadeh N., Mohammadi S. Extended isogeometric analysis (XIGA) for simulation of stationary and propagating cracks. Int. J. Numer. Methods Eng., 89:1069-1101, 2012.
[24] Ghorashi S.Sh., Valizadeh N., Mohammadi S., Rabczuk T. T-spline based XIGA for fracture analysis of orthotropic media. Computers and Structures, 147:138-146, 2015.
[25] Habib S.H., Belaidi I., Khatir S., Abdel Wahab M. Numerical simulation of cracked orthotropic materials using extended isogeometric analysis. Journal of Physics: Conf. Series, 842:012061, 2017.
[26] Khatir S., Wahab M.A. Fast simulations for solving fracture mechanics inverse problems using POD-RBF XIGA and Jaya algorithm. Engineering Fracture Mechanics, 205:285-300, 2019.
[27] Khatir S., Wahab M.A., Benaissa B., Köppen M. Crack identification using eXtended IsoGeometric Analysis and particle swarm optimization. In Fracture, Fatigue and Wear, pp. 210-222, Springer, Singapore, 2018.
[28] Bouzerd H. Elément fini mixte pour interface cohérente ou fissurée. Thèse de doctorat, Université de Claude Bernard (Lyon I), France, 1992.
[29] Bouziane S., Bouzerd H., Guenfoud M. Mixed finite element for modelling interfaces. European Journal of Computational Mechanics, 18(2):155-175, 2009.
[30] Chow W.T., Beom H.G., Alturi S.N. Calculation of stress intensity factors for an interfacial crack between dissimilar anisotropic media, using a hybrid element method and mutual integral. Computational Mechanics, 15(6):546-557, 1995.
[31] Qu J., Bassani J.L. Interfacial fracture mechanics for anisotropic bimaterial. Journal of Applied Mechanics, 60:422-431, 1993.

### Document information

Published on 28/09/21
Accepted on 10/09/21
Submitted on 09/08/20

Volume 37, Issue 3, 2021
DOI: 10.23967/j.rimni.2021.09.004
http://www.ncatlab.org/nlab/show/good+open+cover
# nLab good open cover

# Contents

## Definition

###### Definition

A cover $\{U_i \to X\}$ of a topological space $X$ is called a good cover – or good open cover – if it is

1. an open cover;

2. such that all the $U_i$ and all their nonempty finite intersections are contractible topological spaces.

###### Remark

For $X$ a topological manifold one often needs that the non-empty finite intersections are homeomorphic to an open ball, and for smooth manifolds one often needs that the finite non-empty intersections are diffeomorphic to an open ball. In the literature this is traditionally glossed over, but it is in fact a subtle point; see the discussion at open ball and see below at Existence on paracompact manifolds. Therefore it makes sense to explicitly say:

###### Definition

Given a smooth manifold $X$, a differentiably good open cover is a good open cover all of whose finite non-empty intersections are in fact diffeomorphic to an open ball, hence to a Cartesian space.

## Properties

### Existence on paracompact manifolds

###### Proposition

Every paracompact manifold admits a good open cover, def. 1.

###### Proof

Every paracompact manifold admits a Riemannian metric, and around any point in a Riemannian manifold there is a geodesically convex neighborhood (any two points in the neighborhood are connected by a unique geodesic in the neighborhood, one whose length is the distance between the points; see for example the remark after lemma 10.3 of (Milnor), page 59). It is immediate that a nonempty intersection of finitely many such geodesically convex neighborhoods is also geodesically convex, hence contractible.

###### Remark

It is apparently a folk theorem that every geodesically convex neighbourhood in a Riemannian manifold is diffeomorphic to a Cartesian space (for instance this is asserted on page 42 of (BottTu)). This implies the following strengthening of the above result, which appears stated as theorem 5.1 in (BottTu). But a complete proof in the literature is hard to find (see also the discussion of the references at ball). We give a complete proof below.

###### Proposition

Every paracompact manifold of dimension $d$ admits a differentiably good open cover, def. 2, hence an open cover such that every non-empty finite intersection is diffeomorphic to the Cartesian space $\mathbb{R}^d$.

###### Proof

By (Greene) every paracompact manifold admits a Riemannian metric with positive convexity radius $r_{\mathrm{conv}} \in \mathbb{R}$. Choose such a metric and choose an open cover consisting, for each point $p\in X$, of the geodesically convex open subset $U_p := B_p(r_{conv})$ given by the geodesic $r_{conv}$-ball at $p$. Since the injectivity radius of any metric is at least $2r_{\mathrm{conv}}$, it follows from the minimality of the geodesics in a geodesically convex region that inside every finite nonempty intersection $U_{p_1} \cap \cdots \cap U_{p_n}$ the geodesic flow around any point $u$ is of radius less than or equal to the injectivity radius and is therefore a diffeomorphism onto its image. Moreover, the preimage of the intersection region under the geodesic flow is a star-shaped region in the tangent space $T_u X$: because the intersection of geodesically convex regions is itself geodesically convex, for any $v \in T_u X$ with $\exp(v) \in U_{p_1} \cap \cdots \cap U_{p_n}$ the whole geodesic segment $t \mapsto \exp(t v)$ for $t \in [0,1]$ is also in the region.
So we have that every finite non-empty intersection of the $U_p$ is diffeomorphic to a star-shaped region in a vector space. By the results cited at ball (e.g. theorem 237 of (Ferus)) this star-shaped region is diffeomorphic to an $\mathbb{R}^n$. ### Coverages of good open covers ###### Corollary The category $ParaMfd$ of paracompact manifolds admits a coverage whose covering families are good open covers. The same holds true for subcategories such as ###### Proof It is sufficient to check this in $ParaMfd$. We need to check that for $\{U_i \to U\}$ a good open cover and $f : V \to U$ any morphism, we get commuting squares $\array{ V_j &\to& U_{i(j)} \\ \downarrow && \downarrow \\ V &\stackrel{f}{\to}& U }$ such that the $\{V_i \to V\}$ form a good open cover of $V$. Now, while $ParaMfd$ does not have all pullbacks, the pullback of an open cover does exist, and since $f$ is necessarily a continuous function this is an open cover $\{f^* U_i \to V\}$. The $f^* U_i$ need not be contractible, but being open subsets of a paracompact manifold, they are themselves paracompact manifolds and hence admit themselves good open covers $\{W_{i,j} \to f^* U_i\}$. Then the family of composites $\{W_{i,j} \to f^* U_i \to V\}$ is clearly a good open cover of $V$. ###### Proposition Every CW complex admits a good open cover. ###### Proof It suffices to prove that if $X$ admits a good open cover and $\phi: S^n \to X$ is an attaching map, then the pushout $\array{ S^n & \stackrel{\phi}{\to} & X \\ i \downarrow & & \downarrow \\ D^{n+1} & \to & Y }$ also admits a good open cover. Let $\{U_\alpha\}$ be a good open cover of $X$ closed under nonempty finite intersections, and choose a contracting homotopy $h_\alpha: I \times U_\alpha \to U_\alpha$ such that $h_\alpha(0, -) = id$ and $h_\alpha(1, -)$ is constant. For any subset $S \subseteq D^{n+1}$, let $Hull(S)$ denote the convex hull of $S$. Then, if $V$ is relatively open in the boundary $S^n$, $Hull(V)$ is open in $D^{n+1}$. It follows that the image in $Y$ of $V_\alpha \coloneqq U_\alpha \cup Hull(\phi^{-1}(U_\alpha)) \subseteq X \cup D^{n+1}$ is open in $Y$, and it is contractible: define a contracting homotopy $H_\alpha: I \times V_\alpha \to V_\alpha$ by $H_\alpha(t, v) = \left\{ \array{ (1 - 2t)v + 2t \frac{v}{|v|} & 0 \leq t \lt \frac1{2}, v \in int(D^{n+1}) \cap V_\alpha \\ v & 0 \leq t \leq \frac1{2}, v \in U_\alpha \\ h_\alpha(2t - 1, \phi(\frac{v}{|v|})) & \frac1{2} \leq t \leq 1, v \in int(D^{n+1}) \cap V_\alpha \\ h_\alpha(2t - 1, v) & \frac1{2} \leq t \leq 1, v \in U_\alpha } \right.$ These sets $V_\alpha$ together with $int(D^{n+1})$ form a good open cover of $Y$. ## $n$POV The following nPOV perspective on good open covers gives a useful general “explanation” for their relevance, which also explains the role of good covers in Cech cohomology generally and abelian sheaf cohomology in particular. ###### Proposition Let $sPSh(CartSp)_{proj}$ be the category of simplicial presheaves on the category CartSp equipped with the projective model structure on simplicial presheaves. Let $X$ be a smooth manifold, regarded as a 0-truncated object of $sPSh(C)$. Let $\{U_i \to X\}$ be a good open cover by open balls in the strong sense: such that every finite non-empty intersection is diffeomorphic to an $\mathbb{R}^d$. Then: the Cech nerve $C(\{U\}) \in sPSh(C)$ is a cofibrant resolution of $X$ in the local model structure on simplicial presheaves. ###### Proof By assumption we have that $C(U)$ is degreewise a coproduct of representables. 
It is also evidently a split hypercover. This implies the statement by the characterization of cofibrant objects in the projective structure. This has a useful application in the nerve theorem. Notice that the descent condition for simplicial presheaves on CartSp at (good) covers is very weak, since all Cartesian spaces are topologically contractible, so it is easy to find the fibrant objects $A \in sPSh(C)_{proj, loc}$ in the topological localization of $sPSh(C)_{proj}$ for the canonical coverage of CartSp. The above observation says that for computing the $A$-valued cohomology of a diffeological space $X$, it is sufficient to evaluate $A$ on (the Cech nerve of) a good cover of $X$. We can turn this around and speak for any site $C$ of a covering family $\{U_i \to X\}$ as being good if the corresponding Cech nerve is degreewise a coproduct of representables. In the projective model structure on simplicial presheaves on $C$ such good covers will enjoy the central properties of good covers of topological spaces. ## References • John Milnor, Morse theory, Princeton University Press (1963) • R. Greene, Complete metrics of bounded curvature on noncompact manifolds, Archiv der Mathematik, Volume 31, Number 1 (1978)
https://en.wikipedia.org/wiki/User_talk:%E7%9F%B3%E5%BA%AD%E8%B1%90
# User talk:石庭豐

## Talkback

Hello, 石庭豐. You have new messages at Talk:Superkey. Message added 19:04, 3 May 2011 (UTC). You can remove this notice at any time by removing the {{Talkback}} or {{Tb}} template.

## Welcome!

Hello, 石庭豐, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Here are some pages that you might find helpful: I hope you enjoy editing here and being a Wikipedian! Please sign your messages on discussion pages using four tildes (~~~~); this will automatically insert your username and the date. If you need help, check out Wikipedia:Questions, ask me on my talk page, or ask your question and then place {{helpme}} before the question on your talk page. If you are interested in mathematics-related themes, you may want to check out the Mathematics Portal. If you are interested in improving mathematics-related articles, you may want to join WikiProject Mathematics (sign up here or say hello here). Again, welcome!  —Kri (talk) 17:16, 10 August 2011 (UTC)

## TeX {cases} with right bracket

You asked for 'Something like \begin{cases} but for bracket on right-hand side?' at Help talk:Displaying a formula/Archive 2. I have no idea if (and how) it can be achieved in the Wikipedia math renderer, but Google reveals there is a TeX mathtools package which allows it. Hope that helps: http://tex.stackexchange.com/questions/47560/how-to-put-a-brace-on-the-right-not-left-to-group-cases --CiaPan (talk) 09:33, 1 April 2015 (UTC)

My question was actually rhetorical: I was actually requesting a new implementation. As for the TeX syntax you suggested, thanks, but I'm not sure that would be of general interest to users. See, we have "Wiki" syntax to make math expressions. Then in some situations, we have to use TeX syntax! It would be nice if we had a unified syntax. 石庭豐 (talk) 10:11, 1 April 2015 (UTC)
https://byjus.com/rd-sharma-solutions/class-10-maths-chapter-4-triangles-exercise-4-3/
# RD Sharma Solutions Class 10 Triangles Exercise 4.3

#### Exercise 4.3

Q.1) In a $\Delta$ABC, AD is the bisector of $\angle$A, meeting side BC at D.
(i) if BD = 2.5 cm, AB = 5 cm, and AC = 4.2 cm, find DC.
(ii) if BD = 2 cm, AB = 5 cm, and DC = 3 cm, find AC.
(iii) if AB = 3.5 cm, AC = 4.2 cm, and DC = 2.8 cm, find BD.
(iv) if AB = 10 cm, AC = 14 cm, and BC = 6 cm, find BD and DC.
(v) if AC = 4.2 cm, DC = 6 cm, and BC = 10 cm, find AB.
(vi) if AB = 5.6 cm, AC = 6 cm, and DC = 3 cm, find BC.
(vii) if AB = 5.6 cm, BC = 6 cm, and BD = 3.2 cm, find AC.
(viii) if AB = 10 cm, AC = 6 cm, and BC = 12 cm, find BD and DC.

Sol:

(i) It is given that BD = 2.5 cm, AB = 5 cm, and AC = 4.2 cm. In $\Delta$ABC, AD is the bisector of $\angle$A, meeting side BC at D. We need to find DC. Since AD is the bisector of $\angle$A, $\frac{AB}{AC} = \frac{BD}{DC}$. Then, $\frac{5}{4.2} = \frac{2.5}{DC}$, so 5DC = 4.2 x 2.5, DC = (4.2 x 2.5)/5 = 2.1 cm.

(ii) It is given that BD = 2 cm, AB = 5 cm, and DC = 3 cm. We need to find AC. Since AD is the bisector of $\angle$A, $\frac{AB}{AC} = \frac{BD}{DC}$. Then, $\frac{5}{AC} = \frac{2}{3}$, so 2AC = 5 x 3, AC = 15/2 = 7.5 cm.

(iii) It is given that AB = 3.5 cm, AC = 4.2 cm, and DC = 2.8 cm. We need to find BD. Since AD is the bisector of $\angle$A, $\frac{AB}{AC} = \frac{BD}{DC}$. Then, $\frac{3.5}{4.2} = \frac{BD}{2.8}$, so BD = (3.5 x 2.8)/4.2 = 7/3 ≈ 2.33 cm.

(iv) It is given that AB = 10 cm, AC = 14 cm, and BC = 6 cm. We need to find BD and DC. Let BD = x cm, so DC = (6 − x) cm. Since AD is the bisector of $\angle$A, $\frac{AB}{AC} = \frac{BD}{DC}$. Then, $\frac{10}{14} = \frac{x}{6-x}$, so 10(6 − x) = 14x, 60 − 10x = 14x, 24x = 60, x = 2.5. Hence BD = 2.5 cm and DC = 6 − 2.5 = 3.5 cm.

(v) It is given that AC = 4.2 cm, DC = 6 cm, and BC = 10 cm. We need to find AB. Here BD = BC − DC = 10 − 6 = 4 cm. Since AD is the bisector of $\angle$A, $\frac{AC}{AB} = \frac{DC}{BD}$. Then, $\frac{4.2}{AB} = \frac{6}{4}$, so 6AB = 4.2 x 4, AB = 16.8/6 = 2.8 cm.

(vi) It is given that AB = 5.6 cm, AC = 6 cm, and DC = 3 cm. We need to find BC. Since AD is the bisector of $\angle$A, $\frac{AB}{AC} = \frac{BD}{DC}$. Then, $\frac{5.6}{6} = \frac{BD}{3}$, so BD = (5.6 x 3)/6 = 2.8 cm, and BC = BD + DC = 2.8 + 3 = 5.8 cm.

(vii) It is given that AB = 5.6 cm, BC = 6 cm, and BD = 3.2 cm. We need to find AC. Here DC = BC − BD = 6 − 3.2 = 2.8 cm. Since AD is the bisector of $\angle$A, $\frac{AB}{AC} = \frac{BD}{DC}$. Then, $\frac{5.6}{AC} = \frac{3.2}{2.8}$, so AC = (5.6 x 2.8)/3.2 = 4.9 cm.

(viii) It is given that AB = 10 cm, AC = 6 cm, and BC = 12 cm. We need to find BD and DC. Since AD is the bisector of $\angle$A, $\frac{AC}{AB} = \frac{DC}{BD}$. Let BD = x cm. Then, $\frac{6}{10} = \frac{12-x}{x}$, so 6x = 120 − 10x, 16x = 120, x = 7.5. Now DC = 12 − BD = 12 − 7.5 = 4.5. Hence BD = 7.5 cm and DC = 4.5 cm.

Q2.) AE is the bisector of the exterior $\angle$CAD meeting BC produced in E.
If AB = 10 cm, AC = 6 cm, and BC = 12 cm, find CE.

Sol: It is given that AE is the bisector of the exterior $\angle$CAD, meeting BC produced at E, and AB = 10 cm, AC = 6 cm, and BC = 12 cm. Since AE is the bisector of the exterior $\angle$CAD, $\frac{BE}{CE} = \frac{AB}{AC}$. Let CE = x, so BE = BC + CE = 12 + x. Then, $\frac{12 + x}{x} = \frac{10}{6}$, so 6(12 + x) = 10x, 72 + 6x = 10x, 4x = 72, x = 18. Hence CE = 18 cm.

Q.3) $\Delta$ABC is a triangle such that $\frac{AB}{AC} = \frac{BD}{DC}$, $\angle$B = 70°, $\angle$C = 50°, find $\angle$BAD.

Sol: It is given that in $\Delta$ABC, $\frac{AB}{AC} = \frac{BD}{DC}$, $\angle$B = 70° and $\angle$C = 50°. We need to find $\angle$BAD. In $\Delta$ABC, $\angle$A = 180° − (70° + 50°) = 60°. Since $\frac{AB}{AC} = \frac{BD}{DC}$, AD is the bisector of $\angle$A. Hence, $\angle$BAD = 60°/2 = 30°.

Q.4) Check whether AD is the bisector of $\angle$A of $\Delta$ABC in each of the following:
(i) AB = 5 cm, AC = 10 cm, BD = 1.5 cm and CD = 3.5 cm
(ii) AB = 4 cm, AC = 6 cm, BD = 1.6 cm and CD = 2.4 cm
(iii) AB = 8 cm, AC = 24 cm, BD = 6 cm and BC = 24 cm
(iv) AB = 6 cm, AC = 8 cm, BD = 1.5 cm and CD = 2 cm
(v) AB = 5 cm, AC = 12 cm, BD = 2.5 cm and BC = 9 cm

Sol:

(i) We have to check whether AD is the bisector of $\angle$A, so we check whether the sides are in proportion. Now, $\frac{AB}{AC} = \frac{5}{10} = \frac{1}{2}$ and $\frac{BD}{CD} = \frac{1.5}{3.5} = \frac{3}{7}$. Since $\frac{AB}{AC} \neq \frac{BD}{CD}$, AD is not the bisector of $\angle$A.

(ii) We check the proportion: $\frac{AB}{AC} = \frac{4}{6} = \frac{2}{3}$ and $\frac{BD}{CD} = \frac{1.6}{2.4} = \frac{2}{3}$. The sides are proportional, hence AD is the bisector of $\angle$A.

(iii) Here DC = BC − BD = 24 − 6 = 18. We check the proportion: $\frac{AB}{AC} = \frac{8}{24} = \frac{1}{3}$ and $\frac{BD}{DC} = \frac{6}{18} = \frac{1}{3}$. The sides are proportional, hence AD is the bisector of $\angle$A.

(iv) We check the proportion: $\frac{AB}{AC} = \frac{6}{8} = \frac{3}{4}$ and $\frac{BD}{DC} = \frac{1.5}{2} = \frac{3}{4}$. The sides are proportional, hence AD is the bisector of $\angle$A.

(v) Here CD = BC − BD = 9 − 2.5 = 6.5. We check the proportion: $\frac{AB}{AC} = \frac{5}{12}$ and $\frac{BD}{CD} = \frac{2.5}{6.5} = \frac{5}{13}$. Since $\frac{AB}{AC} \neq \frac{BD}{CD}$, AD is not the bisector of $\angle$A.

Q.5) In fig. AD bisects $\angle$A, AB = 12 cm, AC = 20 cm, and BD = 5 cm. Determine CD.

Sol: It is given that AD bisects $\angle$A, AB = 12 cm, AC = 20 cm, and BD = 5 cm. We need to find CD. Since AD is the bisector of $\angle$A, $\frac{AB}{AC} = \frac{BD}{DC}$. Then, $\frac{12}{20} = \frac{5}{DC}$, so 12 x DC = 20 x 5, DC = 100/12 ≈ 8.33 cm. ∴ CD ≈ 8.33 cm.

Q6.)
In $\Delta$ABC, if $\angle$1 = $\angle$2, prove that $\frac{AB}{AC} = \frac{BD}{DC}$.

Sol: We need to prove that $\frac{AB}{AC} = \frac{BD}{DC}$. In $\Delta$ABC, $\angle$1 = $\angle$2, so AD is the bisector of $\angle$A. Therefore, by the angle bisector theorem, $\frac{AB}{AC} = \frac{BD}{DC}$.

Q.7) D, E and F are points on sides BC, CA and AB respectively of a $\Delta$ABC such that AD bisects $\angle$A, BE bisects $\angle$B and CF bisects $\angle$C. If AB = 5 cm, BC = 8 cm, and CA = 4 cm, determine AF, CE, and BD.

Sol: It is given that AB = 5 cm, BC = 8 cm and CA = 4 cm. We need to find AF, CE and BD.

Since AD is the bisector of $\angle$A, $\frac{AB}{AC} = \frac{BD}{CD}$. Then, $\frac{5}{4} = \frac{BD}{BC - BD} = \frac{BD}{8 - BD}$, so 40 − 5BD = 4BD, 9BD = 40, BD = 40/9.

Since BE is the bisector of $\angle$B, $\frac{AB}{BC} = \frac{AE}{EC} = \frac{AC - EC}{EC}$. Then, $\frac{5}{8} = \frac{4 - CE}{CE}$, so 5CE = 32 − 8CE, 13CE = 32, CE = 32/13.

Since CF is the bisector of $\angle$C, $\frac{BC}{CA} = \frac{BF}{AF} = \frac{AB - AF}{AF}$. Then, $\frac{8}{4} = \frac{5 - AF}{AF}$, so 8AF = 20 − 4AF, 12AF = 20, AF = 20/12 = 5/3.

Hence AF = 5/3 cm, CE = 32/13 cm and BD = 40/9 cm. A quick numerical check of the bisector criterion used in Q.4 is sketched below.
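This short sketch verifies the angle bisector theorem ratios for the data of Q.4 using exact rational arithmetic:

```python
from fractions import Fraction

def is_bisector(AB, AC, BD, DC):
    """AD bisects angle A iff AB/AC == BD/DC (angle bisector theorem)."""
    return Fraction(AB) / Fraction(AC) == Fraction(BD) / Fraction(DC)

# Q.4 (i): AB=5, AC=10, BD=1.5, CD=3.5 -> ratios 1/2 vs 3/7, not a bisector
print(is_bisector(5, 10, Fraction(3, 2), Fraction(7, 2)))   # False
# Q.4 (ii): AB=4, AC=6, BD=1.6, CD=2.4 -> ratios 2/3 vs 2/3, bisector
print(is_bisector(4, 6, Fraction(8, 5), Fraction(12, 5)))   # True
# Q.4 (v): AB=5, AC=12, BD=2.5, CD=9-2.5=6.5 -> 5/12 vs 5/13, not a bisector
print(is_bisector(5, 12, Fraction(5, 2), Fraction(13, 2)))  # False
```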
https://deepai.org/publication/weight-preserving-simulated-tempering
# Weight-Preserving Simulated Tempering

Simulated tempering is a popular method of allowing MCMC algorithms to move between modes of a multimodal target density π. One problem with simulated tempering for multimodal targets is that the weights of the various modes change for different inverse-temperature values, sometimes dramatically so. In this paper, we provide a fix to overcome this problem, by adjusting the mode weights to be preserved (i.e., constant) over different inverse-temperature settings. We then apply simulated tempering algorithms to multimodal targets using our mode weight correction. We present simulations in which our weight-preserving algorithm mixes between modes much more successfully than traditional tempering algorithms. We also prove a diffusion limit for a version of our algorithm, which shows that under appropriate assumptions, our algorithm mixes in time O(d [log d]^2).

## 1 Introduction

Consider the problem of drawing samples from a target distribution $\pi$ on a $d$-dimensional state space, where $\pi$ is only known up to a scaling constant. A popular approach is to use Markov chain Monte Carlo (MCMC), which uses a Markov chain designed in such a way that the invariant distribution of the chain is $\pi$. However, if $\pi$ exhibits multimodality, then the majority of MCMC algorithms which use tuned localised proposal mechanisms, e.g. Roberts et al. (1997) and Roberts and Rosenthal (2001), fail to explore the state space, which leads to biased samples. Two approaches to overcome this multimodality issue are the simulated and parallel tempering algorithms. These methods augment the state space with auxiliary target distributions that enable the chain to rapidly traverse the entire state space. The major problem with these auxiliary targets is that in general they don't preserve regional mass, see Woodard et al. (2009a), Woodard et al. (2009b) and Bhatnagar and Randall (2016).
This problem can result in the required run-time of the simulated and parallel tempering algorithms growing exponentially with the dimensionality of the problem. In this paper, we provide a fix to overcome this problem, by adjusting the mode weights to be preserved (i.e., constant) over different inverse-temperatures. We apply our mode weight correction to produce new simulated and parallel tempering algorithms for multimodal target distributions. We show that, assuming the chain mixes at the hottest temperature, our mode-preserving algorithm will mix well for the original target as well.

This paper is organised as follows. Section 2 reviews the simulated and parallel tempering algorithms and the existing literature for their optimal setup. Section 3 describes the problems with modal weight preservation that are inherent in the traditional approaches to tempering, and introduces a prototype solution called the HAT algorithm, which is similar to the parallel tempering algorithm but uses novel auxiliary targets. Section 4 presents some simulation studies of the new algorithms. Section 5 provides a theoretical analysis of a diffusion limit and the resulting computational complexity of the HAT algorithm in high dimensions. Section 6 concludes and provides a discussion of further work.

## 2 Tempering Algorithms

There is an array of methodology available to overcome the issues of multimodality in MCMC, the majority of which uses state space augmentation, e.g. Wang and Swendsen (1990), Geyer (1991), Marinari and Parisi (1992), Neal (1996), Kou et al. (2006), Nemeth et al. (2017). Auxiliary distributions that allow a Markov chain to explore the entirety of the state space are targeted, and their mixing information is then passed on to aid inter-modal mixing in the desired target. A convenient approach for augmentation methods, such as the popular simulated tempering (ST) and parallel tempering (PT) algorithms introduced in Geyer (1991) and Marinari and Parisi (1992), is to use power-tempered target distributions, for which the target distribution at inverse temperature level $\beta$ is defined as

$\pi_\beta(x) \propto [\pi(x)]^\beta$

for $0 < \beta \le 1$. For each algorithm one needs to choose a sequence of "inverse temperatures" $\Delta = \{\beta_0, \beta_1, \ldots, \beta_n\}$ such that $0 \le \beta_n < \cdots < \beta_1 < \beta_0 = 1$, so that $\pi_{\beta_0}$ equals the original target density $\pi$, and hopefully the hottest distribution $\pi_{\beta_n}$ is sufficiently flat that it can be easily sampled.

The ST algorithm augments the original state space with a single-dimensional component indicating the current inverse temperature level, creating a $(d+1)$-dimensional chain $(\beta, x)$, defined on the extended state space $\Delta \times \mathbb{R}^d$, that targets

$\pi(\beta, x) \propto K(\beta)\,\pi(x)^\beta \qquad (1)$

where ideally $K(\beta) \approx \left(\int \pi(x)^\beta \, dx\right)^{-1}$, resulting in a uniform marginal distribution over the temperature component of the chain. Techniques to learn the $K(\beta)$ exist in the literature, e.g. Wang and Landau (2001) and Atchadé and Liu (2004), but these techniques can be misleading unless used with care. The ST algorithm procedure is given in Algorithm 1.

The PT approach is designed to overcome the issues arising due to the typically unknown marginal normalisation constants. The PT algorithm runs a Markov chain on an augmented state space with target distribution defined as

$\pi_n(x_0, x_1, \ldots, x_n) \propto \pi_{\beta_0}(x_0)\,\pi_{\beta_1}(x_1)\cdots\pi_{\beta_n}(x_n).$

The PT algorithm procedure is given in Algorithm 2.

### 2.1 Optimal Scaling for the ST and PT Algorithms

Atchadé et al. (2011) and Roberts and Rosenthal (2014) investigated the problem of selecting optimal inverse-temperature spacings for the ST and PT algorithms.
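For concreteness, the following is a minimal sketch of one sweep of the ST update just described: a toy illustration, not the authors' implementation, in which the pseudo-prior weights $K(\beta_i)$ are assumed known and the target is a generic stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):                       # unnormalised log target (toy example)
    return -0.5 * x**2

betas = [1.0, 0.5, 0.25, 0.125]      # inverse-temperature ladder
logK = np.zeros(len(betas))          # assumed known log K(beta_i) values

def st_sweep(i, x, step=1.0):
    """One ST sweep: a within-temperature RWM move, then a temperature move."""
    # within-temperature random-walk Metropolis at level beta_i
    y = x + step * rng.normal()
    if np.log(rng.uniform()) < betas[i] * (log_pi(y) - log_pi(x)):
        x = y
    # propose moving to an adjacent temperature level (reject if out of range)
    j = i + rng.choice([-1, 1])
    if 0 <= j < len(betas):
        log_a = (betas[j] - betas[i]) * log_pi(x) + logK[j] - logK[i]
        if np.log(rng.uniform()) < log_a:
            i = j
    return i, x

i, x = 0, 0.0
for _ in range(1000):
    i, x = st_sweep(i, x)
```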
Specifically, if a move between two consecutive temperature levels, $\beta$ and $\beta'$, is to be proposed, then what is the optimal choice of $\beta'$? Too large, and the move will probably be rejected; too small, and the move will accomplish little (similar to the situation for the Metropolis algorithm, cf. Roberts et al. (1997) and Roberts and Rosenthal (2001)).

For ease of analysis, Atchadé et al. (2011) and Roberts and Rosenthal (2014) restrict to $d$-dimensional target distributions of the iid form:

$\pi(x) \propto \prod_{i=1}^d f(x_i). \qquad (4)$

They assume that the process mixes immediately (i.e., infinitely quickly) within each temperature, to allow them to concentrate solely on the mixing of the inverse-temperature process itself. To achieve non-degeneracy of the limiting behaviour of the inverse-temperature process as $d \to \infty$, the spacings are scaled as $O(d^{-1/2})$, i.e. $\beta' = \beta - \ell/d^{1/2}$, where $\ell$ is a positive value to be chosen optimally.

Under these assumptions, Atchadé et al. (2011) and Roberts and Rosenthal (2014) prove that the inverse-temperature processes of the ST and PT algorithms converge, when speeded up by a factor of $d$, to a specific diffusion limit, independent of dimension, which thus mixes in time $O(1)$, implying that the original ST and PT algorithms mix in time $O(d)$ as $d \to \infty$. They also prove that the mixing times of the ST and PT algorithms are optimised when the value of $\ell$ is chosen to maximise the quantity

$\ell^2 \times 2\Phi\left(-\frac{\ell\sqrt{I(\beta)}}{2}\right)$

where $I(\beta)$ is a target-dependent quantity defined in Atchadé et al. (2011). Furthermore, this optimal choice of $\ell$ corresponds to an acceptance rate of inverse-temperature moves of 0.234 (to three decimal places), similar to the earlier Metropolis algorithm results of Roberts et al. (1997) and Roberts and Rosenthal (2001). From a practical perspective, setting up the temperature levels to achieve optimality can be done via a stochastic approximation approach (Robbins and Monro (1951)), similarly to Miasojedow et al. (2013), who use an adaptive MCMC framework (see e.g. Roberts and Rosenthal (2009)).

### 2.2 Torpid Mixing of ST and PT Algorithms

The above optimal scaling results suggest that the mixing time of the ST and PT algorithms through the temperature schedule is $O(d)$, i.e. grows only linearly with the dimension of the problem, which is very promising. However, this optimal, non-degenerate scaling was derived under the assumption of immediate, infinitely fast within-temperature mixing, which is almost certainly violated in any real application. Indeed, this assumption appears to be overly strong once one considers the contrasting results regarding the scalability of the ST algorithm from Woodard et al. (2009a) and Woodard et al. (2009b). Their approach instead relies on a detailed analysis of the spectral gap of the ST Markov chain and how it behaves asymptotically in dimension. They show that in cases where the different modal structures/scalings are distinct, this can lead to mixing times that grow exponentially in dimension, and one can only hope to attain polynomial mixing times in special cases where the modes are all symmetric.

The fundamental issue with the ST/PT approaches is that in cases where the modes are not symmetric, the tempered targets do not preserve the regional/modal weights. That motivates the current work, which is designed to preserve the modal weights even when performing tempering transformations, as we discuss next. Interestingly, a lack of modal symmetry in the multimodal target will affect essentially all the standard multimodal-focused methods: the Equi-Energy Sampler of Kou et al. (2006), the Tempered Transitions of Neal (1996), and the Mode Jumping Proposals of Tjelmeland and Hegstad (2001) all suffer in this setting.
Hence, the work in this paper is applicable beyond the immediate setting of the ST/PT approaches.

## 3 Weight Stabilised Tempering

In this section, we present our modifications which preserve the weights of the different modes when performing tempering transformations. We first motivate our algorithm by considering mixtures of Gaussian distributions. Consider a $d$-dimensional bimodal Gaussian target distribution with means, covariance matrices and weights given by $\mu_i$, $\Sigma_i$ and $w_i$ for $i = 1, 2$ respectively. Hence the target density is given by:

$\pi(x) = w_1\phi(x, \mu_1, \Sigma_1) + w_2\phi(x, \mu_2, \Sigma_2), \qquad (5)$

where $\phi(x, \mu, \Sigma)$ is the pdf of a multivariate Gaussian with mean $\mu$ and covariance matrix $\Sigma$. Assuming the modes are well-separated, the power-tempered target from (1) can be approximated by a bimodal Gaussian mixture (cf. Woodard et al. (2009b), Tawn (2017)):

$\pi_\beta(x) \approx W(1,\beta)\,\phi\!\left(x, \mu_1, \frac{\Sigma_1}{\beta}\right) + W(2,\beta)\,\phi\!\left(x, \mu_2, \frac{\Sigma_2}{\beta}\right), \qquad (6)$

where the weights are approximated as

$W(i,\beta) = \frac{w_i^\beta\,|\Sigma_i|^{\frac{1-\beta}{2}}}{w_1^\beta\,|\Sigma_1|^{\frac{1-\beta}{2}} + w_2^\beta\,|\Sigma_2|^{\frac{1-\beta}{2}}} \;\propto\; w_i^\beta\,|\Sigma_i|^{\frac{1-\beta}{2}}. \qquad (7)$

A one-dimensional example of this is illustrated in Figure 1, which shows plots of a bimodal Gaussian mixture distribution as $\beta \to 0$. Clearly the second mode, which was originally wide but very short and hence of low weight, takes on larger and larger weight as $\beta \to 0$, thus distorting the problem and making it very difficult for a tempering algorithm to move from the second mode to the first when $\beta$ is small. Higher dimensionality makes this weight-distorting issue exponentially worse. Consider the situation with $w_1 = w_2$ but $\Sigma_1 = I_d$ and $\Sigma_2 = \sigma^2 I_d$, where $I_d$ is the identity matrix. Then

$\frac{W(2,\beta)}{W(1,\beta)} \approx \sigma^{d(1-\beta)}, \qquad (8)$

so the ratio of the weights degenerates exponentially fast in the dimensionality of the problem for a fixed $\beta$. This heuristic result in (8) shows that between levels there can be a "phase-transition" in the location of probability mass, which becomes critical as dimensionality increases.

### 3.1 Weight-Stabilised Gaussian Mixture Targets

Consider targeting a Gaussian mixture given by

$\pi(x) \propto \sum_{j=1}^J w_j\,\phi(x, \mu_j, \Sigma_j) \qquad (9)$

in the (practically unrealistic) setting where the target is a Gaussian mixture with known parameters, including the weights. By only tempering the variance component of the modes, one can spread out the modes whilst preserving the component weights. We formalise this notion as follows:

###### Definition 1 (Weight-Stabilised Gaussian Mixture (WSGM))

For a Gaussian mixture target distribution $\pi(x)$, as in (9), the weight-stabilised Gaussian mixture (WSGM) target at inverse temperature level $\beta$ is defined as

$\pi^{WS}_\beta(x) \propto \sum_{j=1}^J w_j\,\phi\!\left(x, \mu_j, \frac{\Sigma_j}{\beta}\right). \qquad (10)$

Figure 2 shows the comparison between the target distributions used when using power-based targets vs the WSGM targets for the example from Figure 1. Using these WSGM targets in the PT scheme can give substantially better performance than using the standard power-based targets. This is very clearly illustrated in the simulation study section below in Section 4.1. Henceforth, when the term "WSGM ST/PT Algorithm" is used it refers to the implementation of the standard ST/PT algorithm but now using the WSGM targets from (10).

### 3.2 Approximating the WSGM Targets

In practice, the actual target distribution would be non-Gaussian, and only approximated by a Gaussian mixture target. On the other hand, due to the improved performance gained from using the WSGM over just targeting the respective power-tempered mixture, there is motivation to approximate the WSGM in the practical setting where parameters are unknown.
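A small numerical sketch of the weight formulas (7) and (8), with illustrative weights and scales (not values from the paper), shows the distortion directly; the WSGM target (10) instead keeps the weights fixed at $w_i$ for every $\beta$ by construction:

```python
import numpy as np

def power_tempered_weights(w, det_Sigma, beta):
    """Eq. (7): W(i, beta) proportional to w_i^beta * |Sigma_i|^((1-beta)/2)."""
    W = w**beta * det_Sigma**((1.0 - beta) / 2.0)
    return W / W.sum()

w = np.array([0.8, 0.2])            # cold-state weights (illustrative)
d, sigma = 10, 3.0                  # Sigma_1 = I_d, Sigma_2 = sigma^2 I_d
dets = np.array([1.0, sigma**(2 * d)])

for beta in [1.0, 0.5, 0.1]:
    W = power_tempered_weights(w, dets, beta)
    print(f"beta={beta:4.1f}  power-tempered W = {np.round(W, 4)}")
# As in Eq. (8), the weight ratio involves sigma^(d(1-beta)), so the wide
# mode dominates as beta -> 0 even though its cold-state weight is only 0.2.
```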
To this end, we present a theorem establishing useful equivalent forms of the WSGM; these alternative equivalent forms give rise to a practically applicable approximation to the WSGM. To establish the notation, let the target be a mixture distribution given by

$\pi(x) \propto \sum_{j=1}^J h_j(x) = \sum_{j=1}^J w_j\,g_j(x) \qquad (11)$

where each $g_j$ is a normalised target density. Then set

$\pi_\beta(x) \propto \sum_{j=1}^J f_j(x,\beta) = \sum_{j=1}^J W(j,\beta)\,\frac{[g_j(x)]^\beta}{\int[g_j(x)]^\beta\,dx}. \qquad (12)$

Then we have the following result, proved in the Appendix.

###### Theorem 3.1 (WSGM Equivalences)

Consider the setting of (11) and (12) where the mixture components consist of multivariate Gaussian distributions, i.e. $g_j(x) = \phi(x, \mu_j, \Sigma_j)$. Then

1. [Standard, non-weight-preserving tempering] If $f_j(x,\beta) = [h_j(x)]^\beta$ then

$W(j,\beta) \propto w_j^\beta\,|\Sigma_j|^{\frac{1-\beta}{2}}.$

2. [Weight-preserving tempering, version #1] Denoting $\nabla_j(x) = \nabla \log h_j(x)$ and $\nabla^2_j(x) = \nabla^2 \log h_j(x)$; if $f_j(x,\beta)$ takes the form

$h_j(x)\exp\left\{\left(\frac{1-\beta}{2}\right)(\nabla_j(x))^T\left[\nabla^2_j(x)\right]^{-1}\nabla_j(x)\right\},$

then $W(j,\beta) = w_j$.

3. [Weight-preserving tempering, version #2] If

$f_j(x,\beta) = h_j(x)^\beta\,h_j(\mu_j)^{1-\beta}$

then $W(j,\beta) = w_j$.

Remark 1: In Theorem 3.1, statement (b) shows that second-order gradient information of the $h_j$'s can be used to preserve the component weights in this setting.

Remark 2: Statement (c) extends statement (b) to no longer require gradient information about the $h_j$ but simply the mode/mean point $\mu_j$. Essentially this shows that by appropriately rescaling according to the height of the component as the components are "powered up", component weights are preserved in this setting.

Remark 3: A simple calculation shows that statement (c) holds in a more general mixture setting when all components of the mixture share a common distribution but have different location and scale parameters.

The results of Theorem 3.1 are derived under the impractical setting that the components are all known and that the target is indeed a mixture. One would like to exploit the results of (b) and (c) from Theorem 3.1 to aid mixing in a practical setting where the target form is unknown but may be well approximated by a mixture. Suppose herein that the modes of the multimodal target of interest, $\pi$, are well separated. Thus an approximating mixture of the form given in (11) would approximately satisfy $\pi(x) \propto h_{M}(x)$, where $M$ indexes the locally dominant component at $x$. Hence it is tempting to apply a version of Theorem 3.1(b) to $\pi$ directly, as opposed to the $h_j$. So at inverse temperature $\beta$, the point-wise target would be proportional to

$\pi(x)\exp\left\{\left(\frac{1-\beta}{2}\right)(\nabla(x))^T\left[\nabla^2(x)\right]^{-1}\nabla(x)\right\},$

where $\nabla(x) = \nabla\log\pi(x)$ and $\nabla^2(x) = \nabla^2\log\pi(x)$. This only uses point-wise gradient information up to second order. At many locations in the state space, provided the chain is at a temperature level that is sufficiently cool that the tail overlap is insignificant, and if the target were indeed a Gaussian mixture, this approach would give almost exactly the same evaluations as (12) in the setting of (b). However, at locations between modes where the Hessian of $\log\pi$ is positive semi-definite, this target behaves very badly, with explosion points that make it improper.

Instead, under the setting of well-separated modes one can appeal instead to the weight-preserving characterisation in Theorem 3.1(c). Assume that one can assign each location in the state space to a "mode point" via some function $x \mapsto \mu_{x,\beta}$, with a corresponding tempered target given by

$\pi_\beta(x) \propto \pi(x)^\beta\,\pi(\mu_{x,\beta})^{1-\beta}.$

Note the mode assignment function's dependence on $\beta$. This can be understood to be necessary by appealing to Figure 2, where it is clear that the narrow mode in the WSGM target has a "basin of attraction" that expands as the temperature increases.
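The weight-preserving recipe of Theorem 3.1(c) is easy to check numerically. This sketch tempers a two-component univariate Gaussian mixture (illustrative parameters) via $f_j = h_j(x)^\beta h_j(\mu_j)^{1-\beta}$ and confirms by numerical integration that the component weights $W(j,\beta)$ remain $w_j$:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

w  = [0.2, 0.8]
mu = [-5.0, 5.0]
sd = [1.0, 3.0]

def h(j, x):                          # j-th (unnormalised) mixture component
    return w[j] * norm.pdf(x, mu[j], sd[j])

def f(j, x, beta):                    # Theorem 3.1(c) tempering of component j
    return h(j, x)**beta * h(j, mu[j])**(1.0 - beta)

for beta in [1.0, 0.5, 0.1]:
    masses = [quad(lambda x: f(j, x, beta), -60, 60, points=mu)[0]
              for j in (0, 1)]
    W = np.array(masses) / sum(masses)
    print(f"beta={beta:4.1f}  W = {np.round(W, 4)}")   # stays (0.2, 0.8)
```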
###### Definition 2 (Basic Hessian Adjusted Tempering (BHAT) Target)

For a target distribution $\pi$ on $\mathbb{R}^d$ with a corresponding "mode point assigning function" $x \mapsto \mu_{x,\beta}$, the BHAT target at inverse temperature level $\beta$ is defined as

$\pi^{BH}_\beta(x) \propto \pi(x)^\beta\,\pi(\mu_{x,\beta})^{1-\beta}. \qquad (13)$

However, in this basic form there is an issue with this target distribution at hot temperatures, when $\beta$ is small. The problem is that it leaves discontinuities that can grow exponentially large, and this can make the mixing at the hottest temperature level exponentially slow if standard MCMC methods are used for the within-temperature moves. This problem can be mitigated if one assumes more knowledge about the target distribution. Suppose that the mode points are known, so that there is a collection of mode points $\{\mu_1, \ldots, \mu_K\}$. This assumption seems quite strong, but in general if one cannot find mode points then this is essentially saying that one cannot find the basins of attraction, and thus the desire to obtain the modal relative masses (as MCMC is trying to do) must be relatively impossible. Indeed, finding mode points either prior to or during the run of the algorithm is possible, e.g. Tjelmeland and Hegstad (2001), Behrens (2008) and Tawn et al. (2018). Furthermore, assume that the target, $\pi$, is approximately Gaussian in a neighbourhood of the mode locations, so that there is an associated collection of positive definite covariance matrices $\{\Sigma_1, \ldots, \Sigma_K\}$, where $\Sigma_j$ is the covariance of the local Gaussian approximation at $\mu_j$. From this, and knowing the evaluations of $\pi$ at the mode points, one can approximate the weights in the modal regions to attain a collection $\{\hat{w}_1, \ldots, \hat{w}_K\}$, where

$\hat{w}_j = \frac{\pi(\mu_j)\,|\Sigma_j|^{1/2}}{\sum_{k=1}^K\pi(\mu_k)\,|\Sigma_k|^{1/2}}.$

With $\phi(x|\mu,\Sigma)$ denoting the pdf of a $N(\mu,\Sigma)$ distribution, we then define the following modal assignment function motivated by the WSGM:

###### Definition 3 (WSGM mode assignment function)

With the collections $\{\mu_j\}$, $\{\Sigma_j\}$ and $\{\hat{w}_j\}$ specified above, for a location $x$ and inverse temperature $\beta$ define the WSGM mode assignment function as

$A(x,\beta) = \operatorname*{argmax}_j\left\{\hat{w}_j\,\phi\!\left(x\,\middle|\,\mu_j, \frac{\Sigma_j}{\beta}\right)\right\}. \qquad (14)$

Under the assumption that the collections $\{\mu_j\}$, $\{\Sigma_j\}$ and $\{\hat{w}_j\}$ have either been found through prior optimisation or through an adaptive online approach, we define the following:

###### Definition 4 (Hessian Adjusted Tempering (HAT) Target)

For a target distribution $\pi$ on $\mathbb{R}^d$ with the collections $\{\mu_j\}$, $\{\Sigma_j\}$ and $\{\hat{w}_j\}$ defined above, along with the associated mode assignment function given in (14), the Hessian adjusted tempering (HAT) target is defined as

$\pi^H_\beta(x) \propto \begin{cases}\pi(x)^\beta\,\pi(\mu_{A(x,\beta)})^{1-\beta} & \text{if } A(x,\beta) = A(x,1)\\[2pt] G(x,\beta) & \text{if } A(x,\beta) \neq A(x,1)\end{cases} \qquad (15)$

where, with $\hat{A} = A(x,\beta)$,

$G(x,\beta) = \pi(\mu_{\hat{A}})\left((2\pi)^d|\Sigma_{\hat{A}}|\right)^{1/2}\frac{\phi\!\left(x\,\middle|\,\mu_{\hat{A}}, \frac{\Sigma_{\hat{A}}}{\beta}\right)}{\beta^{d/2}}.$

Essentially the function $G$ specifies the target distribution when the chain's location, $x$, is in a part of the state space where the narrower modes expand their basins of attraction as the temperature gets hotter. Both the choice of $G$ and the mode assignment function used in Definition 4 are somewhat canonical to the Gaussian mixture setting. With the same assignment function specified in Definition 3, an alternative and seemingly robust $G$ that one could use is given by

$G(x,\beta) = \pi(x,1,A) + \left(\frac{2P(A(x,\beta))}{P(A(x,\beta)) + P(A(x,1))} - 1\right)\left[\pi(x,\beta,A) - \pi(x,1,A)\right]$

where $\pi(x,\beta,A) = \pi(x)^\beta\,\pi(\mu_{A(x,\beta)})^{1-\beta}$ as in (15) and $P(j) = \hat{w}_j\,\phi(x|\mu_j, \Sigma_j/\beta)$ as in (14). With either of the suggested forms of the function $G$, under the assumption that the target is continuous and bounded on $\mathbb{R}^d$, and that for all $\beta$,

$\int_{\mathcal{X}}\pi_\beta(x)\,dx < \infty,$

$\pi^H_\beta$ is a well defined probability density, i.e. Definition 4 makes sense. For a bimodal Gaussian mixture example, Figure 3 compares the HAT target to the WSGM target, showing that the HAT targets are a very good approximation to the WSGM targets, even at the hotter temperature levels. We propose to use the HAT targets in place of the power-based targets for the tempering algorithms given in Section 2.
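A sketch of the mode assignment function (14) and the first branch of the HAT target (15), assuming the mode points, local covariances and weights have already been found (the toy one-dimensional values below are illustrative only):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

mu    = [np.array([-5.0]), np.array([5.0])]      # located mode points
Sigma = [np.eye(1), 9.0 * np.eye(1)]             # local covariance estimates
w_hat = np.array([0.4, 0.6])                     # estimated modal weights

def A(x, beta):
    """Eq. (14): assign x to the mode maximising w_hat_j * phi(x | mu_j, Sigma_j/beta)."""
    scores = [w_hat[j] * mvn.pdf(x, mu[j], Sigma[j] / beta)
              for j in range(len(mu))]
    return int(np.argmax(scores))

def log_pi_HAT(x, beta, log_pi):
    """First branch of Eq. (15); the G(x, beta) branch is omitted in this sketch."""
    if A(x, beta) == A(x, 1.0):
        return beta * log_pi(x) + (1.0 - beta) * log_pi(mu[A(x, beta)])
    raise NotImplementedError("x lies in a region reassigned at this temperature")

log_pi = lambda x: np.log(0.4 * mvn.pdf(x, mu[0], Sigma[0])
                          + 0.6 * mvn.pdf(x, mu[1], Sigma[1]))
# The narrow mode's basin of attraction expands as beta decreases:
print(A(np.array([-0.5]), 1.0), A(np.array([-0.5]), 0.05))   # 1 then 0
print(log_pi_HAT(np.array([-5.0]), 0.5, log_pi))
```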
We thus define the following algorithms, which are explored in the following sections.

###### Definition 5 (Hessian Adjusted Simulated Tempering (HAST) Algorithm)

The HAST algorithm is an implementation of the ST algorithm (Section 2, Algorithm 1) where the target distribution at inverse temperature level $\beta$ is given by $\pi^H_\beta$ from Definition 4.

###### Definition 6 (Hessian Adjusted (Parallel) Tempering (HAT) Algorithm)

The HAT algorithm is an implementation of the PT algorithm (Section 2, Algorithm 2) where the target distribution at inverse temperature level $\beta$ is given by $\pi^H_\beta$ from Definition 4.

## 4 Simulation Studies

### 4.1 WSGM Algorithm Simulation Study

We begin by comparing the performances of an ST algorithm targeting both the power-based and the WSGM targets for a simple but challenging bimodal Gaussian mixture target example. The example will illustrate that the traditional ST algorithm, using power-based targets, struggles to mix effectively through the temperature levels due to a bottleneck effect caused by the lack of regional weight preservation. The example considered is the 10-dimensional target distribution given by the bimodal Gaussian mixture

$\pi(x) = w_1\,\phi(x; \mu_1, \Sigma_1) + w_2\,\phi(x; \mu_2, \Sigma_2) \qquad (16)$

where the weights, means and covariance matrices $w_i$, $\mu_i$, $\Sigma_i$ ($i = 1, 2$) are chosen so that the two modes are well separated and of different scales. When power-based tempering is used, mode 1 accounts for only 20% of the mass at the cold level, but at the hotter temperature levels it becomes the dominant mode containing almost all the mass. For both runs the same geometric temperature schedule was used:

$\Delta = \{1, 0.32, 0.32^2, \ldots, 0.32^6\}.$

This geometric schedule is justified by Corollary 1 of Tawn and Roberts (2018), which suggests this is an optimal setup in the case of a regionally weight-preserved PT setting. Indeed, this schedule induces swap move acceptance rates around 0.22 for the WSGM algorithm, close to the suggested 0.234 optimal value. This temperature schedule gave swap acceptance rates of approximately 0.23 between all levels of the power-based ST algorithm, except for the coldest level swap, where this degenerated to 0.17. That shows that the power-based ST algorithm was set up essentially optimally according to the results in Atchadé et al. (2011).

In order to ensure that the within-mode mixing isn't influencing the temperature space mixing, a local modal independence sampler was used for the within-mode moves. This essentially means that once a mode has been found, the mixing is infinitely fast. We use a modal assignment function which specifies whether a location belongs to mode 1 or to mode 2; the within-mode proposal distribution for a move at inverse temperature level $\beta$ is then the corresponding rescaled modal Gaussian density, (17), where $\phi(\cdot; \mu, \Sigma)$ is the density function of a Gaussian random variable with mean $\mu$ and variance matrix $\Sigma$.

Figure 4 plots a functional of the inverse temperature at each iteration of the algorithm. The functional, (18), is a signed transformation of the current inverse temperature, involving the usual sign function and $\beta_{\min}$, the minimum of the inverse temperatures. The significance of this functional will become evident from the results of the core theoretical contributions made in this paper in Theorems 5.1 and 5.2 in Section 5. Essentially it transforms the current inverse temperature at iteration $k$ of the Markov chain such that when $\beta = 1$ the magnitude of the functional is 1, and when the temperature is at its hottest level, i.e. $\beta = \beta_{\min}$, the functional is zero. Furthermore, in this example the sign of the functional is a reasonable proxy for identifying the mode that the chain is contained in, with a negative value suggesting the chain is in one mode and a positive value the other.
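A sketch of the geometric temperature schedule used here, together with the standard PT/ST swap acceptance ratio such a ladder is paired with (generic code, not tied to the specific target (16)):

```python
import numpy as np

# Geometric ladder Delta = {1, 0.32, 0.32^2, ..., 0.32^6}
betas = 0.32 ** np.arange(7)
print(np.round(betas, 5))

def log_swap_accept(log_pi_x, log_pi_y, beta_i, beta_j):
    """Log acceptance ratio for swapping states x (at beta_i) and y (at beta_j):
    min(1, pi(x)^(beta_j - beta_i) * pi(y)^(beta_i - beta_j))."""
    return (beta_j - beta_i) * (log_pi_x - log_pi_y)

# Example with placeholder log-density values:
log_a = log_swap_accept(-3.0, -7.0, 1.0, 0.32)
print(min(1.0, np.exp(log_a)))   # swap acceptance probability
```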
Figure 4 clearly illustrates that the hot state modal weight inconsistency leads the chain down a poor trajectory, since at hot temperatures nearly all the mass is in modal region 1. This results in the chain never reaching the other mode in the entire (finite) run of the algorithm. Indeed, the trace plots in Figure 4 show that the chain is effectively trapped in mode 1, which, although it only has 20% of the mass in the cold state, is completely dominant at the hotter states.

### 4.2 Simulation study for HAT

To demonstrate the capabilities of the HAT algorithm in a non-Gaussian setting where the modes exhibit skew, a five-dimensional four-mode skew-normal mixture target example is presented. Albeit a mixture, this example gives target distribution geometries similar to those of non-mixture settings, due to the skew nature of the modes:

$$\pi(x) \propto \sum_{k=1}^{4} w_k \prod_{i=1}^{5} f(x_i\,|\,\mu_k,\sigma_k,\alpha) \qquad (19)$$

where the skew-normal density is given by

$$f(z\,|\,\mu,\sigma,\alpha) = \frac{2}{\sigma}\,\phi\!\left(\frac{z-\mu}{\sigma}\right)\Phi\!\left(\frac{\alpha(z-\mu)}{\sigma}\right)$$

and where the weights $w_k$, locations $\mu_k$, scales $\sigma_k$ and skewness parameter $\alpha$ are fixed for the study. As will be seen in the forthcoming simulation results, the imbalance of scales within each modal region ensures that this is a very challenging problem for the PT algorithm. Since this target fits into the setting of Corollary 1 of Tawn and Roberts (2018), a geometric inverse temperature schedule is approximately optimal for the HAT target in this setting. Indeed, Tawn and Roberts (2018) suggest that the geometric ratio should be tuned so that the acceptance rate for swap moves between consecutive temperatures is approximately 0.234. In this case, eight geometrically spaced tempering levels were used to obtain effective mixing; the geometric ratio used was found to be approximately optimal, giving an average acceptance rate of 0.22 for the swaps between consecutive levels of the HAT algorithm. Using this temperature schedule along with appropriately tuned RWM proposals for the within temperature moves, 10 runs of both the PT and HAT algorithms were performed. In each individual run, each temperature marginal was updated with RWM proposals followed by a temperature swap move proposal, and this was repeated for 600,000 sweeps, resulting in a sample output of 600,001 cold state chain values prior to any burn-in removal. Herein, for this example, let $x_i$ denote the first component of the five-dimensional chain at the coldest temperature level after the $i$th iteration. As expected, the scale imbalance between the modes resulted in the PT algorithm performing poorly and with significant bias in the sample output. In contrast, the HAT approach was highly successful in converging relatively rapidly to the target distribution, exhibiting far more frequent inter-modal jumps at the cold state. Figure 5 shows one representative example of a run of the PT and HAT algorithms by plotting the first component of the five-dimensional marginal chain at the coldest target state. It illustrates the impressive inter-modal mixing of HAT across all 4 modal regions, as opposed to the very sticky mixing exhibited by the PT algorithm. Figure 6 shows the running approximation of $w_1$ (approximately the weight of the first mode) after the $k$th iteration of the cold state chains, after removing a burn-in period of 10,000 initial iterations, for the ten runs of the PT and HAT algorithms respectively. The approximation after iteration $k$ is given by

$$\hat{W}^{k}_{1} := \frac{1}{k-10000}\sum_{i=10001}^{k} \mathbb{1}\!\left(-30 < x_i < c\right)$$

where $c$ is a cut-point separating the first modal region from the remaining modes.
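As a quick check of the skew-normal density formula above, the following sketch (our own; the parameter values are arbitrary) verifies it against the equivalent parameterisation built into scipy.stats:

```python
# Check that f(z | mu, sigma, alpha) = (2/sigma) * phi((z-mu)/sigma)
#                                      * Phi(alpha*(z-mu)/sigma)
# matches scipy's skewnorm parameterisation (a=alpha, loc=mu, scale=sigma).
import numpy as np
from scipy.stats import norm, skewnorm

def skew_normal_pdf(z, mu, sigma, alpha):
    t = (z - mu) / sigma
    return 2.0 / sigma * norm.pdf(t) * norm.cdf(alpha * t)

z = np.linspace(-5.0, 5.0, 11)
mine = skew_normal_pdf(z, mu=1.0, sigma=2.0, alpha=3.0)
ref = skewnorm.pdf(z, a=3.0, loc=1.0, scale=2.0)
assert np.allclose(mine, ref)   # the two parameterisations agree
```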
This figure indicates that the PT algorithm fails to provide a stable estimate for $w_1$, with the running weight approximations far from stable at the end of the runs; in stark contrast, the HAT algorithm exhibits very stable performance in this case. The final estimates of the weights are given in Table 1. Table 2 presents the results of using the 10 runs of each algorithm in a batch-means approach to estimate the Monte Carlo variance of the estimator of $w_1$. The results in Table 2 show that the Monte Carlo error is approximately a factor of 10 higher for the PT algorithm than for the HAT approach. However, it is also important to analyse this inferential gain jointly with the increase in computational cost. Table 2 also shows that the average run time for the 10 HAT runs was 451 seconds, which is a little more than 2 times slower than the average run time of the PT algorithm (217 seconds) in this example. The major extra expense is due to the cost of computing the WSGM mode assignment function in (14) at both the cold and current temperature of interest at each evaluation of the HAT target. Nonetheless, this is very promising, since for a little more than twice the computational cost the inferential accuracy appears to be ten times better in this instance.

## 5 Diffusion Limit and Computational Complexity

In this section, we provide some theoretical analysis for our algorithm. We shall prove in Theorems 5.1 and 5.2 below that as the dimension goes to infinity, a simplified and speeded-up version of our weight-preserving simulated tempering algorithm (i.e., the HAST Algorithm from Definition 5, equivalent to the ST Algorithm 1 with the adjusted target from Definition 4) converges to a certain specific diffusion limit. This limit will allow us to draw some conclusions about the computational complexity of our algorithm.

### 5.1 Assumptions

We assume for simplicity (though see below) that our target density is a mixture of the form (11) with just two modes, of weights $p$ and $1-p$ respectively, with each mixture component a special i.i.d. product as in (4). We further assume that a weight-preserving transformation (perhaps inspired by Theorem 3.1(b) or (c)) has already been done, so that

$$\pi_{\beta}(x) \;\propto\; p\,\frac{[g_1(x)]^{\beta}}{\int [g_1(x)]^{\beta}\,dx} + (1-p)\,\frac{[g_2(x)]^{\beta}}{\int [g_2(x)]^{\beta}\,dx} \;\equiv\; p\,\bar{g}^{\beta}_1(x) + (1-p)\,\bar{g}^{\beta}_2(x)$$

for each $\beta$, where $\bar{g}^{\beta}_i$ denotes the normalised tempered component. We consider a simplified version of the weight-preserving process, in which the chain always mixes immediately within each mode, but the chain can only jump between modes when at the hottest temperature $\beta_{min}$, at which point it jumps to one of the two modes with probabilities $p$ and $1-p$ respectively. Let $I$ denote the indicator of which mode the process is in, taking value $1$ or $2$. We shall sometimes concentrate on the Exponential Power Family special case in which each of the two mixture component factors is of the form $e^{-\gamma|x|^{r}}$ for some $\gamma, r > 0$. This includes the Gaussian case, for which $r = 2$ and $\gamma = 1/2$. (Note that the HAT target in (15) requires the existence of second derivatives about the mode points, corresponding to $r \geq 2$.) As in Atchadé et al. (2011) and Roberts and Rosenthal (2014), following Predescu et al. (2004) and Kone and Kofke (2005), we assume that the inverse temperatures are given by $1 = \beta_0 > \beta_1 > \beta_2 > \cdots$, with

$$\beta_i = \beta_{i-1} - \ell(\beta_{i-1})/d^{1/2} \qquad (21)$$

for some fixed function $\ell$. In many cases, including the Exponential Power Family case, the optimal choice of $\ell$ is $\ell(\beta) = \ell_0\,\beta$ for a constant $\ell_0 > 0$. We let $\beta^{(d)}_n$ be the inverse temperature at time $n$ for the $d$-dimensional process. To study weak convergence, we let $\beta^{(d)}_{N(dt)}$ be a continuous-time version of the process, speeded up by a factor of $d$, where $\{N(t)\}$ is an independent standard rate 1 Poisson process.
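The spacing rule (21) is easy to simulate; the sketch below (our illustration, with arbitrary $\ell_0$ and $\beta_{min}$, and requiring $\ell_0 < \sqrt{d}$) shows that with $\ell(\beta) = \ell_0\beta$ the ladder is geometric and its length grows like $\sqrt{d}$:

```python
# Build the inverse-temperature ladder beta_i = beta_{i-1} - l(beta_{i-1})/sqrt(d)
# with the Exponential-Power-Family choice l(beta) = l0 * beta.
import numpy as np

def build_ladder(d, l0=2.0, beta_min=1e-4):
    betas = [1.0]
    while betas[-1] > beta_min:        # assumes l0 < sqrt(d)
        b = betas[-1]
        betas.append(b - l0 * b / np.sqrt(d))
    return np.array(betas)

# With l(beta) = l0*beta the ladder is geometric with ratio (1 - l0/sqrt(d)),
# so its length grows like sqrt(d) * log(1/beta_min) / l0.
print(len(build_ladder(d=100)), len(build_ladder(d=10000)))
```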
To combine the two modes into one single process, we further augment this process by multiplying it by $-1$ when the algorithm's state is closer to the second mode, while leaving it positive (unchanged) when the state is closer to the first mode. Thus define

$$X^{(d)}_t = (3-2I)\,\beta^{(d)}_{N(dt)}. \qquad (22)$$

### 5.2 Main Results

Our first diffusion limit result (proved in the Appendix), following Roberts and Rosenthal (2014), states that when we are at an inverse temperature greater than $\beta_{min}$, the inverse temperature process behaves identically to the case where there is only one mode (i.e. $p = 1$).

###### Theorem 5.1

Assume the target is of the form (11), with two modes of weights $p$ and $1-p$, with inverse temperatures chosen as in (21). Then up until the first time the process hits $\beta_{min}$, as $d \to \infty$, $X^{(d)}$ converges weakly to a fixed diffusion process $X$ given by (22).

Theorem 5.1 describes what happens away from $\beta_{min}$. However, it says nothing about what happens at $\beta_{min}$. Moreover, its statespace is not connected, and we have not even properly defined the process at $\beta_{min}$. To resolve these issues we define

$$h(x) = \begin{cases} \displaystyle\int_{\beta_{min}}^{x} \frac{1}{\ell(u)}\,du, & \text{when } x > 0\\[2ex] \displaystyle-\int_{\beta_{min}}^{-x} \frac{1}{\ell(u)}\,du, & \text{when } x < 0\\[2ex] 0, & \text{when } x = 0 \end{cases}$$

and set $H_t = h(X_t)$, thus making the process continuous at 0.

###### Remark 1

The process $H$ leaves constant densities locally invariant, i.e. $\mathcal{G}^{*}c = 0$ for all constants $c$, where $\mathcal{G}^{*}$ is the adjoint of the infinitesimal generator of $H$, as will be shown in the Appendix. This suggests that the density of the invariant distribution of $H$ (if it exists) should be piecewise uniform, i.e. it should be constant for $x > 0$ and also constant for $x < 0$, though these two constants might not be equal.

To make further progress, we require a proportionality condition. Namely, we assume that the quantities $I(\beta)$ corresponding to the two mixture components are proportional to each other in the two modes. More precisely, we extend the definition of $I(\beta)$ to $I_1(\beta)$ for $x > 0$ (corresponding to the first mode), and $I_2(\beta)$ for $x < 0$ (corresponding to the second mode), and assume there is a fixed function $I_0(\beta)$ and positive constants $r_1$ and $r_2$ such that we have $I_1(\beta) = r_1\,I_0(\beta)$ (in the first mode), while $I_2(\beta) = r_2\,I_0(\beta)$ (in the second mode). For example, it follows from Section 2.4 of Atchadé et al. (2011) that in the Exponential Power Family case $I_1(\beta) = r_1\beta^{-2}$ in the first mode and $I_2(\beta) = r_2\beta^{-2}$ in the second mode, so that this proportionality condition holds in that case. Corresponding to this, we choose the inverse temperature spacing function as follows (following Atchadé et al. (2011) and Roberts and Rosenthal (2014)):

$$\ell(\beta) = I_0^{-1/2}(\beta)\,\ell_0 \qquad (23)$$

for some fixed constant $\ell_0 > 0$. To state our next result, we require the notion of skew Brownian motion, a generalisation of usual Brownian motion. Informally, this is a process that behaves just like a Brownian motion, except that the sign of each excursion from 0 is chosen using an independent Bernoulli random variable; for further details, constructions and discussion see e.g. Lejay (2006). We also require the function

$$z(h) = h\left[2\Phi\!\left(-\frac{\ell_0\sqrt{r(h)}}{2}\right)\right]^{-1/2},$$

where $r(h) = r_1$ for $h > 0$ and $r(h) = r_2$ for $h < 0$. We then have the following result (also proved in the Appendix).

###### Theorem 5.2

Under the set-up and assumptions of Theorem 5.1, assuming the above proportionality condition and the choice (23), then as $d \to \infty$, the process $h(X^{(d)})$ converges weakly in the Skorokhod topology to a limit process $H$. Furthermore, the limit process has the property that if

$$Z_t = z(h(X_t)),$$

then $Z$ is skew Brownian motion with reflection at

$$(3-2i)\left[2\Phi\!\left(-\frac{\ell_0\sqrt{r_i}}{2}\right)\right]^{-1/2}\int_{\beta_{min}}^{1}\frac{1}{\ell(u)}\,du, \qquad i = 1,2. \qquad (24)$$

###### Remark 2

It follows from the proof of Theorem 5.2 that the specific version of skew Brownian motion that arises in the limit is one with excursion weights proportional to

$$a = p\left[2\Phi\!\left(-\frac{\ell_0\sqrt{r_1}}{2}\right)\right]^{1/2} \quad\text{and}\quad b = (1-p)\left[2\Phi\!\left(-\frac{\ell_0\sqrt{r_2}}{2}\right)\right]^{1/2}.$$
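For intuition, under the Exponential Power Family choice $\ell(\beta) = \ell_0\beta$ (a worked special case added here, not stated in this form in the text), the transform $h$ has a closed form:

$$h(x) = \int_{\beta_{min}}^{x}\frac{du}{\ell_0 u} = \frac{1}{\ell_0}\log\frac{x}{\beta_{min}}, \qquad x > 0,$$

so that (24) places the reflection point for mode $i$ at $(3-2i)\left[2\Phi\!\left(-\tfrac{\ell_0\sqrt{r_i}}{2}\right)\right]^{-1/2}\ell_0^{-1}\log(1/\beta_{min})$.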
That means that the stationary density for $Z$ on the positive and negative values is proportional to $a$ and $b$ respectively. This might seem surprising, since the limiting weights of the modes should be equal to $p$ and $1-p$, not proportional to $a$ and $b$ (unless $r_1 = r_2$). The explanation is that the lengths of the positive and negative parts of the domain are given by $\left[2\Phi\!\left(-\frac{\ell_0\sqrt{r_1}}{2}\right)\right]^{-1/2}\int_{\beta_{min}}^{1}\frac{du}{\ell(u)}$ and $\left[2\Phi\!\left(-\frac{\ell_0\sqrt{r_2}}{2}\right)\right]^{-1/2}\int_{\beta_{min}}^{1}\frac{du}{\ell(u)}$ respectively. Hence, the total stationary mass of the positive and negative parts – and hence also the limiting mode weights – are still $p$ and $1-p$ as they should be.

### 5.3 Complexity Order

Theorems 5.1 and 5.2 have implications for the computational complexity of our algorithm. In Theorem 5.1, the limiting diffusion process is a fixed process, not depending on dimension except through the value of $\beta_{min}$. It follows that if $\beta_{min}$ is kept fixed, then $X_t$ reaches 0 (and hence mixes modes) in time $O(1)$. Since $X^{(d)}$ is derived (via (22)) from the process $\beta^{(d)}$ speeded up by a factor of $d$, it thus follows that for fixed $\beta_{min}$, $\beta^{(d)}$ reaches $\beta_{min}$ (and hence mixes modes) in time $O(d)$. So, if $\beta_{min}$ is kept fixed, then the mixing time of the weight-preserving tempering algorithm is $O(d)$, which is very fast. However, this does not take into account the dependence on $\beta_{min}$, which might also change as a function of $d$. Theorem 5.2 allows for control of the dependence of mixing time on the value of $\beta_{min}$. The limiting skew Brownian motion process is a fixed process, not depending on dimension nor on $\beta_{min}$, with range given by the reflection points in (24). It follows that $Z$ reaches 0 (and hence mixes modes) in time of order the square of the total length of the interval, i.e. of order

$$\left(\sum_{i=1}^{2}\left[2\Phi\!\left(-\frac{\ell_0\sqrt{r_i}}{2}\right)\right]^{-1/2}\int_{\beta_{min}}^{1}\frac{1}{\ell(u)}\,du\right)^{2}.$$

In the Exponential Power Family case, this is easily computed to be $O(\log^2(1/\beta_{min}))$. This raises the question of how small $\beta_{min}$ needs to be, as a function of dimension $d$. If the proposal scaling is optimal for RWM within each mode at the cold temperature, then the proposal scaling is $O(d^{-1/2})$. Then, at an inverse temperature $\beta$, the proposal scaling is $O(\beta^{-1/2}d^{-1/2})$. Hence, at an inverse temperature $\beta$, the probability of jumping from one mode to the other (a distance $O(d^{1/2})$ away) is roughly of order $e^{-O(\beta d^{2})}$. This is exponentially small unless $\beta = O(d^{-2})$. This indicates that, for our algorithm to perform well, we need to choose $\beta_{min} = O(d^{-2})$. With this choice, the mixing time order becomes

$$\left(\sum_{i=1}^{2}\left[2\Phi\!\left(-\frac{\ell_0\sqrt{r_i}}{2}\right)\right]^{-1/2}\int_{1/d^{2}}^{1}\frac{1}{\ell(u)}\,du\right)^{2}.$$

In the Exponential Power Family case, this corresponds to $O(\log^2 d)$ in the diffusion time scale, i.e. $O(d\log^2 d)$ iterations. That is, for the inverse temperature process to hit $\beta_{min}$ and hence mix modes takes $O(d\log^2 d)$ iterations. This is a fairly modest complexity order, and compares very favourably to the exponentially large convergence times which arise for traditional simulated tempering, as discussed in Subsection 2.2.

### 5.4 More than Two Modes

Finally, we note that for simplicity, the above analysis was all done for just two modes. However, a similar analysis works more generally. Indeed, suppose now that we have $K$ modes, of general weights $p_1,\ldots,p_K$ with $\sum_j p_j = 1$. Then when $\beta$ gets to $\beta_{min}$, the process chooses one of the $K$ modes with probability $p_j$. This corresponds to $X$ being replaced by a Brownian motion not on $\mathbb{R}$, but rather on a "star" shape with $K$ different line segments all meeting at the origin (corresponding, in the original scaling, to $\beta = \beta_{min}$), where each time the Brownian motion hits the origin it chooses one of the line segments with probability $p_j$ each. This process is called Walsh's Brownian motion, see e.g. Barlow et al. (1989). (The case $K = 2$ with $p_1 \neq p_2$ corresponds to skew Brownian motion as above.) For this generalised process, a theorem similar to Theorem 5.1 can then be stated and proved by similar methods, leading to the same complexity bound of $O(d\log^2 d)$ iterations in the multimodal case as well.
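As a worked check of the Exponential Power Family claim (our own computation, under the choice $\ell(u) = \ell_0 u$):

$$\int_{1/d^{2}}^{1}\frac{du}{\ell_0 u} = \frac{2\log d}{\ell_0},$$

so the squared interval length is $\left(\sum_{i=1}^{2}\left[2\Phi\!\left(-\tfrac{\ell_0\sqrt{r_i}}{2}\right)\right]^{-1/2}\right)^{2}\frac{4\log^2 d}{\ell_0^{2}} = O(\log^2 d)$ in diffusion time, and undoing the speed-up by the factor $d$ gives the stated $O(d\log^2 d)$ iteration count.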
## 6 Conclusion and Further Work

This article has introduced the HAT algorithm to mitigate the lack of regional weight preservation in standard power-based tempered targets. Our simulation studies show promising mixing results, and our theorems indicate that the mixing times can become polynomial rather than exponential functions of the dimension $d$, indeed of order $O(d\log^2 d)$ under appropriate assumptions. Various questions remain to make our HAT approach more practically applicable. The "modal assignment function" needs to be specified in an appropriate way, and more exploration into the robustness of the current assignment mechanism is needed to understand its performance on heavier and lighter tailed distributions. The suggested HAT target assumes knowledge of the mode points, which typically one will not have to begin with, and one would rely on effective optimisation methods to seek these out either during or prior to the run of the algorithm. Indeed, this has been partially explored by the authors in Tawn et al. (2018). The performance of HAT is heavily reliant on the mixing at the hottest temperature level; the use of RWM here can be problematic for HAT, where the mode heights of the disperse modes can be far lower than those of the narrower modes. As such, more advanced sampling schemes such as discretised tempered Langevin could give accelerated mixing at the hot state, the effects of which would be transferred to an improvement in the mixing at the coldest state. In the theoretical analysis of Section 5, the spacing between consecutive inverse-temperature levels was taken to be $O(d^{-1/2})$ to induce a non-trivial diffusion limit. However, this result required strong assumptions. Accompanying work in Tawn and Roberts (2018) suggests that for the HAT algorithm under more general conditions, the consecutive optimal spacing should still be $O(d^{-1/2})$, with an associated optimal acceptance rate close to 0.234.

## 7 Appendix

In this Appendix, we prove the theorems stated in the paper.

### 7.1 Proof of Theorem 3.1

Herein, assume the mixture distribution setting of (11) and (12) where the mixture components consist of multivariate Gaussian distributions, i.e. $h_j(x) = w_j\,\phi(x;\mu_j,\Sigma_j)$. We prove each of the three parts of Theorem 3.1 in turn.

###### Proof (Proof of Theorem 3.1(a))

Recall that $\pi(x) = \sum_{j=1}^{K} h_j(x)$ where $h_j(x) = w_j\,\phi(x;\mu_j,\Sigma_j)$ such that $\sum_j w_j = 1$. Hence, taking $f_j(x,\beta) = h_j(x)^{\beta}$ gives

$$f_j(x,\beta) = w_j^{\beta}\,\phi(x;\mu_j,\Sigma_j)^{\beta} = w_j^{\beta}\left[\frac{\left((2\pi)^{d}|\Sigma_j|\right)^{\frac{1-\beta}{2}}}{\beta^{d/2}}\right]\phi\!\left(x;\mu_j,\frac{\Sigma_j}{\beta}\right) \propto w_j^{\beta}\,|\Sigma_j|^{\frac{1-\beta}{2}}\,\phi\!\left(x;\mu_j,\frac{\Sigma_j}{\beta}\right).$$

###### Proof (Proof of Theorem 3.1(b))

Recall the result of Theorem 3.1(a). To adjust for the weight discrepancy from the cold state target, a multiplicative adjustment factor $\alpha_j(x,\beta)$ is used, such that

$$f_j(x,\beta) = h_j(x)^{\beta}\,\alpha_j(x,\beta).$$

An identical argument to Theorem 3.1(a) shows the form this adjustment factor must take. In a Gaussian setting, up to a proportionality constant,

$$w_j \propto h_j(x)\left[(2\pi)^{\frac{d}{2}}\,|\Sigma_j|^{\frac{1}{2}}\exp\left\{\frac{1}{2}(x-\mu_j)^{T}\Sigma_j^{-1}(x-\mu_j)\right\}\right] \qquad (25)$$

and at any point $x$

$$\nabla \log h_j(x) = -\Sigma_j^{-1}(x-\mu_j) \qquad (26)$$

$$\nabla^2 \log h_j(x) = -\Sigma_j^{-1}.$$
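The Gaussian power identity used in the proof of Theorem 3.1(a) is easy to verify numerically; the following sketch (our own check, with arbitrary parameters) does so:

```python
# Numerical sanity check of:
#   phi(x; mu, Sigma)^beta
#     = ((2*pi)^d |Sigma|)^((1-beta)/2) / beta^(d/2) * phi(x; mu, Sigma/beta).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d, beta = 3, 0.4
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)          # a positive definite covariance
x = rng.normal(size=d)

lhs = multivariate_normal.pdf(x, mean=mu, cov=Sigma) ** beta
const = ((2 * np.pi) ** d * np.linalg.det(Sigma)) ** ((1 - beta) / 2) / beta ** (d / 2)
rhs = const * multivariate_normal.pdf(x, mean=mu, cov=Sigma / beta)
assert np.isclose(lhs, rhs)
```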
2021-05-07 20:08:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8520061373710632, "perplexity": 913.4594015915583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988802.93/warc/CC-MAIN-20210507181103-20210507211103-00269.warc.gz"}
https://www.notatee.com/courses/video/work-kinetic-energy-and-work-energy-theorem-problems-2
• Notes

Adult cheetahs, the fastest of the great cats, have a mass of about 70 kg and have been clocked running at up to 72 mph (32 m/s). (a) How many joules of kinetic energy does such a swift cheetah have? (b) By what factor would its kinetic energy change if its speed were doubled?

$m=70\ \mathrm{kg}, \quad V=32\ \mathrm{m/s}$

$K=\frac{1}{2} m v^{2}$

(a) $K=\frac{1}{2}(70)(32)^{2}=3.6 \times 10^{4}\ \mathrm{J}$

(b) $K=\frac{1}{2}(70)(64)^{2}=14.4 \times 10^{4}\ \mathrm{J}$

$\frac{3.6 \times 10^{4}}{14.4 \times 10^{4}}=\frac{1}{4} \longrightarrow$ the kinetic energy increases by a factor of 4.

Use the work-energy theorem to solve each of these problems. You can use Newton's laws to check your answers. Neglect air resistance in all cases. (a) A branch falls from the top of a 95.0-m-tall redwood tree, starting from rest. How fast is it moving when it reaches the ground? (b) A volcano ejects a boulder directly upward 525 m into the air. How fast was the boulder moving just as it left the volcano? (c) A skier moving at 5.00 m/s encounters a long, rough horizontal patch of snow having coefficient of kinetic friction 0.220 with her skis. How far does she travel on this patch before stopping? (d) Suppose the rough patch in part (c) was only 2.90 m long. How fast would the skier be moving when she reached the end of the patch? (e) At the base of a frictionless icy hill that rises at 25.0° above the horizontal, a toboggan has a speed of 12.0 m/s toward the hill. How high vertically above the base will it go before stopping?

(a) $K_{1}=0,\ V_{1}=0$; $W_{\text{total}}=W_{\text{grav}}=mgs$; $K_{2}=W \Rightarrow mgs=\frac{1}{2} m v_{2}^{2} \longrightarrow V_{2}=\sqrt{2gs}=\sqrt{2(9.8)(95)}=43.2\ \mathrm{m/s}$

(b) $V_{2}=0 \rightarrow K_{2}=0$; $W_{\text{total}}=W_{\text{grav}}=-mgs$; $K_{2}-K_{1}=-mgs \rightarrow \frac{1}{2} m v_{1}^{2}=mgs \rightarrow V_{1}=\sqrt{2gs}=\sqrt{2(9.8)(525)}=101\ \mathrm{m/s}$

(c) $K_{2}=0$; $W_{\text{total}}=W_{f}=-\mu_{k} mgs$; $-K_{1}=-\frac{1}{2} m v_{1}^{2}=-\mu_{k} mgs \rightarrow s=\frac{v_{1}^{2}}{2 \mu_{k} g}=\frac{(5)^{2}}{2(0.22)(9.8)}=5.8\ \mathrm{m}$

(d) $W_{\text{total}}=W_{f}=-\mu_{k} mgs$; $K_{2}=W_{\text{total}}+K_{1} \Rightarrow \frac{1}{2} m v_{2}^{2}=-\mu_{k} mgs+\frac{1}{2} m v_{1}^{2}$; $V_{2}=\sqrt{v_{1}^{2}-2\mu_{k} g s}=\sqrt{5^{2}-2(0.22)(9.8)(2.9)}=3.53\ \mathrm{m/s}$

(e) $K_{2}=0 \rightarrow V_{2}=0$; $W_{\text{grav}}=-mgy_{2}$; $-K_{1}=-mgy_{2} \rightarrow \frac{1}{2} m v_{1}^{2}=mgy_{2} \rightarrow y_{2}=\frac{v_{1}^{2}}{2g}=\frac{(12)^{2}}{2(9.8)}=7.35\ \mathrm{m}$

A soccer ball with mass 0.420 kg is initially moving with speed 2.00 m/s.
A soccer player kicks the ball, exerting a constant force of magnitude 40.0 N in the same direction as the ball's motion. Over what distance must the player's foot be in contact with the ball to increase the ball's speed to 6.00 m/s?

$m=0.42\ \mathrm{kg}, \quad v_{1}=2\ \mathrm{m/s}, \quad F=40\ \mathrm{N}, \quad v_{2}=6\ \mathrm{m/s}, \quad s=?$

$W_{\text{total}}=K_{2}-K_{1}$

$K_{1}=\frac{1}{2} m v_{1}^{2}=\frac{1}{2}(0.42)(2)^{2}=0.84\ \mathrm{J}$

$K_{2}=\frac{1}{2} m v_{2}^{2}=\frac{1}{2}(0.42)(6)^{2}=7.56\ \mathrm{J}$

$W_{\text{total}}=K_{2}-K_{1}=7.56-0.84=6.72\ \mathrm{J}$

$W_{F}=Fs \cos \theta$

$s=\frac{W_{F}}{F \cos \theta}=\frac{6.72}{40 \cos 0}=0.168\ \mathrm{m}$
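A quick numerical check of the solutions above (added here, not part of the original notes):

```python
# Verify the work-energy results numerically.
import math

g = 9.8
print(math.sqrt(2 * g * 95))                 # (a) 43.2 m/s
print(math.sqrt(2 * g * 525))                # (b) 101 m/s
print(5**2 / (2 * 0.22 * g))                 # (c) 5.8 m
print(math.sqrt(5**2 - 2 * 0.22 * g * 2.9))  # (d) 3.53 m/s
print(12**2 / (2 * g))                       # (e) 7.35 m
print((7.56 - 0.84) / 40)                    # soccer ball: 0.168 m
```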
2021-04-21 16:22:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6111690402030945, "perplexity": 6911.327332550777}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039546945.85/warc/CC-MAIN-20210421161025-20210421191025-00198.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/15719
# ROTATIONAL CONSTANTS OF MOLECULES WITH DOUBLE-MINIMUM POTENTIALS. Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/15719 Files Size Format View 1969-H-03.jpg 311.5Kb JPEG image Title: ROTATIONAL CONSTANTS OF MOLECULES WITH DOUBLE-MINIMUM POTENTIALS. Creators: Watson, J. K. G. Issue Date: 1969 Publisher: Ohio State University Abstract: The rotational constants of molecules in successive levels of a symmetric double-minimum potential show a characteristic staggering that is fairly well represented by a term in $\langle Q^2 \rangle$, where $Q$ is the coordinate for the double-minimum motion. A precise representation can be obtained by adding a term in $\langle Q^4 \rangle$, but the observed coefficients of $\langle Q^4 \rangle$ are much larger than expected from the expansions of the moments of inertia.$^{1}$ The present work uses a term in $\langle P^2 \rangle$ rather than in $\langle Q^4 \rangle$, $P$ being the momentum conjugate to $Q$. This term in $\langle P^2 \rangle$ originates from the second-order effects of Coriolis interaction with the other vibrations. For a quartic potential with a negative quadratic barrier the two alternative expressions provide identical fits to the observed constants, because the virial theorem gives a linear relation between $\langle Q^2 \rangle$, $\langle Q^4 \rangle$ and $\langle P^2 \rangle$. Preliminary calculations show that the observed coefficients of $\langle P^2 \rangle$ are of the correct order of magnitude for this Coriolis mechanism. Description: $^{1}$ S. I. Chan, J. Zinn and W. D. Gwinn, J. Chem. Phys. 34, 1319 (1961). Author Institution: Department of Chemistry, The University URI: http://hdl.handle.net/1811/15719 Other Identifiers: 1969-H-3
2017-07-20 16:53:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7409926652908325, "perplexity": 3624.5717037797504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423269.5/warc/CC-MAIN-20170720161644-20170720181644-00586.warc.gz"}
https://mathematica.stackexchange.com/questions/193514/displayform-problem-with-pi-in-fractionbox
# DisplayForm problem with $\pi$ in FractionBox

The following line produces an output with the text "Pi" displayed instead of the symbol (Greek letter) $$\pi$$. Code line:

FractionBox[π, 2] // DisplayForm

All help with an alternative or correction of this would be much appreciated, thank you. Responding to a question below, here is an image of how FractionBox enters an equation display. The first line produces an unintended rearrangement of the fraction in the sum. The second line is the intended result. There is no calculation intended, only display. Here are the code lines

Sum[(-1)^n * StieltjesGamma[n]/n! * Subscript[B, n + 1][-I c, 2 x - 1], {n, 0, Infinity}] // Defer // HoldForm // DisplayForm // TraditionalForm

and

Sum[(-1)^n * FractionBox[StieltjesGamma[n], n!] * Subscript[B, n + 1][-I c, 2 x - 1], {n, 0, Infinity}] // Defer // HoldForm // DisplayForm // TraditionalForm

• Try FractionBox[π, 2] // MakeExpression // ReleaseHold? However, what are you trying to achieve with these expressions? Just evaluating π/2 would achieve the same result. – MarcoB Mar 18 '19 at 21:23
• People here generally like users to post code as Mathematica code instead of just images or TeX, so they can copy-paste it. It makes it convenient for them and more likely you will get someone to help you. You may find this meta Q&A helpful – Michael E2 Mar 18 '19 at 22:54
• @MarcoB - thank you, your solution works and is better than mine. To be clearer about the question, I should have added that the FractionBox is part of a long equation displayed using TraditionalForm and DisplayForm. I posted only the part of the equation that causes the problem. This is a display issue, so no evaluation is intended. Thank you again. I still don't understand why MMA does this, by the way. Why the string "Pi" instead of the actual Greek letter? I am wondering if this is a bug or there is some reason for this. – Ohcolowisc Mar 20 '19 at 16:42
• @MichaelE2 - Thank you for the comment. I am aware of the preference for code over images, but I couldn't find a quick and easy way to show the incorrect output as well (i.e. to show how MMA returns the text "Pi" instead of the pi symbol). – Ohcolowisc Mar 20 '19 at 16:44
• @Ohcolowisc That seems to depend specifically on your use of DisplayForm. If you use TraditionalForm or StandardForm instead, it gets formatted as $\pi$. – MarcoB Mar 20 '19 at 16:53

Why are you inputting a FractionBox? There should not be any need to do so. However, if you must do this, you should generate the boxes using MakeBoxes instead:

MakeBoxes[π/2]

FractionBox["π", "2"]

Update For the follow-on question presented as an answer, I would use Divide instead of FractionBox:

Sum[(-1)^n Divide[StieltjesGamma[n], n!] Subscript[B, n + 1][-I c, 2 x - 1], {n, 0, Infinity}]
2020-10-30 08:04:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.445319265127182, "perplexity": 1806.5152058853537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107909746.93/warc/CC-MAIN-20201030063319-20201030093319-00232.warc.gz"}
http://math.stackexchange.com/questions/89340/solving-a-linear-system-of-odes-given-by-a-matrix-product
# Solving a linear system of ODEs given by a matrix product Is there a simple way to solve the system of differential equations $$\mathbf{P}'(t) = \mathbf{G} \mathbf{P}(t),$$ where $\mathbf{P}(t) = (p_{ij}(t))_{i,j \in \{1,2,\ldots, n\}}$ is an $n \times n$ matrix of functions and $\mathbf{G}$ is an $n \times n$ matrix of (real) constants? Of course, some extra hypotheses might be required of $\mathbf{G}$ (e.g. distinct eigenvalues) in order for there to be a simple (or even general) solution. Just let me know if that's the case. I've only seen systems of the form $$\mathbf{x}' = \mathbf{G} \mathbf{x},$$ where now $\mathbf{x}$ is an $n\times 1$ vector instead of an $n \times n$ matrix. - Your case is not more difficult than the case $x'=Gx$ since the different columns do not interfere. Just apply the theory for $x'=Gx$ column-wise.
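To illustrate the accepted approach (our own sketch; the thread itself gives no code), the solution can be written as $\mathbf{P}(t) = e^{\mathbf{G}t}\,\mathbf{P}(0)$, which is exactly the column-wise application of the theory for $\mathbf{x}' = \mathbf{G}\mathbf{x}$:

```python
# Solve P'(t) = G P(t) via the matrix exponential; each column of P(t)
# is the solution of x'(t) = G x(t) with x(0) = P(0)[:, j].
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
G = rng.normal(size=(n, n))
P0 = np.eye(n)            # initial condition P(0)
t = 0.7

Pt = expm(G * t) @ P0     # P(t) = e^{Gt} P(0)

# Check one column against the single-vector solution of x' = Gx.
col0 = expm(G * t) @ P0[:, 0]
assert np.allclose(Pt[:, 0], col0)
```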
2016-02-12 23:25:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8996606469154358, "perplexity": 102.51423683032469}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165378.58/warc/CC-MAIN-20160205193925-00202-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.hindawi.com/archive/2009/904836/
Cardiovascular Psychiatry and Neurology, Volume 2009 (2009), Article ID 904836, 7 pages. http://dx.doi.org/10.1155/2009/904836 Hypothesis ## Matrix Metalloproteinase-9 (MMP9)—A Mediating Enzyme in Cardiovascular Disease, Cancer, and Neuropsychiatric Disorders Department of Adult Psychiatry, Poznan University of Medical Sciences, ul. Szpitalna 27/33, 60-572 Poznan, Poland Received 1 May 2009; Accepted 30 June 2009 Copyright © 2009 Janusz K. Rybakowski. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. #### Abstract Matrix metalloproteinase-9 (MMP9) has been implicated in numerous somatic illnesses, including cardiovascular disorders and cancer. Recently, MMP9 has been shown to be increasingly important in several aspects of central nervous system activity. Furthermore, a pathogenic role for this enzyme has been suggested in such neuropsychiatric disorders as schizophrenia, bipolar illness, and multiple sclerosis. In this paper, the results of biochemical and molecular-genetic studies on MMP9 that have been performed in these pathological conditions will be summarized. Furthermore, I hypothesize that the MMP9 gene, as shown by studies of the functional −1562 C/T polymorphism, may mediate the comorbidity of neuropsychiatric illnesses (schizophrenia, bipolar mood disorder, multiple sclerosis) with cardiovascular disease and cancer. #### 1. Introduction The matrix metalloproteinases (MMPs) are a large family of zinc-dependent, extracellularly acting endopeptidases, the substrates of which are proteins of the extracellular matrix and adhesion proteins [1]. Matrix metalloproteinase-9 (MMP9), also known as gelatinase B, 92 kDa gelatinase, or 92 kDa type IV collagenase (which represents the largest and most complex member of this family), has recently been a subject of growing interest in human pathology. In recent years, MMPs have attracted interest as mediators of both pathology and regeneration in the central nervous system [2]. Concerning MMP9, a role for this enzyme in the plasticity of the central nervous system has been investigated in experimental studies [3]. Blocking of MMP9, either by pharmacological or genetic means, selectively inhibits hippocampal late-phase long-term potentiation as well as fear memory in mice [4]. Furthermore, TIMP-1, the endogenous inhibitor of MMP9, abolished MMP9-dependent long-term potentiation in the prefrontal cortex of freely moving rats [5]. In addition, a pathogenic role has been proposed for MMP9 in an animal model of aberrant plasticity [6] and in temporal lobe epilepsy [7]. The human MMP9 gene was mapped to the chromosome region 20q11.2–q13.1 [8] and several polymorphisms of this gene were identified. The −1562 C/T polymorphism (rs3918242) was shown to exert a functional effect on gene transcription. This single nucleotide polymorphism (SNP) at −1562 bp is due to a C-to-T substitution (−1562 C→T), which results in the loss of binding of a nuclear protein to this region and an increase in transcriptional activity in macrophages. In these cells, the C/C genotype leads to low promoter activity, whereas the C/T and T/T genotypes result in high transcriptional activity [9]. Molecular-genetic studies of the functional −1562 C/T polymorphism of the MMP9 gene have brought about interesting results in cardiovascular, cancer, and neuropsychiatric conditions.
Research in cardiovascular illness and cancer showed that carriers of the T allele have an increased severity of coronary atherosclerosis [10], increased cardiac mortality [11], and increased risk or more severe progression of some types of cancer [12, 13]. Recent studies also demonstrated an association of this polymorphism with a predisposition to schizophrenia [14], bipolar illness [15], and multiple sclerosis [16, 17]. Based on the results of these studies, I hypothesize that the MMP9 gene, which has a functional −1562 C/T polymorphism, may mediate the epidemiological comorbidity of neuropsychiatric illnesses (schizophrenia, bipolar mood disorder, multiple sclerosis) with cardiovascular diseases and cancer. #### 2. MMP-9 in Cardiovascular Disease In a large prospective study of middle-aged men (465 cases, 1076 controls), Welsh et al. [18] showed an association of serum MMP9 with the incidence of coronary heart disease in the general population. More detailed studies have recently been performed in middle-aged populations by Swedish researchers, who found an association of circulating MMP9 levels not only with cardiovascular [19] but also with psychosocial risk factors for coronary artery disease (e.g., depression) [20]. Related to these observations, an inverse relationship between markers of nitric oxide formation and MMP9 was found in healthy subjects [21]. Higher levels of MMP9 in patients with coronary artery disease have recently been reported [22]. Higher MMP9 levels were also a correlate of coronary artery ectasia [23] and a predictor of increased mortality in patients with coronary artery disease [24]. The association of MMP9 status with the progression of coronary heart disease has also been confirmed by molecular-genetic studies that used the functional −1562 C/T polymorphism of the MMP9 gene. It was observed that carriers of the T allele had increased cardiac mortality [11], and more recently, an association of the T allele with myocardial infarction in patients with coronary heart disease was found [25]. In cardiac patients, a relationship was also demonstrated between the T allele of the −1562 C/T polymorphism and markedly increased levels of MMP9 [24], compatible with the higher transcriptional activity of this allele in experimental studies [9]. A similar relationship between plasma MMP9 and the T allele of the −1562 C/T polymorphism was also shown in HIV patients under antiretroviral therapy [26]. Recently, Konstantino et al. [27] pointed out the prominent role of MMP9 in plaque formation, destabilization, and rupture, and postulated that MMP9 levels may serve as a biomarker for acute coronary syndrome. An association of MMP9 levels with atherosclerotic changes has previously been found in patients with atherosclerosis of the femoral artery [28] and with chronic periodontitis [29]. Higher levels of MMP9 were also observed in hypertrophic cardiomyopathy, where they correlated with a worse prognosis [30]. In molecular-genetic studies that genotyped the functional −1562 C/T polymorphism of the MMP9 gene, it was observed that carriers of the T allele had an increased severity of coronary atherosclerosis [10]. The available data also show a possible association of MMP9 with the pathogenesis and treatment of hypertensive disease. Higher MMP9 levels were found preclinically in spontaneously hypertensive hyperlipidemic rats [31] and clinically in women with gestational hypertension [32].
In the group of 595 patients evaluated in the Framingham Offspring Study, higher MMP9 concentrations were related to a higher risk of blood pressure progression [33]. Recently, it was demonstrated that captopril inhibits plasma MMP9 activity, much as it inhibits angiotensin-converting enzyme [34]. #### 3. MMP-9 in Cancer Sakata et al. [35] showed an overexpression of MMP9 in an epithelial tumor of the ovary and its contribution to lymph node metastases of ovarian carcinoma cells. Similarly, in patients with breast cancer, increased serum and tissue expression of MMP9 was associated with a worse prognosis of the tumor course [36]. Higher levels of MMP9 have been reported in endometrial polyps, especially in those occurring in premenopausal women [37]. Recently, higher MMP9 levels were also observed in pulmonary lymphangio-leiomyomatosis, which is characterized by excessive cell proliferation [38]. Molecular-genetic studies of the functional −1562 C/T polymorphism of the MMP9 gene have revealed a frequent association of the T allele with an increased risk of some kinds of cancer and with more severe progression of the tumor and/or greater dynamics of metastases. Sugimoto et al. [12] observed that the T allele was associated with endometrial carcinoma risk in a Japanese population. Other studies showed an association of the T allele with the risk for oral squamous cell carcinoma in younger male areca users [39] and with genetic risk for esophageal squamous cell carcinoma [40]. Kader et al. [41] demonstrated that several MMP9 haplotypes (including the −1562 C/T polymorphism) were associated with the risk of invasive cancer of the urinary bladder. Concerning gastric cancer, it has been found that the T allele of the −1562 C/T polymorphism of the MMP9 gene is associated with an invasive phenotype of this tumor [42] and with a higher frequency of lymph node metastasis [43]. In breast cancer, Przybylowska et al. [44] reported that the T allele of this polymorphism was associated with malignancy and growth of tumors, and Hughes et al. [13] showed an association with the severity of lymph node metastases. A higher risk of lymph node metastases in colorectal cancer was also found to be connected with the T allele [45]. #### 4. MMP-9 in Multiple Sclerosis An upregulation of MMPs with a decrease of tissue inhibitors (TIMPs) in biological fluids of multiple sclerosis (MS) patients and in an animal model of the disease has been found in numerous studies. Further, the potential of drugs affecting MMPs for treatment of MS has been discussed [46]. A significant elevation of MMP9 related to various courses of MS has been found [47]. Also recently, Shinto et al. [48] demonstrated that omega-3 fatty acid supplementation decreased MMP9 levels in relapsing-remitting MS. In recent years, molecular-genetic studies have focused on the functional −1562 C/T polymorphism of the MMP9 gene in MS. In the first study, performed in Serbia, it was found that T allele carriers had a lower susceptibility to and severity of MS, and the T allele was found significantly less frequently in women with MS [16]. A second study, performed in the Czech Republic, confirmed these findings, showing a significant decrease of the T allele in patients with MS compared to healthy subjects, especially females [17]. Recently, epidemiological studies investigating the comorbidity of MS with vascular disease and cancer were published. The first study was performed on 9949 hospitalizations of MS patients in New York City from 1988 through 2002.
It was found that MS patients were less likely to be hospitalized for ischemic heart disease and myocardial infarction. However, they were more likely to be hospitalized for ischemic stroke than matched controls (general non-MS population) [49]. A second study performed in Sweden estimated cancer risk among 20 276 patients with MS and 203 951 individuals without MS using Swedish general population register data. In patients with MS, there was a decreased overall cancer risk; however, an increased risk for brain tumors was observed [50]. #### 5. MMP-9 in Schizophrenia Studies on MMP9 levels in schizophrenia have not yet been performed. To investigate the MMP9 gene in this illness, we genotyped the functional −1562 C/T polymorphism in a group of 442 schizophrenic patients and in 558 healthy control subjects. Since MMP9 influences hippocampal and prefrontal cortical activity [4, 5], we hypothesized that a polymorphism of the MMP9 gene is associated with the pathogenesis of schizophrenia, a condition in which prefrontal cortex impairment is one of the most common pathological findings [51]. A significant preponderance of the C/C genotype and C allele, and a diminished frequency of the T allele of the −1562 polymorphism, was found in schizophrenia subjects compared to healthy controls [14]. As shown previously, in both cardiovascular disease and cancer, T allele carriers present more severe pathological manifestations of these conditions [10–13]. Although the risk of cardiovascular disease in schizophrenia is reported to be similar to that of the general population [52], some studies show a more benign course of cardiovascular illness in such patients [53]. Also, compatible with our findings, a lower predisposition to cancer in schizophrenic patients has long been postulated [54], and the results of some recent analyses may partially favor such a concept [55, 56]. #### 6. MMP-9 in Bipolar Mood Disorder Similar to schizophrenia, there are no studies measuring MMP9 blood levels in patients with bipolar disorder. To investigate the status of the MMP9 gene in this illness, we genotyped the functional −1562 C/T polymorphism in a group of 416 patients with bipolar mood disorder, including 75 patients with bipolar disorder type I, and in 558 healthy control subjects. This approach has been substantiated by previous reports on the significance of MMP9 for hippocampal and prefrontal cortical activity and for aspects of brain function such as neuroplasticity and epileptogenesis [4–7]. Patients with bipolar mood disorder had a significant preponderance of the T allele versus the C allele of the −1562 C/T polymorphism of the MMP9 gene compared to healthy control subjects. The higher frequency of the T allele was especially evident in the subgroup of patients with bipolar disorder type I compared to healthy subjects [15]. Compatible with the finding that T allele carriers present more severe pathological manifestations of cardiovascular disease and cancer [10–13] are findings from a recent epidemiological study demonstrating an enhanced cancer risk among patients with bipolar disorder [57]. A Swedish epidemiological study also showed more than a 2.5-fold increased mortality rate from cardiovascular disease in bipolar patients [58]. #### 7.
MMP-9 and Neuropsychological Tests In view of the experimental studies showing an involvement of MMP9 in prefrontal cortex functions in rats [5], we also performed neuropsychological tests measuring this activity in patients with schizophrenia and bipolar illness, and in control subjects, in relation to the −1562 C/T polymorphism of the MMP9 gene. 173 patients with schizophrenia (89 male, 84 female), mean age 29 years, 177 patients with bipolar illness (68 male and 109 female), mean age 43 years, and 181 healthy subjects (86 male and 95 female), mean age 35 years, were included. For cognitive assessment, a computer version of the Wisconsin Card Sorting Test (WCST) was employed, with five domains reflecting working memory and executive functions, depending primarily on prefrontal cortex activity. Additionally, the Trail Making Test, A and B, and the Stroop test, A and B, were used. In schizophrenia patients, no differences were found regarding neuropsychological performance among patients with various genotypes of the polymorphism (data not published). Among male patients with bipolar illness, the results for C/C homozygotes were better on all domains of the WCST compared with the remaining genotypes; no differences were found in female patients. Bipolar males and females did not differ in mean age or mean duration of illness. In males, the mean age and mean duration of illness of C/C homozygotes were similar to those of patients with the remaining genotypes [59]. In the only previous study measuring the impact of the MMP9 gene on cognitive functions, Vassos et al. [60] found no association between hippocampus-dependent episodic memory and the functional repeat polymorphism (CA)n of the MMP9 gene in healthy subjects. Also, in the control subjects studied by us, comparison of cognitive test results across genotypes did not reveal significant differences, either in the whole group or in male and female subjects separately. The only difference was in the Stroop test, part A, in male subjects, where the results for C/C homozygotes were better than those for the other genotypes combined. This difference in performance related to genotypes was similar to that obtained in male bipolar patients on the WCST domains. Healthy male and female subjects did not differ in mean age [61]. These results suggest that in humans, neuropsychological functions and MMP9 enzyme activity may not have a direct correlation. Thus, while increased activity of the MMP9 system was associated with higher levels of prefrontal function in experimental animal models [4, 5], and altered MMP9 activity has been linked to neuropsychiatric illnesses such as schizophrenia or multiple sclerosis [16, 17], the results obtained in males with bipolar illness on the WCST and in healthy males on the Stroop test may suggest that, under certain conditions, a correlation of higher levels of neuropsychological function with the C allele (connected with lower transcriptional activity of the MMP9 gene) may exist.
Figure 1: Epidemiological relationships between cardiovascular disorder, cancer, schizophrenia, multiple sclerosis, and bipolar mood disorder and the functional −1562 polymorphism of the MMP9 gene. Hence, the T allele of the −1562 polymorphism of the MMP9 gene is related to a higher transcriptional activity of the gene and, in cardiovascular illness and cancer, to higher MMP9 levels in biological fluids and tissues. In cardiovascular illness, carriage of the T allele and/or higher MMP9 levels are related to an increased progression and mortality of coronary heart disease (CHD) [25], increased atherosclerosis [10], and increased progression of hypertension [33]. Interestingly, in neuropsychiatric disorders with a lower frequency of the T allele, some epidemiological studies suggest a more benign course of cardiovascular disease, for example, in schizophrenia [53], and fewer cardiovascular hospitalizations in MS [49]. On the other hand, the phenomenon of higher risk for cardiovascular illness and higher mortality in patients with mood disorders (which have a higher frequency of T allele carriers) has long been observed [58]. The proposed mediating factors include impairment in endothelial function, which was demonstrated both in bipolar and unipolar depression [62], and, as hypothesized here, possibly the MMP9 system. In oncology, carriage of the T allele of the −1562 C/T MMP9 gene polymorphism is related to an increased risk for some kinds of cancer [12], more severe progression of tumor growth [42], and higher dynamics of metastases [45]. In neuropsychiatric disorders, some epidemiological studies suggest a lower overall incidence of cancer in schizophrenia [56] and in MS [50] (both illnesses with a lower frequency of T allele carriers), and increased cancer morbidity in bipolar mood disorder [57]. Interestingly, an association between bipolar mood disorder and cancer has also been found with respect to the levels of another metalloprotease, ADAM12 (a disintegrin and metalloprotease) [63, 64]. Nevertheless, it should be emphasized that in the central nervous system there is more complex regulation of MMPs. As Agrawal et al. [65] pointed out, "the good guys may go bad" under some conditions. There are several limitations to this hypothesis. The majority of the cited molecular-genetic research was performed with the −1562 C/T functional polymorphism of MMP9, but the other polymorphisms have not been sufficiently studied. Literature data on the human blood levels of MMPs used to develop this hypothesis were not evaluated for possible methodological issues [66]. Also, it should be acknowledged that there is a complex interplay of the MMP9 gene with the other genes of the MMP family, and with a host of other genes and environmental factors. However, it is conceivable that the MMP9 gene is a mediating factor among cardiovascular disorders, cancer, schizophrenia, bipolar mood disorder, and multiple sclerosis. This may contribute to a better explanation of the comorbidity between some somatic and neuropsychiatric illnesses. #### References 1. M. D. Sternlicht and Z. Werb, “How matrix metalloproteinases regulate cell behavior,” Annual Review of Cell and Developmental Biology, vol. 17, pp. 463–516, 2001. 2. V. W. Yong, “Metalloproteinases: mediators of pathology and regeneration in the CNS,” Nature Reviews Neuroscience, vol. 6, no. 12, pp. 931–944, 2005. 3. L. Kaczmarek, J. Lapinska-Dzwonek, and S.
Szymczak, “Matrix metalloproteinases in the adult brain physiology: a link between c-Fos, AP-1 and remodeling of neuronal connections?” EMBO Journal, vol. 21, no. 24, pp. 6643–6648, 2002. 4. V. Nagy, O. Bozdagi, M. Matynia et al., “Matrix metalloproteinase-9 is required for hippocampal late-phase long-term potentiation and memory,” Journal of Neuroscience, vol. 26, no. 7, pp. 1923–1934, 2006. 5. P. Okulski, T. M. Jay, J. Jaworski et al., “TIMP-1 abolishes MMP-9-dependent long-lasting long-term potentiation in the prefrontal cortex,” Biological Psychiatry, vol. 62, no. 4, pp. 359–362, 2007. 6. A. Szklarczyk, J. Lapinska, M. Rylski, R. D. G. McKay, and L. Kaczmarek, “Matrix metalloproteinase-9 undergoes expression and activation during dendritic remodeling in adult hippocampus,” Journal of Neuroscience, vol. 22, no. 3, pp. 920–930, 2002. 7. G. M. Wilczynski, F. A. Konopacki, E. Wilczek et al., “Important role of matrix metalloproteinase 9 in epileptogenesis,” Journal of Cell Biology, vol. 180, no. 5, pp. 1021–1035, 2008. 8. P. L. St Jean, X. C. Zhang, B. K. Hart et al., “Characterization of a dinucleotide repeat in the 92 kDa type IV collagenase gene (CLG4B), localization of CLG4B to chromosome 20 and the role of CLG4B in aortic aneurysmal disease,” Annals of Human Genetics, vol. 59, no. 1, pp. 17–24, 1995. 9. B. Zhang, A. Henney, P. Eriksson, A. Hamsten, H. Watkins, and S. Ye, “Genetic variation at the matrix metalloproteinase-9 locus on chromosome 20q12.2-13.1,” Human Genetics, vol. 105, no. 5, pp. 418–423, 1999. 10. B. Zhang, S. Ye, S. M. Herrmann et al., “Functional polymorphism in the regulatory region of gelatinase B gene in relation to severity of coronary atherosclerosis,” Circulation, vol. 99, no. 14, pp. 1788–1794, 1999. 11. F. Mizon-Gerard, P. de Groote, N. Lamblin et al., “Prognostic impact of matrix metalloproteinase gene polymorphisms in patients with heart failure according to the aetiology of left ventricular systolic dysfunction,” European Heart Journal, vol. 25, no. 8, pp. 688–693, 2004. 12. M. Sugimoto, S. Yoshida, S. Kennedy, M. Deguchi, N. Ohara, and T. Maruo, “Matrix metalloproteinase-1 and -9 promoter polymorphisms and endometrial carcinoma risk in a Japanese population,” Journal of the Society for Gynecologic Investigation, vol. 13, no. 7, pp. 523–529, 2006. 13. S. Hughes, O. Agbaje, R. L. Bowen et al., “Matrix metalloproteinase single-nucleotide polymorphisms and haplotypes predict breast cancer progression,” Clinical Cancer Research, vol. 13, no. 22, pp. 6673–6680, 2007. 14. J. K. Rybakowski, M. Skibinska, P. Kapelski, L. Kaczmarek, and J. Hauser, “Functional polymorphism of matrix metalloproteinase-9 (MMP-9) gene in schizophrenia,” Schizophrenia Research, vol. 109, pp. 90–93, 2009. 15. J. K. Rybakowski, M. Skibinska, A. Leszczynska-Rodziewicz, L. Kaczmarek, and J. Hauser, “Matrix metalloproteinase-9 (MMP-9) gene and bipolar mood disorder,” NeuroMolecular Medicine, vol. 11, no. 2, pp. 128–132, 2009. 16. M. Zivkovic, T. Djuric, E. Dincic, R. Raicevic, D. Alavantic, and A. Stankovic, “Matrix metalloproteinase-9 -1562 C/T gene polymorphism in Serbian patients with multiple sclerosis,” Journal of Neuroimmunology, vol. 189, no. 1-2, pp. 147–150, 2007. 17. Y. Benesova, A. Vasku, P. Stourac et al., “Matrix metalloproteinase-9 and matrix metalloproteinase-2 gene polymorphisms in multiple sclerosis,” Journal of Neuroimmunology, vol. 205, no. 1-2, pp. 105–109, 2008. 18. P. Welsh, P. H. Whincup, O.
Papacosta et al., “Serum matrix metalloproteinase-9 and coronary heart disease: a prospective study in middle-aged men,” QJM, vol. 101, no. 10, pp. 785–791, 2008. 19. P. Garvin, L. Nilsson, J. Carstensen, L. Jonasson, and M. Kristenson, “Circulating matrix metalloproteinase-9 is associated with cardiovascular risk factors in a middle-aged normal population,” PLoS ONE, vol. 3, article e1774, 2008. 20. P. Garvin, L. Nilsson, J. Carstensen, L. Jonasson, and M. Kristenson, “Plasma levels of matrix metalloproteinase-9 are independently associated with psychosocial factors in a middle-aged normal population,” Psychosomatic Medicine, vol. 71, pp. 292–300, 2009. 21. C. Demacq, I. F. Metzger, R. F. Gerlach, and J. E. Tanus-Santos, “Inverse relationship between markers of nitric oxide formation and plasma matrix metalloproteinase-9 levels in healthy volunteers,” Clinica Chimica Acta, vol. 394, no. 1-2, pp. 72–76, 2008. 22. M. L. Muzzio, V. Miksztowicz, F. Brites et al., “Metalloproteases 2 and 9, Lp-PLA(2) and lipoprotein profile in coronary patients,” Archives of Medical Research, vol. 40, no. 1, pp. 48–53, 2009. 23. A. Dogan, N. Tuzun, Y. Turker, S. Akcay, D. Kaya, and M. Ozaydin, “Matrix metalloproteinases and inflammatory markers in coronary artery ectasia: their relationship to severity of coronary artery ectasia,” Coronary Artery Disease, vol. 19, pp. 559–563, 2008. 24. S. Blankenberg, H. J. Rupprecht, O. Poirier et al., “Plasma concentrations and genetic variation of matrix metalloproteinase 9 and prognosis of patients with cardiovascular disease,” Circulation, vol. 107, no. 12, pp. 1579–1585, 2003. 25. B. D. Horne, N. J. Camp, J. F. Carlquist et al., “Multiple-polymorphism associations of 7 matrix metalloproteinase and tissue inhibitor metalloproteinase genes with myocardial infarction and angiographic coronary artery disease,” American Heart Journal, vol. 154, no. 4, pp. 751–758, 2007. 26. C. Demacq, V. B. Vasconcellos, A. M. Marcaccini, R. F. Gerlach, A. A. Machado, and J. E. Tanus-Santos, “A genetic polymorphism of matrix metalloproteinase 9 (MMP-9) affects the changes in circulating MMP-9 levels induced by highly active antiretroviral therapy in HIV patients,” Pharmacogenomics Journal, vol. 9, no. 2, pp. 265–273, 2009. 27. Y. Konstantino, T. T. Nguyen, R. Wolk, R. J. Aiello, S. G. Terra, and D. A. Fryburg, “Potential implications of matrix metalloproteinase-9 in assessment and treatment of coronary artery disease,” Biomarkers, vol. 14, no. 2, pp. 118–129, 2009. 28. F. J. Olson, C. Schmidt, A. Gummesson et al., “Circulating matrix metalloproteinase 9 levels in relation to sampling methods, femoral and carotid atherosclerosis,” Journal of Internal Medicine, vol. 263, no. 6, pp. 626–635, 2008. 29. P. O. Söder, J. H. Meurman, T. Jogerstrand, J. Nowak, and B. Söder, “Matrix metalloproteinase-9 and tissue inhibitor of matrix metalloproteinase-1 in blood as markers for early atherosclerosis in subjects with chronic periodontitis,” Journal of Periodontal Research, vol. 44, no. 4, pp. 452–458, 2009. 30. V. Roldan, F. Marin, J. R. Gimeno et al., “Matrix metalloproteinases and tissue remodeling in hypertrophic cardiomyopathy,” American Heart Journal, vol. 156, no. 1, pp. 85–91, 2008. 31. Y. Asano, S. Iwai, M. Okazaki et al., “Matrix metalloproteinase-9 in spontaneously hypertensive hyperlipidemic rats,” Pathophysiology, vol. 15, no. 3, pp. 157–166, 2008. 32. A. C. T. Palei, V. C. Sandrim, R. C. Cavalli, and J. E. 
Tanus-Santos, “Comparative assessment of matrix metalloproteinase (MMP)-2 and MMP-9, and their inhibitors, tissue inhibitors of metalloproteinase (TIMP)-1 and TIMP-2 in preeclampsia and gestational hypertension,” Clinical Biochemistry, vol. 41, no. 10-11, pp. 875–880, 2008. 33. R. Dhingra, M. J. Pencina, P. Schrader et al., “Relations of matrix remodeling biomarkers to blood pressure progression and incidence of hypertension in the community,” Circulation, vol. 119, no. 8, pp. 1101–1107, 2009. 34. D. Yamamoto, S. Takai, and M. Miyazaki, “Inhibitory profiles of captopril on matrix metalloproteinase-9 activity,” European Journal of Pharmacology, vol. 588, no. 2-3, pp. 277–279, 2008. 35. K. Sakata, K. Shigemasa, N. Nagai, and K. Ohama, “Expression of matrix metalloproteinases (MMP-2, MMP-9, MT1-MMP) and their inhibitors (TIMP-1, TIMP-2) in common epithelial tumors of the ovary,” International Journal of Oncology, vol. 17, no. 4, pp. 673–681, 2000. 36. Z. S. Wu, Q. Wu, J. H. Yang et al., “Prognostic significance of MMP-9 and TIMP-1 serum and tissue expression in breast cancer,” International Journal of Cancer, vol. 122, no. 9, pp. 2050–2056, 2008. 37. E. Erdemoglu, M. Guney, N. Karahan, and T. Mungan, “Expression of cyclooxygenase-2, matrix metalloproteinase-2 and matrix metalloproteinase-9 in premenopausal and postmenopausal endometrial polyps,” Maturitas, vol. 59, no. 3, pp. 268–274, 2008. 38. N. Odajima, T. Betsuyaku, Y. Nasuhara, H. Inoue, K. Seyama, and M. Nishimura, “Matrix metalloproteinases in blood from patients with LAM,” Respiratory Medicine, vol. 103, no. 1, pp. 124–129, 2009. 39. H. F. Tu, C. H. Wu, S. Y. Kao, C. J. Liu, T. Y. Liu, and M. T. Lui, “Functional -1562 C-to-T polymorphism in matrix metalloproteinase-9 (MMP-9) promoter is associated with the risk for oral squamous cell carcinoma in younger male areca users,” Journal of Oral Pathology and Medicine, vol. 36, no. 7, pp. 409–414, 2007. 40. J. Wu, L. Zhang, H. Luo, Z. Zhu, C. Zhang, and Y. Hou, “Association of matrix metalloproteinases-9 gene polymorphisms with genetic susceptibility to esophageal squamous cell carcinoma,” DNA and Cell Biology, vol. 27, no. 10, pp. 553–557, 2008. 41. A. K. Kader, L. Shao, C. P. Dinney et al., “Matrix metalloproteinase polymorphisms and bladder cancer risk,” Cancer Research, vol. 66, no. 24, pp. 11644–11648, 2006. 42. S. Matsumura, N. Oue, H. Nakayama et al., “A single nucleotide polymorphism in the MMP-9 promoter affects tumor progression and invasive phenotype of gastric cancer,” Journal of Cancer Research and Clinical Oncology, vol. 131, no. 1, pp. 19–25, 2005. 43. Y. Tang, J. Zhu, L. Chen, L. Chen, S. Zhang, and J. Lin, “Associations of matrix metalloproteinase-9 protein polymorphisms with lymph node metastasis but not invasion of gastric cancer,” Clinical Cancer Research, vol. 14, no. 9, pp. 2870–2877, 2008. 44. K. Przybylowska, A. Kluczna, M. Zadrozny et al., “Polymorphisms of the promoter regions of matrix metalloproteinases genes MMP-1 and MMP-9 in breast cancer,” Breast Cancer Research and Treatment, vol. 95, no. 1, pp. 65–72, 2006. 45. L. L. Xing, Z. N. Wang, L. Jiang et al., “Matrix metalloproteinase-9-1562C $>$ T polymorphism may increase the risk of lymphatic metastasis of colorectal cancer,” World Journal of Gastroenterology, vol. 13, no. 34, pp. 4626–4629, 2007. 46. V. W. Yong, R. K. Zabad, S. Agrawal, A. Goncalves DaSilva, and L. M. 
Metz, “Elevation of matrix metalloproteinases (MMPs) in multiple sclerosis and impact of immunomodulators,” Journal of the Neurological Sciences, vol. 259, no. 1-2, pp. 79–84, 2007. 47. Y. Benesova, A. Vasku, H. Novotna et al., “Matrix metalloproteinase-9 and matrix metalloproteinase-2 as biomarkers of various courses in multiple sclerosis,” Multiple Sclerosis, vol. 15, no. 3, pp. 316–322, 2009. 48. L. Shinto, G. Marracci, S. Baldauf-Wagner et al., “Omega-3 fatty acid supplementation decreases matrix metalloproteinase-9 production in relapsing-remitting multiple sclerosis,” Prostaglandins Leukotrienes and Essential Fatty Acids, vol. 80, no. 2-3, pp. 131–136, 2009. 49. N. B. Allen, J. H. Lichtman, H. W. Cohen, J. Fang, L. M. Brass, and M. H. Alderman, “Vascular disease among hospitalized multiple sclerosis patients,” Neuroepidemiology, vol. 30, no. 4, pp. 234–238, 2008. 50. S. Bahmanyar, S. M. Montgomery, J. Hillert, A. Ekbom, and T. Olsson, “Cancer risk among patients with multiple sclerosis and their parents,” Neurology, vol. 72, no. 13, pp. 1170–1177, 2009. 51. W. E. Bunney and B. G. Bunney, “Evidence for a compromised dorsolateral prefrontal cortical parallel circuit in schizophrenia,” Brain Research, vol. 31, pp. 138–146, 2000. 52. M. Davidson, “Risk of cardiovascular disease and sudden death in schizophrenia,” Journal of Clinical Psychiatry, vol. 63, supplement 9, pp. 5–11, 2002. 53. A. J. Tretiakov, “Arterial hypertension in schizophrenia as a model of benign transformation of somatic pathology,” Terapevticheskii Arkhiv, vol. 78, pp. 51–56, 2006 (Russian). 54. F. Odegaard, “Mortality in Norwegian mental hospitals from 1916 to 1933,” Acta Psychiatrica et Neurologica, vol. 11, pp. 323–356, 1936. 55. S. Leucht, T. Burkard, J. Henderson, M. Maj, and N. Sartorius, “Physical illness and schizophrenia: a review of the literature,” Acta Psychiatrica Scandinavica, vol. 116, no. 5, pp. 317–333, 2007. 56. V. S. Catts, S. V. Catts, B. I. O'Toole, and A. D. J. Frost, “Cancer incidence in patients with schizophrenia and their first-degree relatives—a meta-analysis,” Acta Psychiatrica Scandinavica, vol. 117, no. 5, pp. 323–336, 2008. 57. M. BarChana, I. Levav, I. Lipshitz et al., “Enhanced cancer risk among patients with bipolar disorder,” Journal of Affective Disorders, vol. 108, no. 1-2, pp. 43–48, 2008. 58. U. Osby, L. Brandt, N. Correia, A. Ekbom, and P. Sparen, “Excess mortality in bipolar and unipolar disorder in Sweden,” Archives of General Psychiatry, vol. 58, no. 9, pp. 844–850, 2001. 59. J. K. Rybakowski, M. Skibinska, A. Leszczynska-Rodziewicz, L. Kaczmarek, and J. Hauser, “Matrix metalloproteinase-9 gene modulates prefrontal cognition in bipolar men,” Psychiatric Genetics, vol. 19, no. 2, pp. 108–109, 2009. 60. E. Vassos, X. Ma, N. Fiotti et al., “The functional MMP-9 microsatellite marker is not associated with episodic memory in humans,” Psychiatric Genetics, vol. 18, p. 252, 2008. 61. J. K. Rybakowski, A. Borkowska, M. Skibinska, L. Kaczmarek, and J. Hauser, “The -1562 C/T polymorphism of the matrix metalloproteinase-9 gene is not associated with cognitive performance in healthy subjects,” Psychiatric Genetics. In Press. 62. J. K. Rybakowski, A. Wykretowicz, A. Heymann-Szlachcinska, and H. Wysocki, “Impairment of endothelial function in unipolar and bipolar depression,” Biological Psychiatry, vol. 60, no. 8, pp. 889–891, 2006. 63. C. Froöhlich, R. Albrechtsen, L. Dyrskjøt, L. Rudkjaer, T. F. Ørntoft, and U. M. 
Wewer, “Molecular profiling of ADAM12 in human bladder cancer,” Clinical Cancer Research, vol. 12, no. 24, pp. 7359–7368, 2006. 64. C. Nadri, Y. Bersudsky, R. H. Belmaker, and G. Agam, “Elevated urinary ADAM12 protein levels in lithium-treated bipolar patients,” Journal of Neural Transmission, vol. 114, no. 4, pp. 473–477, 2007. 65. S. M. Agrawal, L. Lau, and V. W. Yong, “MMPs in the central nervous system: where the good guys go bad,” Seminars in Cell and Developmental Biology, vol. 19, no. 1, pp. 42–51, 2008. 66. F. Mannello, “Serum or plasma samples?” Arteriosclerosis, Thrombosis, and Vascular Biology, vol. 28, no. 4, pp. 611–614, 2008.
2017-11-23 06:03:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6158971190452576, "perplexity": 11488.68414031237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806736.55/warc/CC-MAIN-20171123050243-20171123070243-00733.warc.gz"}
http://www.newtonproject.ox.ac.uk/view/texts/normalized/NATP00329
<59r> Sir I have sent you the sheet you want. The second book I made ready for you in Autumn, having wrote to you in Sommer that it should come out with the first & be ready against the time you might need it, & guessing by the rate of the presse in Sommer you might need it about November or December. But not hearing from you & being told (thô not truly) that upon new differences in the R. Society you had left your secretaries place: I desired my intimate friend Mr C. Montague to enquire of Mr Paget how things were & send me word. He writes that Dr Wallis has sent up some things about projectiles pretty like those of mine in the papers Mr Page{t} first shewed you, & that it was ordered I should be consulted whether I intend to print mine. I have inserted them into the beginning of the second book with divers others of that kind: which therefore if you desire to see you may command the book when you please though otherwise I should chose to let it ly by me till you are ready for it. I think I have the solution of your Prob{lem} about the Suns Parallax, but through other occasions shall scar{ce} have time to think further on these things & besides I want something of observation. For if my notion be right, the Sun draws the Moon in the Quadratures, so that there needs an equation of about 4 or $4\frac{1}{2}$ minutes to be subducted from her motion in the first Quarter & added in the last. I hope you received a letter with two Corollaries I sent you in Autumn. I have eleven sheets already, that is to M. When you have seven more printed off, I desire you would send them. I thank you for putting forward the press again, being very sensible of the great trouble I give you amidst so much business of your own {&} the R. Societys. In this as well as in divers other things you much oblige & Humble Servant Is. Newton. Trin. Coll. Cambridge. Feb. 1{3}{8}th. 1686 <59v> For his Honoured Friend Mr Edmund Halley, to be left with Mr Hunt at Gresham College in London. < insertion from the right margin of f 59v > Mr. Newton's Letter Feb. 13. 1686. < text from f 59v resumes >
2020-05-28 22:51:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33103352785110474, "perplexity": 2795.888970619029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347400101.39/warc/CC-MAIN-20200528201823-20200528231823-00532.warc.gz"}
https://codereview.stackexchange.com/questions/110392/registration-validation-storage-functions
# Registration, Validation & Storage functions

The last time I wrote any serious code was in the '90s, so a lot has changed and I am finding I am having to relearn pretty much everything. I have learned a lot from previous posts and this is the result. So, as the title states, I need a code review to help me identify where I am making mistakes by today's standards. To thwart any PDO comments, I have decided against it for now.

One note: the `ValidateForms()` function is just one big catch-all at the moment; later I will break it down and put the parts and pieces into their corresponding functions. And yes, there are errors in that function, as I am fixing them while I write the corresponding page/function. But the registration page and the corresponding functions do successfully complete their tasks.

It's a lot, I know, but I am looking for constructive criticism for overall security, better/more validation, ways to compact the code, and just overall improvements I should make.

The User Registration Page:

```php
<?php
include('pg_top.php');

if(isset($_POST['submitbtn'])){
    $errors = ValidateForms();
    if(empty($errors)){
        $errors = UserRegistration();
    }
}
if($errors[0] == "You must activate your account before logging in. Please check your email."){
    $success = $errors[0];
    echo $success;
}else {
?>
<h1>User Registration</h1>
<div class="allForms">
<?php
if(!empty($errors)){
    foreach($errors as $error){
        echo '<div class="red">- '.$error.'</div>';
    }
}
?>
<form action="<?php echo esc_self(); ?>" method="POST" id="registerForm">
    <div><input name="role" type="radio" value="auctioneer" required checked /> I am a auctioneer!</div>
    <p>
        <br />
        <input type="text" name="userName" value="<?php echo $_POST['userName']; ?>" placeholder="Username" required />
    </p>
    <p>
        <label>Email</label>
        <br />
        <input class="email" type="email" name="email" value="<?php echo $_POST['email']; ?>" placeholder="Email Address" required />
    </p>
    <p>
        <br />
        <input type="password" name="pswd" value="" pattern="(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{8,}" placeholder="Password" title="Must contain at least one number and one uppercase and lowercase letter, and at least 8 or more characters" required />
    </p>
    <p>
        <br />
    </p>
    <input type="submit" name="submitbtn" value="Register" />
</form>
</div>
<?php
}
include('pg_bot.php');
?>
```
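(`esc_self()` is used in the form action above but is never defined in the post. A plausible definition — purely an assumption, not the author's code — would escape the current script path so the self-posting form is not itself an XSS vector:)

```php
<?php
// Hypothetical sketch of the undefined esc_self() helper used in the form action.
// Encodes PHP_SELF so a crafted URL cannot inject markup into the action attribute.
function esc_self() {
    return htmlspecialchars($_SERVER['PHP_SELF'], ENT_QUOTES, 'UTF-8');
}
```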
User input validation:

```php
function ValidateForms() {
    array_walk_recursive($_POST, 'trim');
    array_walk_recursive($_GET, 'trim');
    $errormsg = array();

    foreach($_POST as $key => $value){
        if($key == 'auction_desc'){ continue; }
        if($key == 'imageUploads'){ continue; }
        if($value != strip_tags($value)) {
            $errormsg[] = 'HTML Tags are not allowed';
        }
    }

    if(in_array($_POST['userName'] || $_POST['pswd'] || $_POST['activationCode'] || $_POST['licenseNum'] || $_POST['street'], $_POST)){
        $toCheck = array($_POST['userName'], $_POST['pswd'], $_POST['activationCode'], $_POST['licenseNum'], $_POST['street']);
        foreach($toCheck as $key => $var) {
            if(empty($var)){ continue; }
            if(!ctype_alnum($var)){
                $errormsg[] = $var.' is not alphnumeric';
            }
        }
    }

    if(in_array($_POST['firstName'] || $_POST['ampm'] || $_POST['role'], $_POST)){
        $toCheck = array($_POST['firstName'], $_POST['ampm'], $_POST['role']);
        foreach($toCheck as $key => $var) {
            if(empty($var)){ continue; }
            if(!ctype_alpha($var)){
                $errormsg[] = $var.' must only contain letters.';
            }
        }
    }

    if(in_array($_POST['zip'] || $_POST['month'] || $_POST['day'] || $_POST['year'] || $_POST['hour'] || $_POST['min'] || $_POST['sort'], $_POST)){
        $toCheck = array($_POST['zip'], $_POST['month'], $_POST['day'], $_POST['year'], $_POST['hour'], $_POST['min'], $_POST['sort']);
        foreach($toCheck as $key => $var) {
            if(empty($var)){ continue; }
            if(!ctype_digit($var)){
                $errormsg[] = 'You have an error in either zip code or auction date/time.';
            }
        }
    }

    if(isset($_POST['city'])){
        if (!preg_match("/^[A-Za-z\ \']+$/", $_POST['city'])) {
            $errormsg[] = 'City name may only contain letters, single quote';
        }
    }
    if(isset($_POST['state'])){
        if (!preg_match("/^[A-Za-z\ \']+$/", $_POST['state'])) {
            $errormsg[] = 'City name may only contain letters, single quote';
        }
    }
    if(isset($_POST['lastName'])){
        if (!preg_match("/^[A-Za-z\\- \']+$/", $_POST['lastName'])) {
            $errormsg[] = 'Last name may only contain letters,a hyphen or a single quote';
        }
    }
    if(isset($_POST['busName'])){
        if (!preg_match("/^[0-9A-Za-z\\- \&\.\\']+$/", $_POST['busName'])) {
            $errormsg[] = "Business name may only contain letters, numbers and . & ' -";
        }
    }
    if(isset($_POST['title'])){
        if (!preg_match("/^[0-9A-Za-z\\- \&\.\\']+$/", $_POST['title'])) {
            $errormsg[] = "Business name may only contain letters, numbers and . & ' -";
        }
    }
    if(isset($_POST['bio'])){
        if (!preg_match("/^[0-9A-Za-z\\- \&\.\,$\?\%\\']+$/", $_POST['bio'])) {
            $errormsg[] = "Bio name may only contain letters, numbers and ?$%&()-,.";
        }
    }
    if(isset($_POST['question'])){
        if (!preg_match("/^[0-9A-Za-z\\- \&\.\,$\?\%\\']+$/", $_POST['question'])) {
            $errormsg[] = "Question name may only contain letters, numbers and ?$%&()-,.";
        }
    }
    if(isset($_POST['category_1'])){
        if (!preg_match("/^[A-Za-z\/\&\.\']+$/", $_POST['category_1'])) {
            $errormsg[] = "Bio name may only contain letters, numbers and ?$%&()-,.";
        }
    }
    if(isset($_POST['category_2'])){
        if (!preg_match("/^[A-Za-z\/\&\.\']+$/", $_POST['category_2'])) {
            $errormsg[] = "Bio name may only contain letters, numbers and ?$%&()-,.";
        }
    }
    if(isset($_POST['category_3'])){
        if (!preg_match("/^[A-Za-z\/\&\.\']+$/", $_POST['category_3'])) {
            $errormsg[] = "Bio name may only contain letters, numbers and ?$%&()-,.";
        }
    }
    if(isset($_POST['phone'])){
        if(!preg_match($toMatch, $_POST['phone'])){
            $errormsg[] = 'Phone number must be <b>(###)###-####</b> format';
        }
    }
    if(isset($_POST['fax'])){
        if(!preg_match($toMatch, $_POST['fax'])){
            $errormsg[] = 'Fax number must be <b>(###)###-####</b> format';
        }
    }
    if (isset($_POST['email']) && !filter_var($_POST['email'], FILTER_VALIDATE_EMAIL)) {
        $errormsg[] = "The email address entered is invalid.";
    }
    // does pswd contain 1 upper, 1 lower and 1 number
    if(isset($_POST['pswd']) && !preg_match('/(?=.{8,})(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).*/', $_POST['pswd'])){
        $errormsg[] = 'Password must contain one uppercase and lowercase letter and one number.';
    }
    // Is state length longer than 2 chars
    if(isset($_POST['state']) && strlen($_POST['state']) != 2){
        $errormsg[] = 'State must be abbreviated to two characters';
    }
    // is zipcode longer than 5 chars
    if(isset($_POST['zip']) && strlen($_POST['zip']) != 5){
        $errormsg[] = 'Zipcode must be 5 digits.';
    }
    // month length
    if(isset($_POST['month']) && strlen($_POST['month']) != 2){
        $errormsg[] = 'Something is wrong with the month field.';
    }
    // day length
    if(isset($_POST['day'])){
        if(strlen($_POST['day']) > 2 || strlen($_POST['day']) < 1){
            $errormsg[] = 'Something is wrong with the day field.';
        }
    }
    // year length
    if(isset($_POST['year']) && strlen($_POST['year']) != 4){
        $errormsg[] = 'Something is wrong with the year field.';
    }
    // hour length
    if(isset($_POST['hour'])){
        if(strlen($_POST['hour']) > 2 || strlen($_POST['hour']) < 1){
            $errormsg[] = 'Something is wrong with the hour field.';
        }
    }
    if(isset($_POST['min']) && strlen($_POST['min']) != 2){
        $errormsg[] = 'Something is wrong with the minute field.';
    }
    if(isset($_POST['ampm']) && strlen($_POST['ampm']) != 2){
        $errormsg[] = 'Something is wrong with the am/pm field.';
    }
    if (isset($_POST['wsaddr']) && filter_var($_POST['wsaddr'], FILTER_VALIDATE_URL) === false) {
        $errormsg[] = 'The websiite address you entered is invalid.';
    }
    if(isset($_POST['bio']) && strlen($_POST['bio']) > 255){
        $errormsg[] = 'Bio can not be longer than 255 characters';
    }
    // is title longer than 75 chars.
    if(isset($_POST['title'])){
        if(strlen($_POST['title']) > 75){
            $errormsg[] = 'Title can not be longer than 75 characters';
        }
    }
    if(isset($_POST['auction_desc'])){
        if(strlen($_POST['auction_desc']) > 5000){
            $errormsg[] = 'Auction description is limited to 3000 characters';
        }
    }
    if(isset($_POST['sort'])){
        if(strlen($_POST['sort']) > 1){
            $errormesg[] = 'Something has gone wrong with the sort feature, please try again.';
        }
    }
    if(isset($_POST['role'])){
        if(strlen($_POST['role']) > 10 || strlen($_POST['role']) < 5){
            $errormsg[] = 'Something has gone wrong please try again.';
        }
    }

    return $errormsg;
}
```

and finally comparisons & storage:

```php
function UserRegistration(){
    $errormsg = array();
    if(!isset($_POST['role'])){
        $errormsg[] = 'You must chose if you are buyer or an auctioneer.';
    }
    if(!isset($_POST['email'])){
        $errormsg[] = 'You must enter a valid email address.';
    }
    if(!isset($_POST['pswd'])){
        $errormsg[] = 'You must enter a password.';
    }
    if(!isset($_POST['retypepswd'])){
        $errormsg[] = 'You must enter a password.';
    }
    if($_POST['pswd'] !== $_POST['retypepswd']){
        $errormsg[] = 'Your passwords do not match.';
    }
    if(!empty($errormsg)){
        return $errormsg;
    }else {
        $token = md5(uniqid(mt_rand(), true));
        $hash = password_hash($_POST['pswd'], PASSWORD_DEFAULT);
        $a = 0;
        require('includes/db_connect.php');

        $sql = "SELECT username FROM members WHERE username=?";
        if(!($stmt = $conn->prepare($sql))){
            $errormsg[] = "Prepare failed: (" . $mysqli->errno . ") " . $mysqli->error;
        }
        if(!$stmt->bind_param("s", $_POST['userName'])){
            $errormsg[] = "Binding parameters failed: (" . $stmt->errno . ") " . $stmt->error;
        }
        if(!$stmt->execute()) {
            $errormsg[] = "Execute failed: (" . $stmt->errno . ") " . $stmt->error;
        }
        $stmt->bind_result($username);
        $stmt->fetch();
        $stmt->close();

        if($_POST['userName'] == $username){
            $errormsg[] = "This username taken, please choose another.";
        }else {
            $sql = "INSERT INTO members (username, email, password, active, code, role, setup_complete) VALUES (?, ?, ?, ?, ?, ?, ?)";
            if(!($stmt = $conn->prepare($sql))){
                $errormsg[] = "Prepare failed: (" . $mysqli->errno . ") " . $mysqli->error;
            }
            if(!$stmt->bind_param("sssissi", $_POST['userName'], $_POST['email'], $hash, $a, $token, $_POST['role'], $a)){
                $errormsg[] = "Binding parameters failed: (" . $stmt->errno . ") " . $stmt->error;
            }
            if(!$stmt->execute()) {
                $errormsg[] = "Execute failed: (" . $stmt->errno . ") " . $stmt->error;
            }
            if(empty($errormsg)){
                $rows = $stmt->affected_rows;
            }
            $stmt->close;
            $conn->close();

            if ($rows == 1) {
                SendActivationEmail($to, $token);
            }else {
                $errormsg[] = 'There was a database error Please contact support';
            }
            if(@mail($email, $subject, $message, $headers)){
                $errormsg[] = "You must activate your account before logging in. Please check your email.";
            }else {
                $errormsg[] = 'There was an error sending your activation email.';
            }
        }
        return $errormsg;
    }
}
```
• Please don't invalidate answers by changing your code. – Ethan Bierlein Nov 11 '15 at 3:22
• Correcting it doesn't invalidate the answer; the answers are still correct. It makes it so other answers or comments don't repeat what others have correctly already covered. – user3154948 Nov 11 '15 at 3:56
• Other people can see answers. Stuff won't be repeated. Again, please, don't invalidate answers. – Ethan Bierlein Nov 11 '15 at 3:57

## Security

### XSS

```php
<input type="text" name="userName" value="<?php echo $_POST['userName']; ?>" placeholder="Username" required />
<input class="email" type="email" name="email" value="<?php echo $_POST['email']; ?>" placeholder="Email Address" required />
```

and

```php
$toCheck = array($_POST['firstName'], $_POST['ampm'], $_POST['role']);
foreach($toCheck as $key => $var) {
    if(empty($var)){ continue; }
    if(!ctype_alpha($var)){
        $errormsg[] = $var.' must only contain letters.';
    }
}

foreach($errors as $error){
    echo '<div class="red">- '.$error.'</div>';
}
```

These are open to reflected XSS (you can try it by inputting `"> xss<script>alert(1)</script>` as username). I know that you have some validation, but validation is not the preferred method of handling XSS (and in this case it's not really working). You should always use `htmlspecialchars($string, ENT_QUOTES, 'UTF-8');` right at the moment when you are echoing variables/inserting them into HTML.

As to your validation: you check if there are tags in the input, and if there are, you display an error. But you still echo the payload! Also, even if you would not echo it when tags are present, an attacker could break out of existing tags via e.g. `" autofocus onfocus="alert(1)`. All the other checks in `ValidateForms` are nice to have but should not be relied upon for primary security.
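To make that advice concrete, here is a minimal sketch of safe form re-population (the `e()` helper name is an illustration, not part of the original post):

```php
<?php
// Encode at the moment of output, not at input time.
function e($string) {
    return htmlspecialchars($string, ENT_QUOTES, 'UTF-8');
}

// The username field, re-populated safely: check the value exists, then encode it.
$userName = isset($_POST['userName']) ? $_POST['userName'] : '';
echo '<input type="text" name="userName" value="' . e($userName) . '" placeholder="Username" required />';
```

The `isset` check here also addresses the undefined-index warning mentioned in the last bullet of this answer.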
### Misc

Your code seems safe from SQL injection, and you use bcrypt for password hashing, both of which are good.

• `@mail($email, $subject, $message, $headers)`: The variables do not seem to exist in the code you posted. But you should check if they are open to email injection.
• don't echo db error messages to the user. A legitimate user will not know what to do with them, and an attacker may use the information gained in further attacks.
• neither `uniqid` nor `mt_rand` produce cryptographically secure values. I would just use something like `openssl_random_pseudo_bytes` (see the sketch after the comments below).
• Personally, I do not like restrictions on passwords. Attackers gain information through them, with which they can personalize their wordlists for easier bruteforcing, and users will generally choose bad passwords either way.

## Other

• You really don't want to use `@`. If something goes wrong, you generally want to know about it, not suppress it.
• `label` should always have a `for` attribute for usability.
• If you don't need the key of an array, just write `foreach($toCheck as $var)` instead of `foreach($toCheck as $key => $var)`.
• do not be too strict with your validation. For example, there are lots of people who have more than letters in their names (same goes for companies, etc).
• `City name may only [...]` for the state check is a copy-paste error.
• `Something is wrong` is never a good error message. Try to be specific to help your users (e.g. `Days must be between 1 and 99` (although is that really what you want to check?)).
• You have quite a lot of newlines, which stretches your code and makes it a bit harder to read. Try to build logical units which you order into blocks instead of writing each statement on its own line.
• Your spacing is sometimes inconsistent (e.g. `strlen($_POST['title'])>75` vs `strlen($_POST['sort']) >1` vs `strlen($_POST['role']) > 10`).
• Your indentation is also sometimes inconsistent (missing indentation, closing `}` indented, etc).
• function names should start with a lowercase letter (classes start with an uppercase letter).
• `value="<?php echo $_POST['userName']; ?>"` produces a warning when the value does not exist, which screws up the form. You should check if the value is set before echoing it (html encoded).

• OK, just starting to read... but the very first thing I see is this: "These are open to reflected XSS (you can try it by inputting `"> xss<script>alert(1)</script>` as username)." I tried it and it gets kicked back with 2 errors produced in validation: `HTML Tags are not allowed` and `> xss is not alphnumeric`. So before I go too much further you may have to explain how the validation isn't working, because I am now confused. I have been working on this a couple days so I am partially blind to all that is involved at the moment... – user3154948 Nov 10 '15 at 22:30
• @user3154948 You may have to disable the XSS checks of your browser for this to work (but you can't rely on these checks, as users may use old browsers, there may be bugs in the check, etc). In my experience, disabling these checks will be a lot easier in firefox than in chrome. – tim Nov 10 '15 at 22:34
• Yeah, disabling chromes web security didn't do it, BUT that is what IE is for! LOL So if I don't echo back the variable in the error will I be alright? Or no? So what do I do to stop this from happening in the first place? – user3154948 Nov 11 '15 at 1:32
• So what can I do, if not eliminate, to at least try to thwart potential issues like the `" autofocus onfocus="alert(1)`? What other things should I be doing if validation should not be relied upon for primary security? I know csrf is one, but what else? No, the SendActivationEmail() function was not included as I only have the basics in place and I know it needs A LOT of work yet. "`value="<?php echo $_POST['userName']; ?>"` produces a warning when the value does not exist, which screws up the form." I don't see an error, could you give me a little more detail? – user3154948 Nov 11 '15 at 2:15
• @user3154948 like I said, for XSS, always use htmlspecialchars (there are a couple of places where this is not enough, but in general, this is secure). And yes, CSRF is another issue you want to look out for, in addition to XSS and SQL injection. The owasp top ten is always a good starting point to get to know security issues. The warning: did you disable showing warnings? – tim Nov 11 '15 at 9:27
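Following up on the `uniqid`/`mt_rand` point in the answer above, a minimal sketch of a cryptographically secure activation token (illustrative only; `random_bytes` needs PHP ≥ 7.0, `openssl_random_pseudo_bytes` covers PHP 5.x):

```php
<?php
// A 32-character hex token from a CSPRNG, as a drop-in replacement for
// $token = md5(uniqid(mt_rand(), true));  -- also usable as a CSRF token.
if (function_exists('random_bytes')) {          // PHP >= 7.0
    $token = bin2hex(random_bytes(16));
} else {                                        // PHP 5.x fallback
    $token = bin2hex(openssl_random_pseudo_bytes(16));
}
echo $token;
```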
2020-01-22 12:39:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2892361283302307, "perplexity": 4831.504897194855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606975.49/warc/CC-MAIN-20200122101729-20200122130729-00370.warc.gz"}
https://www.effortlessmath.com/math-topics/evaluating-trigonometric-function/
# How to Evaluate Trigonometric Function? (+FREE Worksheet!)

Learn how to evaluate trigonometric functions in a few simple steps with examples and detailed solutions.

## Step by step guide to Evaluating Trigonometric Function

• Find the reference angle. (It is the smallest angle that you can make from the terminal side of an angle with the $$x$$-axis.)
• Find the trigonometric function of the reference angle.

### Evaluating Trigonometric Function – Example 1:

Find the exact value of the trigonometric function $$\tan \frac{4π}{3}$$.

Solution: Rewrite the angle: $$\tan \frac{4π}{3}=\tan \frac{3π+π}{3}=\tan(π+\frac{1}{3}π)$$. Use the periodicity of $$\tan$$: $$\tan(x+πk)=\tan(x)$$, so $$\tan(π+\frac{1}{3}π)=\tan(\frac{1}{3}π)=\sqrt{3}$$.

### Evaluating Trigonometric Function – Example 2:

Find the exact value of the trigonometric function $$\cos 270^\circ$$.

Solution: Write $$\cos(270^\circ)$$ as $$\cos(180^\circ+90^\circ)$$. Recall that $$\cos 180^\circ=-1$$ and $$\cos 90^\circ=0$$. The reference angle of $$270^\circ$$ is $$90^\circ$$. Therefore, $$\cos 270^\circ=0$$.

### Evaluating Trigonometric Function – Example 3:

Find the exact value of the trigonometric function $$\cos 225^\circ$$.

Solution: Write $$\cos(225^\circ)$$ as $$\cos(180^\circ+45^\circ)$$. Recall that $$\cos 180^\circ=-1$$ and $$\cos 45^\circ=\frac{\sqrt{2}}{2}$$. $$225^\circ$$ is in the third quadrant, and cosine is negative in quadrant $$3$$. The reference angle of $$225^\circ$$ is $$45^\circ$$. Therefore, $$\cos 225^\circ=-\frac{\sqrt{2}}{2}$$.

### Evaluating Trigonometric Function – Example 4:

Find the exact value of the trigonometric function $$\tan \frac{7π}{6}$$.

Solution: Rewrite the angle: $$\tan \frac{7π}{6}=\tan(\frac{6π+π}{6})=\tan(π+\frac{1}{6}π)$$. Use the periodicity of $$\tan$$: $$\tan(x+πk)=\tan(x)$$, so $$\tan(π+\frac{1}{6}π)=\tan(\frac{1}{6}π)=\frac{\sqrt{3}}{3}$$.

## Exercises for Evaluating Trigonometric Function

### Find the exact value of each trigonometric function.

• $$\color{blue}{\cot \ -495^\circ=}$$
• $$\color{blue}{\tan \ 405^\circ=}$$
• $$\color{blue}{\cot \ 390^\circ=}$$
• $$\color{blue}{\cos \ -300^\circ=}$$
• $$\color{blue}{\cot \ -210^\circ=}$$
• $$\color{blue}{\tan \ \frac{7π}{6}=}$$

Answers:

• $$\color{blue}{1}$$
• $$\color{blue}{1}$$
• $$\color{blue}{\sqrt{3}}$$
• $$\color{blue}{\frac{1}{2}}$$
• $$\color{blue}{- \sqrt{3} }$$
• $$\color{blue}{\frac{ \sqrt{3} }{3}}$$
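As a worked check on the first exercise (an editorial addition, following the same reference-angle steps as the examples above): add full turns until the angle lies in one revolution, then evaluate in the third quadrant, where both sine and cosine are negative:

$$\cot(-495^\circ)=\cot(-495^\circ+2\cdot 360^\circ)=\cot(225^\circ)=\frac{\cos 225^\circ}{\sin 225^\circ}=\frac{-\sqrt{2}/2}{-\sqrt{2}/2}=1$$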
2022-10-07 11:41:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8787614703178406, "perplexity": 859.7632628723954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00709.warc.gz"}
http://zbmath.org/?q=an:1158.35072
# zbMATH — the first resource for mathematics

On the incompressible limit for the Navier-Stokes-Fourier system in domains with wavy bottoms. (English) Zbl 1158.35072

The motion of a compressible viscous and heat conducting fluid occupying a domain $$\Omega \subset \mathbb{R}^3$$ is described by a triple of functions – the density $$\rho(x,t)$$, the velocity $$u(x,t)$$, the temperature $$\vartheta(x,t)$$. These functions satisfy the Navier-Stokes-Fourier system of equations
$$\frac{\partial \rho}{\partial t} + \operatorname{div}(\rho u) = 0,$$
$$\frac{\partial}{\partial t}(\rho u) + \operatorname{div}(\rho u \otimes u) + \frac{1}{\mathrm{Ma}^2}\nabla p(\rho,\vartheta) = \operatorname{div} S + \frac{1}{\mathrm{Fr}^2}\rho\nabla F,$$
$$\frac{\partial}{\partial t}\bigl(\rho s(\rho,\vartheta)\bigr) + \operatorname{div}\bigl(\rho s(\rho,\vartheta)u\bigr) + \operatorname{div}\left(\frac{q}{\vartheta}\right) = \sigma,$$
$$\frac{d}{dt}\int_{\Omega}\left(\frac{\mathrm{Ma}^2}{2}\rho|u|^2 + \rho e(\rho,\vartheta) - \frac{\mathrm{Ma}^2}{\mathrm{Fr}^2}\rho F\right)dx = 0, \tag{1}$$
where $$S$$ is a viscous stress tensor, the heat flux $$q$$ obeys Fourier's law $$q = -k(\vartheta)\nabla\vartheta$$, $$\sigma$$ is the entropy production, $$p$$ is the pressure, $$s$$ is the specific entropy, $$e$$ is the specific internal energy, and Ma and Fr denote the Mach and Froude numbers.
Let $$\overline{\vartheta}$$ and $$\overline{\rho}$$ be the average quantities
$$\overline{\vartheta} = \frac{1}{|\Omega|}\int_{\Omega}\vartheta\,dx, \qquad \overline{\rho} = \frac{1}{|\Omega|}\int_{\Omega}\rho\,dx.$$
Setting $$\mathrm{Ma} = \epsilon$$, $$\mathrm{Fr} = \sqrt{\epsilon}$$, where $$\epsilon$$ is a small parameter, the triple of unknown functions is represented by
$$\rho = \rho_\epsilon = \overline{\rho} + \epsilon r_\epsilon, \qquad u = u_\epsilon, \qquad \vartheta = \vartheta_\epsilon = \overline{\vartheta} + \epsilon\theta_\epsilon.$$
It is proved that the limits
$$r_\epsilon \to r, \qquad u_\epsilon \to u, \qquad \theta_\epsilon \to \theta \qquad \text{as } \epsilon \to 0$$
satisfy the Oberbeck-Boussinesq system
$$\operatorname{div} u = 0,$$
$$\overline{\rho}\left(\frac{\partial u}{\partial t} + \operatorname{div}(u \otimes u)\right) + \nabla P = \overline{\mu}\Delta u - r\nabla F,$$
$$\overline{\rho}\,\overline{c}_p\left(\frac{\partial\theta}{\partial t} + \operatorname{div}(\theta u)\right) - \operatorname{div}(\overline{k}\nabla\theta) = 0,$$
$$r + \overline{\rho}\,\overline{\alpha}\,(\theta - \overline{\theta}) = 0,$$
where the viscosity coefficient $$\overline{\mu}$$, the specific heat at constant pressure $$\overline{c}_p$$, the heat conductivity coefficient $$\overline{k}$$ and the coefficient of thermal expansion $$\overline{\alpha}$$ are evaluated at $$\overline{\rho}, \overline{\vartheta}$$. It is shown that the oscillations of the sound waves are effectively damped by the presence of a "wavy bottom" of the physical domain.

##### MSC:
35Q30 Stokes and Navier-Stokes equations
35B35 Stability of solutions of PDE
76N15 Gas dynamics, general
2014-04-24 11:29:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 26, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8535923957824707, "perplexity": 4013.5524403350228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.springerprofessional.de/transactions-on-computational-collective-intelligence-xxx/16146412
## About this Book

These transactions publish research in computer-based methods of computational collective intelligence (CCI) and their applications in a wide range of fields such as the semantic web, social networks, and multi-agent systems. TCCI strives to cover new methodological, theoretical and practical aspects of CCI understood as the form of intelligence that emerges from the collaboration and competition of many individuals (artificial and/or natural). The application of multiple computational intelligence technologies, such as fuzzy systems, evolutionary computation, neural systems, consensus theory, etc., aims to support human and other collective intelligence and to create new forms of CCI in natural and/or artificial systems. This thirtieth issue is a regular issue with 12 selected papers.

## Table of Contents

### Analyzing the Effects of Alternative Decisions in a Multiagent System with Stigmergy-Based Interactions

Abstract
The goal of this paper is to describe a simple protocol based on passive stigmergy for agent interaction in a multiagent system, which can exhibit complex behavior, and to study the effects of alternative decisions, which can be seen as perturbations that can change the final state of the system. Several ways of visualizing the influence relations that the agents have on one another and the effects of alternative decisions are presented.
Florin Leon

### Modelling of Emotional Contagion in Soccer Fans

Abstract
This research introduces a cognitive computational model of emotional contagion in a crowd of soccer supporters. It is useful for: (1) better understanding of the emotional contagion processes and (2) further development into a predictive and advising application for soccer stadium managers to enhance and improve the ambiance during the soccer game for safety or economic reasons. The model is neurologically grounded and focuses on the emotions "pleasure" and "sadness". Structured simulations with different crowd compositions and types of matches showed the following four emergent patterns of emotional contagion: (1) hooligans are very impulsive and are not fully open to other emotions, (2) fanatic supporters are very impulsive and open to other emotions, (3) family members are very easily influenced and are not very extravert, (4) the media is less sensitive to the ambiance in the stadium. For model validation, the model outcomes were compared to the heart rates of 100 supporters and their reported emotions. The model produced similar heart rate and emotional patterns, thereby establishing its validity. Further implications of the model are discussed.
Berend Jutte, C. Natalie van der Wal

### An Approach for Web Service Selection and Dynamic Composition Based on Linked Open Data

Abstract
The wide adoption of the Service Oriented Architecture (SOA) paradigm has provided a means for heterogeneous systems to seamlessly interact and exchange data. Thus, enterprises and end-users have widely utilized Web Services (WS), either as stand-alone applications or as part of more complex service compositions, in order to fulfill their business needs. But, while WS offer a plethora of benefits, a significant challenge arises due to the abundance of available services that can be retrieved online. In this work, we propose a framework for the selection and dynamic composition of WS, by utilizing Linked Open Data (LoD).
In addition, we propose a hybrid algorithm that takes as input the user's personalized weights for non-functional characteristics and the results of appropriate SPARQL queries, filtered using a top-k approach. It then handles the ranking of alternatives based on their population. Finally, using two case studies and a dataset that describes real-world WS, we argue for the feasibility and performance of the proposed method.
Nikolaos Vesyropoulos, Christos K. Georgiadis, Elias Pimenidis

### Linguistic Rules for Ontology Population from Customer Request

Abstract
In recent years, IT (Information Technology) offerings may have represented a barrier for a customer who does not share the same technical knowledge as providers. Therefore, it would be useful to let a customer express his thoughts and intentions. For this reason, the analysis of customers' intentions has become a major contemporary challenge with the relentless growth of the IT market. With an approach for automatically detecting a customer's intention from free text, it would be possible for a provider to understand the client's needs; consequently, the detected intention may serve as useful input for recommendation engines. This paper describes an automatic approach that populates an ontology of intentions from a client's textual request in the IT market space. This approach is based on an ontology structure that models the clients' intentions. It takes an English written request as input and produces an intention ontology instance as output by means of many combined NLP (Natural Language Processing) techniques. The population process is mainly based on a set of linguistic rules. Moreover, a certainty factor is assigned to each rule, serving later as a degree of membership to the concept instantiated with the rule. The empirical evaluation confirms the interesting performance when evaluated on customers' requests from a database (available at this link: https://sites.google.com/view/customer-request-dataset) specialized in an IT domain.
Noura Labidi, Tarak Chaari, Rafik Bouaziz

### Evolutionary Harmony Search Algorithm for Sport Scheduling Problem

Abstract
In this paper, an original enhanced harmony search algorithm (HS-TTP) is proposed for the well-known NP-hard traveling tournament problem (TTP) in sports scheduling. TTP is concerned with finding a tournament schedule that minimizes the total distance traveled by the teams. TTP is a well-known and important problem within the collective sports communities, since poor optimization in TTP can cause heavy losses in the budget of managing the league's competition. In order to apply HS to TTP, we use a largest-order-value rule to transform harmonies from real vectors to abstract schedules. We introduce a new heuristic for rearranging the scheduled rounds, which gives us a significant enhancement in the quality of the solutions. Further, we use a local search as a subroutine in HS to improve its intensification mechanism. The overall method (HS-TTP) is evaluated on publicly available standard benchmarks and compared with the state-of-the-art techniques. Our approach succeeds in finding optimal solutions for several instances. For the other instances, the general deviation from optimality is equal to 4.45%. HS-TTP is able to produce high-quality results compared to existing methods.
Meriem Khelifa, Dalila Boughaci, Esma Aïmeur
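(The largest-order-value rule mentioned in this abstract is a standard trick for mapping a real-valued vector to a permutation: rank the components from largest to smallest and read off their indices. A minimal illustrative sketch — my own, not the authors' code — in PHP:)

```php
<?php
// Largest-order-value (LOV) rule: turn a real-valued "harmony" vector into
// a permutation by ranking its components from largest to smallest value.
function lovRule(array $harmony) {
    arsort($harmony);               // sort descending, preserving original keys
    return array_keys($harmony);    // the keys in rank order form the permutation
}

print_r(lovRule(array(0.3, 1.7, 0.9, 0.2)));  // yields the permutation [1, 2, 0, 3]
```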
### Performance Analysis of Different Learning Algorithms of Feed Forward Neural Network Regarding Fetal Abnormality Detection

Abstract
Ultrasound imaging is one of the safest and most effective methods generally used for the diagnosis of fetal growth. The precise assessment of fetal growth during pregnancy is a tough task, but ultrasound imaging has improved this vital aspect of Obstetrics and Gynecology. In this paper, the performance of different learning algorithms of the feed-forward neural network based on the back-propagation algorithm is analyzed and compared. Detection of fetal abnormality using a neural network is basically a hybrid method, in which biometric parameters are extracted and measured using segmentation techniques. The extracted values of the biometric parameters are then fed to a neural network to detect the status of the fetus. The artificial neural network (ANN) model is applied for better diagnosis and effective classification. The ANN models are designed to discriminate normal and abnormal fetuses based on 2-D US images. In this paper, feed-forward back-propagation neural networks using the Levenberg-Marquardt, Bayesian Regularization and Scaled Conjugate Gradient algorithms are analyzed and used for the diagnosis and classification of fetal growth. The performance of these methods is compared and evaluated based on the desired output and mean square error. Results obtained from the Bayesian-based neural networks are in close agreement with the real-time results. This modeling will help radiologists to take appropriate decisions in borderline cases.
Vidhi Rawat, Alok Jain, Vibhakar Shrimali, Sammer Raghuvanshi

### DWIaaS: Data Warehouse Infrastructure as a Service for Big Data Analytics

Abstract
Many novel challenges and opportunities are associated with Big Data, which require rethinking many aspects of the traditional data warehouse architecture. Indeed, big data are collections of data sets too large and complex to process using classical data warehousing. These data are sourced from many different places, such as social media, and stored in different formats. Being primarily unstructured, they need a high-performance information technology infrastructure that provides superior computational efficiency and storage capacity. This infrastructure should be flexible and scalable to ensure its management at large scale. In recent years, cloud computing has been gaining momentum with more and more successful adoptions. This paper proposes a new data warehouse infrastructure as a service to effectively support the distribution of big data storage, computing and parallelized programming.
Hichem Dabbèchi, Ahlem Nabli, Lotfi Bouzguenda, Kais Haddar

### Hybrid Soft Computing for Atmospheric Pollution-Climate Change Data Mining

Abstract
Prolonged climate change contributes to an increase in the local concentrations of O3 and PMx in the atmosphere, influencing the seasonality and duration of air pollution incidents. Air pollution in modern urban centers such as Athens has a significant impact on human activities such as industry and transport. During recent years, the economic crisis has led to the burning of timber products for domestic heating, which adds to the burden of the atmosphere with dangerous pollutants. In addition, the topography of an area, in conjunction with meteorological conditions conducive to atmospheric pollution, acts as a catalytic factor in increasing the concentrations of primary or secondary pollutants.
This paper introduces an innovative hybrid system for predicting air pollutant values (IHAP) using soft computing techniques. Specifically, Self-Organizing Maps are used to extract hidden knowledge from the raw data of atmospheric recordings, and Fuzzy Cognitive Maps are employed to study the conditions and to analyze the factors associated with the problem. The system also forecasts future air pollutant values and their risk level for the urban environment, based on the temperature and rainfall variation derived from sixteen CMIP5 climate models for the period 2020–2099.
Lazaros Iliadis, Vardis-Dimitris Anezakis, Konstantinos Demertzis, Stefanos Spartalis

### Hiring Expert Consultants in E-Healthcare: An Analytics-Based Two Sided Matching Approach

Abstract
Very often, in critical healthcare scenarios, there may be a need for expert consultancies (especially by doctors) that are not available in-house to the hospitals. Earlier, this interesting healthcare scenario of hiring expert consultants (mainly doctors) from outside the hospitals had been studied with the robust concepts of mechanism design with money and mechanism design without money. In this paper, we explore the more realistic two sided matching market in our healthcare set-up. In this market, the members of the two participating communities, namely the patients and the doctors, reveal a strict preference ordering over the members of the opposite community for a stipulated amount of time. We assume that the patients and doctors are strategic in nature. With a theoretical analysis, we demonstrate that TOMHECs, which results in a stable allocation of doctors to patients, satisfies several economic properties such as strategy-proofness (or truthfulness) and optimality. Further, an analytics-based analysis of our proposed mechanisms, i.e. RAMHECs and TOMHECs, is carried out on the basis of the expected distance of the allocation produced by the mechanisms from the topmost preference. The proposed mechanisms are also validated with the help of exhaustive experiments.
Vikash Kumar Singh, Sajal Mukhopadhyay, Fatos Xhafa, Aniruddh Sharma, Arpan Roy

### A Fuzzy Logic-Based Anticipation Car-Following Model

Abstract
Human drivers in the real world decide and act according to their experience, logic, and judgment. In contrast, mathematical models act according to mathematical equations that ensure the precision of the decision to take. However, these models do not provide a convincing simulation and do not reflect human behaviors. In this context, we present in this paper a fully artificial-intelligence-based anticipation model for the car-following problem, built on fuzzy logic theory, in order to estimate the velocity of the leader vehicle in the near future. The results of experiments, which were conducted using the Next Generation Simulation (NGSIM) dataset to validate the proposed model, indicate that the vehicle trajectories simulated with the new model are in compliance with the actual vehicle trajectories in terms of deviation and gap distance. In addition, road safety is assured in terms of harmonization between gap distance and security distance.
Anouer Bennajeh, Slim Bechikh, Lamjed Ben Said, Samir Aknine

### Fault-Tolerance in XJAF Agent Middleware

Abstract
In this paper we present an approach and solution for the implementation of load-balancing and fault-tolerance in the XJAF agent middleware.
One of the most significant features of this middleware is its use of modern enterprise technologies, and our solution relies on those technologies. First we briefly present the XJAF architecture and its essential features and functionalities. Then we compare the results of executing the same example in two multi-agent frameworks that support clustering: our in-house developed system (XJAF) and the widely known and used JADE. We demonstrate that a distributed agent application deployed on the XJAF middleware cluster can survive failure of its nodes, while the JADE-based deployment cannot.
Mirjana Ivanović, Jovana Ivković, Milan Vidaković, Costin Bădică

### Model Checking of Time Petri Nets

Abstract
Concurrent systems are becoming pervasive in different fields such as network applications, communication protocols and client-server applications. However, they are rather difficult to develop and, due to concurrency in particular, such systems are prone to specific errors like deadlocks and livelocks. In this context, model checking is a promising formal method which permits system analysis at an early stage, thus helping to prevent errors from occurring. In previous work [16], we proposed an extension of the timed temporal logic TCTL with more powerful modalities, aiming to specify properties with clock quantifiers as well as features for transient states. We formally defined the syntax and the semantics of the proposed quantitative logic called $$TCTL^{\varDelta }_{h}$$. As in [15], we used timed automata and the region graph to discuss the applicability of the proposal to model checking by studying its decidability and complexity. In this paper, we define a timed temporal logic $$TPN-TCTL^{\varDelta }_{h}$$ for time Petri nets, for which model checking is PSPACE-complete. Petri nets in general have gained special interest due to their expressive power, especially in dealing with concurrency. After detailing the proposed model-checking method, we show its development and integration into the tool Romeo. Finally, we demonstrate the efficiency of the method via case studies and simulation results.
Naima Jbeli, Zohra Sbaï, Rahma Ben Ayed

### Backmatter
2021-05-12 09:24:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34418290853500366, "perplexity": 1984.5397312106074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00518.warc.gz"}
https://newbedev.com/kinetic-energy-evaluation-integral-evaluation-program-ostlund
# Chemistry - Kinetic Energy Evaluation Integral Evaluation Program Ostlund

## Solution 1:

The off-diagonal elements $$T_{\mu\nu}, (\mu \ne \nu)$$ are just as necessary as the diagonal elements $$T_{\mu\mu}$$ to represent the kinetic energy operator in the chosen AO basis set. Substituting a few of Szabo/Ostlund's equations into each other (in my edition, 3.147-3.154), one finds

$$F_{\mu\nu} = T_{\mu\nu} + V_{\mu\nu}^{\text{nucl}} + \sum_{\lambda\kappa} P_{\lambda\kappa}\left[ \left( \mu\nu \vert \kappa\lambda \right) - \frac{1}{2} \left( \mu\lambda \vert \kappa \nu\right) \right]$$

where $$F$$ is the Fock matrix, $$V^{\text{nucl}}$$ represents the nuclear attraction term, $$P$$ is a density matrix, and $$\left( \mu\nu \vert \kappa\lambda \right)$$ are two-electron repulsion integrals over the AO basis functions (denoted by Greek indices). Of course, your Fock matrix may already be diagonal if you have a very special case and basis set, but in general, and to perform the SCF calculation, the off-diagonal parts cannot be ignored.

## Solution 2:

TAR86 gives the correct mathematical answer, but given how you express yourself, I thought I would try to explain it at a slightly lower level. Your problem is in the sentence "I understand T11 to be the kinetic energy integral using the first atom orbitals so it will give the kinetic energy of the electron of the first atom."

The problem is that in a molecule the electron will be partially associated with the first atom and partially with the second; this is almost the definition of a covalent bond! Thus, yes, there will be a contribution to the kinetic energy due to the first electron being on the first atom, but there will also be a contribution due to the first electron being on the second atom, and it seems to me that this is what you are missing.

Let's try to make that a bit more concrete with a little maths. As we are using an orbital approximation, the first electron's behaviour is described by its wavefunction. We don't know exactly what it looks like, but we hope it will be similar to the atomic wavefunctions on the two atoms. So let's write the wavefunction for the first electron as

$$\psi (1)= c_1 \phi_1 + c_2 \phi_2$$

where $$\phi_1$$ is the first atomic orbital (actually a contracted Gaussian basis function) and $$c_1$$ is the weight of this function as determined by solving the approximation to the Schrödinger equation that we are interested in. Then, if $$\hat T$$ is the kinetic energy operator, we can obtain the kinetic energy of the first electron from

$$T=\langle\psi(1)|\hat T|\psi(1)\rangle=c_1^2\langle\phi_1|\hat T|\phi_1\rangle+2c_1c_2\langle\phi_1|\hat T|\phi_2\rangle+c_2^2\langle\phi_2|\hat T|\phi_2\rangle$$

(using the Hermiticity of $$\hat T$$ and the fact that here the wavefunction is real). In this we can see a cross term that involves electron 1 on both atoms 1 and 2, and in fact even a term due to electron 1 being on atom 2. So, using your notation, we can identify

$$T11=\langle\phi_1|\hat T|\phi_1\rangle \\ T22=\langle\phi_2|\hat T|\phi_2\rangle \\ T12=\langle\phi_1|\hat T|\phi_2\rangle$$

Note that, more generally, we should talk about basis functions rather than atoms; but here, as the basis functions are associated with the atomic sites (this need not be the case) and there is only one basis function per site (in real calculations there will be many), we can be slightly sloppy. But also understand that the general case is not really much more complicated than the above: we simply expand the orbital for the first electron using all the basis functions and carry on similarly.
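As a concrete, hedged illustration of these matrix elements: for unnormalized s-type Gaussian basis functions $$\exp(-\alpha|\mathbf r-\mathbf R_A|^2)$$ there is a well-known closed form for $$\langle\phi_A|-\tfrac{1}{2}\nabla^2|\phi_B\rangle$$ (tabulated in the appendix of Szabo & Ostlund). The sketch below evaluates T11, T22, and T12 and assembles $$\langle\psi(1)|\hat T|\psi(1)\rangle$$; the exponents, separation, and MO coefficients are made-up illustration values, not a real basis set.

```python
import math

def kinetic(a, b, R2):
    """<g_A| -1/2 nabla^2 |g_B> for unnormalized s Gaussians with
    exponents a, b and squared center separation R2 (atomic units)."""
    p = a + b
    mu = a * b / p                      # reduced exponent
    return mu * (3.0 - 2.0 * mu * R2) * (math.pi / p) ** 1.5 * math.exp(-mu * R2)

a, b, R2 = 0.5, 0.8, 1.4 ** 2           # illustrative exponents and |R_A - R_B|^2
T11 = kinetic(a, a, 0.0)                # electron 1 "on" atom 1
T22 = kinetic(b, b, 0.0)                # electron 1 "on" atom 2
T12 = kinetic(a, b, R2)                 # the cross (off-diagonal) term

c1, c2 = 0.6, 0.5                       # illustrative MO coefficients
T = c1**2 * T11 + 2 * c1 * c2 * T12 + c2**2 * T22   # <psi(1)|T|psi(1)> as above
print(T11, T22, T12, T)
```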
2023-02-08 00:12:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581356406211853, "perplexity": 251.0024642643507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500664.85/warc/CC-MAIN-20230207233330-20230208023330-00876.warc.gz"}
https://crypto.stackexchange.com/questions/64243/please-explain-parameters-of-rfc5639-elliptic-curves-including-brainpoolp160r1/64278
# Please explain parameters of RFC5639 Elliptic Curves including brainpoolP160r1

RFC 5639 brainpoolP160r1 has:

p = E95E4A5F737059DC60DFC7AD95B3D8139515620F (Wolfram Alpha says prime)
A = 340E7BE2A280EB74E2BE61BADA745D97E8F7C300
B = 1E589A8595423412134FAA2DBDEC95C8D8675E58
x = BED5AF16EA3F6A4F62938C4631EB5AF7BDBCDBC3
y = 1667CB477A1A8EC338F94741669C976316DA6321
q = E95E4A5F737059DC60DF5991D45029409E60FC09 (Wolfram Alpha says prime)
h = 1

I do not understand why h = 1 yet q < p. I thought that if you had a prime field size then there is only one cyclic subgroup, equal in size to the field size (cf. Lagrange). This doesn't seem to be the case for brainpoolP160r1.

The order of the curve $$\#E(\mathbb{F}_p)$$ is different from $$p$$. In fact, according to Hasse's theorem, $$\#E(\mathbb{F}_p)=p+1-t$$, where the Frobenius trace $$t$$ satisfies $$|t|\leq 2\sqrt{p}$$. So the gap between $$\#E(\mathbb{F}_p)$$ and $$p$$ is at most $$2\sqrt{p}$$. Note that if $$\#E(\mathbb{F}_p)=p+1$$, i.e., $$t=0$$, the curve is supersingular. For brainpoolP160r1, the order is 1332297598440044874827085038830181364212942568457 (160-bit). You can play with this Sage code:

p = 0xE95E4A5F737059DC60DFC7AD95B3D8139515620F
A = 0x340E7BE2A280EB74E2BE61BADA745D97E8F7C300
B = 0x1E589A8595423412134FAA2DBDEC95C8D8675E58
x = 0xBED5AF16EA3F6A4F62938C4631EB5AF7BDBCDBC3
y = 0x1667CB477A1A8EC338F94741669C976316DA6321
E = EllipticCurve(GF(p),[A,B])
G = E(x,y)
G.order()
E.order()

This would be more appropriate on crypto.SX, where it has been addressed several times:

https://crypto.stackexchange.com/questions/27904/how-to-determine-the-order-of-an-elliptic-curve-group-from-its-parameters
https://crypto.stackexchange.com/questions/8888/elliptic-curve-parameter-generation
https://crypto.stackexchange.com/questions/28947/how-to-calculate-elliptic-curve-parameters

Briefly, the order of the curve #E(GF(p)) (sometimes abbreviated n) is NOT p, although it is fairly close in magnitude. The curve group is the relevant finite group and is subject to Lagrange; any point on the curve has order (of the subgroup it generates) dividing n, and since n is prime, as is chosen to be the case for the Brainpool prime curves and also the X9/SECG prime curves, every point other than the identity has order q = n, and the cofactor is h = 1.
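The arithmetic in the answer above can be double-checked without Sage. The sketch below (plain Python, values copied from the question) verifies that the trace $$t=p+1-n$$ of brainpoolP160r1 lies inside the Hasse interval, using the exact integer test $$t^2\leq 4p$$ to avoid floating-point square roots; it assumes $$n=hq=q$$ since $$h=1$$.

```python
p = 0xE95E4A5F737059DC60DFC7AD95B3D8139515620F  # field prime
q = 0xE95E4A5F737059DC60DF5991D45029409E60FC09  # order of G; n = h*q with h = 1

t = p + 1 - q                    # Frobenius trace, from #E(GF(p)) = p + 1 - t
print("trace t =", t)
print("Hasse bound t^2 <= 4p holds:", t * t <= 4 * p)
```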
2021-03-03 21:24:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385051488876343, "perplexity": 1039.7279246344463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367790.67/warc/CC-MAIN-20210303200206-20210303230206-00181.warc.gz"}
https://forum.polymake.org/viewtopic.php?f=10&t=534
Big endian problem

Discussions on installation issues go here.

jamesjer: The build of polymake 3.0r2 is failing on all big endian architectures supported by Fedora. I just tried with 3.1beta2, with no change. Here is a typical failure, from a ppc64 build:

/usr/bin/perl perl/polymake --ignore-config --script generate_docs /home/fedora/rpmbuild/BUILDROOT/polymake-3.1-0.beta2.fc25.ppc64/usr/share/polymake/doc fulton polytope group graph ideal fan tropical matroid common topaz
Attempt to free unreferenced scalar: SV 0x10031bdeda0, Perl interpreter: 0x100319d0010.
Makefile:186: recipe for target 'release-docs' failed
make: *** [release-docs] Segmentation fault (core dumped)

Some work with gdb and valgrind shows that a bad pointer is somehow getting onto the savestack. When the crash occurs, a pointer to a yy_parser has just been removed from the savestack, leaving the savestack empty. But the yy_parser is full of bogus values, suggesting that it was either freed already, or wasn't really pointing to a yy_parser at all. Here is a typical valgrind message printed when the pointer is first dereferenced:

Attempt to free unreferenced scalar: SV 0x519f720, Perl interpreter: 0x4830040.
==15318== Invalid read of size 8
==15318==    at 0x4294E54: PerlIO__close (perlio.c:1356)
==15318==    by 0x4294F33: Perl_PerlIO_close (perlio.c:1372)
==15318==    by 0x416FC1F: Perl_parser_free (toke.c:763)
==15318==    by 0x423963B: Perl_leave_scope (scope.c:1303)
==15318==    by 0x423A007: Perl_pop_scope (scope.c:122)
==15318==    by 0x4162523: S_parse_body (perl.c:2355)
==15318==    by 0x4162523: perl_parse (perl.c:1650)
==15318==    by 0x10000CCF: main (perlmain.c:114)
==15318==  Address 0x108011101 is not stack'd, malloc'd or (recently) free'd

Address 0x108011101 is the value of parser->rsfp. Since valgrind didn't complain about dereferencing the pointer to the parser (0x519f720), presumably that is valid memory; i.e., it has not been freed. Probably it is not a parser, but some other kind of object, and somehow it got into savestack[0] in such a way that perl misinterprets its type when it is popped. Since the savestack is touched by namespaces.xs, CPlusPlus.xxs, and Scope.xs, I have tried to find a big-endian-specific problem in those files, but so far have failed. If you have any ideas how I could debug this issue, I would be most grateful.

A most likely unrelated issue: while looking for the cause of this bug, I noticed that CPlusPlus.xxs contains a copy of struct magic_state from perl's mg.c. However, the definition in CPlusPlus.xxs does not match the definition in perl 5.24.0's mg.c. In particular, the order of the fields mgs_flags and mgs_ss_ix is reversed.

jamesjer: Re: Big endian problem

I managed to identify the exact 3 files with the problem! The issue is that, starting with perl 5.14, Perl_leave_scope() started reading the type via the any_uv field, instead of any_i32. On 64-bit big endian machines, this means that the polymake code that writes the type via any_i32 is writing to the higher order 32 bits of the field, leading to the problems I described. I will attach a patch against 3.1beta2 to fix the problem. Since perl 5.16 seems to be the minimum requirement for polymake now, I did not conditionalize on perl version. A Fedora build with a version of this patch for polymake 3.0r2 succeeded on all architectures.
Attachment: polymake-endian.patch (patch to fix the endianness issue)

jamesjer: Re: Big endian problem

Also, I remarked on a change to struct magic_state. That structure changed in perl 5.12, 5.14, and 5.22. I will attach a patch to fix it up as well, once again ignoring perl versions less than 5.16.

Attachment: polymake-magic.patch (patch to update struct magic_state)
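The any_i32/any_uv mismatch is easy to demonstrate without big-endian hardware. The Python sketch below simulates the C union aliasing described above: a value stored through a 32-bit signed view at the start of an 8-byte slot is read back through a 64-bit unsigned view. This is only an illustration of the hazard, not perl's actual data structures.

```python
import struct

def write_i32_read_u64(value, endian):
    buf = bytearray(8)                                  # the 8-byte union slot, zeroed
    struct.pack_into(endian + "i", buf, 0, value)       # store via the any_i32-style view
    return struct.unpack(endian + "Q", bytes(buf))[0]   # load via the any_uv-style view

print(write_i32_read_u64(7, "<"))   # little endian: 7
print(write_i32_read_u64(7, ">"))   # big endian: 7 << 32 = 30064771072
```

On little-endian machines the two views happen to agree for small non-negative values, which is why the bug stayed hidden everywhere except the big-endian builders.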
2018-07-19 04:05:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30167797207832336, "perplexity": 11338.623000897962}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590493.28/warc/CC-MAIN-20180719031742-20180719051742-00234.warc.gz"}
https://verification.asmedigitalcollection.asme.org/appliedmechanics/article/89/6/061004/1139382/Transient-Continuum-Mechanics-and-Chemomechanics?searchresult=1
## Abstract

A transient chemomechanical coupling formulation for solid continua is presented. The second-order rate and the characterized time are introduced to include the transient effect through a Taylor expansion. The transient Reynold's transport theorem is derived with new products or material elimination taken into account. Based on the conservation laws and the second law of thermodynamics, we state a consistent Helmholtz-energy-based framework. The transient field equations take mechanical and chemical contributions and microscopic time into account. Either microscopic time or chemical reactions leads to the unsymmetry of the stress tensor. The relationship between the Helmholtz energy and the constitutive properties, the evolution equations, and the entropy are consistent with classical continuum thermodynamics and with the constitutive theory of continuum mechanics. Further, the transient equations of thermal conduction and diffusion with finite velocity are naturally derived rather than postulated, and a comparison with existing theories is discussed.

## 1 Introduction

There is now a great deal of interest in the study of transient phenomena, due to their many applications in modern science and technology, including integrated circuits, thermal protective systems, and lithium ion batteries. The transient behavior of complex materials systems often results from simultaneously occurring coupled chemomechanical processes, such as heat transfer, mass diffusion, chemical reactions, and mechanical deformation. The applications of short-pulse lasers in the fabrication of sophisticated microstructures, the synthesis of advanced materials, and the measurement of thin film properties are typical transient chemomechanical processes [1]. In problems with microscopic time and size, mass diffusion [2,3] and heat conduction [4,5] propagate with finite velocity, and the transient effect should be considered. Research into this kind of phenomenon is traditionally based on the Fourier law of heat conduction and the Fick law of diffusion. The two laws are based on statistical principles, and propagation with infinite velocity is assumed. Such traditional theory is precise enough for conventional steady processes. However, for some transient phenomena, such as mass and heat transfer accompanied by chemical reactions under high temperature or extremely large diffusion flux, it leads to errors to some extent and can even contradict physical observations. Therefore, it is necessary to discuss transient effects, diffusion [6,7], and heat conduction [8–12] with finite velocity. These phenomena may induce irreversible deformations, mass and density changes (new products or material elimination due to chemical reactions), and consistent variations of the local mechanical and chemical properties. The coupling of thermal-mechanical-chemical processes is therefore vital for the materials system. Thus, an advanced understanding of coupled chemomechanics is becoming crucial and is gaining momentum and attracting considerable research interest [13]. However, most of the existing coupling formulations are limited to steady processes [14–17]. In this paper, we develop a transient continuum mechanics that considers the microscopic time and the coupling between chemistry and mechanics.

The structure of the paper is as follows. In Sec. 2, we introduce the modified axioms of biomechanics and extend them to chemomechanics.
Based on the Taylor expansion, we define a rate that depends on the characterized time $t_c$ to consider the effect of microscopic time. Section 3 deals with the transient Reynold's transport theorem, which takes the effects of microscopic time and chemical reactions into account. Section 4 derives the transient field equations from the fundamental conservation laws, considering mechanical and chemical contributions and microscopic time. In Sec. 5, we consider transient thermal conduction, and a comparison with the existing thermal conduction theory is given. In Sec. 6, transient diffusion is described. Finally, we present the conclusions in Sec. 7.

## 2 Axioms of Continuum Mechanics

Biological phenomena are inhomogeneous and multiscale (a biological process actually is a chemical process), so some of the classical conservation laws do not apply. Fung [18,19] thus modified the axioms of continuum mechanics for biomechanics as follows:

1. The system is inhomogeneous and multiscale in space, so the definitions of stress and strain should depend on the length scale. Generally, higher-order stresses and higher-order strains are introduced, such as strain gradients.
2. Material particles are not fixed: they can be created as new products or disappear due to growth or chemical reactions, and they can move and exchange neighbors.
3. The zero-stress state is often changed by biological (chemical) processes.

For transient processes and chemomechanics, these modifications still apply. Moreover, the process is inhomogeneous and multiscale in the temporal dimension as well. Hence, similar to axiom (i), the definition of the rate should depend on the characterized time $t_c$ to account for the timescale; i.e., the second-order rate is introduced. Using the Taylor expansion, we define the rate as

$$\frac{Du}{Dt}=\lim_{\Delta t\to t_c}\frac{u(t+\Delta t)-u(t)}{\Delta t}=\frac{du}{dt}+\frac{1}{2}\frac{d^{2}u}{dt^{2}}t_c$$ (1)

When $t_c=0$, the term containing the second-order rate $d^{2}u/dt^{2}$ disappears, and Eq. (1) reduces to the classical definition of the rate, $Du/Dt=\lim_{\Delta t\to 0}\Delta u/\Delta t=du/dt$. Here, $u$ is a scalar or a vector.

In this paper, we apply the above axioms to transient processes and chemomechanics. In particular, we will derive Reynold's transport theorem based on Eq. (1) to account for the transient effect and chemical reactions. For simplicity and clarity of concepts, we limit this work to simple materials (i.e., excluding higher-order strains) and infinitesimal deformation.

For the description of the fully coupled chemomechanical system, we assume that $N_t$ species constitute a solid continuum. We denote the chemical species by the index $N$ ($N=1,\ldots,N_t$). The displacement of a particle in the system is denoted by $\mathbf{u}(\mathbf{x},t)$, and the particle velocity is $\mathbf{v}(\mathbf{x},t)=\partial\mathbf{u}(\mathbf{x},t)/\partial t$. The molar concentration of species $N$ is denoted by $c_N$, and its molar mass is $M_N$; then, we can define the partial density as

$$\rho_\alpha=c_\alpha M_\alpha$$ (2)

In this paper, repeated Latin indices obey the Einstein summation convention, while repeated Greek indices do not imply summation. The density of the system is given by

$$\rho=\sum_{N=1}^{N_t}\rho_N=c_N M_N$$ (3)

The mole fractions should satisfy

$$\sum_{N=1}^{N_t}c_N=1$$ (4)

Due to this relation, only $(N_t-1)$ concentrations are independent.
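As a quick numerical check of Eq. (1) (not part of the original text; the test function and step size are arbitrary), the finite difference at step $t_c$ can be compared with the Taylor-expanded right-hand side:

```python
import math

t, tc = 1.3, 1e-3
# Left-hand side of Eq. (1) with u(t) = sin t: difference quotient at step t_c.
finite = (math.sin(t + tc) - math.sin(t)) / tc
# Right-hand side of Eq. (1): du/dt + (t_c/2) d^2u/dt^2.
taylor = math.cos(t) + 0.5 * tc * (-math.sin(t))
print(abs(finite - taylor))   # O(t_c^2) residual, here ~1e-8
```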
## 3 Transient Reynold's Transport Theorem

Considering a chemical or evolving process with the characterized time $t_c$, Reynold's transport theorem can be derived as

$$\begin{aligned}\frac{D}{Dt}\left(\int_\Omega y\,d\Omega\right)&=\lim_{\Delta t\to t_c}\frac{1}{\Delta t}\left[\int_{\Omega(\mathbf{x},t+\Delta t)}y(\mathbf{x},t+\Delta t)\,d\Omega-\int_{\Omega(\mathbf{x},t)}y(\mathbf{x},t)\,d\Omega\right]\\&=\lim_{\Delta t\to t_c}\frac{1}{\Delta t}\left[\int_{\Omega(\mathbf{x},t)+\Delta\Omega(\mathbf{x},t)}y(\mathbf{x},t+\Delta t)\,d\Omega-\int_{\Omega(\mathbf{x},t)}y(\mathbf{x},t)\,d\Omega\right]\\&=\lim_{\Delta t\to t_c}\frac{1}{\Delta t}\left[\int_{\Omega(\mathbf{x},t)}y(\mathbf{x},t+\Delta t)\,d\Omega-\int_{\Omega(\mathbf{x},t)}y(\mathbf{x},t)\,d\Omega\right]+\lim_{\Delta t\to t_c}\frac{1}{\Delta t}\int_{\Delta\Omega(\mathbf{x},t)}y(\mathbf{x},t+\Delta t)\,d\Omega\\&=\int_{\Omega(\mathbf{x},t)}\left[\frac{\partial y(\mathbf{x},t)}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}y(\mathbf{x},t)}{\partial t^{2}}\right]d\Omega+\int_\Gamma\left(y+t_c\frac{\partial y}{\partial t}\right)(v_k-V_k)n_k\,d\Gamma\end{aligned}$$ (5)

The derivation of the above equation is similar to that in textbooks [20,21] but uses the rate defined in Eq. (1) and takes into account the moving boundary due to chemical reactions or other processes. Here $y$ is a scalar or vector field defined on the domain $\Omega(\mathbf{x},t)$, $\mathbf{v}$ is the particle velocity, and $\mathbf{V}$ is the velocity of the moving boundary $\Gamma$ of the domain $\Omega(\mathbf{x},t)$. $\Delta\Omega(\mathbf{x},t)$ represents the variation of the volume $\Omega(\mathbf{x},t)$ from mechanical and chemical contributions; that is, in addition to volume deformation, it also includes the new products (with $\mathbf{V}$ in the positive direction) or material elimination (with $\mathbf{V}$ in the negative direction) due to chemical reactions during the time interval $\Delta t$. $\mathbf{n}$ is the unit outward normal of the boundary $\Gamma$. The characterized time $t_c$ represents the timescale on which the transient process takes place, which can be determined by experiments and molecular dynamics simulations. Generally, different processes have different characterized times.

If there are no chemical reactions, meaning that no material is added or removed and $\mathbf{V}=0$, then we have

$$\frac{D}{Dt}\left(\int_\Omega y\,d\Omega\right)=\int_{\Omega(\mathbf{x},t)}\left[\frac{\partial y(\mathbf{x},t)}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}y(\mathbf{x},t)}{\partial t^{2}}\right]d\Omega+\int_\Gamma\left(y+t_c\frac{\partial y}{\partial t}\right)v_k n_k\,d\Gamma$$ (6)

Moreover, if the characterized time $t_c$ is zero, Eq. (6) reduces to the classical Reynold's theorem

$$\frac{d}{dt}\left(\int_\Omega y\,d\Omega\right)=\int_{\Omega(\mathbf{x},t)}\frac{\partial y(\mathbf{x},t)}{\partial t}\,d\Omega+\int_\Gamma y\,v_k n_k\,d\Gamma$$

The transient Reynold's theorem (Eq. (5)) describes a non-equilibrium process. When the system evolves to the equilibrium state, the effect of the characterized time can be omitted.

## 4 Conservation Laws and Field Equations

In this section, all the field equations required to describe the transient process in solid continua are derived from the conservation laws.

### 4.1 Conservation of Mass.

For the balance of mass, all the chemical species ($N=1,\ldots,N_t$) are distinguished. For a single species $N$, the only change in the mass of species $N$ in a volume $\Omega$, beyond transport, occurs due to a mass source associated with chemical reactions, and it can be written as

$$\frac{D}{Dt}\left(\int_\Omega c_N\,d\Omega\right)=\int_\Omega v_{Nr}\dot{w}_r\,d\Omega\qquad(r=1,2,\cdots,n_r)$$ (7)

where $v_{Nr}$ is the stoichiometric coefficient of the $r$th reaction, $\dot{w}_r$ is the $r$th reaction rate, and $n_r$ is the total number of reactions. Using Reynold's transport theorem (Eq. (5)), one gets

$$\begin{aligned}\frac{D}{Dt}\left(\int_\Omega c_\alpha\,d\Omega\right)&=\int_\Omega\left(\frac{\partial c_\alpha}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_\alpha}{\partial t^{2}}\right)d\Omega+\int_\Gamma\left[\left(c_\alpha+t_c\frac{\partial c_\alpha}{\partial t}\right)(v_k^\alpha-V_k)\right]n_k\,d\Gamma\\&=\int_\Omega\left\{\left(\frac{\partial c_\alpha}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_\alpha}{\partial t^{2}}\right)+\nabla\cdot\left[\left(c_\alpha+t_c\frac{\partial c_\alpha}{\partial t}\right)(\mathbf{v}^\alpha-\mathbf{V})\right]\right\}d\Omega\end{aligned}$$ (8)

where we introduce the mass flux associated with the concentration $c_\alpha$ as

$$\mathbf{J}_\alpha=\left(c_\alpha+t_c\frac{\partial c_\alpha}{\partial t}\right)(\mathbf{v}^\alpha-\mathbf{V})$$ (9)

Here, $\mathbf{v}^\alpha$ is the velocity of species $\alpha$, and $\rho\mathbf{v}=\sum_{N=1}^{N_t}\rho_N\mathbf{v}^N$. The second integral on the right-hand side of the first line of Eq. (8) denotes the mass flux $\mathbf{J}_\alpha$ through the boundary $\Gamma$. Here, we describe the mass flux of a species relative to the velocity of the boundary $\mathbf{V}$; this distinguishes our description from the classical theory, which usually considers neither the characterized time nor the motion of the boundary.

By combining Eqs. (7) and (8), one obtains the local form of the balance of species mass

$$\frac{\partial c_N}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_N}{\partial t^{2}}+\nabla\cdot\mathbf{J}_N=v_{Nr}\dot{w}_r$$ (10)

Compared to the classical mass balance equation for an irreversible process with chemical reactions [22], Eq.
(10) includes the second-order rate of the concentration (the second term on the left-hand side), which arises from the transient effect of the irreversible process; it may be called a concentration acceleration. This term is very similar to the inertial concentration introduced by Kuang [7,11] to remedy the fact that Fick's diffusion theory cannot explain transient diffusion with finite speed. The present work describes transient diffusion with finite velocity naturally, as discussed in Sec. 6.

### 4.2 Conservation of Linear Momentum.

To state the balance of linear momentum, the rates of the partial linear momenta of all species are summed and equated with the body forces $\mathbf{b}$ and tractions $\mathbf{t}$ acting on the system, and one obtains

$$\frac{D}{Dt}\left(\int_\Omega\rho\mathbf{v}\,d\Omega\right)=\int_\Omega\mathbf{b}\,d\Omega+\int_\Gamma\mathbf{t}\,d\Gamma$$ (11)

By applying Reynold's transport theorem (Eq. (5)), the left-hand side of Eq. (11) can be written as

$$\frac{D}{Dt}\left(\int_\Omega\rho\mathbf{v}\,d\Omega\right)=\int_\Omega\left\{\frac{\partial\rho\mathbf{v}}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}\rho\mathbf{v}}{\partial t^{2}}\right\}d\Omega+\int_\Gamma\left[\left(\rho\mathbf{v}+t_c\frac{\partial\rho\mathbf{v}}{\partial t}\right)(v_k-V_k)\right]n_k\,d\Gamma$$ (12)

The second integral on the right-hand side of Eq. (12) can be expressed as

$$\int_\Gamma\left[\left(\rho\mathbf{v}+t_c\frac{\partial\rho\mathbf{v}}{\partial t}\right)(v_k-V_k)\right]n_k\,d\Gamma=\int_\Gamma\sigma_{ik}^{V}n_k\,d\Gamma=\int_\Omega\nabla\cdot\boldsymbol{\sigma}^{V}\,d\Omega$$ (13)

where we introduce $\sigma_{ik}^{V}=\left(\rho v_i+t_c\,\partial\rho v_i/\partial t\right)(v_k-V_k)$, which can be thought of as a residual stress or body stress due to convective diffusion [23] and chemical reactions. This supports axiom (iii): the zero-stress state is often changed.

The traction on the boundary is balanced by the stress, i.e.,

$$\int_\Gamma\mathbf{t}\,d\Gamma=\int_\Gamma\boldsymbol{\sigma}\cdot\mathbf{n}\,d\Gamma=\int_\Omega\nabla\cdot\boldsymbol{\sigma}\,d\Omega$$ (14)

Thus, the balance of linear momentum is expressed as

$$\frac{\partial\rho\mathbf{v}}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}\rho\mathbf{v}}{\partial t^{2}}=\nabla\cdot(\boldsymbol{\sigma}-\boldsymbol{\sigma}^{V})+\mathbf{b}$$ (15)

If the timescale $t_c$ approaches 0, then

$$\frac{\partial\rho\mathbf{v}}{\partial t}=\nabla\cdot(\boldsymbol{\sigma}-\boldsymbol{\sigma}^{V})+\mathbf{b}$$

Moreover, if the boundary is static, the above equation returns to the classical equation of motion.

### 4.3 Conservation of Angular Momentum.

The conservation of angular momentum states that the rate of the angular momentum is equal to the total torque, i.e.,

$$\frac{D}{Dt}\left(\int_\Omega\mathbf{x}\times\rho\mathbf{v}\,d\Omega\right)=\int_\Omega(\mathbf{x}\times\mathbf{b})\,d\Omega+\int_\Gamma(\mathbf{x}\times\mathbf{t})\,d\Gamma$$ (16)

The rate of the angular momentum is derived as

$$\begin{aligned}\frac{D}{Dt}\left(\int_\Omega\mathbf{x}\times\rho\mathbf{v}\,d\Omega\right)&=\int_\Omega\left\{\frac{\partial(\mathbf{x}\times\rho\mathbf{v})}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}(\mathbf{x}\times\rho\mathbf{v})}{\partial t^{2}}\right\}d\Omega+\int_\Gamma\left[(\mathbf{x}\times\rho\mathbf{v})+t_c\frac{\partial(\mathbf{x}\times\rho\mathbf{v})}{\partial t}\right](v_k-V_k)n_k\,d\Gamma\\&=\int_\Omega\mathbf{x}\times\left\{\frac{\partial\rho\mathbf{v}}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}\rho\mathbf{v}}{\partial t^{2}}\right\}d\Omega+\int_\Gamma\mathbf{x}\times\left\{\left[\rho\mathbf{v}+t_c\frac{\partial\rho\mathbf{v}}{\partial t}\right](v_k-V_k)\right\}n_k\,d\Gamma\\&=\int_\Omega\mathbf{x}\times\left[\nabla\cdot(\boldsymbol{\sigma}-\boldsymbol{\sigma}^{V})+\mathbf{b}\right]d\Omega+\int_\Omega\left(\boldsymbol{\delta}\times\boldsymbol{\sigma}^{V}+\mathbf{x}\times\nabla\cdot\boldsymbol{\sigma}^{V}\right)d\Omega\end{aligned}$$ (17)

Here, we utilized the balance of linear momentum (Eq. (15)) and the identity

$$\int_\Gamma\mathbf{x}\times\left\{\left[\rho\mathbf{v}+t_c\frac{\partial\rho\mathbf{v}}{\partial t}\right](v_k-V_k)\right\}n_k\,d\Gamma=\int_\Gamma e_{ijk}x_j\sigma_{km}^{V}n_m\,d\Gamma=\int_\Omega\nabla\cdot(\mathbf{x}\times\boldsymbol{\sigma}^{V})\,d\Omega=\int_\Omega(\boldsymbol{\delta}\times\boldsymbol{\sigma}^{V}+\mathbf{x}\times\nabla\cdot\boldsymbol{\sigma}^{V})\,d\Omega$$

where $e_{ijk}$ is the permutation tensor. The torque of the tractions computes to

$$\int_\Gamma\mathbf{x}\times\mathbf{t}\,d\Gamma=\int_\Gamma\mathbf{x}\times(\boldsymbol{\sigma}\cdot\mathbf{n})\,d\Gamma=\int_\Omega\nabla\cdot(\mathbf{x}\times\boldsymbol{\sigma})\,d\Omega=\int_\Omega(\boldsymbol{\delta}\times\boldsymbol{\sigma}+\mathbf{x}\times\nabla\cdot\boldsymbol{\sigma})\,d\Omega$$ (18)

Then, by combining Eqs. (16)–(18), one reaches

$$\int_\Omega\left[\boldsymbol{\delta}\times(\boldsymbol{\sigma}-\boldsymbol{\sigma}^{V})\right]d\Omega=0$$ (19)

The above equation results in the symmetry condition

$$[\boldsymbol{\sigma}-\boldsymbol{\sigma}^{V}]=[\boldsymbol{\sigma}-\boldsymbol{\sigma}^{V}]^{T}$$ (20)

From Eq. (20), one finds that either the transient effect or chemical reactions result in $(\boldsymbol{\sigma}^{V})^{T}\neq\boldsymbol{\sigma}^{V}$ and hence the unsymmetry of the stress tensor $\boldsymbol{\sigma}$. Conversely, without the characterized time $t_c$ and boundary motion ($\mathbf{V}=0$), $(\boldsymbol{\sigma}^{V})^{T}=\boldsymbol{\sigma}^{V}$ and thus $\boldsymbol{\sigma}^{T}=\boldsymbol{\sigma}$, which aligns with the standard result of a symmetrical stress tensor.

### 4.4 Conservation of Energy.

For the balance of energy, the change of the total internal energy $U$ and the kinetic energy $K$ equals the power supplied to the body $\Omega$ by heat $Q$ and the work $W$ done by external forces, i.e.,

$$\frac{D}{Dt}(U+K)=W+Q$$ (21)

The left-hand side of Eq. (21) involves the internal energy density $e_N$ and the kinetic energy density of each species,

$$U=\int_\Omega\rho_N e_N\,d\Omega,\qquad K=\int_\Omega\frac{1}{2}\rho\mathbf{v}\cdot\mathbf{v}\,d\Omega$$ (22)

We again apply Reynold's transport theorem through Eq.
(5) and obtain

$$\begin{aligned}\frac{D}{Dt}U&=\sum_{\alpha=1}^{N_t}\left\{\int_\Omega\left[\frac{\partial(\rho_\alpha e_\alpha)}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}(\rho_\alpha e_\alpha)}{\partial t^{2}}\right]d\Omega+\int_\Gamma\left[\rho_\alpha e_\alpha+t_c\frac{\partial(\rho_\alpha e_\alpha)}{\partial t}\right](v_k^\alpha-V_k)n_k\,d\Gamma\right\}\\&=\sum_{\alpha=1}^{N_t}\left\{\int_\Omega\left[\frac{\partial(\rho_\alpha e_\alpha)}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}(\rho_\alpha e_\alpha)}{\partial t^{2}}\right]d\Omega+\int_\Omega M_\alpha\left(\nabla e_\alpha\cdot\mathbf{J}_\alpha+e_\alpha\nabla\cdot\mathbf{J}_\alpha\right)d\Omega+\int_\Omega t_c\nabla\cdot(\rho_\alpha\mathbf{e}_\alpha^{v})\,d\Omega\right\}\\&=\sum_{\alpha=1}^{N_t}\left\{\int_\Omega\left[\left(\frac{\partial(\rho_\alpha e_\alpha)}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}(\rho_\alpha e_\alpha)}{\partial t^{2}}\right)+M_\alpha\nabla e_\alpha\cdot\mathbf{J}_\alpha-M_\alpha e_\alpha\left(\frac{\partial c_\alpha}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_\alpha}{\partial t^{2}}\right)+M_\alpha e_\alpha v_{\alpha r}\dot{w}_r\right]d\Omega+\int_\Omega t_c\nabla\cdot(\rho_\alpha\mathbf{e}_\alpha^{v})\,d\Omega\right\}\end{aligned}$$ (23)

where we utilized the conservation of mass (Eq. (10)), the shorthand $\mathbf{e}_\alpha^{v}\equiv(\partial e_\alpha/\partial t)(\mathbf{v}^\alpha-\mathbf{V})$, and the surface-integral identity

$$\int_\Gamma\left[\rho_\alpha\frac{\partial e_\alpha}{\partial t}(v_k^\alpha-V_k)\right]n_k\,d\Gamma=\int_\Omega\nabla\cdot(\rho_\alpha\mathbf{e}_\alpha^{v})\,d\Omega$$

Similarly, applying Reynold's transport theorem (Eq. (5)) to the kinetic energy, we obtain

$$\begin{aligned}\frac{D}{Dt}K&=\frac{1}{2}\int_\Omega\left\{\frac{\partial\rho v^{2}}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}\rho v^{2}}{\partial t^{2}}\right\}d\Omega+\frac{1}{2}\int_\Gamma\sum_{\alpha=1}^{N_t}\left\{\left[\rho_\alpha v^{2}+t_c\frac{\partial\rho_\alpha v^{2}}{\partial t}\right](v_k^\alpha-V_k)\right\}n_k\,d\Gamma\\&=\frac{1}{2}\int_\Omega\left\{M_N\nabla v^{2}\cdot\mathbf{J}_N+M_N v^{2}v_{Nr}\dot{w}_r+2\rho\mathbf{v}\cdot\left(\frac{\partial\mathbf{v}}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}\mathbf{v}}{\partial t^{2}}\right)\right\}d\Omega+\frac{1}{2}\int_\Omega t_c\left\{2\nabla\cdot(\rho\mathbf{K}^{v})+\left[\frac{\partial\rho}{\partial t}\frac{\partial v^{2}}{\partial t}+\rho\left(\frac{\partial\mathbf{v}}{\partial t}\right)^{2}\right]\right\}d\Omega\end{aligned}$$ (24)

with the shorthand $\mathbf{K}^{v}\equiv(\mathbf{v}\cdot\partial\mathbf{v}/\partial t)(\mathbf{v}-\mathbf{V})$ and

$$\int_\Gamma\sum_{\alpha=1}^{N_t}\left[\left(\rho_\alpha\mathbf{v}\cdot\frac{\partial\mathbf{v}}{\partial t}\right)(v_k^\alpha-V_k)\right]n_k\,d\Gamma=\int_\Gamma\left[\left(\rho\mathbf{v}\cdot\frac{\partial\mathbf{v}}{\partial t}\right)(v_k-V_k)\right]n_k\,d\Gamma=\int_\Omega\nabla\cdot(\rho\mathbf{K}^{v})\,d\Omega$$

In another way, we can utilize Reynold's transport theorem (Eq. (5)) and the balance of linear momentum (Eq. (15)) to get

$$\begin{aligned}\frac{D}{Dt}K&=\frac{1}{2}\int_\Omega\left[\left(\frac{\partial\rho v^{2}}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}\rho v^{2}}{\partial t^{2}}\right)+\nabla\cdot(\boldsymbol{\sigma}^{V}\cdot\mathbf{v})\right]d\Omega+\frac{1}{2}\int_\Gamma\left[t_c\left(\rho\mathbf{v}\cdot\frac{\partial\mathbf{v}}{\partial t}\right)(v_k-V_k)\right]n_k\,d\Gamma\\&=\frac{1}{2}\int_\Omega\left\{\rho\mathbf{v}\cdot\left(\frac{\partial\mathbf{v}}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}\mathbf{v}}{\partial t^{2}}\right)+\frac{t_c}{2}\left(2\frac{\partial\rho\mathbf{v}}{\partial t}\cdot\frac{\partial\mathbf{v}}{\partial t}\right)+t_c\nabla\cdot(\rho\mathbf{K}^{v})+(\nabla\otimes\mathbf{v}):\boldsymbol{\sigma}^{V}+\mathbf{v}\cdot(\nabla\cdot\boldsymbol{\sigma}+\mathbf{b})\right\}d\Omega\end{aligned}$$ (25)

By combining Eqs. (24) and (25), we obtain

$$\begin{aligned}\frac{D}{Dt}K&=\int_\Omega\left\{(\nabla\otimes\mathbf{v}):\boldsymbol{\sigma}^{V}+\mathbf{v}\cdot(\nabla\cdot\boldsymbol{\sigma}+\mathbf{b})-\frac{1}{2}\left(M_N\nabla v^{2}\cdot\mathbf{J}_N+M_N v^{2}v_{Nr}\dot{w}_r\right)\right\}d\Omega\\&\quad+\frac{1}{2}\int_\Omega t_c\left[\frac{\partial\rho}{\partial t}\frac{\partial v^{2}}{\partial t}+3\rho\left(\frac{\partial\mathbf{v}}{\partial t}\right)^{2}\right]d\Omega\end{aligned}$$ (26)

In terms of heat, we treat energy production due to heat sources and heat fluxes,

$$Q=\int_\Omega r\,d\Omega-\int_\Gamma\mathbf{q}\cdot\mathbf{n}\,d\Gamma=\int_\Omega(r-\nabla\cdot\mathbf{q})\,d\Omega$$ (27)

where $r$ and $q_k$ are the external body heat source strength and the heat flux, respectively. Note that heat generated by chemical reactions is an internal process of conversion and dissipation within the internal energy. The rate of the work done by tractions is written as

$$W=\int_\Omega\mathbf{b}\cdot\mathbf{v}\,d\Omega+\int_\Gamma\mathbf{t}\cdot\mathbf{v}\,d\Gamma=\int_\Omega\left[\mathbf{b}\cdot\mathbf{v}+\nabla\cdot(\mathbf{v}\cdot\boldsymbol{\sigma})\right]d\Omega$$ (28)

Combining Eqs. (23) and (26)–(28), we obtain the transient form of the balance of energy

$$\begin{aligned}\sum_{\alpha=1}^{N_t}\left[\frac{\partial(\rho_\alpha e_\alpha)}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}(\rho_\alpha e_\alpha)}{\partial t^{2}}\right]&=(r-\nabla\cdot\mathbf{q})+(\nabla\otimes\mathbf{v}):(\boldsymbol{\sigma}-\boldsymbol{\sigma}^{V})-\sum_{\alpha=1}^{N_t}\left[M_\alpha\nabla\left(e_\alpha-\frac{1}{2}v^{2}\right)\cdot\mathbf{J}_\alpha\right]\\&\quad+\sum_{\alpha=1}^{N_t}\left[M_\alpha e_\alpha\left(\frac{\partial c_\alpha}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_\alpha}{\partial t^{2}}\right)-M_\alpha\left(e_\alpha-\frac{1}{2}v^{2}\right)v_{\alpha r}\dot{w}_r\right]\\&\quad-t_c\left\{\nabla\cdot(\rho_N\mathbf{e}_N^{v})+\frac{1}{2}\left[\frac{\partial\rho}{\partial t}\frac{\partial v^{2}}{\partial t}+3\rho\left(\frac{\partial\mathbf{v}}{\partial t}\right)^{2}\right]\right\}\\&=(r-\nabla\cdot\mathbf{q})+(\nabla\otimes\mathbf{v}):\boldsymbol{\sigma}-\sum_{\alpha=1}^{N_t}\left[M_\alpha\nabla e_\alpha\cdot\mathbf{J}_\alpha-M_\alpha e_\alpha\left(\frac{\partial c_\alpha}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_\alpha}{\partial t^{2}}\right)+M_\alpha e_\alpha v_{\alpha r}\dot{w}_r\right]\\&\quad-t_c\left\{\nabla\cdot(\rho_N\mathbf{e}_N^{v})+\frac{1}{2}\left[\frac{\partial\rho}{\partial t}\frac{\partial v^{2}}{\partial t}+3\rho\left(\frac{\partial\mathbf{v}}{\partial t}\right)^{2}\right]+(\nabla\otimes\mathbf{v}):\left[\rho\frac{\partial\mathbf{v}}{\partial t}\otimes(\mathbf{v}-\mathbf{V})\right]\right\}\end{aligned}$$ (29)

where we utilized $M_N v_{Nr}=0$ for each chemical reaction $r$.

### 4.5 Balance of Entropy.

Similarly, we proceed with the balance of entropy. We assume the entropy density $\eta_\alpha$ of each species and the temperature $T$ of the system and obtain

$$\frac{D}{Dt}\int_\Omega\rho_N\eta_N\,d\Omega=\int_\Omega\frac{r}{T}\,d\Omega-\int_\Gamma\frac{\mathbf{q}}{T}\cdot\mathbf{n}\,d\Gamma+\int_\Omega\frac{d_u}{T}\,d\Omega$$ (30)

where $d_u$ is the rate of dissipation energy density. Applying Reynold's theorem (Eq. (5)) to the left-hand side of Eq. (30), and following the same manipulations as in Eq. (23), gives

$$\frac{D}{Dt}\left(\int_\Omega\rho_N\eta_N\,d\Omega\right)=\sum_{\alpha=1}^{N_t}\left\{\int_\Omega\left[\left(\frac{\partial(\rho_\alpha\eta_\alpha)}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}(\rho_\alpha\eta_\alpha)}{\partial t^{2}}\right)+M_\alpha\nabla\eta_\alpha\cdot\mathbf{J}_\alpha-M_\alpha\eta_\alpha\left(\frac{\partial c_\alpha}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_\alpha}{\partial t^{2}}\right)+M_\alpha\eta_\alpha v_{\alpha r}\dot{w}_r\right]d\Omega+\int_\Omega t_c\nabla\cdot(\rho_\alpha\boldsymbol{\eta}_\alpha^{v})\,d\Omega\right\}$$ (31)

with the shorthand $\boldsymbol{\eta}_\alpha^{v}\equiv(\partial\eta_\alpha/\partial t)(\mathbf{v}^\alpha-\mathbf{V})$. Then, applying the divergence theorem to the right-hand side of Eq. (30) and combining Eqs. (30) and (31) yields an expression for the rate of the total entropy:

$$\begin{aligned}\sum_{\alpha=1}^{N_t}\left[\frac{\partial(\rho_\alpha\eta_\alpha)}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}(\rho_\alpha\eta_\alpha)}{\partial t^{2}}\right]&=\frac{d_u}{T}+\frac{r}{T}-\left(\frac{\nabla\cdot\mathbf{q}}{T}-\frac{\mathbf{q}\cdot\nabla T}{T^{2}}\right)\\&\quad-\sum_{\alpha=1}^{N_t}\left[M_\alpha\nabla\eta_\alpha\cdot\mathbf{J}_\alpha-M_\alpha\eta_\alpha\left(\frac{\partial c_\alpha}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_\alpha}{\partial t^{2}}\right)+M_\alpha\eta_\alpha v_{\alpha r}\dot{w}_r+t_c\nabla\cdot(\rho_\alpha\boldsymbol{\eta}_\alpha^{v})\right]\end{aligned}$$ (32)

This equation is a transient representation of the Clausius-Duhem inequality [25].

### 4.6 The Second Law of Thermodynamics.
Denoting the total Helmholtz energy $H$ and the species Helmholtz energy density $h_N$, we use the Legendre transformation of the internal energy density $e_N$ to obtain

$$H(\boldsymbol{\varepsilon},\boldsymbol{\gamma},c_N,T)=\rho_N h_N=\rho_N(e_N-T\eta_N)$$ (33)

Here, the strain rate tensor $\dot{\boldsymbol{\varepsilon}}$, which is symmetric, and the vorticity $\dot{\boldsymbol{\gamma}}$, which is skew-symmetric, are defined, respectively, as

$$\dot{\boldsymbol{\varepsilon}}=\frac{1}{2}(\nabla\otimes\mathbf{v}+\mathbf{v}\otimes\nabla)=\dot{\boldsymbol{\varepsilon}}^{(r)}+\dot{\boldsymbol{\varepsilon}}^{(i)}$$ (34)

$$\dot{\boldsymbol{\gamma}}=\frac{1}{2}(\nabla\otimes\mathbf{v}-\mathbf{v}\otimes\nabla)=\dot{\boldsymbol{\gamma}}^{(r)}+\dot{\boldsymbol{\gamma}}^{(i)}$$ (35)

This decomposition is needed because the stress is unsymmetric, as discussed in Sec. 4.3. Here, the superscript $(r)$ represents the reversible part and the superscript $(i)$ the irreversible part. The stress tensor is decomposed into a symmetric stress $\boldsymbol{\tau}$ and a skew stress $\mathbf{m}$:

$$\boldsymbol{\tau}=\frac{1}{2}(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{T}),\qquad\mathbf{m}=\frac{1}{2}(\boldsymbol{\sigma}-\boldsymbol{\sigma}^{T})$$ (36)

Then, we have

$$\boldsymbol{\sigma}:(\nabla\otimes\mathbf{v})=(\boldsymbol{\tau}+\mathbf{m}):(\dot{\boldsymbol{\varepsilon}}+\dot{\boldsymbol{\gamma}})=\boldsymbol{\tau}:\dot{\boldsymbol{\varepsilon}}+\mathbf{m}:\dot{\boldsymbol{\gamma}}$$ (37)

Thus, combining Eqs. (29), (32), and (33), making use of Eq. (37), and solving for $d_u$, the so-called full dissipation inequality, also known as the Clausius-Planck inequality, is obtained as

$$\begin{aligned}d_u&=\left(-\rho_N\eta_N-\frac{\partial H}{\partial T}\right)\frac{\partial T}{\partial t}+\left(\boldsymbol{\tau}-\frac{\partial H}{\partial\boldsymbol{\varepsilon}^{(r)}}\right):\dot{\boldsymbol{\varepsilon}}^{(r)}+\left(\mathbf{m}-\frac{\partial H}{\partial\boldsymbol{\gamma}^{(r)}}\right):\dot{\boldsymbol{\gamma}}^{(r)}+\sum_{\alpha=1}^{N_t}\left(M_\alpha h_\alpha-\frac{\partial H}{\partial c_\alpha}\right)\frac{\partial c_\alpha}{\partial t}\\&\quad-\left(\frac{\mathbf{q}}{T}-\sum_{\alpha=1}^{N_t}M_\alpha\eta_\alpha\mathbf{J}_\alpha\right)\cdot\nabla T+\boldsymbol{\tau}:\dot{\boldsymbol{\varepsilon}}^{(i)}+\mathbf{m}:\dot{\boldsymbol{\gamma}}^{(i)}-\sum_{\alpha=1}^{N_t}M_\alpha\nabla h_\alpha\cdot\mathbf{J}_\alpha-\sum_{\alpha=1}^{N_t}M_\alpha h_\alpha v_{\alpha r}\dot{w}_r+t_c A\end{aligned}$$ (38)

Here, we let $A$ denote the sum of the terms multiplied by the characterized time $t_c$; its detailed expression is omitted. Based on the second law of thermodynamics, the dissipation must be greater than zero for irreversible transformations and equal to zero for reversible transformations, i.e., $d_u\geq 0$. Following the Coleman-Noll argument [24], the rates of $\boldsymbol{\varepsilon},\boldsymbol{\gamma},c_N,T$ are assumed to be independent. Therefore, the parenthetical terms in Eq. (38) are required to vanish. This yields the consistency conditions

$$s=\rho_N\eta_N=-\frac{\partial H}{\partial T},\qquad\boldsymbol{\tau}=\frac{\partial H}{\partial\boldsymbol{\varepsilon}^{(r)}},\qquad\mathbf{m}=\frac{\partial H}{\partial\boldsymbol{\gamma}^{(r)}},\qquad\hat{\mu}_\alpha=M_\alpha h_\alpha=\frac{\partial H}{\partial c_\alpha}$$ (39)

where $s$ is the total entropy, and the chemical potential (per mole) of species $\alpha$ is introduced as $\hat{\mu}_\alpha=M_\alpha h_\alpha$. Due to relation (4), only $(N_t-1)$ concentrations are independent, so we denote the driving force $\mu_N$ as the chemical potential difference $\mu_N=\hat{\mu}_N-\hat{\mu}_{N_t}$ ($N=1,2,\cdots,N_t-1$). In the rest of the paper, we call $\mu_N$ the chemical potential for convenience. Hence, the total Helmholtz energy of Eq. (33) is the sum of the chemical potentials of all the components

$$H=\rho_N h_N=c_N\hat{\mu}_N$$ (40)

The affinity of the $r$th reaction is defined as

$$A_r=-v_{Nr}\mu_N$$ (41)

and the entropy flow vector is

$$\mathbf{J}_s=\frac{\mathbf{q}}{T}-\sum_{\alpha=1}^{N_t}M_\alpha\eta_\alpha\mathbf{J}_\alpha$$ (42)

We continue to evaluate the dissipation inequality (38). Since the term $t_cA$ is a higher-order small quantity and all the relations of Eq. (39) are fulfilled, Eq. (38) reduces to

$$0\leq\boldsymbol{\tau}:\dot{\boldsymbol{\varepsilon}}^{(i)}+\mathbf{m}:\dot{\boldsymbol{\gamma}}^{(i)}-\nabla T\cdot\mathbf{J}_s-\nabla\mu_N\cdot\mathbf{J}_N+A_r\dot{w}_r$$ (43)

$$0\leq t_c A\approx 0$$ (44)

Equation (43) constitutes a thermodynamic constraint for the irreversible strains, the entropy flux $\mathbf{J}_s$, the species fluxes $\mathbf{J}_N$, and the reaction rates $\dot{w}_r$. According to linear irreversible thermodynamics, the irreversible flows are proportional to the irreversible forces, and the evolution equations (this relation is also called the second constitutive equation) can be obtained from Eq. (43). It should be noted that, for isotropic materials, there is no coupling between irreversible forces and flows of different tensorial rank, according to the requirements of Curie's symmetry principle [25]; i.e., the corresponding Onsager interference coefficients must necessarily be zero. For anisotropic materials, coupling between irreversible forces and flows of different tensorial rank does exist, as discussed in Ref. [26]. Under the assumption of each process being independent for isotropic materials, the first and second terms on the right-hand side of Eq. (43) relate irreversible strains to stresses.
The last term guides the direction of chemical reactions. The third and fourth terms lead to Fourier's law of thermal conduction and Fick's law of diffusion, respectively:

$$\mathbf{J}_s=-\boldsymbol{\lambda}\cdot\nabla T$$ (45)

$$\mathbf{J}_N=-\mathbf{D}\cdot\nabla\mu_N$$ (46)

where the thermal conduction coefficients $\boldsymbol{\lambda}$ and the diffusion coefficients $\mathbf{D}$ are symmetric. The constitutive relations of the materials can be derived from Eq. (39). We can also introduce internal variables into the energy function to describe other irreversible processes.

## 5 Transient Thermal Conduction

The development of the hyperbolic temperature wave equation (of the same form as Eq. (10)) instead of the parabolic thermal equation is mainly based on two reasons: the paradox of infinite thermal wave velocity, and the Landau second sound speed observed in experiments in liquid helium and in solids at low temperatures [4,5]. This is called the non-Fourier effect of thermal conduction. By combining Eqs. (32) and (38), the entropy balance equation can be rewritten as

$$\begin{aligned}\frac{\partial s}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}s}{\partial t^{2}}&=\frac{r}{T}-\nabla\cdot\frac{\mathbf{q}}{T}+\frac{1}{T}\sum_{\alpha=1}^{N_t}\left(M_\alpha\eta_\alpha\mathbf{J}_\alpha\right)\cdot\nabla T+\sum_{\alpha=1}^{N_t}\left[M_\alpha\eta_\alpha\left(\frac{\partial c_\alpha}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_\alpha}{\partial t^{2}}\right)\right]\\&\quad+\frac{1}{T}\left(\boldsymbol{\tau}:\dot{\boldsymbol{\varepsilon}}^{(i)}+\mathbf{m}:\dot{\boldsymbol{\gamma}}^{(i)}\right)-\sum_{\alpha=1}^{N_t}M_\alpha\nabla\left(\frac{h_\alpha}{T}+\eta_\alpha\right)\cdot\mathbf{J}_\alpha-\sum_{\alpha=1}^{N_t}M_\alpha\left(\frac{h_\alpha}{T}+\eta_\alpha\right)v_{\alpha r}\dot{w}_r\end{aligned}$$ (47)

If we consider only pure thermal conduction, the previous equation reduces to

$$T\frac{\partial s}{\partial t}+\frac{t_c}{2}T\frac{\partial^{2}s}{\partial t^{2}}=r-\nabla\cdot\mathbf{q}$$ (48)

This equation is similar to the inertial entropy theory proposed by Kuang [7,11,12], where an inertial entropy is introduced into the equation of entropy balance. In the pure thermal conduction case, $T\partial s/\partial t=C\partial\theta/\partial t$, where we denote $\theta=T-T_0$, with $T_0$ a reference temperature and $C$ the specific heat per volume. By means of Fourier's law (Eq. (45)), Eq. (48) can be written as

$$C\frac{\partial\theta}{\partial t}+\frac{Ct_c}{2}\frac{\partial^{2}\theta}{\partial t^{2}}=r+\lambda_{ij}\theta_{,ij}$$ (49)

This is a hyperbolic heat transfer equation with a finite propagation speed. The theory is useful in problems with microscopic size and time under shock loading, for example, applying a short-pulse laser beam to thin gold films. For an isotropic thermal conduction problem, $\lambda_{ij}=\lambda\delta_{ij}$, Eq. (49) is the same as the Cattaneo-Vernotte (C-V) theory [8,9] when the heat source $r=0$, but the physical meaning is totally different. In C-V theory, Fourier's law (Eq. (45)) is modified by introducing a relaxation time in order to obtain a hyperbolic heat transfer equation. In contrast, the transient theory of this paper derives the finite-velocity temperature wave equation naturally.

## 6 Transient Diffusion

Transient mass transfer can be described as the random walk of an ensemble of particles from regions of high concentration to regions of lower concentration. For transient diffusion, the propagation speed of mass transfer is finite and the concentration field has a wave-like behavior. This is called the non-Fick effect. During the past few years, considerable attention has been paid to the non-Fick effect. Combining the Maxwell model [2] and the mass conservation equation, a non-Fick mass transfer equation can be derived. In this section, diffusion with finite speed is treated naturally. For simplicity, we consider isotropic pure diffusion, with $\mu_N=RT\ln c_N$ and $\mathbf{J}_N=-(Dc_N/RT)\nabla\mu_N$. Then, by combining the balance of mass (Eq. (10)) and Fick's law, we obtain a hyperbolic diffusion equation

$$\frac{\partial c_N}{\partial t}+\frac{t_c}{2}\frac{\partial^{2}c_N}{\partial t^{2}}=D\nabla^{2}c_N$$ (50)

This is just the famous non-Fick theory, when $t_c/2$ is taken to be the diffusion relaxation time.
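As a quick numerical illustration of the finite propagation speed (this is not from the paper; the scheme and all parameter values are a made-up sketch), Eq. (50) can be integrated with an explicit finite-difference method. The disturbance front should travel at roughly $\sqrt{2D/t_c}$:

```python
import numpy as np

D, tc = 1.0, 0.02                      # illustrative diffusivity and characterized time
speed = np.sqrt(2 * D / tc)            # finite front speed implied by Eq. (50)
nx, L = 201, 2.0
dx = L / (nx - 1)
dt = 0.4 * dx / speed                  # CFL-limited time step
x = np.linspace(0, L, nx)
c = np.exp(-((x - L / 2) / 0.05) ** 2) # initial concentration bump at the center
c_prev = c.copy()                      # zero initial rate: c^{-1} = c^0

a, b = tc / (2 * dt**2), 1 / (2 * dt)  # weights of the two time-derivative terms
for _ in range(int(0.05 / dt)):        # march to t = 0.05
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c_next = (D * lap + 2 * a * c - (a - b) * c_prev) / (a + b)
    c_next[0] = c_next[-1] = 0.0
    c_prev, c = c, c_next

front = np.max(np.abs(x[np.abs(c) > 1e-6] - L / 2))   # farthest visible disturbance
print(f"observed front ~ {front:.2f}, theoretical speed*t = {speed * 0.05:.2f}")
```

In the limit $t_c\to 0$, Eq. (50) reduces to the classical parabolic Fick equation, whose solution is nonzero everywhere for any $t>0$ rather than behind a moving front.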
## 7 Conclusions

In this paper, we presented a rigorous formulation of chemomechanics for solid continua. The transient effects and the coupling between chemistry and mechanics are considered. For the transient or microscopic time, the second-order rate and the characterized time are introduced through a Taylor expansion. The transient Reynold's transport theorem is then derived, in which the new products or material elimination due to chemical reactions are considered as well. The transient field equations with second-order rates are derived from the fundamental conservation laws, taking mechanical and chemical contributions and microscopic time into account. Either microscopic time or chemical reactions leads to the unsymmetry of the stress tensor. Based on the second law of thermodynamics, the relationships between the Helmholtz energy and the constitutive properties, the evolution equations, and the entropy are described, consistent with classical continuum thermodynamics and with the constitutive theory of continuum mechanics. From the balances of entropy and mass, the transient equations of thermal conduction and diffusion with finite velocity are naturally derived rather than postulated. The theory reduces to the classical continuum theory or the steady-state case by simply letting the characterized time approach zero, which corresponds to the transient non-equilibrium state approaching equilibrium. The formulation introduced in this paper is exact, and its physical meaning is clear. This work can be used to model the entire process within the framework of irreversible thermodynamics, to bridge molecular dynamics and continuum mechanics, and thus to correctly reproduce the physics.

## Acknowledgment

The support from NSFC (Grant No. 12090030) is appreciated.

## Conflict of Interest

There are no conflicts of interest.

## Data Availability Statement

The data sets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. The authors attest that all data for this study are included in the paper. Data provided by a third party are listed in the Acknowledgment. No data, models, or code were generated or used for this paper.

## References

1. Marciak-Kozlowska, J., Mucha, Z., and Kozlowski, M., 1995, "Picosecond Thermal Pulses in Thin Gold Films," Int. J. Thermophys., 16(6), pp. 1489–1497.
2. Maxwell, J. C., 1867, "On the Dynamic Theory of Gases," Phil. Trans. R. Soc., 157(1), pp. 49–88.
3. Sherief, H. H., Hamza, F. A., and Saleh, H. A., 2004, "The Theory of Generalized Thermoelastic Diffusion," Int. J. Eng. Sci., 42(5–6), pp. 591–608.
4. Landau, L., 1941, "The Theory of Superfluidity of Helium II," J. Phys., 5(1), pp. 71–90.
5. Jackson, H. E., and Walker, C. T., 1971, "Thermal Conductivity, Second Sound and Phonon-Phonon Interactions in NaF," Phys. Rev. B, 3(4), pp. 1428–1439.
6. Hassanizadeh, S. M., 1996, "On the Transient Non-Fickian Dispersion Theory," Transp. Porous Media, 23(1), pp. 107–124.
7. Kuang, Z.-B., 2010, "Variational Principles for Generalized Thermodiffusion Theory in Pyroelectricity," Acta Mech., 214(3–4), pp. 275–289.
8. Cattaneo, C., 1948, "Sulla Conduzione del Calore," Atti Semin. Mat. Fis. Univ. Modena, 3(3), pp. 83–101.
9. Vernotte, P., 1958, "Les Paradoxes de la Théorie Continue de l'Équation de la Chaleur," C. R. Acad. Sci., 246(12), pp. 3154–3155.
10. Joseph, D. D., and Preziosi, L., 1989, "Heat Waves," Rev. Mod. Phys., 61(1), pp. 41–73.
11. Kuang, Z.-B., 2009, "Variational Principles for Generalized Dynamical Theory of Thermopiezoelectricity," Acta Mech., 203(1–2), pp. 1–11.
12. Kuang, Z.-B.
, 2014, "Discussions on the Temperature Wave Equation," Int. J. Heat Mass Transfer, 71(1), pp. 424–430.
13. Shen, S. P., Feng, X., Meng, S. H., and Zhu, T., 2019, "Special Topic-Chemomechanics," Sci. China Technol. Sci., 62(8).
14. Ganser, M., Hildebrand, F. E., Kamlah, M., and McMeeking, R. M., 2019, "A Finite Strain Electro-chemo-mechanical Theory for Ion Transport With Application to Binary Solid Electrolytes," J. Mech. Phys. Solids, 125(1), pp. 681–713.
15. Hu, S. L., and Shen, S. P., 2013, "Non-equilibrium Thermodynamics and Variational Principles for Fully Coupled Thermal Mechanical Chemical Processes," Acta Mech., 224(12), pp. 2895–2910.
16. Konica, S., and Sain, T., 2020, "A Thermodynamically Consistent Chemo-mechanically Coupled Large Deformation Model for Polymer Oxidation," J. Mech. Phys. Solids, 137(1), p. 103858.
17. Afshar, A., and Di Leo, C. V., 2021, "A Thermodynamically Consistent Gradient Theory for Diffusion-Reaction-Deformation in Solids: Application to Conversion-Type Electrodes," J. Mech. Phys. Solids, 151(1), p. 104368.
18. Fung, Y. C., 1990, Biomechanics: Motion, Flow, Stress, and Growth, Springer-Verlag, New York.
19. Fung, Y. C., 2002, "Biomechanics and Gene Activities," 32(4), pp. 484–496.
20. Wu, W. Y., 1982, Fluid Mechanics, Peking University Press, Beijing, China.
21. Kuang, Z. B., 1993, Nonlinear Continuum Mechanics, Xi'an Jiaotong University Press, Xi'an, China.
22. Prigogine, I., and Kondepudi, D., 1999, Thermodynamique, Éditions Odile Jacob, Paris.
23. Levich, V., 1962, Physicochemical Hydrodynamics, Prentice-Hall, Englewood Cliffs, NJ.
24. Coleman, B. D., and Noll, W., 1963, "The Thermodynamics of Elastic Materials With Heat Conduction and Viscosity," Arch. Ration. Mech. Anal., 13(1), pp. 167–178.
25. De Groot, S. R., 1952, Thermodynamics of Irreversible Processes, North-Holland Publishing Company, Amsterdam.
26. Rambert, G., Grandidier, J. C., and Aifantis, E. C., 2007, "On the Direct Interactions Between Heat Transfer, Mass Transport and Chemical Processes Within Gradient Elasticity," Eur. J. Mech. A Solids, 26(1), pp. 68–87.
2022-06-27 15:20:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 85, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8059589266777039, "perplexity": 1081.0293726142575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103334753.21/warc/CC-MAIN-20220627134424-20220627164424-00458.warc.gz"}
http://digital.csic.es/handle/10261/37482
Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/37482

Title: Time-dependent matter instability and star singularity in F(R) gravity
Authors: Bamba, Kazuharu; Nojiri, Shin'ichi; Odintsov, Sergei D.
Keywords: Dark energy; Modified gravities; Finite future singularity
Publication date: 25-Apr-2011
Publisher: Elsevier
Citation: Physics Letters - Section B 698(5): 451-456 (2011)
Abstract: We investigate a curvature singularity appearing in the star collapse process in $F(R)$ gravity. In particular, we propose an understanding of the mechanism that produces the curvature singularity. Moreover, we explicitly demonstrate that adding an $R^\alpha$ ($1 < \alpha \leq 2$) term can cure the curvature singularity, so that viable $F(R)$ gravity models become free of such a singularity. Furthermore, we discuss the realization process of the curvature singularity and estimate the time scale of its appearance. For exponential gravity, it is shown that in the case of star collapse the time scale is much shorter than the age of the universe, whereas in cosmological circumstances it is as long as the cosmological time.
Description: Published online March 24, 2011.
Publisher's version: http://dx.doi.org/10.1016/j.physletb.2011.03.038
URI: http://hdl.handle.net/10261/37482
ISSN: 1873-2445
DOI: 10.1016/j.physletb.2011.03.038
Appears in Collections: (ICE) Artículos
2015-07-31 05:19:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3929629623889923, "perplexity": 4271.7830806611255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988051.33/warc/CC-MAIN-20150728002308-00018-ip-10-236-191-2.ec2.internal.warc.gz"}
http://xuebao.sjtu.edu.cn/CN/10.16183/j.cnki.jsjtu.2020.01.006
### Experiment Simulation of Submarine Clayey Slope Failure Induced by Gas Hydrate Dissociation

SONG Xiaolong, ZHAO Wei, NIAN Tingkai, JIAO Houbin

State Key Laboratory of Coastal and Offshore Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China

Online: 2020-01-28; Published: 2020-01-16
Corresponding author: NIAN Tingkai, professor and doctoral supervisor; Tel.: 0411-84708511; E-mail: tknian@dlut.edu.cn
First author: SONG Xiaolong (1990-), from Puyang, Henan Province, Ph.D. candidate, mainly engaged in research on geotechnical engineering and marine gas hydrates
Funding: National Key R&D Program of China (2018YFC0309203); National Natural Science Foundation of China (51879036, 51579032)

Abstract: In view of the insufficient understanding of the failure mechanism of submarine clayey slopes induced by gas hydrate dissociation, the effect of gas on a submarine slope after hydrate dissociation was simulated by means of venting. Multiple experiments were carried out under different combinations of soil strength, embedment depth of gas hydrate, flow rate, and dissociation zone. Combined with image processing, the deformation and evolution of the slope surface and slope body were investigated in depth, and the deformation and failure characteristics of submarine clayey slopes induced by hydrate dissociation were initially revealed. On this basis, a limit equilibrium method was used to establish an analytic expression for the critical pressure of submarine slope failure, which theoretically explains the critical pressure value observed in the slope deformation and failure process. The results show that the deformation and failure of the submarine slope proceed in four stages: accumulation of gas pressure, elastic compression of the soil, upheaval failure of the slope, and stabilization of the slope deformation. Although there is a deviation between the calculated and experimental values of the critical gas pressure, the expression reflects the true level of the critical pressure to some extent. These results provide a reference for further understanding the deformation and failure mechanism of submarine clayey slopes induced by gas hydrate dissociation, and for developing stability analysis theories and evaluation methods.
2022-12-06 13:22:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20055347681045532, "perplexity": 3555.2077661311637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711108.34/warc/CC-MAIN-20221206124909-20221206154909-00108.warc.gz"}
https://ds4humans.com/30_questions/20_passive_prediction_questions.html
# Passive-Prediction Questions

When most people hear the term "machine learning," what they think of is the ability of computers to answer Passive-Prediction Questions: "which patients are likely to experience complications from surgery if we don't do anything?", "which people applying for life insurance are healthy enough that we should issue them a policy?", or "which job applicants would make good employees (and thus, which job applicants should we interview)?" And indeed, the ability of data scientists to answer Passive-Prediction Questions is one of our most useful skills. However, answering this type of question is also one of the easiest ways to get in trouble as a data scientist. Why? Just as you can always calculate a summary statistic or get a result from an unsupervised machine learning model when trying to answer an Exploratory Question, you can also always get predicted values from a statistical model. But with Passive-Prediction Questions—unlike with Exploratory Questions—you can't fully check the validity of your answer with the data you currently have. That's because, by definition, the reason you are trying to answer a Passive-Prediction Question is that you want to predict something that you don't currently know!

## Flavors of Passive-Prediction Questions

There are two flavors of Passive-Prediction Questions:

• predicting something that has yet to occur ("which patients going in for surgery are likely, in the future, to experience complications?"), and
• predicting something that could occur but actually won't ("if a radiologist had looked at this mammogram, would they conclude the patient had cancer?").

The first category of passive prediction—predicting something that has yet to occur—is the most intuitive, and is the type of passive prediction that accords best with the normal meaning of the term "predict." But the second flavor of passive prediction—in which we try to predict what someone would do—is also very important, as it underlies efforts at automation. Spam detection, image classification, autocomplete, and self-driving cars are all examples of situations where we train a model by showing it examples of how a person would do something, so the model can predict what a person would do when faced with new data and emulate that behavior itself.

And just as there are two flavors of passive prediction, so too are there two corresponding use cases for answering Passive-Prediction Questions:

• identifying individual entities for follow-up, and
• automating data classification to make hard-to-work-with data (images, medical scans, text) simpler.

## Differentiating Between Exploratory and Passive-Prediction Questions

If you have felt a little confused about the distinction between Exploratory and Passive-Prediction Questions previously, there's a good chance you find yourself struggling with that issue here, and for understandable reasons. In many cases, one can easily imagine how the same analysis might constitute an answer to either an Exploratory or a Passive-Prediction Question.
For example, predicting which patients are likely to experience complications from surgery using a logistic regression could constitute the answer to a Passive-Prediction Question, but it could also answer Exploratory Questions like "what hospitals have the highest surgery complication rates?" or "what types of surgeries have the highest complication rates?" The confusion lies in the fact that the distinction between these types of questions isn't related to the statistical machinery you might use to answer the question, but rather to what we are trying to accomplish, and thus to how we might evaluate the success of a given statistical or machine learning model.

With Passive-Prediction Questions, our interest is in the values that get spit out of a model for each entity in the data. When answering a Passive-Prediction Question, the only thing we care about is the quality of those predictions, and so we evaluate the success of a model that aims to answer a Passive-Prediction Question by the quality of those predictions (using metrics like AIC, AUC, R-Squared, Accuracy, Precision, Recall, etc.). Thus, when using a logistic regression to answer a Passive-Prediction Question, we don't actually care about what factors are being used to make our predictions, just that they improve the predictions. Our interest is only in the quality of our predicted values, and a good model is one that explains a substantial portion of the variation in our outcome.

With Exploratory Questions, our interest is in improving our understanding of the problem space, not in making precise predictions for each entity in our data. Thus, in the example of a logistic regression, our interest is in the factors on the "right-hand side" of our logistic regression and how they help us understand what shapes outcomes, not the exact accuracy of our predictions. A good model, in other words, doesn't actually have to explain a large share of variation at the level of individual entities, but it does have to help us understand our problem space. For example, a model that looked at the relationship between individuals' salaries and their age, education, and where they live might tell us a lot about the importance of a college degree to earnings (which we could see from the large and statistically significant coefficient on having a college degree), even if it only explains a small amount of overall variation in salaries (e.g., the R-Squared might only be 0.2).

This distinction also has important implications when working with more opaque supervised machine learning techniques, like deep learning, random forests, or SVMs. These techniques are often referred to as "black boxes" because exactly how different input factors relate to the predictions the model makes is impossible to understand (in other words, it's like the input data is going into a dark box we can't see into, and then predictions are magically popping out the other side). These models can be very useful for answering Passive-Prediction Questions, as they can accommodate very unusual, non-linear relationships between input factors and predicted values, but because these relationships are opaque to us, the data scientists, they don't really help us understand the problem space.
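A small hedged sketch may make the contrast concrete. The synthetic data and feature names below are invented for illustration; the point is that the same fitted logistic regression is read in two different ways, through its predictions for one type of question and through its coefficients for the other.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
age = rng.normal(45, 12, n)
college = rng.integers(0, 2, n)
# Outcome depends on both features plus noise the model can never explain.
latent = 0.03 * (age - 45) + 0.9 * college - 0.5 + rng.normal(0, 2, n)
outcome = (latent > 0).astype(int)

X = np.column_stack([age, college])
model = LogisticRegression().fit(X, outcome)

# Passive-prediction reading: only the quality of the predicted values matters.
print("AUC:", roc_auc_score(outcome, model.predict_proba(X)[:, 1]))
# Exploratory reading: the fitted coefficients, not the fit quality, are the point.
print("coefficients (age, college):", model.coef_[0])
```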
## When Are Our Predictions Valid?

Because passive prediction is fundamentally about making predictions about things that are not yet seen, making predictions is one of the more precarious things a data scientist can do. But that doesn't mean that we are helpless when it comes to determining how confident we should be in our predictions, and when and where we think our predictions will be reliable. In particular, as data scientists, we have a great many tools for evaluating how well our model fits the data we already have (a concept known as internal validity), and ways of thinking critically about the contexts in which using a given model to make predictions is appropriate (a concept known as external validity).

### Internal Validity

Of all the places where data science is fragmented, none is more evident than in how data scientists evaluate how effectively a model represents the data.

The first data science perspective on evaluating the internal validity of a model comes from the field of statistics. Statisticians have approached evaluating model fit with, unsurprisingly, methods based on the idea of random sampling and the properties of statistical distributions. They make assumptions about the distributions underlying the data and use those to derive theoretically motivated metrics. That's the origin of statistics like the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), as well as the emphasis on the validity of the standard errors assigned to factors on the right-hand side of the regression.

When computer scientists were first developing their own machine learning techniques... I'm editorializing a little here, but I think it's safe to say that initially they either didn't know a lot about these metrics, or they thought that they could do a better job inventing their own. So they developed the "split-train-test" approach to model evaluation: they split their data into two parts, train their model on part of the data, then test how well the model is able to predict the (known) outcomes in the test dataset.

Of course, over time these two fields have largely converged in adopting one another's methods, and some—like cross-validation—live comfortably in the middle. But if you're ever wondering why, when you get to a machine learning class, it seems like everything you learned in stats has been abandoned (or end up in a stats class and have the opposite experience), it's largely an artifact of the parallel development of methods of model evaluation in computer science and statistics departments.
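The two traditions are easy to see side by side in code. This sketch (synthetic data, scikit-learn) runs the computer-science-style split-train-test evaluation and then k-fold cross-validation on the same model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2_000) > 0).astype(int)

# Split-train-test: fit on one part of the data, score on the held-out part.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
print("held-out accuracy:", LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te))

# Cross-validation: every observation takes a turn in the test set.
print("5-fold accuracies:", cross_val_score(LogisticRegression(), X, y, cv=5))
```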
### External Validity

Where internal validity is a measure of how well a model captures the meaningful variation in the data we already have, external validity is a measure of how well we think our model is likely to perform when faced with new data. The external validity of a model, it is important to emphasize, is specific to the context in which the model is being used. A model will generally have very high external validity when used to answer Passive-Prediction Questions in a setting that is very similar to the setting from which the training data was collected, but low external validity when applied in a very different setting. There are a range of factors that can determine external validity, such as whether a model is being used to answer Passive-Prediction Questions about:

• the same population from which the training data was drawn. The patterns in data from one country will often differ from patterns in data from another country, for example.
• the same time period from which the training data was drawn. Consumer behavior may vary across seasons, and many patterns in data change over longer timespans.
• the same parameter ranges as those in the training data. Statistical and machine learning models are designed to fit the data they can see as well as possible. Nearly all models will still generate predictions when handed inputs outside the range of the data used to fit them, but because they were never trained on data of that type, their guesses are unlikely to be particularly meaningful.

To illustrate, consider the two models in the figure below (source): one a linear fit, and one a higher-order polynomial. Both model the data similarly in the range for which data is available but make very different predictions at values of x below 0 or above 2.
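The figure itself is not reproduced here, but the effect is easy to recreate. In this sketch (data and degree choices are mine), both fits agree inside the training range and typically disagree sharply outside it:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 2, 30)                       # observations only cover [0, 2]
y = 1.0 + 0.5 * x + rng.normal(0, 0.1, 30)

linear = np.polynomial.Polynomial.fit(x, y, deg=1)
wiggly = np.polynomial.Polynomial.fit(x, y, deg=7)

for x_new in [1.0, -1.0, 3.0]:                  # inside vs. outside that range
    print(x_new, float(linear(x_new)), float(wiggly(x_new)))
```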
2023-01-28 20:48:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4499865174293518, "perplexity": 656.7685041034405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499654.54/warc/CC-MAIN-20230128184907-20230128214907-00182.warc.gz"}
http://cs.stackexchange.com/questions/7729/how-can-it-be-detected-that-a-number-generator-is-not-really-random
# How can it be detected that a number generator is not really random?

I heard that random number generation in computers isn't really random, but that there is no efficient algorithm to detect it. How can it be detected at all?

This post may help you. – Anton Jan 3 '13 at 13:16

At the risk of sounding pedantic, it's not really possible to say with certainty that a given source isn't random if all you do is examine its outputs. You can flip a fair coin $10^{100}$ times in a row and get heads every time, and your chance of getting tails on the $10^{100} + 1$st toss is still 50%. By examining the source, we can usually identify non-random things (e.g., pseudorandom number generators... we could predict the sequence from the seed and the algorithm). Many apparent sources of randomness may just not be understood well enough to predict reliably. This is philosophical, though. – Patrick87 Jan 3 '13 at 18:10

@Patrick87 If by "certainty" you mean mathematical certainty, that's true. There are statistical tests, however, that can give you arbitrary significance (provided the data are "good"). – Raphael Jan 3 '14 at 6:25

Computers Being Really Random: True randomness is impossible for Turing machines in a theoretical sense, and most computers can't generate truly random output. Therefore, some modern computers include hardware that allows the computer to access an outside source which will hopefully include some randomness. One example of how this can be accomplished is to track small fluctuations in temperature inside the computer. Randomness can be obtained from an outside source as well. But from the tone of your post I don't think that outside sources of randomness are what you're interested in.

Seeds: Without an outside addition, everything a computer does is deterministic. This leads to a big issue: if you call a random number generation program, it will give you the same result every time if you give it the same input. Clearly, we need a program that outputs a random number to change its behavior each time it's run (otherwise we'll keep getting the same "random" number, which is not particularly helpful). One idea is to give the program some input which changes each time the program is run, so that a different number will be output. We call this input a "seed." The random number generator takes in a seed, performs some operations, and gives us a random number.

The current system time is a classic example of a seed. This gives a long string with high entropy, and if the time is kept track of in a sufficiently granular fashion (i.e., if your system clock uses hours, then "time" is a pretty poor seed), you're unlikely to feed the pseudorandom number generator the same number twice.

Algorithms that are Random Enough: Now we have an algorithm that at least has some way to be different each time it's run. We give it a seed, and while the algorithm gives the same number when prompted with the same seed, we want the numbers it generates to be random otherwise. This acts like the above: you take in some input, and it produces some output (hopefully different enough from the input to be "random"). Now let's say you came up with your own algorithm to do this, and you claim that the numbers you come up with are pretty close to random when you give it a bunch of different seeds. How would we test how good it is?

So we want an algorithm that takes in a seed, does some operations, and produces a random number.
At the simplest, the algorithm could just output the seed: it's not giving us the same number each time, and random seeds give us random outputs. But clearly that's not what we want. On the other hand, an algorithm can be fairly complicated, like many actual pseudorandom generators. How can we tell which algorithms give us "random" numbers from our not-necessarily-random seeds? If we can't get it exactly, how can we tell which are best?

It's hard to say which tests are ideal, but it's easy to come up with some minimum requirements these algorithms should meet before we say they give us "random" numbers. Maybe we want to make sure that your algorithm gives even numbers half the time. Maybe we want to make sure that if I ask for a random number between $1$ and $n$, all numbers in that range will be output for some input to your function. Clearly there are a lot of tests we can run; if your algorithm passes some suite of tests, it's a pseudorandom generator. Which tests to use is a very interesting and well-studied area of computer science.

Random Enough to Fool an Attacker: Now what you MAY be referring to is Cryptographically Secure Pseudorandom Generators. I think the best way to explain this is in the context of the above: here, we're using our randomness for cryptography, so when we're designing tests what we really care about is that someone won't be able to break our security by predicting what random number we picked. I don't know your level of familiarity with cryptography, but imagine we're doing a simple substitution cipher, where each letter is replaced with some other letter. We want to pick these replacements randomly, so they're hard for an attacker to guess. But if he can figure out how my random number generator works, he'll be able to solve the whole cipher! Therefore, cryptographic algorithms require random number generators that are specifically hard to guess.

Specific cryptographic algorithms may require additional tests (like for some sort of nice-enough distribution, as mentioned above). For this reason, CSPRGs are defined in terms of how well other algorithms solve them (which is where we finally come to your question). Specifically, let's say I have a CSPRG which I'll call R. R is a CSPRG if and only if there is NO feasible algorithm that can guess which bit it will output next with probability meaningfully better than chance. This is true even if you know all the previous bits it output! So suppose the first five bits my CSPRG has output are 10100. You don't know the input I used, but you have access to the code I used to write my CSPRG. Then the claim is that it's impossible for you to write a program to decide whether the sequence will continue as 101000 or 101001.

So for reasons of cryptography, sometimes how well a pseudorandom number generator does is defined in terms of how predictable it is to other programs. Note that this still gives much of the intuition of "randomness," as (say) if you know all of the random outputs will be odd, it is neither cryptographically secure nor does it pass a common-sense randomness test.

This is a good (but incomplete) answer overall, but a couple of points are wrong. "True randomness is impossible for computers, as everything they do is deterministic." That's not always true, some processors include a hardware RNG. Computers can also react to external input which may be random. "… for cryptography, so we don't really care how "random" they are in terms of distribution": actually sometimes a uniform distribution is important in crypto, e.g.
the IV for CBC and the k parameter in DSA. – Gilles Jan 3 '13 at 23:36

He wrote "Without an outside addition, everything a computer does is deterministic". The outside addition is a reference to devices such as the RNGs you mention. Without these additions, our computational capabilities are equal to those of a TM, for which true randomness is impossible. – Kent Munthe Caspersen Jan 6 '14 at 13:04

If I recall correctly I added that after Gilles' comment. – SamM Jan 9 '14 at 5:29

Recently I found a nice post about randomness in computation on the MIT CSAIL Theory of Computation Group blog: Can you tell if a bit is random?

The post starts with some ideas extracted from Avi Wigderson's wonderful talk about the power and limitations of randomness in computation, surveying the beautiful area of randomized algorithms and the surprising connection between pseudorandomness and computational intractability. Then it summarizes some recent results on quantum cryptography; in particular, a way to efficiently test whether the output of a certain kind of device is truly random (randomness expansion protocols). For example, see the recent work by Umesh Vazirani and Thomas Vidick, Certifiable Quantum Dice (or, testable exponential randomness expansion).

Abstract: We introduce a protocol through which a pair of quantum mechanical devices may be used to generate n bits of true randomness from a seed of O(log n) uniform bits. The bits generated are certifiably random based only on a simple statistical test that can be performed by the user, and on the assumption that the devices obey the no-signaling principle. No other assumptions are placed on the devices' inner workings.

Assuming you are talking about statistical randomness (cryptography has other needs!), there is a whole slew of goodness-of-fit tests that can detect whether a sequence of numbers fits a given distribution. You can use these to test whether a (pseudo-)random number generator is sound, up to the quality of your test and the chosen significance. Diehard test suites combine different methods.

This is a broad/complex topic in computer science which the other answer by SamM partly addresses. Your specific question seems to be about the following: if computers have what are called PRNGs, i.e. pseudorandom number generators, how can that be detected? The short answer is that nontrivial PRNGs are built so that their algorithms cannot be detected (derived). In general, if the PRNG is what is called "secure", even if an attacker knows the algorithm used to generate the pseudorandom sequence, they cannot guess the particular parameters used to generate the sequence. In this way pseudorandomness has many deep ties to cryptography, and one can talk about "breaking" a PRNG in much the same way that a cryptographic algorithm can be "broken". There are many research papers in this area; it is an active area at the forefront of cryptography.

For "trivial" PRNGs, e.g. a linear congruential generator, if the attacker knows the algorithm used to generate it and it is not generated with "bignums", the search space is "relatively small" and the attacker could theoretically find the parameters used by the particular PRNG basically by brute force, trying all combinations.

PRNGs can be broken in practice (again depending on their "security") in some cases by running a large suite of statistical randomness tests against them; e.g., this is the rationale of the program "Dieharder" (by Brown). There is also an NIST suite.
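As a toy illustration of what such suites automate (a sketch of mine, far simpler than anything in Dieharder or the NIST tests), here is a monobit frequency test applied to Python's built-in generator:

```python
import random

n = 100_000
ones = sum(random.getrandbits(1) for _ in range(n))

# For a fair bit source, ones ~ Normal(n/2, n/4); a large |z| is suspicious.
z = (ones - n / 2) / (n / 4) ** 0.5
print(f"fraction of ones = {ones / n:.4f}, z-score = {z:+.2f}")
```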
The intrinsic difficulty/hardness of breaking PRNGs is not yet strictly theoretically proven, but is basically associated with what are called "trapdoor" or "one-way" functions, which can be computed efficiently in one direction but are "hard" to invert (reverse). There are some open problems in cryptography about randomness hardness. These questions relate closely to complexity class separations, e.g. the famous P=?NP question.

Questions about breaking PRNGs also relate to Kolmogorov complexity, a field which studies the smallest Turing machines that can generate sequences. Breaking a PRNG also closely relates to finding the "shortest" program to compute a pseudorandom sequence. And Kolmogorov complexity is undecidable to compute in general.

As Gilles points out in a comment, there do exist hardware-based RNGs built out of physical electronic processes, such as those related to quantum noise. These, if engineered correctly, are unbreakable.

"nontrivial PRNGs are built so that their algorithms cannot be detected (derived)" - I don't think that's right. In fact, your very next sentence contradicts it. Would you like to edit your answer to fix this? – D.W. Jan 3 '14 at 7:10

it could be fleshed out more precisely but not following, what is your specific objection? the point is that the algorithm that is generating the sequence cannot be determined from only the sequence of data alone, except by brute force, if the algorithm is secure, and brute force is unlikely to succeed in that case. – vzn Jan 3 '14 at 7:12

My specific objection is that the sentence sounds wrong to me: it sounds like you are saying that PRNGs are designed so that someone observing their output cannot infer what the algorithm was, but that is not how things work in real life. Most PRNGs are not built to prevent someone from learning the algorithm; typically, the algorithm is public. Perhaps you mean that PRNGs are built so that their output cannot be distinguished from true-random bits? – D.W. Jan 3 '14 at 7:58

"the algorithm that is generating the sequence cannot be determined from only the sequence of data alone, except by brute force, if the algorithm is secure" - This is not correct, either. The algorithm is typically public. It is only the seed that is non-public, and it is only the seed that is supposed to be hard to derive from the outputs. – D.W. Jan 3 '14 at 8:00

In fact everything a classical computer does is deterministic, in the sense that when you give it some tasks it follows them in a deterministic way. Therefore, if you want one random number you can compute it according to the time (based on the user's input time), but if you want a set of random numbers you cannot use the time for the next numbers, because the numbers would no longer be independent.

What people do is to use pseudo-random generators which have a seed, i.e. a number that is used to compute all the numbers of the pseudo-random number generator (in some more sophisticated cases of simulation or other tasks, more seeds may be needed, if more than one set of independently random numbers is needed). The seed is usually 0 or a specific number if you want reproducible results, or the time if you want different, unreproducible results.
The fact that pseudo-random number generators are good enough lies in the fact that they follow the basic characteristics of pseudo-random number generation, in order to be computed efficiently and to behave like real random numbers:

• the produced numbers must follow a uniform distribution (from this distribution you can derive any other distribution);
• the produced numbers must be statistically independent;
• the sequence is reproducible (this point is imposed by the deterministic nature of classical hardware, and it is why they are called "pseudo-random numbers");
• the period of the sequence must be large enough;
• the number generation must be fast.

From each number of the sequence of pseudo-random numbers a new number is computed (usually we work with integers). However, any generator that works in a specific base with a finite number of available bits to express the numbers (e.g. binary) has a period, n. If this n were not big enough there would be serious problems, but don't worry: computer scientists choose the seeds and other parameters of pseudo-random generators well, in order to have a good n.

For instance, a possible pseudo-random number generator uses the linear congruential method, one of the oldest and best-known pseudo-random number generation algorithms. It has four values: the initial value $x_0 \ge 0$ and the constants $a \ge 0$, $c \ge 0$ and $m > x_0$, with $m > a$ and $m > c$. It produces the sequence given by the formula $$x_{i+1} = (a x_i + c) \bmod m.$$ The values for these constants must be carefully chosen. One possibility is $$x_{i+1} = (1664525\, x_i + 1013904223) \bmod 2^{32},$$ refs. [1–2]; a small implementation sketch appears after this answer.

There are other, more sophisticated algorithms to generate random numbers, which avoid some of the problems of earlier algorithms, including: [3]

• shorter than expected periods for some seed states (such seed states may be called "weak" in this context);
• lack of uniformity of distribution for large amounts of generated numbers;
• correlation of successive values;
• poor dimensional distribution of the output sequence;
• the distances between where certain values occur being distributed differently from those in a random sequence.

In the future, classical computers may be united with quantum systems which can provide really random numbers, and deliver them. [4]

references:
[1] http://en.wikipedia.org/wiki/linear_congruential_generator
[2] Press, William H., et al. (1992). "Numerical Recipes in Fortran 77: The Art of Scientific Computing" (2nd ed.). ISBN 0-521-43064-X.
[3] http://en.wikipedia.org/wiki/pseudorandom_number_generator
[4] http://www.technologyreview.com/view/418445/first-evidence-that-quantum-processes-generate-truly-random-numbers/

This doesn't really answer the question. You explain how to generate random numbers, not how to detect whether a given RNG is random. Even then your explanations are somewhat lacking; linear congruences are hardly "one of the best". Hardware RNGs do exist now, there's no need for quantum computing; there's a good chance that you have one in your PC, one in your phone and even one in your credit card. – Gilles Jan 7 '14 at 20:52
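As referenced in the answer above, a minimal Python sketch of the linear congruential generator with the quoted Numerical Recipes constants (the sketch and the seed value are mine):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{i+1} = (a*x_i + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m                     # scaled to [0, 1)

gen = lcg(seed=2024)
print([round(next(gen), 4) for _ in range(5)])

# Re-seeding with the same value reproduces the sequence exactly, which is
# the "reproducible" bullet above and the reason these are only pseudo-random.
gen2 = lcg(seed=2024)
print([round(next(gen2), 4) for _ in range(5)])
```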
2015-08-02 00:14:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6374146938323975, "perplexity": 664.1782385786956}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988924.75/warc/CC-MAIN-20150728002308-00271-ip-10-236-191-2.ec2.internal.warc.gz"}
https://pypi.org/project/libfilecreator/1.0/
## Project description

Creates a file populated with data based on the desired file type and passed-in file content values. By default, the resulting file is saved as $HOME/$FILE.$EXT (in the user's $HOME directory). If the filename value that you pass in contains "/", then the library assumes that you want to write elsewhere; just make sure you have write access to the desired location!
2023-02-02 09:33:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3106158971786499, "perplexity": 2967.818599892061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499967.46/warc/CC-MAIN-20230202070522-20230202100522-00691.warc.gz"}
http://eprint.iacr.org/2005/436/20051130:134219
## Cryptology ePrint Archive: Report 2005/436

A Note on the Kasami Power Function

Doreen Hertel

Abstract: This work is motivated by the observation that the function from $\mathbb{F}_{2^m}$ to $\mathbb{F}_{2^m}$ defined by $x^d+(x+1)^d+a$ for some $a\in \mathbb{F}_{2^m}$ can be used to construct difference sets. A desired condition is that the function $\varphi _d(x):=x^d+(x+1)^d$ is a $2^s$-to-1 mapping. If $s=1$, then the function $x^d$ has to be APN. If $s>1$, then there is up to equivalence only one function known: the function $\varphi _d$ is a $2^s$-to-1 mapping if $d$ is the Gold parameter $d=2^k+1$ with $\gcd (k,m)=s$. We show in this paper that $\varphi _d$ is also a $2^s$-to-1 mapping if $d$ is the Kasami parameter $d=2^{2k}-2^k+1$ with $\gcd (k,m)=s$ and $m/s$ odd. We hope that this observation can be used to construct more difference sets.

Category / Keywords: foundations / number theory, finite field

Publication Info: submitted to IEEE Transactions on Information Theory
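For illustration only (this sketch is mine, not part of the ePrint record): the $2^s$-to-1 behaviour can be checked by brute force over a small field. Over $GF(2^6)$ with $s=2$, both the Gold exponent $d=2^2+1=5$ and the Kasami exponent $d=2^4-2^2+1=13$ should give fibers of size exactly 4; the choice of irreducible polynomial is mine.

```python
from collections import Counter

M, POLY = 6, 0b1000011                 # GF(2^6), irreducible x^6 + x + 1

def gf_mul(a, b):
    """Carry-less multiply, then reduce modulo the field polynomial."""
    r = 0
    for i in range(M):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * M - 2, M - 1, -1):
        if (r >> i) & 1:
            r ^= POLY << (i - M)
    return r

def gf_pow(a, e):
    """Square-and-multiply exponentiation in GF(2^M)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def fiber_sizes(d):
    """Preimage sizes of phi_d(x) = x^d + (x+1)^d (field addition is XOR)."""
    img = Counter(gf_pow(x, d) ^ gf_pow(x ^ 1, d) for x in range(2 ** M))
    return sorted(set(img.values()))

print(fiber_sizes(5))    # Gold d = 2^k+1, k=2, gcd(2,6)=2        -> expect [4]
print(fiber_sizes(13))   # Kasami d = 2^{2k}-2^k+1, k=2, 6/2 odd  -> expect [4]
```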
2014-10-31 08:30:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9206191301345825, "perplexity": 340.6775182915683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637899132.1/warc/CC-MAIN-20141030025819-00051-ip-10-16-133-185.ec2.internal.warc.gz"}
http://www.mkcc.in/taste-of-zqco/fea520-wave-speed-symbol
## wave speed symbol

The symbol for wave speed is v, the symbol for the force of tension is F, the symbol for mass is m and the symbol for the length is L. If v = , then what is m?

A wave is a disturbance that travels through a medium, transporting energy without transporting matter. Waves originate from vibrations, which are oscillating motions over a fixed position. Waves are described in terms of amplitude and wavelength: the wave length is the distance between two successive wave crests (or troughs), and the wave period is the time for two consecutive crests to pass a fixed point. A sine wave, or sinusoid, is a mathematical curve that describes a smooth periodic oscillation; it is a continuous wave, named after the sine function, of which it is the graph.

Wavelength and wave speed are related: in one period the wave moves one wavelength, so the wave speed is v = λ/T. Since f = 1/T, this can be written as v = fλ, which is the periodic wave relation. Here v is the wave speed in metres per second (m/s), λ (lambda) is the wavelength in metres (m), and f is the frequency. All waves, including sound waves and electromagnetic waves, follow this equation. Common symbols for frequency are f and the Greek letters nu (ν) and omega (ω), the latter denoting the angular frequency. For ocean waves, the wave speed C can be calculated by dividing the wavelength by the wave period (C = L/T), since a wave travels one wave length each wave period; the speed of such a wave also depends on how close the wave is to land.

On a stretched string, the wave speed is directly proportional to the square root of the tension and inversely proportional to the square root of the linear density. The quantity Aω is the maximum transverse speed of the particles, so it has units of m·s⁻¹.

The speed that sound waves travel depends on the medium's density and temperature. The speed of sound in mathematical notation is conventionally represented by c, from the Latin celeritas, meaning "velocity". For fluids in general, the speed of sound c is given by the Newton–Laplace equation: $$c = \sqrt{\frac{K_s}{\rho}},$$ where $K_s$ is a coefficient of stiffness (the isentropic bulk modulus) and $\rho$ is the density. The impedance of the medium (called the Specific Acoustic Impedance in acoustics) is defined by the product of density and wave speed; in symbols, z = ρc, with a unit of Pa·s·m⁻¹.

Radio waves are just another form of "light", i.e. part of the electromagnetic spectrum, and so travel at the speed of light. Microwaves have short wavelengths that range from 10 millimeters to 1 millimeter and travel by line of sight. Diffraction is the spreading out of waves when they pass through a gap.

Some example problems: a light wave travels with a wavelength of 600 nm; determine its frequency. A radio wave has a frequency of 3,000,000 Hz and a wavelength of 100 m; what is its speed? A sound wave has a frequency of 10,000 Hz and a wavelength of 0.034 m; what is its speed? Paul plays a note of wavelength 25 cm on his synthesiser, and he knows the speed of sound is 340 m/s in air. Bill counts 5 waves on a pond in 10 s, and the distance between them is 80 cm. A microwave oven's label says frequency = 2,450 MHz; this is a measurement of frequency, and 1 MHz is the same as 1 million hertz, so if a radio dial is only marked in MHz, the same relation converts each frequency to its wavelength.
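Working the example problems above with v = fλ (the arithmetic sketch is mine; the speed-of-light value is the standard 3 × 10⁸ m/s):

```python
c = 3.0e8                       # speed of light in m/s

print(c / 600e-9)               # light, lambda = 600 nm    -> f = 5e14 Hz
print(3_000_000 * 100)          # radio, 3 MHz * 100 m      -> v = 3e8 m/s (light speed)
print(10_000 * 0.034)           # sound, 10 kHz * 0.034 m   -> v = 340 m/s
print(340 / 0.25)               # Paul's note: f = v/lambda -> 1360 Hz
print(0.80 * (5 / 10))          # Bill's pond: 0.8 m * 0.5 Hz -> 0.4 m/s
```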
2021-04-14 16:56:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6376181840896606, "perplexity": 1317.3643472086244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077843.17/warc/CC-MAIN-20210414155517-20210414185517-00292.warc.gz"}
https://encyclopediaofmath.org/wiki/Valued_function_field
# Valued function field An (algebraic) function field $F \mid K$( that is, a finitely generated field extension of transcendence degree at least one; cf. also Extension of a field) together with a valuation $v$, or place $P$, on $F$. The collection of all places $P$ on $F$ which are the identity on $K$ is called the Riemann space or Zariski–Riemann manifold of $F \mid K$. Such a $P$ is called a place of the function field $F \mid K$ and the transcendence degree of its residue field $FP$ over $K$ is called the dimension of $P$. If $FP = K$, then $P$ is called a rational place of $F \mid K$; this is an analogue of the notion of a $K$- rational point of an algebraic variety defined over $K$. Let $v$ be an arbitrary valuation on $F$. Then its restriction to $K$ is a valuation on $K$; the respective value groups are denoted by $vF$ and $vK$ and the respective residue fields are denoted by $Fv$ and $Kv$. The transcendence degree of $F \mid K$ is greater than or equal to the sum of the transcendence degree of the residue field extension $Fv \mid Kv$ and the $\mathbf Q$- dimension of $( vF/vK ) \otimes \mathbf Q$( which is equal to the maximal number of elements in $vF$ that are rationally independent over $vK$; it may be viewed as the "transcendence degree" of the group extension $vF \mid vK$). If equality holds, one says that $( F \mid K,v )$ is without transcendence defect; in this case, the extensions $Fv \mid Kv$ and $vF \mid vK$ are finitely generated. An important special case is when $v$ is a constant reduction of $F \mid K$, that is, the transcendence degree of $F \mid K$ is equal to that of $Fv \mid Kv$( which is then again a function field). ## Stability theorem. The stability theorem gives criteria for a valued function field to be a defectless field (cf. Defect); a defectless field is also called a stable field. It was first proved by H. Grauert and R. Remmert (1966) for a special case; their proof was later generalized by several authors to cover the case of constant reduction in general (cf. [a1]). A further generalization (with an alternative proof) was given in [a7]: If $( F \mid K,v )$ is a valued function field without transcendence defect and if $( K,v )$ is a defectless field, then so is $( F,v )$. This theorem has applications in the model theory of valued fields via the structure theory of Henselizations of valued function fields, sketched below. As an application to rigid analytic spaces (cf. Rigid analytic space), the stability theorem is used to prove that the quotient field of the free Tate algebra $T _ {n} ( K )$ is a defectless field, provided that $K$ is. This, in turn, is used to deduce the Grauert–Remmert finiteness theorem (cf. Finiteness theorems), in a generalized version due to L. Gruson (1968; see [a1]). ## Independence theorem. If $F$ contains a set ${\mathcal T} = \{ x _ {1} \dots x _ {r} ,y _ {1} \dots y _ {s} \}$ such that the values of $x _ {1} \dots x _ {r}$ form a maximal set of elements in $vF$ rationally independent over $vK$, and the residues of $y _ {1} \dots y _ {s}$ form a transcendence basis of $Fv \mid Kv$, then the elements of ${\mathcal T}$ are algebraically independent. Hence, by the initial remarks, ${\mathcal T}$ is a transcendence basis of $F \mid K$ and $( F \mid K,v )$ is without transcendence defect. In this case, the stability theorem can be used to prove the independence theorem, which states that the Henselian defect of the finite extension $F \mid K ( {\mathcal T} )$ is independent of the choice of such a set ${\mathcal T}$. 
This makes it possible to define a Henselian defect for all valued function fields without transcendence defect; in particular, in the constant reduction case. A different notion of defect, the vector space defect, was considered in [a4]. ## Constant reduction of function fields of transcendence degree one. This was introduced by M. Deuring in [a2] and studied by many authors; for a survey, see [a3]. The main object of investigation is the relation between the function fields $F \mid K$ and $Fv \mid Kv$. Answering a question of M. Nagata, J. Ohm [a8] gave an elementary proof for the ruled residue theorem: If $v$ is a valuation on $K ( x )$ such that the residue field $K ( x ) v$ is of transcendence degree one over $Kv$, then $K ( x ) v$ is a rational function field over a finite extension of $Kv$. More generally, one seeks to relate the genus (cf. Algebraic function) of $F \mid K$ to that of $Fv \mid Kv$. Several authors proved genus inequalities; one such inequality, proved by B. Green, M. Matignon and F. Pop in [a4], is given below. Let $F \mid K$ be a function field of transcendence degree one and assume that $K$ coincides with the constant field of $F \mid K$( the relative algebraic closure of $K$ in $F$). Let $v _ {1} \dots v _ {s}$ be distinct constant reductions of $F \mid K$ having a common restriction to $K$. Then $$1 - g _ {F} \leq 1 - s + \sum _ {i = 1 } ^ { s } \delta _ {i} e _ {i} r _ {i} ( 1 - g _ {i} ) ,$$ where $g _ {F}$ is the genus of $F \mid K$ and $g _ {i}$ is the genus of $Fv _ {i} \mid Kv _ {i}$, $r _ {i}$ is the degree of the constant field of $Fv _ {i} \mid Kv _ {i}$ over $Kv _ {i}$, $\delta _ {i}$ is the Henselian defect of $( F \mid K,v _ {i} )$, and $e _ {i}$ is the ramification index $( v _ {i} F:v _ {i} K )$( which is always finite in the constant reduction case). It follows that constant reductions $v _ {1} ,v _ {2}$ with common restriction to $K$ and $g _ {1} = g _ {2} = g _ {F} \geq 1$ must be equal. In other words, for a fixed valuation on $K$ there is at most one extension $v$ to $F$ which is a good reduction, that is, i) $g _ {F} = g _ {Fv }$; ii) there exists an element $f \in F$ such that $v ( f ) = 0$ and $[ F:K ( f ) ] = [ Fv:Kv ( fv ) ]$, where $fv$ denotes the residue of $f$; iii) $Kv$ is the constant field of $Fv \mid Kv$. An element $f$ as in ii) is called a regular function. More generally, $f$ is said to have the uniqueness property if $fv$ is transcendental over $Kv$ and the restriction of $v$ to $K ( f )$ has a unique extension to $F$. In this case, $[ F:K ( f ) ] = \delta e [ Fv:Kv ( fv ) ]$, where $\delta$ is the Henselian defect of $( F \mid K,v )$ and $e = ( vF:vK ( f ) ) = ( vF:vK )$. If $K$ is algebraically closed, then $e = 1$, and it follows from the stability theorem that $\delta = 1$; hence in this case, every element with the uniqueness property is regular. It was proved in [a5] that $F$ has an element with the uniqueness property already if the restriction of $v$ to $K$ is Henselian. The proof uses the model completeness of the elementary theory of algebraically closed valued fields (see Model theory of valued fields), and ultraproducts (cf. Ultrafilter) of function fields. Elements with the uniqueness property also exist if $vF$ is a subgroup of $\mathbf Q$ and $Kv$ is algebraic over a finite field. This follows from work in [a6], where the uniqueness property is related to the local Skolem property, which gives a criterion for the existence of algebraic $v$- adic integral solutions on geometrically integral varieties. 
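For orientation, a standard example may be added here (it is not part of the original article): let $F = K ( x )$ be a rational function field and let $v$ extend a given valuation of $K$ by the Gauss valuation, $$v \left ( \sum _ {i} a _ {i} x ^ {i} \right ) = \min _ {i} v ( a _ {i} ) .$$ Then $vF = vK$ and $Fv = Kv ( xv )$ with the residue $xv$ transcendental over $Kv$, so $( F \mid K,v )$ is a constant reduction without transcendence defect; taking $f = x$ in condition ii) above, one checks that $v$ is even a good reduction of the genus-$0$ function field $K ( x ) \mid K$.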
## Divisor reduction mappings.

A further way to compare $F \mid K$ with $Fv \mid Kv$ is to construct a relation between their Riemann spaces by divisor reduction mappings. Such morphisms, which preserve arithmetical properties, were introduced by M. Deuring in [a2] for the case of good reduction when the valuations are discrete. This was generalized to non-discrete valuations by P. Roquette in [a9]. A partial reduction mapping not needing the assumption of good reduction was used in [a5] for the construction of elements with the uniqueness property.

## Structure of Henselizations of valued function fields.

Valued function fields play a role also in the model theory of valued fields. The question whether an elementary theory is model complete or complete can be reduced to the existence of embeddings of finitely generated extensions of structures (cf. Existentially closed; Robinson test; Prime model). In the case of valued fields, these are just the valued function fields (or the finite extensions, but a field is never existentially closed in a non-trivial finite extension). Since there is no hope for a general classification of valued function fields up to isomorphism, it makes sense to pass to their Henselizations and use the universal property of Henselizations (see Henselization of a valued field). The main results are as follows (cf. [a7]).

1) In the case of valued function fields without transcendence defect, natural criteria can be given for the isomorphism class of their Henselizations to be determined by the isomorphism classes of the value group and the residue field. This makes essential use of the stability theorem.

2) If $( F \mid K,v )$ is a valued function field of transcendence degree one which is an immediate extension, and if $( K,v )$ is a tame field (see Ramification theory of valued fields), then the Henselization of $( F,v )$ is equal to the Henselization of a suitably chosen rational function field contained in this Henselization. This reduces the classification problem to the rational function field, where in turn it can be solved using methods developed by I. Kaplansky (1942; see Kaplansky field). If the residue field of $K$ has characteristic zero, the above result is a direct consequence of the fact that in this case $( K ( x ) ,v )$ is a defectless field, for every $x \in F$.

This structure theory, together with the stability theorem, can be used to show the following. Let $P$ be a place of the algebraic function field $F\mid K$. Then there is a finite extension ${\mathcal F}\mid F$ and an extension of $P$ to ${\mathcal F}$ which admits local uniformization. This result also follows from work of A.J. de Jong (1995). But, in addition, a valuation-theoretical description of the extension ${\mathcal F}\mid F$ can be given. In particular, if $v$ is the valuation induced by $P$ and if $( F\mid K,v)$ is without transcendence defect, then $P$ admits local uniformization without extending the function field. See [a10].

#### References

[a1] S. Bosch, U. Güntzer, R. Remmert, "Non-Archimedean analysis", Springer (1984)
[a2] M. Deuring, "Reduktion algebraischer Funktionenkörper nach Primdivisoren des Konstantenkörpers", Math. Z., 47 (1942) pp. 643–654
[a3] B. Green, "Recent results in the theory of constant reductions", Sém. de Théorie des Nombres, Bordeaux, 3 (1991) pp. 275–310
[a4] B. Green, M. Matignon, F. Pop, "On valued function fields I", Manuscr. Math., 65 (1989) pp. 357–376
[a5] B. Green, M. Matignon, F. Pop, "On valued function fields II", J. Reine Angew. Math., 412 (1990) pp. 128–149
[a6] B. Green, M. Matignon, F. Pop, "On the local Skolem property", J. Reine Angew. Math., 458 (1995) pp. 183–199
[a7] F.-V. Kuhlmann, "Valuation theory of fields, abelian groups and modules", Algebra, Logic and Applications, Gordon & Breach (to appear)
[a8] J. Ohm, "The ruled residue theorem for simple transcendental extensions of valued fields", Proc. Amer. Math. Soc., 89 (1983) pp. 16–18
[a9] P. Roquette, "Zur Theorie der Konstantenreduktion algebraischer Mannigfaltigkeiten", J. Reine Angew. Math., 200 (1958) pp. 1–44
[a10] F.-V. Kuhlmann, "On local uniformization in arbitrary characteristic", The Fields Institute Preprint Series (1997)

How to Cite This Entry:
Valued function field. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Valued_function_field&oldid=49105

This article was adapted from an original article by F.-V. Kuhlmann (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
2022-05-20 18:18:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9439651370048523, "perplexity": 251.86415413782575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00010.warc.gz"}
https://kluedo.ub.uni-kl.de/frontdoor/index/index/year/2004/docId/1510
## On the Pricing of Forward Starting Options under Stochastic Volatility

• We consider the problem of pricing European forward starting options in the presence of stochastic volatility. By performing a change of measure using the asset price at the time of strike determination as a numeraire, we derive a closed-form solution based on Heston's model of stochastic volatility.
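For context, the standard payoff convention (my addition, not quoted from the abstract): a European forward starting call with strike fraction $k$, strike-determination date $t_0$ and maturity $T > t_0$ pays $$\left ( S_T - k\, S_{t_0} \right ) ^{+} \quad \text{at } T ,$$ so choosing $S_{t_0}$ as numeraire reduces the valuation to that of a vanilla-type claim on the ratio $S_T / S_{t_0}$ over $[ t_0 , T ]$.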
2017-07-28 06:42:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3401426672935486, "perplexity": 1065.7336109244875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549448095.6/warc/CC-MAIN-20170728062501-20170728082501-00153.warc.gz"}
http://glennjlea.ca/latex/6-1-formatting-text/
Formatting text

LaTeX provides commands for formatting text, creating lists and adding images.

## Text highlighting

To make sections of text stand out, use commands such as \textbf, \textit and \emph; to format paragraphs, use commands such as \noindent and \par (see the first example at the end of this post).

## Lists - Ordered and Unordered

When making an unordered list, you use itemize. When making an ordered list, you use enumerate.

### Unordered lists

Use the itemize environment to create an unordered list.

### Ordered lists

Use the enumerate environment to create an ordered list.

### Nested Lists

Nest one list environment inside another to create nested lists (see the second example at the end of this post).

### Resuming an interrupted list

I found it rather annoying that LaTeX didn't have an easy way of dealing with interrupted lists, until I learned this tip on StackOverflow. Use the package enumitem and the [resume] option on an enumerate scope to cause the LaTeX typesetter to continue numbering from the previous list. Add \usepackage{enumitem} to the stylesheet to enable resuming enumeration after an intervening text element. The second example at the end of this post shows this.

## Images

The graphicx package provides a key-value interface for optional arguments to the \includegraphics command. Note: Do not use the older graphics package when using graphicx.

First, add \usepackage{graphicx} to the stylesheet. Next, insert the image using \includegraphics (see the third example at the end of this post). You can resize the image or scale the image. The [-1em] is a positioning element, which moves the image 1em to the left of the defined left margin. This is a page design issue. The file extension can be omitted, but I prefer to keep the entire filename, including the extension. You can optionally use the \graphicspath command, but I had no need for it.
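A minimal text-highlighting example (these are standard LaTeX commands; the exact snippets originally shown in this post may have differed):

```latex
\documentclass{article}
\begin{document}
\textbf{bold}, \textit{italic}, \emph{emphasised},
\underline{underlined} and \texttt{monospaced} text.

\noindent This paragraph is not indented.
\end{document}
```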
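A minimal example of unordered, ordered, nested and resumed lists. The [resume] key is a real enumitem option; the list contents are placeholders:

```latex
\documentclass{article}
\usepackage{enumitem} % needed for the [resume] option
\begin{document}
\begin{itemize}
  \item An unordered item
  \item Another item
    \begin{enumerate}
      \item A nested ordered item
    \end{enumerate}
\end{itemize}

\begin{enumerate}
  \item First
  \item Second
\end{enumerate}
Some intervening text.
\begin{enumerate}[resume] % numbering continues at 3
  \item Third
\end{enumerate}
\end{document}
```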
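A minimal graphicx example. Here example.png is a placeholder filename, and \hspace*{-1em} is one plausible way to realize the -1em shift described above:

```latex
\documentclass{article}
\usepackage{graphicx}
\begin{document}
% width=... is one of the key-value options graphicx provides
% (scale=... also works).
\noindent\hspace*{-1em}% shift the image 1em left of the margin
\includegraphics[width=0.8\textwidth]{example.png}
\end{document}
```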
2020-10-25 16:28:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6031906008720398, "perplexity": 6285.253100863179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889574.66/warc/CC-MAIN-20201025154704-20201025184704-00538.warc.gz"}
http://mymathforum.com/probability-statistics/343142-true.html
My Math Forum: Which is true?

December 24th, 2017 - backtobasics (Newbie):

Assume there are three events, A, B, and C. Also, assume that $P(A\vee B)=0.5$ and $P(C)=0.5$. Which of the following is true?

1. A and B are disjoint
2. A, B, and C are disjoint
3. $P(C)>P(A\wedge B)$
4. $P(A\wedge B\wedge C)<0.5$
5. None of these

I'm trying to understand and think of all possible probabilities to either prove or refute the options as correct or wrong. Help?

December 24th, 2017 - romsek (Senior Member):

Assuming that $P[A \cup B \cup C] = 1$:

1) not enough information
2) not enough information; $A \cup B$ and $C$ are disjoint
3) no; if $A=B$ then $P[A \cap B] = 0.5 = P[C]$
4) yes: $(A \cap B) \subseteq (A \cup B)$ and $(A \cup B) \cap C = \emptyset$, so $A \cap B \cap C = \emptyset$, which gives $P[A \cap B \cap C] = 0$

December 24th, 2017 - backtobasics:

But aren't your statements 2 and 4 contradictory? You state there is not enough information to prove $A \cup B$ and $C$ are disjoint, while in statement 4 you state they are. Also, what happens when A, B and C are just some events in a bigger sample space where $P[A \cup B \cup C] \neq 1$?
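romsek's counterexample to option 3 is easy to make concrete; a quick sketch on a two-point sample space (the particular events here are my own illustrative choice):

```python
from fractions import Fraction as F

# Sample space {0, 1} with equal weights 1/2; take A = B = {0} and C = {1}.
P = lambda event: sum(F(1, 2) for outcome in event)
A = B = {0}
C = {1}
print(P(A | B), P(C))          # 1/2 1/2  -- matches the givens
print(P(C) > P(A & B))         # False   -- option 3 fails here
print(P(A & B & C) < F(1, 2))  # True    -- option 4 holds here
```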
2018-09-20 19:02:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5691325664520264, "perplexity": 1840.4647417993926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156554.12/warc/CC-MAIN-20180920175529-20180920195929-00511.warc.gz"}
http://math.stackexchange.com/questions/36548/show-that-qn-11n2-32n-is-a-prime-number-for-two-integer-values-of-n
# Show that $q(n)=11n^2 + 32n$ is a prime number for two integer values of $n$

Let $n$ be an integer and show that $q(n)=11n^2 + 32n$ is a prime number for two integer values of $n$, and is composite for all other integer values of $n$.

- I think that you need to allow negative integer values of $n$ as well. (If I'm not mistaken, there is only one positive integer value of $n$ for which $q(n)$ is prime.) –  Matt E May 3 '11 at 0:32
- Oh sorry, you're right, I wrote the question incorrectly! –  meiryo May 3 '11 at 0:35
- If this is a homework question, please tag it as such. –  mixedmath May 3 '11 at 0:35

Hint: Factor $q(n)$ into 2 distinct polynomials. If both polynomials have values other than $\pm 1$, then you know that $q(n)$ cannot be prime. From this, determine the only values of $n$ that could possibly result in $q(n)$ being prime and check the cases.

- So the answer would be $n=-3,1$? –  meiryo May 3 '11 at 0:58
- @meiryo: That's correct. –  Brandon Carter May 3 '11 at 1:17

HINT: If $f(x) = g(x)\,h(x)$ is composite then it has only finitely many prime values, since a prime value requires $g(x) = \pm 1$ or $h(x) = \pm 1$. But $f(x)\pm 1 = 0$ has no more than $\deg f$ roots. Following are some related results.

In 1918 Stackel published the following simple

THEOREM. If $p(x)$ is a composite integer coefficient polynomial then $p(x)$ is composite for all $|x| > b$, for some bound $b$; in fact $p(x)$ has at most $2d$ prime values, where $d = \deg p$.

The simple proof can be found online in Mott & Rose, p. 8. I highly recommend this delightful and stimulating 27-page paper which discusses prime-producing polynomials and related topics.

Contrapositively, $p(x)$ is prime (irreducible) if it assumes a prime value for large enough $|x|$. Conversely, Bouniakowski conjectured (1857) that a prime (irreducible) $p(x)$ assumes infinitely many prime values (except in the trivial case where the values of $p$ have a common divisor, e.g. $2 \mid x(x+1)+2$).

E.g. Polya-Szego popularized A. Cohn's irreducibility test, which says that an integer coefficient polynomial $p(x)$ is irreducible if $p(b)$ yields a prime in radix $b$ representation, i.e. $0 \le p_i < b$. E.g. $f(x) = x^4 + 6 x^2 + 1$ factors $\pmod p$ for all primes $p$, yet $f(x)$ is prime since $f(8) = 10601$ octal $= 4481$ is prime.

Note: Cohn's irreducibility test fails if, in radix $b$, negative digits are allowed, e.g. $f(x) = x^3 - 9 x^2 + x - 9 = (x-9)\,(x^2 + 1)$ but $f(10) = 101$ is prime.

For more see my 2002-11-12 sci.math post, and Murty's paper.

Hint: We know how to factor $q$, as we can write it as $q(n) = n(11n + 32)$. So $n$ will always divide $q(n)$; I wonder what that could tell us? I should note that if this is a homework question, you should tag it as such.

Write it as $q(n) = n(11n+32)$. $q(1) = 43$ is prime, and $q(-3) = 3$ is prime. $q(-2)$, $q(-1)$ and $q(0)$ are not prime. But if $n < -3$ or $n > 1$, then $|n| > 1$ and also $|11n+32| > 1$, so $q(n)$ is composite.

- $n \mid q(n)$ is not enough to imply that $q(n)$ is not prime. In particular, consider $n=-3$. –  Brandon Carter May 3 '11 at 0:47
- The question has been edited to allow non-positive values of $n$. But the same reasoning works for all $n < -3$. –  Dan Brumleve May 3 '11 at 1:33
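A quick brute-force check of the claim over a finite window (sympy.isprime is a real sympy function; the search range is an arbitrary choice):

```python
from sympy import isprime

q = lambda n: 11 * n**2 + 32 * n
hits = [n for n in range(-1000, 1001) if isprime(q(n))]
print(hits, [q(n) for n in hits])   # [-3, 1] [3, 43]
```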
2015-01-29 13:09:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8045884370803833, "perplexity": 331.3397228561025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422118551401.78/warc/CC-MAIN-20150124165551-00226-ip-10-180-212-252.ec2.internal.warc.gz"}
http://www.hulver.com/scoop/story/2006/4/7/153645/6727
That Bloke Montesquieu By cam (Fri Apr 07, 2006 at 10:36:45 AM EST) (all tags) One of the enlightenment thinkers who heavily influenced the American Republic was a French bloke with a big nose by the name of Charles de Secondat, baron de Montesquieu. He came up with this technology called Separation of Powers. This is where you divide government into three distinct and independent areas: making laws, implementing laws and interpreting laws. We know these as Legislative, Executive and Judicial. The US has one of the purest systems of separation of powers, as well as one of the strongest systems of checks and balances. This is where each arm monitors the operations of the other arms. I will discuss this in terms of the American Washington system. Westminster systems need not apply. Back before liberal democracy they had a problem where kings, despots, tyrants etc. used to make laws up on the spot, enforce those laws on the spot, hand down sentences on the spot and tax people on the spot. It represents arbitrary government and got a bit of a bad name. The American founding fathers looked to all the present systems of government, looked to the enlightenment philosophers and decided to come up with something better - one that was resilient to the negative and selfish passions of politicians. One that would make America free forever from the tyranny and despotism of being subject to a King. They put down all this thinking into the US Constitution, and Hamilton, Madison and Jay explained it in great detail in the Federalist Papers. A must-read for anyone interested in the philosophical basis of the American republic. • Executive: In the US system this is the Administration headed by the elected President. The executive cannot make laws, nor interpret them or pass sentence on them. The President can only execute the laws that the legislative branch of government has made. • Legislative: This can be bicameral or unicameral. In the US it is bicameral, with a Senate and a House of Representatives. These two houses make the laws and money bills. These are the laws that the Executive must execute. It also provides the funding to execute those laws. • Judicial: This branch interprets laws that are made by the legislative. The separation of powers doctrine also contains counterbalances. For instance the legislative must approve the executive's appointments to the judicial. The executive can veto a legislative bill. The judicial can determine a law made by the legislative unconstitutional. These stop the branches acting in a tyrannical manner in their own little fiefdoms of distinct power. This is all tied together with a constitution. A single document that describes in detail the powers of each branch and the checks and balances on each branch. The constitution defines the limits of executive, legislative and judicial authority. As a consequence, when interpreting government action, it is an absolute. Through the factional system, politicians have, to varying degrees, freed themselves from the limits written into constitutions. The next iteration or innovation for liberal democracy will probably be having a democratically elected magistrate whose sole concern and authority is to ensure that the constitution is not being broken and tyranny not being committed. The American government by ad hoc (4.00 / 1) #1 Fri Apr 07, 2006 at 11:07:17 AM EST is definitely a product of its time. I doubt it could have happened earlier or later.
Plus, one of the things that makes it unique is the 10th Amendment: The powers not delegated to the United States by the Constitution, nor prohibited by it to the states, are reserved to the states respectively, or to the people. I'm not aware of the concept being put forward before then. In every system up to that point the power was reserved to the government/authority and "given" to the people. This specifically says unless a power is specified, it's reserved to the people. This, combined with the 9th Amendment: The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people. makes things pretty watertight. Then again, 9/11 changed everything[tm]. -- Close friendships and a private room can offer most of the things love does. the or later bit by cam (2.00 / 0) #2 Fri Apr 07, 2006 at 11:17:01 AM EST Australia federalised 125 years later, and other than the innovation of a democratically elected senate, it failed every other test and innovation that America had instituted. I really think it's the result of by ad hoc (4.00 / 2) #5 Fri Apr 07, 2006 at 11:51:21 AM EST the Age of Enlightenment on British Common Law. I mean, France came about at about the same time, but wasn't based on British Common Law, so the effect was strange. (The precedent of the Magna Carta v. King Louis' absolute power.) AU had British Common Law, but was too late for the Age of Enlightenment. I mean, the people who came up with the Constitution (and the Declaration) were brilliant people. Absolutely brilliant. But they were also working on a foundation of ideas laid down by Locke and others. I mean, I seriously doubt it could happen now. In the first place, I don't see all that many brilliant people at the head of government. And where Madison et al. could develop ideas in, literally, volumes of papers, now you have to sum it up in a 20-second sound bite or, if you're lucky, 10 minutes on the NewsHour. Then you're ridiculed by talking-head pundits with unprecedented resources and influence who didn't get their payoff. People always say that old-timey journalism was way more vicious than now, and they have a point. Newspapers in Jefferson's time were something to behold. But it's my contention that voters then were smarter. There wasn't anything approaching universal suffrage for many many years. It wasn't until Jackson that non-landed people could vote. Blacks 100 years later, and women 60 years after that. So in Jefferson's time, in order to vote, you had to be a reasonably well-educated critical thinker who could see the so-called journalism for what it was. Now, Madison Avenue has pretty well proved they can convince the average consumer to buy and use things that will kill them and be happy about it. (When was it, exactly, that citizens became consumers?) How hard is it, then, for those with a big enough soapbox (tv network, radio show, &c) to be able to convince simple-minded consumers to ditto whatever they're told? -- Close friendships and a private room can offer most of the things love does. Au's elite wasn't up to snuff by cam (2.00 / 0) #7 Fri Apr 07, 2006 at 03:45:06 PM EST The American elite at the time were *the* global elite. Franklin was already of world renown, Jefferson sort of was; Madison, Hamilton and Washington left their mark before and after.
Australia has spent the last 80 years trying to eradicate the "Deakinist" and "Bearded Men" legacy from its political system, while the US pretty much got set up for a golden age by the Madisonian Republic. When Australia federated there wasn't universal suffrage either. About 20% of the population voted on the referendum, with just over 10% of the Australian population passing it. So it was a pretty piss-poor effort. The French Revolution was more social than political, whereas the American Revolution was purely political. And Yet . . . by Christopher Robin was Murdered (4.00 / 1) #3 Fri Apr 07, 2006 at 11:34:06 AM EST The two presidents that regularly top the list of the highest-ranking presidents, Lincoln and FDR, regularly flouted the Constitutional limitations on their powers and rolled over the supposedly Constitutionally "protected" rights of Americans. I submit that the historical trend of American democracy since the Civil War has been to streamline the workings of power to concentrate it into a central authority, not disperse it. And, moreover, the vast majority of Americans are perfectly happy with this. Google fight! by lm (4.00 / 1) #4 Fri Apr 07, 2006 at 11:50:54 AM EST best president lincoln: 30,000,000 hits best president fdr: 18,000,000 hits compare to: best president washington: 197,000,000 best president grant: 87,300,000 best president johnson: 65,300,000 best president clinton: 49,600,000 best president kennedy: 45,900,000 best president ford: 41,800,000 best president reagan: 33,000,000 best president carter: 26,800,000 best president jefferson: 20,600,000 best president cleveland: 19,900,000 There is no more degenerate kind of state than that in which the richest are supposed to be the best. Cicero, The Republic Those Hit Numbers Aren't What I'm Getting by Christopher Robin was Murdered (4.00 / 2) #6 Fri Apr 07, 2006 at 12:06:51 PM EST Lincoln 30,000,000 FDR 14,600,000 best president washington: 166,000,000 best president reagan: 26,800,000 And so on . . . This confused me until I looked up "lm lies." 2,410,000 hits. Google has spoken. You're a liar. I think you misunderstand by lm (4.00 / 1) #10 Fri Apr 07, 2006 at 05:20:27 PM EST the truth There is no more degenerate kind of state than that in which the richest are supposed to be the best. Cicero, The Republic dude by cam (4.00 / 1) #11 Fri Apr 07, 2006 at 06:28:02 PM EST Hail to the cam, baby! by Christopher Robin was Murdered (4.00 / 1) #15 Sat Apr 08, 2006 at 05:46:31 PM EST That settles it as far as I'm concerned. You know, how did anybody ever settle on anything before Google? I recently read by cam (4.00 / 1) #8 Fri Apr 07, 2006 at 05:02:24 PM EST State of Exception by Giorgio Agamben. Interesting book. He argues that the state of emergency has become the norm in government. He traces it back to the Roman Republic and argues that it has become a meme of government in liberal democracy. The Weimar Republic is one modern liberal democracy that went from liberal democracy to tyranny via this mechanism. There are lots of details that could be improved by edward (4.00 / 1) #9 Fri Apr 07, 2006 at 05:18:34 PM EST For example, the judicial branch is made weaker by the role that the executive and legislative branches have in determining who is part of the judiciary. That selection should come from somewhere else or at least must have other restrictions placed upon it.
The fact that you mention the next innovation would be a position whose sole job is to interpret the constitution (which is the highest law in the land) means that the judiciary is failing at its job. As well, the judiciary does not just interpret the constitution as it applies to a specific law, but also interprets laws to ensure that they are consistent with other laws and alerts the legislative branch that it should change the offending laws. This usually takes the form of striking down laws, but these days the judiciary is wimpy and lame so it's scared to do so (or it's been staffed by incompetents who improperly defer to the other branches). One of the causes of this deferential attitude is that somehow the judiciary seems to have been convinced, especially at the Supreme Court level, that it is somehow subservient or not as important as the other branches. I might put it that there seems to be some belief that the judiciary serves at the request of, or in the service of, the other branches. But in fact no branch should be thought of as lower or higher or more correct than any other. Fuck, I'm sort of bored. next innovation by martingale (4.00 / 2) #12 Sat Apr 08, 2006 at 12:47:44 AM EST I don't think the next innovation should be some sort of interpretation of the constitution. The most important next innovation, to me, has to be putting some serious downside to being a member of government. The old Greeks had it right: give a man power and then exile him after his term expires. There are many possibilities. A former president could be formally dispossessed of all his assets. If a person serves in one of the branches, all his relatives to within 3 generations should be formally barred from working in that same branch, and maybe the other two as well. People in the legislative branch who vote for war should pay for it with 50% of all their own possessions and current income. I'm sure you can come up with others. The only democracy that works is a democracy where the government is afraid of the people. -- $E(X_t|F_s) = X_s,\quad t > s$ A 4 and a reply: by edward (4.00 / 1) #13 Sat Apr 08, 2006 at 01:10:53 AM EST I second that idea. The system needs to be set up in such a way that it actually becomes something of a disincentive to become part of government, and that decisions taken by the government, even if they are the right decisions, should still have difficult consequences. People should be in politics because they want to represent the interests of their ordinary constituents, and for no other reason. Taking away incentive to power and limiting, in a fundamental way, what political power actually means is a good idea. you could say by martingale (4.00 / 2) #14 Sat Apr 08, 2006 at 01:22:37 AM EST Entering government should have the elements of a sacrifice. -- $E(X_t|F_s) = X_s,\quad t > s$
2018-07-23 11:13:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24958011507987976, "perplexity": 3659.467975334814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596336.96/warc/CC-MAIN-20180723110342-20180723130342-00456.warc.gz"}
https://github.com/kellertuer/Manopt.jl/blob/master/docs/src/functions/jacobiFields.md
# [Jacobi Fields](@id JacobiFieldFunctions)

A smooth tangent vector field $J\colon [0,1] \to T\mathcal M$ along a geodesic $g(\cdot;x,y)$ is called a Jacobi field if it fulfills the ODE

$\displaystyle 0 = \frac{D}{dt}J + R(J,\dot g)\dot g,$

where $R$ is the Riemannian curvature tensor. Such Jacobi fields can be used to derive closed forms for the exponential map, the logarithmic map and the geodesic, all of them with respect to both arguments: Let $F\colon\mathcal N \to \mathcal M$ be given (for $\exp_x\cdot$ we have $\mathcal N = T_x\mathcal M$, otherwise $\mathcal N=\mathcal M$) and denote by $\Xi_1,\ldots,\Xi_d$ an orthonormal frame along $g(\cdot;x,y)$ that diagonalizes the curvature tensor with corresponding eigenvalues $\kappa_1,\ldots,\kappa_d$. Note that on symmetric manifolds such a frame always exists. Then

$DF(x)[\eta] = \sum_{k=1}^d \langle \eta,\Xi_k(0)\rangle_x\beta(\kappa_k)\Xi_k(T)$

holds, where $T$, like the weights $\beta$, depends on the function $F$. The values stem from solving the corresponding system of (decoupled) ODEs. Note that in different references some factors might be a little different, for example when using unit speed geodesics.

The following weight functions are available:

- `βDgx`
- `βDexpx`
- `βDexpξ`
- `βDlogx`
- `βDlogy`
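For reference, along the diagonalizing frame the Jacobi ODE above decouples into scalar equations with classical closed-form solutions. The sign conventions below are the standard ones and may differ from the package's internal choices by exactly the factors mentioned above:

```latex
% Scalar Jacobi equation along the frame direction \Xi_k:
%   \ddot J_k(t) + \kappa_k J_k(t) = 0
\[
J_k(t) =
\begin{cases}
a\cos(\sqrt{\kappa_k}\,t) + b\sin(\sqrt{\kappa_k}\,t), & \kappa_k > 0,\\
a + b\,t, & \kappa_k = 0,\\
a\cosh(\sqrt{-\kappa_k}\,t) + b\sinh(\sqrt{-\kappa_k}\,t), & \kappa_k < 0,
\end{cases}
\]
% a and b are fixed by the boundary conditions of the map F in question.
```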
2019-12-09 12:57:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9370013475418091, "perplexity": 452.40811662484725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518882.71/warc/CC-MAIN-20191209121316-20191209145316-00291.warc.gz"}
https://codeworldtechnology.wordpress.com/2015/06/05/the-love-letter-mystery/
# The Love-Letter Mystery

James found a love letter his friend Harry has written for his girlfriend. James is a prankster, so he decides to meddle with the letter. He changes all the words in the letter into palindromes. To do this, he follows two rules:

1. He can reduce the value of a letter, e.g. he can change d to c, but he cannot change c to d.
2. In order to form a palindrome, if he has to repeatedly reduce the value of a letter, he can do it until the letter becomes a. Once a letter has been changed to a, it can no longer be changed.

Each reduction in the value of any letter is counted as a single operation. Find the minimum number of operations required to convert a given string into a palindrome.

Input Format
The first line contains an integer T, i.e., the number of test cases. The next T lines will contain a string each. The strings do not contain any spaces.

Constraints
$1 \le T \le 10$
$1 \le$ length of string $\le 10^4$
All characters are lower case English letters.

Output Format
A single line containing the number of minimum operations corresponding to each test case.

Sample Input
4
abc
abcba
abcd
cba

Sample Output
2
0
4
2

using System;

class Love_Letter_Mystery
{
    static void Main(string[] args)
    {
        // Read the number of test cases, then one string per case.
        // (The original listing was missing these two input-reading
        // statements; they are reconstructed here.)
        int testCases = Convert.ToInt32(Console.ReadLine());
        for (int i = 0; i < testCases; ++i)
        {
            string strInput = Console.ReadLine();
            Console.WriteLine(PalindromeCount(strInput));
        }
    }

    // Compare letters pairwise from both ends; a mismatched pair costs
    // |left - right| operations (reduce the larger letter to the smaller).
    static int PalindromeCount(string strInput)
    {
        int counter = 0, start = 0, end = strInput.Length - 1;
        while (start < end)
        {
            if (strInput[start] != strInput[end])
            {
                counter += Math.Abs(strInput[end] - strInput[start]);
            }
            ++start;
            --end;
        }
        return counter;
    }
}
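Why the pairwise difference is the answer: since letters can only be reduced, the cheapest way to equalize a mismatched pair is to lower the larger letter down to the smaller one, costing exactly the difference of their character codes. For the sample string abcd, the pair (a, d) costs 3 and the pair (b, c) costs 1, giving the expected output 4.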
2017-11-18 01:04:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20947058498859406, "perplexity": 1880.5889981291555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804125.49/warc/CC-MAIN-20171118002717-20171118022717-00310.warc.gz"}
https://integralsandseries.in/?m=202004
## Bernoulli numbers and a related integral

Consider the sequence $\{B_r(x)\}_{r=0}^{\infty}$ of polynomials defined using the recursion:

\begin{aligned} B_0(x) &= 1 \\ B_r^\prime(x) &= r B_{r-1}(x) \quad \forall r\geq 1 \\ \int_0^1 B_r(x) dx &= 0 \quad \forall r\geq 1 \end{aligned}

The first few Bernoulli polynomials are:

\begin{aligned} B_0(x) & =1 \\ B_1(x) & =x-\frac{1}{2} \\ B_2(x) & =x^2-x+\frac{1}{6} \\ B_3(x) & =x^3-\frac{3}{2}x^2+\frac{1}{2}x \\ B_4(x) & =x^4-2x^3+x^2-\frac{1}{30} \end{aligned}

The numbers $B_n = B_n(0)$ are called the Bernoulli numbers. Integrating the relation $B_r^\prime(x)=rB_{r-1}(x)$ between 0 and 1 gives:

$$B_r(1)-B_r(0) = \int_0^1 B_r^\prime (x) dx = r\int_0^1 B_{r-1}(x) dx = 0 \quad \forall r\geq 2$$

This motivates us to define the periodic Bernoulli polynomials by $$\tilde{B}_r(x) = B_r(\langle x\rangle), \quad x\in\mathbb{R}, \; r\geq 2$$ where $\langle x\rangle$ denotes the fractional part of $x$.

We will now compute the Fourier series of $\tilde{B}_r(x)$, where $r\geq 2$. The $n$-th Fourier coefficient is given by:

\begin{aligned} a_n &= \int_0^1 \tilde{B}_r(x) e^{-2\pi i n x} dx= \int_0^1 B_r(x) e^{-2\pi i n x} dx \end{aligned}

Let's first consider the case when $n\neq 0$. Integration by parts gives us (the boundary term vanishes because $e^{-2\pi i n}=1$ and $B_r(1)=B_r(0)$ for $r\geq 2$):

\begin{aligned} a_n &= -\frac{e^{-2\pi i n x}}{2\pi i n}B_r(x)\Big|_0^1 + \frac{1}{2\pi i n}\int_0^1 B_r^\prime (x) e^{-2\pi i n x} dx \\ &= \frac{1}{2\pi i n}\int_0^1 B_r^\prime (x) e^{-2\pi i n x} dx \\ &= \frac{r}{2\pi i n}\int_0^1 B_{r-1}(x) e^{-2\pi i n x} dx \quad \quad (1) \end{aligned}

The repeated use of equation (1) gives:

\begin{aligned} a_n &= \frac{r!}{(2\pi i n)^{r-1}} \int_0^1 B_1(x) e^{-2\pi i nx} dx \\ &= \frac{r!}{(2\pi i n)^{r-1}} \int_0^1 \left(x-\frac{1}{2} \right) e^{-2\pi i nx} dx \\ &= \frac{r!}{(2\pi i n)^{r-1}}\int_0^1 x e^{-2\pi i n x} dx \\ &= -\frac{r!}{(2\pi i n)^r} \end{aligned}

When $n=0$, we have $a_0 = \int_0^1 B_r(x)dx = 0$. Note that the Fourier series $-r! \sum_{\substack{n=-\infty \\ n\neq 0}}^{\infty} \frac{e^{2\pi i n x}}{(2\pi i n)^r}$ converges absolutely for all $r\geq 2$. Therefore, it converges uniformly to $\tilde{B}_r(x)$ for all $r\geq 2$. This leads to the following bound:

$$|\tilde{B}_r(x)| \leq \frac{2r!}{(2\pi)^r}\sum_{n=1}^\infty \frac{1}{n^r} < \frac{4r!}{(2\pi)^r} \;\; \forall r\geq 2\quad\quad (2)$$

Note that the above inequality also remains valid for $r=0$ and $r=1$. Now, let's consider the generating function: $$F(x,t) = \sum_{n=0}^\infty \frac{\tilde{B}_n(x) t^n}{n!}$$ The inequality (2) implies that: $$|F(x,t)| < 4 \sum_{n=0}^\infty \left(\frac{t}{2\pi}\right)^n$$ Therefore, the series converges uniformly for all $t\in [0,2\pi)$ and all $x$. We may, therefore, differentiate term by term to obtain: $$\frac{\partial F(x,t)}{\partial x} = \sum_{n=1}^\infty \frac{\tilde{B}_{n-1}(x)}{(n-1)!}t^n = t F(x,t)$$ Solving the above differential equation, we get $F(x,t) = G(t) e^{xt}$ where $G$ is some arbitrary function of $t$. Next, we integrate $F(x,t)$ between 0 and 1:

\begin{aligned} \int_0^1 F(x,t) dx &= G(t) \int_0^1 e^{xt} dx \\ &= G(t) \frac{e^t-1}{t} \end{aligned}

On the other hand, note that:

\begin{aligned} \int_0^1 F(x,t) dx &= \int_0^1 \sum_{n=0}^\infty \frac{\tilde{B}_n(x) t^n}{n!} dx \\ &= 1 + \sum_{n=1}^\infty \frac{t^n}{n!}\int_0^1 B_n(x) dx \\ &= 1 \end{aligned}

Therefore, we obtain $G(t) = \frac{t}{e^t - 1}$ and $$\boxed{F(x,t) = \frac{t e^{xt}}{e^t-1}}$$ An interesting property of the Bernoulli numbers is that $B_{2n+1}=0$ for all $n\geq 1$.
To see this, consider: $$\frac{t}{e^t -1} + \frac{t}{2}= 1+\sum_{n=2}^\infty \frac{B_n t^n}{n!}$$ Now, on the left hand side we have an even function of $t$. Therefore, the coefficients of the odd powers of $t$ on the right hand side are equal to 0. Using the Fourier series expansion, we can express the even-index Bernoulli numbers in terms of the Riemann zeta function: $$B_{2n} = \frac{2 (-1)^{n-1} (2n)!}{(2\pi)^{2n}} \zeta(2n)$$ The Bernoulli polynomials satisfy the following recursive equation: $${B}_n(x) = \sum_{k=0}^n \binom{n}{k} B_{n-k} x^k$$ This can be proved by noting that:

\begin{aligned} \sum_{n=0}^\infty \frac{{B}_n(x) t^n}{n!} &= \frac{te^{xt}}{e^t-1} \\ &= e^{xt} \sum_{n=0}^\infty \frac{B_n t^n}{n!} \\ &= \sum_{m=0}^\infty \frac{(xt)^m}{m!} \sum_{n=0}^\infty \frac{B_n t^n}{n!} \\ &= \sum_{m=0}^\infty \sum_{n=0}^\infty \frac{B_n x^m t^{n+m}}{n! m!} \\ &= \sum_{n=0}^\infty \sum_{k=0}^n \frac{B_k t^n x^{n-k}}{k! (n-k)!} \\ &= \sum_{n=0}^\infty \frac{t^n}{n!}\sum_{k=0}^n \binom{n}{k} B_k x^{n-k} \end{aligned}

where $x\in [0,1]$. Now, compare the coefficients of $t^n$ to get the desired result. Plugging in $x=1$ gives the identity: $$\sum_{k=0}^{n-1} \binom{n}{k} B_k = 0, \quad n\geq 2$$ Now, let's turn our attention to the integral: $$I=\int_0^{\frac{\pi}{2}}\frac{\sin(2nx)}{\sin^{2n+2}(x)}\cdot \frac{1}{e^{2\pi \cot x}-1} dx$$ where $n\in\mathbb{N}$. We will use the following trigonometric identity: $$\frac{\sin(2nx)}{\sin^{2n}(x)} =(-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r}\cot^{2r-1}(x)$$ Substituting the above into the integral gives:

\begin{aligned} I &= (-1)^n \int_0^{\pi\over 2}\left( \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r}\cot^{2r-1}(x)\right)\frac{\csc^2(x)}{e^{2\pi \cot x}-1}dx \\ &= (-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r} \int_0^{\pi \over 2}\cot^{2r-1}(x)\frac{\csc^2(x)}{e^{2\pi \cot x}-1}dx \\ &= (-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r} \int_0^\infty \frac{t^{2r-1}}{e^{2\pi t}-1}dt \\ &= (-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r}\frac{(2r-1)! \zeta(2r)}{(2\pi)^{2r}}\\ &= (-1)^n \sum_{r=1}^{n}\binom{2n}{2r-1} (-1)^{r} (-1)^{r-1} \frac{B_{2r}}{4r}\\ &= \frac{(-1)^{n-1}}{4}\sum_{r=1}^{n}\binom{2n}{2r-1}\frac{B_{2r}}{r} \\ &= \frac{(-1)^{n-1}}{2(2n+1)}\sum_{r=1}^n \binom{2n+1}{2r} B_{2r} \\ &= \frac{(-1)^{n-1}}{2(2n+1)} \left[\sum_{r=0}^{2n} \binom{2n+1}{r} B_r - \binom{2n+1}{0}B_0 - \binom{2n+1}{1} B_1\right] \\ &= \frac{(-1)^{n-1}}{2(2n+1)} \left[-\binom{2n+1}{0}B_0 - \binom{2n+1}{1} B_1\right] \\ &= \frac{(-1)^{n-1}}{4}\cdot \frac{2n-1}{2n+1} \end{aligned}

Here the penultimate steps use the identity $\sum_{r=0}^{2n}\binom{2n+1}{r}B_r=0$ together with the vanishing of the odd-index Bernoulli numbers $B_3, B_5, \ldots$
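The polynomial identities above are easy to sanity-check; a small sketch using sympy's built-in Bernoulli polynomials (sympy.bernoulli(n, x) is a real sympy function; the range of indices checked is arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
for r in range(1, 7):
    B_r = sp.bernoulli(r, x)
    # B_r'(x) = r * B_{r-1}(x)
    assert sp.simplify(sp.diff(B_r, x) - r * sp.bernoulli(r - 1, x)) == 0
    # int_0^1 B_r(x) dx = 0
    assert sp.integrate(B_r, (x, 0, 1)) == 0

# Odd-index Bernoulli numbers vanish from B_3 onwards.
print([sp.bernoulli(2 * n + 1) for n in range(1, 5)])  # [0, 0, 0, 0]
```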
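The closed form just derived can also be checked by numerical quadrature; a rough sketch (the lower cutoff 0.05 is a hypothetical choice: below it the integrand is numerically negligible because of the $e^{2\pi\cot x}$ factor):

```python
import numpy as np
from scipy.integrate import quad

def lhs(n):
    # Integrand of I; np.expm1 keeps e^{2*pi*cot(x)} - 1 accurate near pi/2.
    f = lambda x: np.sin(2 * n * x) / (np.sin(x)**(2 * n + 2)
                                       * np.expm1(2 * np.pi / np.tan(x)))
    val, _ = quad(f, 0.05, np.pi / 2, limit=200)
    return val

rhs = lambda n: (-1)**(n - 1) / 4 * (2 * n - 1) / (2 * n + 1)

for n in (1, 2, 3):
    print(n, round(lhs(n), 8), round(rhs(n), 8))
```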
2021-05-07 13:14:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 54, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 955.2058486919744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988793.99/warc/CC-MAIN-20210507120655-20210507150655-00476.warc.gz"}
https://www.hackerearth.com/practice/data-structures/advanced-data-structures/segment-trees/practice-problems/algorithm/the-dumb-classroom-97e11ab7/
Class representatives

Tag(s): Advanced Data Structures, Data Structures, Longest Increasing Subsequence, Medium, Segment Trees

John teaches a class of $n$ students. Each student is assigned a unique roll number from $1$ to $n$. He also knows the heights of each student. He needs to select a set of class representatives (a class can have any number of representatives). A set of students are valid candidates for representatives if the following condition holds:

• There does not exist a pair of students $i,j$ such that $roll[i]<roll[j]$ and $height[i]\ge height[j]$.

John wants to select the maximum number of students as class representatives. However, a new student whose height is $x$ got enrolled. In order to increase the number of class representatives, John assigns him a roll number $i$ (from $1$ to $n+1$) randomly and then increases by $1$ the roll numbers of all those students whose roll number is $\ge i$. If the number of class representatives does not increase after this process, then he repeats the same procedure (he again assigns him a roll number from $1$ to $n+1$ randomly after reverting the roll numbers of all the existing students to $1$ through $n$).

Determine the expected number of times John needs to repeat this procedure in order to increase the number of class representatives.

Input format
• First line: Two integers $n$ (the number of students) and $x$ (the height of the new student)
• Next $n$ lines: Two integers $r_i$ and $h_i$ denoting the roll number and height of the $i^{th}$ student. It is guaranteed that no two students have the same roll number.

Output format
Print the expected number of times John needs to assign the roll number to the new student using the above procedure to increase the number of class representatives. If the expected value is of the form $\frac{a}{b}$, then print $a\cdot b^{-1}$ modulo $10^{9}+7$. If it is not possible to increase the number of class representatives, then print $-1$.

Constraints
$1 \le n \le 500000$
$1 \le r_i \le n$
$1 \le x, h_i \le 10^7$

SAMPLE INPUT
5 10
1 2
2 9
3 3
4 1
5 12

SAMPLE OUTPUT
2

Explanation
The number of class representatives can be increased from 3 to 4 by assigning the new student roll number 3, 4 or 5.

Time Limit: 2.0 sec(s) for each input file.
Memory Limit: 256 MB
Source Limit: 1024 KB
Marking Scheme: Marks are awarded when all the testcases pass.
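How the expectation in the sample works out (my reading of the statement, not an official editorial): if $g$ of the $n+1$ possible roll numbers increase the number of representatives, each independent attempt succeeds with probability $p = \frac{g}{n+1}$, so the number of attempts is geometric with mean

$$\mathbb{E}[\text{attempts}] = \frac{1}{p} = \frac{n+1}{g}.$$

In the sample, $g = 3$ (roll numbers 3, 4 and 5) out of $n+1 = 6$ positions, giving $\frac{6}{3} = 2$; the modular inverse in the output rule only matters when $g$ does not divide $n+1$.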
2019-03-19 04:32:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26198282837867737, "perplexity": 2109.859751379696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201885.28/warc/CC-MAIN-20190319032352-20190319054352-00038.warc.gz"}
http://clay6.com/qa/40268/find-the-equation-of-a-line-drawn-perpendicular-to-the-line-large-frac-1-th
# Find the equation of a line drawn perpendicular to the line $\large\frac{x}{4}$$+\large\frac{y}{6}$$=1$ through the point where it meets the $y$-axis

$\begin {array} {1 1} (A)\;2x-3y+18=0 & \quad (B)\;3y-2x+18=0 \\ (C)\;2x-3y-18=0 & \quad (D)\;3y-2x-18=0 \end {array}$

Toolbox:
• If two lines are perpendicular, then the product of their slopes is $-1$; i.e., $m_1m_2=-1$
• Equation of the line with slope $m$ and passing through $(x_1, y_1)$ is $y-y_1=m(x-x_1)$

The given equation of the line is $\frac{x}{4}+\frac{y}{6}=1$. This can be written as $3x+2y-12=0$, or in the form $y=mx+c$ as $y=-\frac{3}{2}x+6$. Hence the slope of the given line is $-\frac{3}{2}$.

The slope of a line perpendicular to the given line is $\frac{-1}{-3/2}=\frac{2}{3}$.

It is given that the line intersects the $y$-axis; let this point be $(0, y)$. Substituting $x=0$ in the equation of the given line, we obtain $\frac{y}{6}=1 \Rightarrow y=6$, so the point is $(0,6)$.

Now the equation of the line whose slope is $\frac{2}{3}$ and which passes through $(0,6)$ is $y-6=\frac{2}{3}(x-0)$, i.e. $3y-18=2x$, i.e. $2x-3y+18=0$.

Hence the required equation is $2x-3y+18=0$ (option A).
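As a quick check (not part of the original solution): the slopes multiply to $\left(-\frac{3}{2}\right)\cdot\frac{2}{3}=-1$, confirming perpendicularity, and $(0,6)$ satisfies $2(0)-3(6)+18=0$, confirming the line passes through the $y$-intercept of the given line.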
2017-04-28 06:31:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597090840339661, "perplexity": 330.32268349463214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122865.36/warc/CC-MAIN-20170423031202-00503-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-3-cumulative-review-exercises-page-279/15
## Introductory Algebra for College Students (7th Edition)

The expression evaluates to $39$. To evaluate the expression for $x = -3$, we plug in $-3$ wherever we see $x$ in the expression: $$(-3)^{2} - 10(-3)$$ We evaluate the exponent first, according to the order of operations: $$9 - 10(-3)$$ Next we do the multiplication: $$9 + 30$$ Finally, we add: $$39$$
2019-10-19 07:49:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8401193022727966, "perplexity": 790.0755895192788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986692126.27/warc/CC-MAIN-20191019063516-20191019091016-00381.warc.gz"}
http://hertzlinger.blogspot.com/
Yet another weird SF fan

I'm a mathematician, a libertarian, and a science-fiction fan. Common sense? What's that?

### Four Political Factions

The Right-Wing NutJobs with whom I'm sometimes allied will find it hard to believe that leftists are opposed to the "Establishment". It isn't as absurd as it might seem. There is reason to think the media are not owned by far leftists. I think we can clear up the confusion of whether the "mainstream" media are right or left if we note that there are at least four political viewpoints common in the U.S. today:

1. the Establishment right;
2. the Populist right;
3. the Establishment left;
4. the Populist left.

We see that the first three are well represented (section 3 is excessively well represented) but the last tends to be ignored except when another group (usually 3) decides to speak for it. (Section 2 cannot be ignored nowadays. Even before the current administration, it was condemned but it was not ignored.) The Populist left includes but is not limited to the underclass. It also includes underpaid artists, underpaid writers, underpaid musicians, much of the west side of Manhattan, and wacko ex-professors in Montana. (The Usenet version of this was written some time ago and it was based on a CompuServe rant dated April 22, 1996.)

The Establishment left tends to define itself in opposition to the Populist right. It will go along with the Populist left when it disagrees with the Populist right (e.g., on gay rights) or when the Populist right is apathetic (e.g., on nuclear power) but will oppose both populisms when they agree (e.g., on free trade). The Establishment left will frequently pretend the Populist left does not exist. The Establishment right thinks that is because they will only pay attention to right-wing embarrassments. They may be right.

### Here We Go Again!

Evan Williams, one of the people behind Blogger, now regrets starting Twitter, on the grounds that it may have put Trump in the White House. This is part of a common pattern:

1. Leftists notice that not everybody agrees with the Enlightened Ones.
2. They attribute that to Establishment brainwashing. (They really do believe that.)
3. They then invent another platform that the Establishment cannot censor. (Seen on Usenet: "The Revolution will be Bloggerized" from a left-wing crackpot.)
4. They then become astounded at how the new platform is being used by The Enemy.

This may be related to the fact that Leftists don't know what capitalism is. (That may also have caused the reaction discussed here.) As for the next platform … "I have no idea what it will be, and am in no great hurry to find out."

### I Have a Strange Superpower

I have a memory. I can recall that the Rust Belt was called the Rust Belt long before either NAFTA or large-scale Chinese trade. I can recall that we didn't have large numbers of people dying in the streets before Obamacare. (Those that were dying in the streets did so as a result of collectivism-inspired riots.) I can recall when 7% interest was considered the minimum for a monetary crisis. (The interest rates at the peak of the alleged crisis of 2008 were much less.)

Addendum: I sometimes feel like this.

### The Lesson of That United Airlines Incident

The usual claim is that the United Airlines incident proves Big Business can get away with anything. I thought the large drop in its stock price in the immediate aftermath proved that it could not actually get away with anything.
As for "What were they thinking?" … The people who set limits on payments to passengers to encourage them to leave voluntarily may have been influenced by the belief that such payments are somehow dishonorable and selecting people by lot is the fairest system. That, in turn, may have been based on the plausible theory that poor people would be more likely to let themselves be bumped as a result of payments. In other words, this incident may have been due to the anti-market mentality.

### Fearless Girl or Impatient Girl?

The Fearless Girl looks like she's saying "You're late! What took you so long?" This is obviously a protest against the fact that capitalism was rather slow to arrive and set humanity free. It might even be a protest against the fact that capitalism has not yet penetrated to every nook and cranny on Earth.

### When Your Conscience Is in Thrall to Government Policy

According to bio"ethics" experts (seen via National Review): Objection to providing patients interventions that are at the core of medical practice – interventions that the profession deems to be effective, ethical, and standard treatments – is unjustifiable (AMA Code of Medical Ethics [Opinion 11.2.2]). Making the patient paramount means offering and providing accepted medical interventions in accordance with patients' reasoned decisions. Thus, a health care professional cannot deny patients access to medications for mental health conditions, sexual dysfunction, or contraception on the basis of their conscience, since these drugs are professionally accepted as appropriate medical interventions.

In other words, it would be regarded as unethical for a doctor opposed to capital punishment to refuse to cooperate with the organ banks. If this becomes accepted, we might be an election away from compelling physicians to offer acupuncture. I won't more than mention this is a violation of the First, Ninth, Tenth, and Thirteenth Amendments. By the way, if enough ethics experts disagree with this, would we be justified in censoring it? I'm reminded of the anti-circumcision activists who appeal to "Society" while ignoring the fact that "Society" deems them crackpots.

### A Note on Science March Slogans

I've been looking at poster ideas for the "March for Science" (for example, here) and I noticed a lack of assertions about science facts. Most of them are either irrelevant to science, expressions of loyalty to "science," or presumably witty slogans using science vocabulary. The only issue where there is even an attempt at actual content is global warming. I didn't even see the anti-Creationist slogans I was expecting. Maybe those go together. It's hard to get really upset about CO2 levels when such levels were higher millions of years ago.

Meanwhile, I've come up with a few more factual slogans (earlier slogans are here):

• THE ENTROPY OF THE UNIVERSE TENDS TO A MAXIMUM!
• THE EARTH IS NOT THE CENTER OF THE UNIVERSE!
• ANGULAR MOMENTUM MAKES THE WORLD GO AROUND!
• THE EARTH IS BILLIONS OF YEARS OLD!

### Mike Pence and William Shakespeare

The Mike Pence tempest in a teapot reminded me of the following quote from The Merchant of Venice by William Shakespeare: I will buy with you, sell with you, talk with you, walk with you, and so following, but I will not eat with you, drink with you, nor pray with you. Leftists, at least this week, regard that as "creepy."

### A Suggestion for President Trump

It's time for voting rights for chickens.
• Being a chicken is socially conditioned and is clearly the fault of either capitalism or neoliberalism (depending on what you're against this week).
• Even if you disagree, feathered people cannot help being chickens and should not be kept from the voting booth.
• On the other hand, if they can help being chickens, that means being a chicken is a voluntarily-chosen lifestyle that should not be penalized.
• You can even make the case that chickens have superior political skills.

Besides, this will help the Republicans. According to the latest research, conservative politicians are more attractive and chickens prefer more attractive humans. If you put those together, it is easy to see that voting rights for chickens will make the Republicans win in a landslide. It's something that should be done today.

### Non-Sequitur of the Year

According to PoliticusUSA: "The days of 'trust-me' science are over," said anti-science Congressman Lamar Smith, who serves as chairman of the Science Committee, according to The Hill. "In our modern information age, federal regulations should be based only on data that is available for every American to see and that can be subjected to independent review." In other words, if Republicans don't like the results of scientific studies and data, they should have the freedom to ignore it and implement policy accordingly.

I don't see how you can get that from an assertion that science should be more open. I already know what "non-sequitur" means. I do not require a concrete example.

### A Paranoid Theory I Haven't Seen Anywhere Yet

What if the Left deliberately created a drug "epidemic" a half century ago to produce a health crisis when the druggies got old? That way, they could blame the bad health outcomes and runaway health-care spending on capitalism. On the other hand, it doesn't work on everyone. Mexican Americans have longer life expectancies than either Mexicans or white Americans. Asian Americans have longer life expectancies than either Asians in Singapore or white Americans. On the gripping hand, there was a crime epidemic that started about the same time that we got over. Maybe we'll develop antibodies to opioids.

A quarter century ago, the geographic arguments for gun control ("look at how much better Europe handles crime!") seemed as irrefutable as the geographic arguments for government-run health care do today. That has changed … which has not yet percolated down to self-congratulatory people.

### Can I Get a Refund on Dilbert Books?

Then science ignores the models that are too far off from observed temperatures as we proceed into the future and check the predictions against reality. Sometimes scientists also "tune" the models to hindcast better, meaning tweaking assumptions. As a non-scientist, I can't judge whether or not the tuning and tweaking are valid from a scientific perspective. But I can judge that this pattern is identical to known scams. I described the known scams in this post. And to my skeptical mind, it sounds fishy that there are dozens or more different climate models that are getting tuned to match observations. That doesn't sound credible, even if it is logically and scientifically sound. I am not qualified to judge the logic or science. But I am left wondering why it has to sound exactly like a hoax if it isn't one. Was there not a credible-sounding way to make the case?
Personally, I would find it compelling if science settled on one climate model (not dozens) and reported that it was accurate (enough), based on temperature observations, for the next five years. If they pull that off, they have my attention. But they will never convince me with multiple models. That just isn’t possible.

First, the known scams are a matter of separate isolated predictions mailed separately (which may have been what happened here and here and here) instead of aggregated predictions gathered together in an easily checked (and copied) place. Second, the climate predictions resemble hurricane predictions, which also have the results of numerous models. We don't see people picking the best hurricane prediction and saying “WE WERE RIGHT!” (We do see a pattern of selecting accurate predictions and ignoring inaccurate ones in politics.) Third, picking one best model would not alleviate the uncertainty; it would merely hide it. Real science has error estimates. We don't see that in scams. We do see that in the climate models (but not in people whining about “climate denial”).

### “Warrior” and Folk Economics

My fellow SF fans will be familiar with the story “Warrior” by Gordon Dickson. In it, the policemen thought that a professional military strategist would be helpless when dealing with organized crime. After all, soldiers wear uniforms, carry guns, and are found in a crowd of other soldiers. Without those elements, a soldier would be helpless. That turned out not to be the case.

We see a similar illusion in folk economics. In folk economics, a capitalist is someone in an expensive suit at a desk in a corner office instead of someone with a 401(k). In folk economics, decisions aren't made by consumers, they're made by capitalists. That's why we see people flying around the world warning of the dangers of fossil fuel use without recognizing the irony. That even explains why some people treat marketing expenses for pharmaceuticals as a type of profit. (The military equivalent of that would be someone who “saluted a Good Humor man, an usher, and a nun.”)

### One Does Not Know How to Begin

According to Peter Frase, the Four Futures are:

1. Communism ("equality and abundance")
2. Rentism ("hierarchy and abundance")
3. Socialism ("equality and scarcity")
4. Exterminism ("hierarchy and scarcity")

How's that again? There are two possible confusions here:

• A possible confusion between effects and causes: If we have both equality and abundance, that is likely to produce the society the “communism” label describes.
• A possible confusion between two kinds of hierarchy: There is a difference between a “hierarchy” produced by people of differing abilities and a hierarchy produced by people of differing amounts of pull.

I specified “possible” above because I have not yet read the book in question. Maybe the author drew those distinctions.

### Daylight Savings Time Might Be a Violation of the Ninth Amendment

Daylight Savings Time may be a violation of the Ninth Amendment. It was intended to ensure that people got up earlier in the Spring and Summer. On the other hand, in the debates on the Bill of Rights, Theodore Sedgwick said:

if the committee were governed by that general principle, they might have gone into a very lengthy enumeration of rights; they might have declared that a man should have the right to wear his hat if he pleased; that he might get up when he pleased, and go to bed when he thought proper.
The above reasoning, including the doctrine that personal schedules should not be a government matter, was part of the basis for the Ninth Amendment. Government time? No thanks.

### Cool!

This cryonics stuff might possibly work! On the other hand, according to Cities in Flight by James Blish, anti-agathics are supposed to be invented next year…

### Slogans for the March for Science

A few slogans that might be appropriate at the March for Science:

• WE WANT ERROR BARS AND CONTROL GROUPS!
• FOR EVERY ACTION THERE IS AN EQUAL AND OPPOSITE REACTION!
• A SYSTEM UNDER STRESS WILL CHANGE IN A WAY THAT LESSENS THE STRESS!
• THE REACTION MOST LIKELY TO OCCUR IS THE ONE THAT RELEASES THE MOST HEAT!
• YES NUKES!
• GMOS FOR EVERYONE!

### Identifying Science-Curious but Science-Ignorant People

A few questions that will be answered one way by people who are science-curious but science-ignorant and the opposite way by science-knowledgeable people:

1. If you're on a ship crossing the equator and you're watching water run down the drain, will you see the direction of swirl reversed when you cross the equator?
2. Is plutonium the deadliest toxin on Earth?
3. Did Christopher Columbus discover the world is round?
4. Do human beings use only 10% of their brains?
5. Does the Moon have a dark side?

### A Sanctuary Suggestion

It might make sense for a right-leaning county in a left-leaning state with harsh gun laws to declare itself to be a sanctuary county for gun owners. This will have several beneficial effects:

1. It will help defend one of the more untrendy civil liberties.
2. It just might give the right wing a strange, new respect for the sanctuary concept.
3. It will provoke the wrong side of the left to claim that criminals will move there. That, in turn, might help discredit the similar predictions on the right for the immigrant sanctuaries.

### Four-Dimensional Undecidable “Elementary” Geometry

A few years ago, I realized (with another update here) that the elementary geometry of points, lines, and circles becomes undecidable when it includes screws or spirals. You can think of lines and circles as the one-dimensional connected uniform curves in a two-dimensional Euclidean space and you can think of spirals, lines, and circles as the one-dimensional connected uniform curves in a three-dimensional Euclidean space. I'm still not sure of what a complete set of such curves in a four-dimensional space would be like, but it would include some very strange objects. For example, consider the curve parameterized by $$(w,x,y,z)=(\sin t,\cos t,\sin \sqrt{2}t,\cos \sqrt{2}t)$$ where $$t\in(-\infty,\infty)$$. Since $$\sqrt{2}$$ is irrational, it is easy to see that this is a dense subset of the Clifford torus that's the product of two unit circles centered at the origin (in the $$(w,x)$$ and $$(y,z)$$ planes). Unlike the similar curves in two- and three-dimensional space, this isn't closed.

Question: Would it make more sense to focus on closed, uniform, connected subsets of Euclidean spaces? In two dimensions that would include the empty set, points, lines, circles, and the entire plane. In three dimensions that would include the empty set, points, lines, circles, helices, planes, spheres, cylinders, and the entire space. In four dimensions …

### Stupid Petitions Are Not Limited to the Left

According to a recent petition:

We demand that J.K. Rowling grants no less than 18 refugees shelter in her mansions for at least 8 years.
She rejects safe immigration, which is why we also demand, that there will be no additional vetting process for these refugees. Her virtue-signaling stems from ignorance, and the 100% effective cure of it will be this drastic change of perspective. To make this group of refugees representative of the situation in Europe, we also demand that the group consists of 14 men and 4 women, since over 75% of the millions of refugees are male.

First, if you sound like this, you are doing conservatism wrong:

UPDATE: you can drop off an unwanted baby at a Hobby Lobby and they'll raise it

Second, why are they assuming that letting refugees in means that the State must build homes for them? When someone moves from city X to city Y in the same country, we don't normally assume that the government of city Y must build the homes. Third, if the government insists on building homes for newcomers, there might be problems with it irrespective of whether or not there are refugees. Keeping refugees out because the government is spendthrift is like getting a hangover from scotch-and-soda and, as a result, swearing off soda. Finally, if you believe that Americans/British/whoever have the right to rent to refugees, does that imply that you have a moral obligation to do so yourself? If you believe that Americans/British/whoever have the right to smoke dope, does that imply that you have a moral obligation to do so yourself?

### Which Trump Did We Elect? An Update

The test case I mentioned here might be happening. I still don't know which Trump we elected, but it's clear that the commenters at Instapundit voted for the bad Trump. A brief summary of the comments there: You know everything we said about the RFRA and religious freedom? IT WAS BULSHYTT!

### The Reaction to Betsy DeVos Might Explain the Trump Movement

Some of the people reacting to the appointment of Betsy DeVos as Secretary of Education are planning to homeschool their children, despite the fact that she is a proponent of more homeschooling. Apparently, they have been so brainwashed by standard opinion into believing that conservatives are authoritarian that they plan to get back at anti-authoritarian conservatives by doing something anti-authoritarian.

Question: What happens when someone who insists on being authoritarian believes the same thing? Would that produce someone who defends capitalism by limiting imports and defends American ideals by closing borders?

### A Stalin Quote and p-adic Numbers

According to Joseph Stalin:

A single death is a tragedy; a million deaths is a statistic.

He was, of course, using $$p$$-adic numbers. For example: $$\left\vert1\right\vert_2=1>0.015625=\left\vert1000000\right\vert_2$$.

### To a Large Fraction of Right Wingers

Please note that the Left lost the latest election, probably due to blowback from their overreach. Please also note that the candidate who imitated them ran behind his party. Do you sincerely want to lose?

### Donald Trump and Cleon II

But, what keeps the Emperor strong? What kept Cleon strong? It's obvious. He is strong, because he permits no strong subjects. A courtier who becomes too rich, or a general who becomes too popular is dangerous.

I was reminded, somehow.

### A Few Notes on Trump's Recent Actions on Immigration

The current restrictions on entry from seven nations were based on an Obama-era policy (or would that be a Nyarlathotep-era policy?). You can think of this as Trump's Tariff of Abominations.
The Tariff of Abominations episode was when a populist President enforced a blatant example of overreach by his predecessor. It led to the Nullification Crisis, when South Carolina declared itself a sanctuary state for smugglers. (The use of nullification by a slave state gave nullification a bad name. On the other hand, nullification was also used by free states.)

Speaking of sanctuaries … One of Trump's executive orders is for the Federal government to, “on a weekly basis, make public a comprehensive list of criminal actions committed by aliens.” Will there also be a weekly report of crimes committed by citizens? (It's not science unless there's a control group.)

The same order will also cut off funds to sanctuary cities. I have a better idea: Let's stop subsidies to state and local governments in general. Such subsidies are a matter of taking money out of local economies, sending it for a wild night on the town, and giving some of it back.

### Is Lying a Signaling Mechanism?

According to Tyler Cowen:

By requiring subordinates to speak untruths, a leader can undercut their independent standing, including their standing with the public, with the media and with other members of the administration. That makes those individuals grow more dependent on the leader and less likely to mount independent rebellions against the structure of command. Promoting such chains of lies is a classic tactic when a leader distrusts his subordinates and expects to continue to distrust them in the future.

Another reason for promoting lying is what economists sometimes call loyalty filters. If you want to ascertain if someone is truly loyal to you, ask them to do something outrageous or stupid. If they balk, then you know right away they aren’t fully with you. That too is a sign of incipient mistrust within the ruling clique, and it is part of the same worldview that leads Trump to rely so heavily on family members.

This works in more than one direction. If telling obvious lies on behalf of someone else is a loyalty signal, Trump is signalling his loyalty to his voters. But wait, there's more:

Imagine, for instance, that mistruths come in different forms: higher-status mistruths and lower-status mistruths. The high-status mistruths are like those we associate with ambassadors and diplomats. The ambassador is reluctant to tell a refutable, flat-out lie of the sort that could cause embarrassment, but if all you ever heard were the proclamations of the ambassador, you wouldn’t have a good grasp of the realities of the situation. … Trump specializes in lower-status lies, typically more of the bald-faced sort, namely stating “x” when obviously “not x” is the case. They are proclamations of power, and signals that the opinions of mainstream media and political opponents will be disregarded.

In terms science types might find familiar: High-status lies are not even wrong; low-status lies are wrong.

There's another advantage of lying: You can tell the truth and not be believed, thereby discrediting critics when the truth becomes obvious. You might get the Other Side to force middle-of-the-road people saying things opposed to the dogma of the Other Side into your coalition. You might even be able to get critics to refuse to believe their own allies, when those allies think for themselves.

On the other hand, this might turn into the new Dunning–Kruger effect. It's an all-purpose way to explain away anybody who disagrees with you without having to actually engage with their arguments.
The Dunning–Kruger effect (that unskilled people are often unaware of it) is commonly cited in debates between two groups of arrogant fools, each claiming that the other side is unskilled and unaware of it. We might see a variety of ideologues claiming that the Other Side is lying to signal loyalty. (Devising examples will be left as an exercise for the reader.)

### Oops!

In my calculation of the EmDrive acceleration, I skipped a decimal point. The acceleration should be $$5.16\times10^{-3}~\text{m}/\text{s}^2$$. That will get you from Earth to Mars in 2–3 months (a back-of-the-envelope check appears at the end of this page). If it works, it might be worth doing.

### I Have Some Good News and Some Bad News

The good news: The right wing is getting saner, at least for now. They're blaming everything on liberals instead of on foreigners.

The bad news: The left wing is not getting any more skeptical of government. Instead of uncritically trusting politicians, they uncritically trust bureaucrats.

### What a Claim Sounds Like vs. What It Is

It's common for people to make a claim, and back it up with evidence, that sounds like something else with much less evidence. For example:

1. The claim that loose gun laws are correlated with “gun-related deaths” sounds like a claim that loose gun laws are correlated with gun crime, but it also includes suicides by gun.
2. There's reason to believe social conservatism is correlated with “teenage pregnancy.” This might refer to unwed 13-year-olds but it also includes married 19-year-olds.
3. “Renewable-energy capacity” is growing rapidly. That's the peak generation when the sun is shining and the wind is blowing at the right speed. The actual energy generated is much less.
4. “Climate change” might refer to global warming … or global cooling … or droughts … or floods or …

When you see claims like the above, please do not respond to them with anecdotes that might point in the other direction; there are much better replies.
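Here is that back-of-the-envelope check of the Mars transit time from the “Oops!” item above. It's a rough sketch, not anything from the original post: the distance is my assumed typical close-approach figure, and I assume a constant-thrust accelerate-to-midpoint, flip, decelerate profile.

```python
import math

a = 5.16e-3   # corrected EmDrive acceleration (m/s^2), from the post above
d = 7.8e10    # assumed Earth-Mars distance near a close approach (m)

# Accelerate to the midpoint, flip, then decelerate: total time t = 2*sqrt(d/a).
t = 2 * math.sqrt(d / a)
print(f"{t / 86400:.0f} days")  # ~90 days, i.e. about 3 months
```

With those assumptions the trip comes out to roughly 90 days, consistent with the 2–3 month figure.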
2017-05-28 08:23:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3201455771923065, "perplexity": 2828.58555151871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609610.87/warc/CC-MAIN-20170528082102-20170528102102-00087.warc.gz"}
https://socratic.org/questions/how-do-i-find-the-equation-of-a-linear-function-that-passes-through-1-7-and-2-9
# How do I find the equation of a linear function that passes through (1, 7) and (2, 9)?

Sep 17, 2014

For this type of problem we have 3 distinct steps to follow.

1) Find the slope using the formula $m = \frac{\left({y}_{2} - {y}_{1}\right)}{\left({x}_{2} - {x}_{1}\right)}$

2) Find the y-intercept, $b$, by substituting the slope, $m$, from STEP 1 and the $x$ and $y$ values from one of the given points into the slope-intercept formula, $y = m x + b$

3) Substitute the slope, $m$, and the y-intercept, $b$, back into the slope-intercept formula, $y = m x + b$.

STEP 1

${x}_{1} = 1$, ${y}_{1} = 7$, ${x}_{2} = 2$, ${y}_{2} = 9$

$m = \frac{\left({y}_{2} - {y}_{1}\right)}{\left({x}_{2} - {x}_{1}\right)} = \frac{\left(9 - 7\right)}{\left(2 - 1\right)} = \frac{2}{1} = 2$

STEP 2

$y = m x + b$

I will use the $x$ and $y$ values from the point $\left(1 , 7\right)$

$7 = \left(2\right) \left(1\right) + b$

$7 = 2 + b$

$5 = b$

STEP 3

$y = m x + b$

$y = 2 x + 5 \leftarrow \text{SOLUTION}$
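As a quick sanity check on the three steps, here is a short Python sketch (my own illustration, not part of the original answer) that computes the slope and intercept from the two given points:

```python
# Sketch: find y = m*x + b through two points, following the three steps above.
def line_through(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # STEP 1: slope
    b = y1 - m * x1             # STEP 2: intercept from one point
    return m, b                 # STEP 3: y = m*x + b

m, b = line_through((1, 7), (2, 9))
print(f"y = {m:g}x + {b:g}")    # -> y = 2x + 5
```

Running it prints y = 2x + 5, matching STEP 3.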
2020-10-24 15:05:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001750946044922, "perplexity": 281.0061470817524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107883636.39/warc/CC-MAIN-20201024135444-20201024165444-00030.warc.gz"}
https://openstax.org/books/university-physics-volume-1/pages/16-4-energy-and-power-of-a-wave
University Physics Volume 1

# 16.4 Energy and Power of a Wave

### Learning Objectives

By the end of this section, you will be able to:

• Explain how energy travels with a pulse or wave
• Describe, using a mathematical expression, how the energy in a wave depends on the amplitude of the wave

All waves carry energy, and sometimes this can be directly observed. Earthquakes can shake whole cities to the ground, performing the work of thousands of wrecking balls (Figure 16.15). Loud sounds can pulverize nerve cells in the inner ear, causing permanent hearing loss. Ultrasound is used for deep-heat treatment of muscle strains. A laser beam can burn away a malignancy. Water waves chew up beaches.

Figure 16.15 The destructive effect of an earthquake is observable evidence of the energy carried in these waves. The Richter scale rating of earthquakes is a logarithmic scale related to both their amplitude and the energy they carry.

In this section, we examine the quantitative expression of energy in waves. This will be of fundamental importance in later discussions of waves, from sound to light to quantum mechanics.

### Energy in Waves

The amount of energy in a wave is related to its amplitude and its frequency. Large-amplitude earthquakes produce large ground displacements. Loud sounds have high-pressure amplitudes and come from larger-amplitude source vibrations than soft sounds. Large ocean breakers churn up the shore more than small ones. Consider the example of the seagull and the water wave earlier in the chapter (Figure 16.3). Work is done on the seagull by the wave as the seagull is moved up, changing its potential energy. The larger the amplitude, the higher the seagull is lifted by the wave and the larger the change in potential energy.

The energy of the wave depends on both the amplitude and the frequency. If the energy of each wavelength is considered to be a discrete packet of energy, a high-frequency wave will deliver more of these packets per unit time than a low-frequency wave. We will see that the average rate of energy transfer in mechanical waves is proportional to both the square of the amplitude and the square of the frequency. If two mechanical waves have equal amplitudes, but one wave has a frequency equal to twice the frequency of the other, the higher-frequency wave will have a rate of energy transfer a factor of four times as great as the rate of energy transfer of the lower-frequency wave. It should be noted that although the rate of energy transport is proportional to both the square of the amplitude and square of the frequency in mechanical waves, the rate of energy transfer in electromagnetic waves is proportional to the square of the amplitude, but independent of the frequency.

### Power in Waves

Consider a sinusoidal wave on a string that is produced by a string vibrator, as shown in Figure 16.16. The string vibrator is a device that vibrates a rod up and down. A string of uniform linear mass density is attached to the rod, and the rod oscillates the string, producing a sinusoidal wave. The rod does work on the string, producing energy that propagates along the string. Consider a mass element of the string with a mass $\Delta m$, as seen in Figure 16.16. As the energy propagates along the string, each mass element of the string is driven up and down at the same frequency as the wave. Each mass element of the string can be modeled as a simple harmonic oscillator.
Since the string has a constant linear density $\mu = \frac{\Delta m}{\Delta x}$, each mass element of the string has the mass $\Delta m = \mu\,\Delta x$.

Figure 16.16 A string vibrator is a device that vibrates a rod. A string is attached to the rod, and the rod does work on the string, driving the string up and down. This produces a sinusoidal wave in the string, which moves with a wave velocity v. The wave speed depends on the tension in the string and the linear mass density of the string. A section of the string with mass $\Delta m$ oscillates at the same frequency as the wave.

The total mechanical energy of the wave is the sum of its kinetic energy and potential energy. The kinetic energy $K = \frac{1}{2}mv^2$ of each mass element of the string of length $\Delta x$ is $\Delta K = \frac{1}{2}(\Delta m)v_y^2$, as the mass element oscillates perpendicular to the direction of the motion of the wave. Using the constant linear mass density, the kinetic energy of each mass element of the string with length $\Delta x$ is

$$\Delta K = \frac{1}{2}(\mu\,\Delta x)v_y^2.$$

A differential equation can be formed by letting the length of the mass element of the string approach zero,

$$dK = \lim_{\Delta x \to 0} \frac{1}{2}(\mu\,\Delta x)v_y^2 = \frac{1}{2}(\mu\,dx)v_y^2.$$

Since the wave is a sinusoidal wave with an angular frequency $\omega$, the position of each mass element may be modeled as $y(x,t) = A\sin(kx - \omega t)$. Each mass element of the string oscillates with a velocity

$$v_y = \frac{\partial y(x,t)}{\partial t} = -A\omega\cos(kx - \omega t).$$

The kinetic energy of each mass element of the string becomes

$$dK = \frac{1}{2}(\mu\,dx)\bigl(-A\omega\cos(kx - \omega t)\bigr)^2 = \frac{1}{2}(\mu\,dx)A^2\omega^2\cos^2(kx - \omega t).$$

The wave can be very long, consisting of many wavelengths. To standardize the energy, consider the kinetic energy associated with a wavelength of the wave. This kinetic energy can be integrated over the wavelength to find the energy associated with each wavelength of the wave:

$$dK = \frac{1}{2}(\mu\,dx)A^2\omega^2\cos^2(kx),$$
$$\int_0^{K_\lambda} dK = \int_0^{\lambda} \frac{1}{2}\mu A^2\omega^2\cos^2(kx)\,dx = \frac{1}{2}\mu A^2\omega^2 \int_0^{\lambda} \cos^2(kx)\,dx,$$
$$K_\lambda = \frac{1}{2}\mu A^2\omega^2 \left[\frac{1}{2}x + \frac{1}{4k}\sin(2kx)\right]_0^{\lambda} = \frac{1}{2}\mu A^2\omega^2 \left[\frac{1}{2}\lambda + \frac{1}{4k}\sin(2k\lambda) - \frac{1}{4k}\sin(0)\right],$$
$$K_\lambda = \frac{1}{4}\mu A^2\omega^2\lambda.$$

There is also potential energy associated with the wave. Much like the mass oscillating on a spring, there is a conservative restoring force that, when the mass element is displaced from the equilibrium position, drives the mass element back to the equilibrium position. The potential energy of the mass element can be found by considering the linear restoring force of the string. In Oscillations, we saw that the potential energy stored in a spring with a linear restoring force is equal to $U = \frac{1}{2}k_s x^2$, where the equilibrium position is defined as $x = 0.00\ \text{m}$. When a mass attached to the spring oscillates in simple harmonic motion, the angular frequency is equal to $\omega = \sqrt{\frac{k_s}{m}}$. As each mass element oscillates in simple harmonic motion, the spring constant is equal to $k_s = \Delta m\,\omega^2$. The potential energy of the mass element is equal to

$$\Delta U = \frac{1}{2}k_s x^2 = \frac{1}{2}\Delta m\,\omega^2 x^2.$$

Note that $k_s$ is the spring constant and not the wave number $k = \frac{2\pi}{\lambda}$. This equation can be used to find the energy over a wavelength.
Integrating over the wavelength, we can compute the potential energy over a wavelength:

$$dU = \frac{1}{2}k_s x^2 = \frac{1}{2}\mu\,\omega^2 x^2\,dx,$$
$$U_\lambda = \frac{1}{2}\mu\,\omega^2 A^2 \int_0^{\lambda} \sin^2(kx)\,dx = \frac{1}{4}\mu A^2\omega^2\lambda.$$

The potential energy associated with a wavelength of the wave is equal to the kinetic energy associated with a wavelength. The total energy associated with a wavelength is the sum of the potential energy and the kinetic energy:

$$E_\lambda = U_\lambda + K_\lambda = \frac{1}{4}\mu A^2\omega^2\lambda + \frac{1}{4}\mu A^2\omega^2\lambda = \frac{1}{2}\mu A^2\omega^2\lambda.$$

The time-averaged power of a sinusoidal mechanical wave, which is the average rate of energy transfer associated with a wave as it passes a point, can be found by taking the total energy associated with the wave divided by the time it takes to transfer the energy. If the velocity of the sinusoidal wave is constant, the time for one wavelength to pass by a point is equal to the period of the wave, which is also constant. For a sinusoidal mechanical wave, the time-averaged power is therefore the energy associated with a wavelength divided by the period of the wave. The wavelength of the wave divided by the period is equal to the velocity of the wave,

$$P_{\text{ave}} = \frac{E_\lambda}{T} = \frac{1}{2}\mu A^2\omega^2\frac{\lambda}{T} = \frac{1}{2}\mu A^2\omega^2 v. \qquad (16.10)$$

Note that this equation for the time-averaged power of a sinusoidal mechanical wave shows that the power is proportional to the square of the amplitude of the wave and to the square of the angular frequency of the wave. Recall that the angular frequency is equal to $\omega = 2\pi f$, so the power of a mechanical wave is proportional to the square of the amplitude and the square of the frequency of the wave.

### Example 16.6

#### Power Supplied by a String Vibrator

Consider a two-meter-long string with a mass of 70.00 g attached to a string vibrator as illustrated in Figure 16.16. The tension in the string is 90.0 N. When the string vibrator is turned on, it oscillates with a frequency of 60 Hz and produces a sinusoidal wave on the string with an amplitude of 4.00 cm and a constant wave speed. What is the time-averaged power supplied to the wave by the string vibrator?

#### Strategy

The power supplied to the wave should equal the time-averaged power of the wave on the string. We know the mass of the string $(m_s)$, the length of the string $(L_s)$, and the tension $(F_T)$ in the string. The speed of the wave on the string can be derived from the linear mass density and the tension. The string oscillates with the same frequency as the string vibrator, from which we can find the angular frequency.

#### Solution

1. Begin with the equation of the time-averaged power of a sinusoidal wave on a string: $P = \frac{1}{2}\mu A^2\omega^2 v.$ The amplitude is given, so we need to calculate the linear mass density of the string, the angular frequency of the wave on the string, and the speed of the wave on the string.
2. We need to calculate the linear density to find the wave speed: $\mu = \frac{m_s}{L_s} = \frac{0.070\ \text{kg}}{2.00\ \text{m}} = 0.035\ \text{kg/m}.$
3. The wave speed can be found using the linear mass density and the tension of the string: $v = \sqrt{\frac{F_T}{\mu}} = \sqrt{\frac{90.00\ \text{N}}{0.035\ \text{kg/m}}} = 50.71\ \text{m/s}.$
4. The angular frequency can be found from the frequency: $\omega = 2\pi f = 2\pi(60\ \text{s}^{-1}) = 376.80\ \text{s}^{-1}.$
5. Calculate the time-averaged power: $P = \frac{1}{2}\mu A^2\omega^2 v = \frac{1}{2}\left(0.035\ \tfrac{\text{kg}}{\text{m}}\right)(0.040\ \text{m})^2(376.80\ \text{s}^{-1})^2\left(50.71\ \tfrac{\text{m}}{\text{s}}\right) = 201.59\ \text{W}.$

#### Significance

The time-averaged power of a sinusoidal wave is proportional to the square of the amplitude of the wave and the square of the angular frequency of the wave. This is true for most mechanical waves. If either the angular frequency or the amplitude of the wave were doubled, the power would increase by a factor of four. The time-averaged power of the wave on a string is also proportional to the speed of the sinusoidal wave on the string. If the speed were doubled, by increasing the tension by a factor of four, the power would also be doubled. Is the time-averaged power of a sinusoidal wave on a string proportional to the linear density of the string?

The equations for the energy of the wave and the time-averaged power were derived for a sinusoidal wave on a string. In general, the energy of a mechanical wave and the power are proportional to the amplitude squared and to the angular frequency squared (and therefore the frequency squared).

Another important characteristic of waves is the intensity of the waves. Waves can also be concentrated or spread out. Waves from an earthquake, for example, spread out over a larger area as they move away from a source, so they do less damage the farther they get from the source. Changing the area the waves cover has important effects. All these pertinent factors are included in the definition of intensity (I) as power per unit area:

$$I = \frac{P}{A}, \qquad (16.11)$$

where P is the power carried by the wave through area A. The definition of intensity is valid for any energy in transit, including that carried by waves. The SI unit for intensity is watts per square meter (W/m²).

Many waves are spherical waves that move out from a source as a sphere. For example, a sound speaker mounted on a post above the ground may produce sound waves that move away from the source as a spherical wave. Sound waves are discussed in more detail in the next chapter, but in general, the farther you are from the speaker, the less intense the sound you hear. As a spherical wave moves out from a source, the surface area of the wave increases as the radius increases $(A = 4\pi r^2)$. The intensity for a spherical wave is therefore

$$I = \frac{P}{4\pi r^2}. \qquad (16.12)$$

If there are no dissipative forces, the energy will remain constant as the spherical wave moves away from the source, but the intensity will decrease as the surface area increases.

In the case of the two-dimensional circular wave, the wave moves out, increasing the circumference of the wave as the radius of the circle increases. If you toss a pebble in a pond, the surface ripple moves out as a circular wave. As the ripple moves away from the source, the amplitude decreases. The energy of the wave spreads around a larger circumference and the intensity decreases in proportion to $\frac{1}{r}$. Since intensity is proportional to the amplitude squared, the amplitude of a circular wave falls off as $\frac{1}{\sqrt{r}}$, whereas the amplitude of a spherical wave falls off as $\frac{1}{r}$.
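For readers who want to reproduce the arithmetic, here is a minimal Python sketch of Example 16.6 and of Equation 16.12. The variable names and the 10 m sample radius are my own choices, not from the text:

```python
import math

# Example 16.6: time-averaged power P = (1/2) * mu * A^2 * omega^2 * v.
m_s, L_s = 0.070, 2.00      # string mass (kg) and length (m)
F_T = 90.0                  # tension (N)
f = 60.0                    # driving frequency (Hz)
A = 0.040                   # amplitude (m)

mu = m_s / L_s              # linear mass density (kg/m)
v = math.sqrt(F_T / mu)     # wave speed on the string (m/s)
omega = 2 * math.pi * f     # angular frequency (rad/s)
P = 0.5 * mu * A**2 * omega**2 * v
print(f"mu = {mu:.3f} kg/m, v = {v:.2f} m/s, P = {P:.1f} W")

# Equation 16.12: intensity of a spherical wave of power P at radius r.
def spherical_intensity(P, r):
    return P / (4 * math.pi * r**2)

print(f"I at r = 10 m: {spherical_intensity(P, 10.0):.3f} W/m^2")
```

The script prints P ≈ 201.8 W; the text's 201.59 W comes from using the rounded ω = 376.80 s⁻¹ rather than 2π·60 ≈ 376.99 s⁻¹.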
2022-01-23 09:11:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 38, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6899259686470032, "perplexity": 191.31391235186982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304217.55/warc/CC-MAIN-20220123081226-20220123111226-00406.warc.gz"}
http://physics.stackexchange.com/questions/83224/spinor-representation-restricted-under-subgroup-a-formula-from-polchinski?answertab=oldest
# Spinor representation restricted under subgroup, a formula from Polchinski

The question is about the spinor representation decomposed under subgroups. It's a common technique in string theory when parts of the dimensions are compactified and ignored, and we are only interested in the remaining sub-symmetry. I'm learning it from appendix B in volume II of Polchinski's big book.

For a particular decomposition, $SO(2k+1,1) \rightarrow SO(2l+1,1) \times SO(2k-2l)$ (B.1.43), the Weyl spinors decompose as in formula (B.1.44): $2^k \rightarrow (2^l, 2^{k-l-1})+(2'^l,2'^{k-l-1})$ and $2'^k \rightarrow (2'^l, 2^{k-l-1})+(2^l,2'^{k-l-1})$, where $2^k$ and $2'^k$ are the Weyl representations of the Lorentz group $SO(2k+1,1)$ with chirality +1 and -1 respectively. A specific case is $SO(9,1)\rightarrow SO(5,1)\times SO(4)$ with decomposition $16 \rightarrow (4,2)+(4',2')$, which appears at (B.6.3).

My question is the apparent contradiction with minimal representations. By checking Majorana and Weyl conditions, the minimal spinors for d=6 and d=4 have 8 and 4 components respectively (Ref. table B.1 Polchinski). So how can you find a $(4,2)$ representation for $SO(5,1)\times SO(4)$?

Also, I'm very interested in the proof of (B.1.44) under (B.1.43). How is it proved by comparing the eigenvalues of $\Gamma^{+}\Gamma^{-}-\frac{1}{2}$ as claimed by Polchinski?

-
This is the most natural way because it's based on complex numbers and complex numbers are more fundamental than the real numbers or quaternions. (Fundamental theorem of algebra and other reasons.) Real and quaternionic representations are classified as complex representations with the special freedom to "conjugate" coordinates, i.e. with some special "antilinear structure map $j$" that commutes with the action of the group $g(v)$. For real reps, $j^2=+1$, for pseudoreal ones, $j^2=-1$ and $j$ may be literally interpreted as the multiplication by the $j$ quaternion from the proper side. Around B.1.43 and B.1.44, Joe simply tells you to diagonalize the representations on both sides and list possible eigenvalues of all the operators $S_a$ - look at the basis of the representation containing all the shared eigenstates of all the $S_a$ operators. All these eigenvalues of $S_a$ are $\pm 1/2$ - the collection is known as the weight. Whether the number of the negative $-1/2$ eigenvalues is even or odd decides about the chirality of the spinor. So the left hand side of B.1.44 are the collections of weights (eigenvalues under $S_a$ operators) that are $\pm 1/2$ each and the number of the negative eigenvalues is even (a) or odd (b). They may be obtained as tensor products of collections of smaller sets for which the chiralities are even for the left group and even for the right group or odd for the left group and odd for the right group (a), or even-odd or odd-even (b). This is why the irreducible rep of the larger group decomposes into the direct sum of two irreducible reps of the factor groups and each of the two terms in the direct sum is a tensor product of two Weyl spinors. -
2015-05-28 14:20:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8713274002075195, "perplexity": 374.1946519875622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929418.92/warc/CC-MAIN-20150521113209-00164-ip-10-180-206-219.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:1327.14217
# zbMATH — the first resource for mathematics Spherical actions on flag varieties. (English. Russian original) Zbl 1327.14217 Sb. Math. 205, No. 9, 1223-1263 (2014); translation from Mat. Sb. 205, No. 9, 3-48 (2014). Let $$G$$ be a connected reductive linear algebraic group over an algebraically closed field of characteristic 0, and let $$X$$ be a generalized flag variety of $$G$$. The authors address the problem of classifying the connected reductive subgroups $$K$$ of $$G$$ acting spherically on $$X$$, that is, such that $$X$$ admits an open orbit for a Borel subgroup of $$K$$. This classification is already known in some special cases. Here they complete the classification when $$G$$ is the general linear group. The key idea of the paper is the following. Let $$X=G/P$$. The action of $$P$$ on its unipotent radical has an open orbit, the Richardson orbit. Let $$O_X$$ be the corresponding adjoint nilpotent $$G$$-orbit, with its symplectic structure. Then $$X$$ is $$K$$-spherical if and only if the action of $$K$$ on $$O_X$$ is coisotropic. Furthermore, if $$X$$ and $$Y$$ are two generalized flag varieties of $$G$$ with $$O_X\subset\overline{O_Y}$$ and $$X$$ is $$K$$-spherical, then $$Y$$ is $$K$$-spherical as well. Reviewer: Paolo Bravi (Roma) ##### MSC: 14M15 Grassmannians, Schubert varieties, flag manifolds 14M27 Compactifications; symmetric and spherical varieties Full Text:
2022-01-23 16:41:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7420116066932678, "perplexity": 214.66758233309815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304287.0/warc/CC-MAIN-20220123141754-20220123171754-00473.warc.gz"}
https://brilliant.org/problems/an-algebra-problem-by-vighnesh-shenoy/
# Reminds of Newton?

Algebra Level 4

$\begin{cases} a+b+c=1 \\ a^2+b^2+c^2=2^2 \\ a^3 + b^3+c^3 = 3^2 \end{cases}$

Given that $$a,b$$ and $$c$$ are complex numbers satisfying the system of equations above, and that the value of $$\dfrac1{a^3} + \dfrac1{b^3} + \dfrac1{c^3}$$ is equal to $$\dfrac pq$$, where $$p$$ and $$q$$ are coprime positive integers, find $$p+q$$.
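For anyone who wants to check their algebra, here is a small numerical sketch (mine, not part of the posted problem) that applies Newton's identities and then evaluates the requested sum from the roots:

```python
import numpy as np

# Newton's identities for three variables:
#   p1 = e1,  p2 = e1*p1 - 2*e2,  p3 = e1*p2 - e2*p1 + 3*e3.
p1, p2, p3 = 1.0, 4.0, 9.0
e1 = p1
e2 = (e1 * p1 - p2) / 2            # = -3/2
e3 = (p3 - e1 * p2 + e2 * p1) / 3  # = 7/6

# a, b, c are the roots of x^3 - e1*x^2 + e2*x - e3 = 0.
roots = np.roots([1.0, -e1, e2, -e3])
s = np.sum(1.0 / roots**3).real    # imaginary parts cancel for this sum
print(s, 1287 / 343)               # both ~3.75219, suggesting p + q = 1630
```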
2017-01-22 20:32:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9211416840553284, "perplexity": 108.85670781809523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00310-ip-10-171-10-70.ec2.internal.warc.gz"}
https://iwaponline.com/wpt/article/10/4/704/20672/Understanding-how-people-innovate-for-emergency
Emergency sanitation technologies make up some of the most significant gaps in the water, sanitation and hygiene sector. Major initiatives to address identified gaps may be characterised as donor-funded, top-down processes driven by international or European-based non-governmental organisations. However, local organisations also innovate. To better understand how local organisations innovate for emergency sanitation, the paper presents a case study of an Indonesian NGO who had developed a toilet for use in emergencies. The NGO developed the toilet by modifying an existing non-emergency toilet. The process was unstructured and informal. When testing ideas, for instance, the NGO used their own methods rather than referring to testing protocols recognised by the industry. The NGO surveyed end users, but the respondents did not come from post-disaster settings. Compared to designs developed through international initiatives, the NGO's design deviated somewhat from internationally recognised standards, for instance, the size of the latrine slab. The paper also discusses differences between the way local and international organisations develop emergency sanitation products. Each approach has advantages and disadvantages in terms of methodology and access to resources and expertise. Therefore, there are potential benefits to the different organisations working more closely.

## INTRODUCTION

Emergency sanitation technologies make up some of the most significant gaps in the water, sanitation and hygiene (WASH) sector, according to an extensive stakeholder consultation comprising beneficiaries, practitioners and donors. The gaps include: latrines in locations where no pits are possible; latrine emptying and desludging; urban alternatives for excreta disposal; final sewage disposal options; and non-toilet options (Bastable & Russell 2013). Major initiatives to address identified gaps are S(p)eedkits (www.speedkits.eu), the Emergency Sanitation Project (www.emergencysanitationproject.org) and the Humanitarian Innovation Fund (www.humanitarianinnovation.org). These initiatives may be characterised as donor-funded, top-down processes driven by international or European-based non-governmental organisations (NGOs). For instance, the S(p)eedkits project comprises humanitarian agencies, NGOs, academia and private sector partners from Italy, Belgium, the Netherlands, Luxembourg, Norway and Germany.

On the other hand, there are local companies or NGOs who innovate for emergency sanitation outside of the aforementioned initiatives. They also play an important role in overall innovation within the sector, although their activities are not as well documented. To better understand how local organisations innovate for emergency sanitation, the authors carried out a case study of an Indonesian NGO who had developed a toilet for use in emergencies.

## METHODOLOGY

A team of two researchers from the Bandung Institute of Technology visited the NGO's head office in April 2014, where the product was developed. A semi-structured interview was conducted with the Research and Development (R&D) manager who led the development of the emergency toilet. Following the interview, the R&D manager guided the researchers through a brief visit of the premises so that the researchers could observe and better understand some of the activities the NGO had undertaken.
Operating since the 1970s, the NGO focuses on appropriate technology in a wide range of sectors, including water and sanitation, water and waste treatment, renewable energy and agriculture and aquaculture. The NGO's R&D activities are carried out through the R&D department. Comprising four persons, the department conducts R&D into various areas such as sanitation and gasification, and includes hardware and software aspects. Correspondingly, the emergency toilet was developed by the R&D department.

## THE DESIGN PROCESS

### Motivations for developing an emergency toilet

As a development NGO, the organisation does not have specific interests towards emergency response in general. However, they may choose to respond to disasters on a case-by-case basis, and have done so before. As a result, the organisation had first decided to design a toilet for non-emergency, rather than emergency, situations in Indonesia. (However, it should be noted that the final product was subsequently also implemented in a post-emergency situation.) The need for ‘hardware’ was identified because the NGO felt that existing sanitation programs, which emphasized ‘software’ components, were not sustainable due to the lack of ‘hardware’ to provide physical access to sanitation. This led to the development of the first product.

Approximately one year into the development of the first product, the NGO felt that the concept was good but a better solution could be created. The first product had been implemented in a post-emergency situation, one month after a disaster. Most had been implemented at a household level, but public toilets were also installed. Therefore, the organisation started developing a ‘toilet-in-the-box’: a toilet for implementation in emergencies that would also be suitable in non-emergency situations. At the time of the interview, the product had been developed but had yet to be implemented in an emergency situation. The organisation was in the process of promoting the product to relevant humanitarian agencies. At that point in time, the NGO also had an idea to develop a similar product that would be suitable for situations with limited water. However, they were uncertain whether this would go ahead.

### Overall design approach

The approach used by the NGO for the emergency toilet was simple: to modify the design of the first product so that the toilet could be packed into one box. On the other hand, the first product had been developed from scratch. Therefore, compared to the first product, the emergency toilet took a significantly shorter time and significantly less money (Table 1).

Table 1. Comparison of the overall design process between the first product and the emergency toilet

| Characteristic | First product | Emergency toilet |
| --- | --- | --- |
| Approach | Designed from scratch | Modified from first product |
| Time taken | 14 months | 5 months |
| Funds spent (approximately) | US$10,000 | US$2,500 |
| Number of people involved | Four persons (i.e. the R&D team) | Four persons (i.e. the R&D team) |
| Decision-making | Final decisions made by R&D manager (interviewee) | Final decisions made by R&D manager (interviewee) |
Furthermore, through the first design process, the NGO had acquired expertise on various technologies and materials. Most of the funds for developing the first product went to sampling equipment and materials. Therefore, the emergency toilet was much cheaper to develop.

#### Design requirements

The initial design requirements were based on the NGO's experience with their sanitation program in a specific area of Indonesia. The team drew on feedback from their colleagues in the field. For example, the R&D department determined that the toilet must have legs because the program was being implemented in a tidal area. Additionally, they decided that the toilet should be ‘as cheap, robust and as beautiful as possible’. Some requirements were presumed, given that the NGO was working within a local culture that they were a part of. For instance, it was assumed that water would be used for anal cleansing. The team determined that 1.1 to 1.2 litres of water would be used for flushing and anal cleansing. Assuming that each person would go to the toilet twice per day, the design was developed based on 15 litres of water and excreta being produced per person per day.

One month after starting the product development process, the NGO decided to conduct a survey to better understand the design requirements. Examples of the needs that the organisation was interested in included space requirements, material, colour and the purpose of using toilets. The department surveyed approximately 500 people by reaching out to colleagues at the office, residents in nearby villages and other people they knew. For example, 98% of the respondents felt uncomfortable using a toilet with a tarpaulin wall. Therefore, alternatives were explored. In addition, most of the respondents felt comfortable in a 1 m by 1 m space. Therefore, the team decided that the toilet should have internal dimensions of at least 1.1 m by 1.1 m. The final, exact dimensions depended on the manufacturer's constraints.

### Choosing materials

During the design process, the team focused on choosing appropriate materials for various components of the toilet, such as the wall, floor and slab (Table 2).

Table 2. Materials chosen and considered for the toilet

| Component | Material chosen | Materials considered |
| --- | --- | --- |
| Roof | Tin | – |
| Frame | Aluminium | Steel, concrete |
| Wall | PVC celuka board | Geomembrane, material commonly used for toilet doors |
| Floor | ‘Special material’ | Fibreglass, concrete, plastic, plastic pallet |
| Slab | Plastic | Ceramic, fibreglass, plastic |
| Septic tank | Fibreglass | – |

No one in the organisation had specific expertise on materials. Hence, ideas were generated based on personal knowledge, by searching the internet, visiting exhibitions and through personal contacts. For instance, the team searched the internet to find ideas for how to make a frame for the toilet. The department also attempted to contact various suppliers, but some companies did not respond, partly because the quantities the team wanted were too small. Alternatives were evaluated by building prototypes and exposing them to the environmental conditions at the NGO's premises, which included sunlight, rain and tannery fumes.
Leaving it outside for two to three months helped the team assess the materials’ durability. The R&D department also asked groups of people to experience the different prototypes. The advice and comments from them helped the team choose the materials.

For example, the NGO had initially felt that aluminium would be too expensive. Hence, the organisation first considered using steel or concrete. However, steel had to be cut and welded and concrete was too heavy. Eventually, the R&D department discovered that aluminium was not as expensive as they expected. Aluminium was also easier to fabricate.

For the wall, the team considered geomembrane-type material and material commonly used for toilet doors. However, geomembrane was too hot, flimsy, and difficult to mount onto a frame. The material common in toilet doors was too thin and difficult to manufacture. Although the manufacturer stated that the material would last four to five months with ultraviolet coating, the coating was too expensive. Therefore, the final design used a polyvinyl chloride (PVC) celuka board. The team also discovered later that the PVC celuka board was able to absorb some sound, which was considered an added benefit because sound had not been explicitly considered during the design process.

Fibreglass had been considered as a material for the floor. However, it was too thin and led to cracks. The team also tried to modify plastic pallets but it required too much effort. Later, a ‘special material’ was identified. The material was cut and placed on the roof. Then, the team's colleagues were asked to try out the floor. The floor was found to flex under the weight, hence a beam was added. The ‘special material’ was chosen because it did not become slippery when wet, minimising accidents and increasing safety.

#### Design of the septic tank

Septic tanks are the most common form of sanitation in Indonesia. 72% of urban households and 48% of rural households had septic tanks in 2012 (Statistics Indonesia 2013). Therefore, the NGO also used a septic tank in its design. For the first product, a standard fibreglass septic tank was used. The NGO analysed a few samples to confirm the treatment efficiency of the system. For the emergency toilet, the septic tank had to be foldable because the standard tank would take up too much space for transportation. A collapsible PVC membrane was chosen. As the team's main concern was cracking, they tested the septic tank by filling it with water for five days, draining the water, folding up the tank and refilling the tank again. The material was deemed suitable after the septic tank remained intact after three months.

#### Other design considerations

Table 3 describes a number of other factors that were considered during the design process.

Table 3. Other design considerations and features

| Design criteria | Design decision or feature |
| --- | --- |
| Selling price | First product: Rp 6.5 million (approx. US$530). Emergency toilet: Rp 9.5 million (approx. US$780) |
| Number of people per latrine | Based on ten persons per household, but capable of serving more persons per latrine |
| Life-span | 15 years, based on the manufacturer's specification for the wall |
| Colour | Bluish-white. From the survey, the NGO had determined that the colour should not be dark. White was easier to maintain and bleach, and made the toilet feel bigger. However, the manufacturer was only able to provide a bluish-white material. |
Anal cleansing Users are expected to provide their own bucket, although the organisation would assist with installing water pipes Cleanliness Provision of a smooth surface Flies and odour No specific design, although the water seal should help to prevent odour. The team reasoned that this would depend on the usage and cleanliness of the toilet Accessibility Provided by modifying the design based on local peoples’ needs and requests Privacy Lockable doors Lighting Not provided, but can be installed independently Weight First product: Approximately 75 kg Emergency toilet: Approximately 135 kg Installation time First product: One day with three persons from the NGO Emergency toilet: Seven hours with three persons from the NGO Standard operating procedures None. Based on the organisation's experience, people did not read instructions. Therefore, the team tried to design a product that was easy to use, operate and maintain. Design criteria Design decision or feature Selling price First product: Rp 6.5 million (Approx. US$530) Emergency toilet: Rp 9.5 million (Approx. US$ 780) Number of people per latrine Based on ten persons per household, but capable of serving more persons per latrine Life-span 15 years, based on the manufacturer's specification for the wall Colour Bluish-white. From the survey, the NGO had determined that the colour should not be dark. White was easier to maintain and bleach, and made the toilet feel bigger. However, the manufacturer was only able to provide a bluish-white material. Anal cleansing Users are expected to provide their own bucket, although the organisation would assist with installing water pipes Cleanliness Provision of a smooth surface Flies and odour No specific design, although the water seal should help to prevent odour. The team reasoned that this would depend on the usage and cleanliness of the toilet Accessibility Provided by modifying the design based on local peoples’ needs and requests Privacy Lockable doors Lighting Not provided, but can be installed independently Weight First product: Approximately 75 kg Emergency toilet: Approximately 135 kg Installation time First product: One day with three persons from the NGO Emergency toilet: Seven hours with three persons from the NGO Standard operating procedures None. Based on the organisation's experience, people did not read instructions. Therefore, the team tried to design a product that was easy to use, operate and maintain. One interesting point to note is that the emergency toilet is sold at approximately US $780. Although the interviewee considered the toilet cheap compared to other products available in the market, Oxfam GB (2013), in its Calls for Proposals for raised latrines, had a target cost of £150 (US$240), approximately one-third of the price of the NGO's product. However, the NGO includes transportation and installation in its selling price, which makes the comparison challenging. #### Challenges faced during the design process The main challenge faced by the NGO was that it was difficult to obtain different types of material, partly because the organisation was based in a city where materials were more difficult to access, and partly because it was difficult to get support from suppliers as the team only wanted small amounts of the material. The NGO was able to overcome this, to an extent, through leveraging personal connections. For example, the material for the floor was identified by a colleague from China. 
The interviewee suggested that there should be a platform to allow people to move forward together, because nobody could understand every aspect of the product development process. However, the interviewee stated that university collaboration in Indonesia was not yet easy.

## DISCUSSION

### Critiques of the design process

The R&D team used a simple design approach by modifying an existing product that was developed for non-emergency situations. The majority of the design was the same, except that all the parts could be packed into a box. While this may be an effective way of innovating in terms of saving time and resources, the approach may restrict the novelty of the final design compared to an approach where the team starts from a blank sheet.

Some design requirements were very specific (for example, the toilet must have legs) while others were more generic (for example, the toilet should be as cheap, robust and as beautiful as possible). Specific requirements are easier to realise and evaluate, but may restrict the exploration of ideas (for example, floating toilets could be a suitable alternative to having legs). On the other hand, if the requirements are not appropriately defined, they are impossible to measure and assess.

Although the NGO did conduct a survey with end users, the respondents were not in a post-disaster setting, where attitudes, practices and expectations may differ from those in a non-disaster setting. Therefore, it may be argued that the results of the survey do not accurately reflect end user needs during an emergency.

Methods of testing (for example, exposure to sunlight, rain and tannery fumes over a period of two to three months) were generally informal and did not conform to industry-level testing protocols. The ASTM (2013) standard D1435–13, for instance, provides standard practices for evaluating the stability of plastic materials when they are exposed outdoors. Therefore, whether the tests done by the NGO actually provided the data they were looking for is debatable.

Overall, the design process appeared to be unstructured and informal. Nonetheless, the NGO did successfully develop an end product, but it remains to be seen whether there will be significant uptake in the market.

### Lessons learned

The NGO only recognised the need for consulting end users one month into the product development process. Based on the survey they conducted, they were able to determine specifications for dimensions, colour and so on. The importance of consulting end users should not be understated.

Initially, the organisation had to expend time and resources to gather data on design requirements, materials and so on. Once they had done this, the development of future products was much easier and cheaper. This highlights the importance of knowledge, experience and networks in the product development process.

### International versus local

The toilet that the NGO produced was very much a conventional toilet design. However, because the organisation worked independently in a local context for a local market, their efforts resulted in a design that deviated somewhat from internationally accepted standards. For example, a search of the equipment catalogues published by Oxfam, the United Nations Children's Fund (UNICEF) and the International Federation of Red Cross and Red Crescent Societies (IFRC) indicates that latrine slabs are typically 1.2 m by 0.8 m or 0.6 m by 0.8 m.
The Sphere Project (2011) notes that communal toilets should be provided with lighting, but the NGO did not provide lighting in its final design. Due to the differences between the product and internationally accepted practices, it is possible that the product would only be able to reach a limited local market. Whether this is a concern of the NGO is unknown. On the other hand, it may also be argued that the final design is more appropriate because it is more aligned to local needs and constraints. As a local NGO, the R&D team had easier access to potential end users than an international NGO. For example, they were able to survey local residents about their preferences, although the respondents were not necessarily disaster-affected.

International projects – S(p)eedkits, the Emergency Sanitation Project and the Humanitarian Innovation Fund – are largely driven by the customers who buy the end products (i.e. aid agencies). For example, the Emergency Sanitation Project is led by Oxfam GB, IFRC and WASTE. On the other hand, in Indonesia there is no demand from the government or other local actors to improve sanitation during emergencies. Therefore, there is an absence of a concerted effort to innovate. Hasaya et al. (2014), for example, highlight issues with toilet coverage, cleanliness and water supply in the camps in Karo district, North Sumatra. Local organisations that want to address such issues would end up working largely on their own.

This has implications for how various organisations and individuals innovate. Companies and organisations based internationally typically rely heavily on aid agencies during the design process. For example, they consult aid agencies to understand design requirements, evaluate potential concepts and test prototypes. As a result, the process may be slow and unpredictable because aid agencies often do not have the capacity to respond to product developers regularly. In contrast, companies and organisations based locally, if they do not have links to international aid agencies, may have to work independently. While this means that they are able to work faster on their own, it also means that they do not have support from potential customers in terms of funding and expertise. It may also be more difficult for them to disseminate the end product.

## CONCLUSIONS

The paper provided an insight into how a local NGO developed an emergency toilet. A simple approach, based on the modification of an existing non-emergency toilet, was used. Overall, the process was unstructured and informal. Parts of the process that could have been strengthened included the formulation of design requirements, the consultation of end users and the application of systematic testing protocols. The importance of involving end users and having access to expertise was also apparent. The paper also highlighted differences between how local and international organisations develop products. Each approach has its advantages and disadvantages in terms of methodology and access to resources and expertise. There are potential benefits to the different organisations working more closely together.

## ACKNOWLEDGEMENTS

The authors would like to extend their sincere thanks to the NGO for their support during the study. This research is funded by the Bill & Melinda Gates Foundation under the framework of the Sanitation for the Urban Poor project (Stimulating Local Innovation on Sanitation for the Urban Poor in Sub-Saharan Africa and South-East Asia).
## REFERENCES

ASTM 2013 D1435–13 Standard Practice for Outdoor Weathering of Plastics. ASTM International, West Conshohocken, PA, www.astm.org.

Bastable A. & Russell L. 2013 Gap analysis in emergency water, sanitation and hygiene promotion. ELRHA – Enhancing Learning and Research for Humanitarian Assistance, HIF – Humanitarian Innovation Fund, Oxfam, UK Aid Network, United Kingdom.

Hasaya H., Thye Y. P., Effendi A. J., Soewondo P. & Brdjanovic D. 2014 Emergency toilets for the people affected by the Mount Sinabung eruptions. In: 37th WEDC International Conference, Hanoi, Vietnam. Water, Engineering and Development Centre (WEDC).

The Sphere Project 2011 Humanitarian Charter and Minimum Standards in Humanitarian Response. Practical Action Publishing, Rugby, United Kingdom.

Statistics Indonesia (Badan Pusat Statistik – BPS), National Population and Family Planning Board (BKKBN), Kementerian Kesehatan (Kemenkes – MOH) & ICF International 2013 Indonesia Demographic and Health Survey 2012. Jakarta, Indonesia.
https://docs.starknet.io/documentation/architecture_and_concepts/State/starknet-state/
# Starknet State

The state of Starknet consists of:

• contract instances: a mapping between addresses (251-bit field elements) and contract states.
• contract classes: a mapping between class hashes and class definitions.

A contract state consists of:

• class_hash (defines the functionality)
• contract storage (a key-value mapping where the keys/values are field elements)
• contract nonce - the number of transactions sent from this contract

Note (nonce): Similarly to Ethereum, the contract's nonce is used to provide replay protection at the protocol level. When a transaction is sent to a given contract, the transaction's nonce must match the nonce of the contract. After the transaction is executed, the contract's nonce is increased by one. Note that while on Ethereum only EOAs can send transactions and consequently have a non-zero nonce, on Starknet only account contracts can send transactions and have a non-zero nonce.

With the above definition, we can provide a brief sketch of Starknet's transition function. A transaction $$tx$$ transitions the system from state $$S$$ to state $$S'$$ if:

• $$tx$$ is an invoke transaction, and the storage of $$S'$$ is the result of executing the target contract code with respect to the previous state $$S$$ (the arguments, contract address, and the specific entry point are part of the transaction)
• $$tx$$ is a deploy transaction, and $$S'$$ contains the new contract's state at the contract's address. Additionally, the storage of $$S$$ is updated according to the execution of the contract's constructor.
• $$tx$$ is a declare transaction, and $$S'$$ contains the class hash and definition in the contract classes mapping.

## State Commitment

The state commitment is a digest which uniquely (up to hash collisions) encodes the state. In Starknet, the commitment combines the roots of two binary Merkle-Patricia trees of height 251 in the following manner:

\begin{aligned} \text{state_commitment}=h( & \text{"STARKNET_STATE_V0"}, \\& \text{contracts_tree_root}, \\& \text{classes_tree_root}) \end{aligned}

Where:

• STARKNET_STATE_V0 is a constant prefix string encoded in ASCII (and represented as a field element).
• contracts_tree_root is the root of the Merkle-Patricia tree whose leaves are the contract states (see below).
• classes_tree_root is the root of the Merkle-Patricia tree whose leaves are the compiled class hashes (see below).
• $$h$$ is the Poseidon hash function.

### The contracts tree

Like Ethereum, this is a 2-level structure where the contract address determines the path from the root to the leaf encoding the contract state. The information stored in the leaf is:

$h(h(h(\text{class_hash}, \text{storage_root}), \text{nonce}), 0)$

Where:

• $$\text{class_hash}$$ is the hash of the contract's definition discussed here
• $$\text{storage_root}$$ is the root of another Merkle-Patricia tree of height 251 that is constructed from the contract's storage
• $$\text{nonce}$$ is the current nonce of the contract
• $$h$$ is the Pedersen hash function.

### The classes tree

The classes tree encodes the information about the existing classes in the state of Starknet. It maps (Cairo 1.0) class hashes to their compiled class hashes. The value of a leaf at a path corresponding to some class hash is given by:

$h(\text{CONTRACT_CLASS_LEAF_V0}, \text{compiled_class_hash})$

Where:

• CONTRACT_CLASS_LEAF_V0 is a constant prefix string encoded in ASCII (and represented as a field element).
• compiled_class_hash is the hash of the Cairo assembly resulting from compiling the given class via the Sierra→Casm compiler
• $$h$$ is $$poseidon_2$$ defined here

Note (compiled classes): Cairo 1.0 classes that are part of the commitment are defined with Sierra, an intermediate representation between Cairo 1.0 and Cairo assembly (see here for more information). However, the prover only deals with Cairo assembly. This means that unless we want the compilation from Sierra to Casm to be part of every block in which the class is used, the commitment must have some information about the associated Cairo assembly. Today, the user signs the compiled_class_hash as part of a declare v2 transaction. If the transaction is included in a block, then this compiled_class_hash becomes part of the commitment. In the future, when Sierra→Casm compilation becomes part of the Starknet OS, this value may be updated via a proof of the Sierra→Casm compiler execution, showing that compiling the same class with a newer compiler version results in some new compiled class hash.

### Merkle-Patricia tree

#### Specifications

As mentioned above, our commitment scheme uses a binary Merkle-Patricia tree with the Pedersen hash function. Each node in the tree is represented by a triplet $$(length, path, bottom)$$. The actual data is placed in the leaves, and a leaf node with data $$x$$ is encoded by the triplet $$(0,0,x)$$. Empty nodes (leaves or internal) are encoded by the zero triplet $$(0,0,0)$$.

A subtree rooted at a node $$(length, path, bottom)$$ has a single non-empty subtree, rooted at the node obtained by following the path specified by $$path$$. $$path$$ is an integer in $$[0, 2^{length}-1]$$, and the binary expansion of $$path$$ indicates how we should proceed along the tree, where the first step is indicated by the most significant bit, and $$0,1$$ are interpreted as left, right correspondingly. Note that the reason $$length$$ is specified explicitly and cannot be deduced from $$path$$ is that we're dealing with field elements of fixed size (251 bits each). For a node with $$length>0$$, following $$path$$ leads to the highest node both of whose children are non-empty.

The following rules specify how the tree is constructed from a given set of leaves. The hash of a node $$N =(length, path, bottom)$$, denoted by $$H(N)$$, is:

$H(N)=\begin{cases} bottom, & \text{if } length = 0 \\ h(bottom, path) + length, & \text{otherwise} \end{cases}$

Note that any arithmetic operations in the above are done in our field. We can now proceed to recursively define the nodes in the tree. The triplet representing the parent of the nodes $$left=(\ell_L, p_L, b_L)$$, $$right=(\ell_R, p_R, b_R)$$ is given by:

$parent= \begin{cases} (0,0,0), & \text{if } left=right=(0,0,0)\\ (\ell_L + 1, p_L, b_L), & \text{if } right=(0,0,0) \text{ and } left \neq (0,0,0)\\ (\ell_R + 1, p_R + 2^{\ell_R}, b_R), & \text{if } right\neq (0,0,0) \text{ and } left = (0,0,0)\\ (0, 0, h(H(left), H(right))), & \text{otherwise} \end{cases}$

#### Example trie

We now show an example of the construction of a height-3 Merkle-Patricia tree from the leaves $$[0,0,1,0,0,1,0,0]$$. (The original figure is summarised here in prose.) The two children of the root come out to be $$(2,2,1)$$ and $$(2,1,1)$$, and the root itself is $$(0,0,r)$$, where $$r=h\big(H((2,2,1)),H((2,1,1))\big)$$. Note that in our example there is no skipping from the root (its length is zero), so the final commitment to the tree will be $$H((0,0,r))=r$$.

Suppose that we want to prove, with respect to the commitment we have just computed, that the value of the leaf whose path is given by $$101$$ is $$1$$.
In a standard Merkle tree, the proof would have consisted of data from three nodes (siblings along the path to the root). Here, since the tree is sparse, we only need to send the two children of the root $$(2,2,1), (2,1,1)$$. This suffices to reproduce the commitment $$r$$, and since the height of the tree, $$3$$, is known and fixed, we know that the path $$01$$ of length $$2$$ specified by the right child $$(2,1,1)$$ leads us to the desired leaf.
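The triplet rules above are easy to sanity-check in code. Below is a small Python sketch (my own illustration, not official Starknet code) that builds the triplet tree bottom-up; `h` is a stand-in two-to-one hash rather than the real Pedersen function, which only affects the final field values, not the (length, path) structure. With the example leaves it reproduces the root children $$(2,2,1)$$ and $$(2,1,1)$$ derived above.

```python
# Sketch of the (length, path, bottom) Merkle-Patricia triplet rules.
# NOTE: `h` below is a placeholder, NOT the real Pedersen hash; the triplet
# structure (lengths and paths) is independent of the hash choice.

EMPTY = (0, 0, 0)

def h(a, b):
    # Placeholder two-to-one hash; Starknet uses Pedersen here.
    return hash((a, b)) & ((1 << 251) - 1)

def H(node):
    length, path, bottom = node
    return bottom if length == 0 else h(bottom, path) + length

def parent(left, right):
    if left == EMPTY and right == EMPTY:
        return EMPTY
    if right == EMPTY:                      # only the left subtree is non-empty
        l, p, b = left
        return (l + 1, p, b)                # prepend a 0 bit: path unchanged
    if left == EMPTY:                       # only the right subtree is non-empty
        l, p, b = right
        return (l + 1, p + (1 << l), b)     # prepend a 1 bit
    return (0, 0, h(H(left), H(right)))     # binary node: hash the children

def build(leaves):
    level = [EMPTY if x == 0 else (0, 0, x) for x in leaves]
    while len(level) > 1:
        level = [parent(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Example from the text: height-3 tree over the leaves [0,0,1,0,0,1,0,0].
leaves = [0, 0, 1, 0, 0, 1, 0, 0]
level1 = [EMPTY if x == 0 else (0, 0, x) for x in leaves]
level2 = [parent(level1[i], level1[i + 1]) for i in range(0, 8, 2)]
level3 = [parent(level2[i], level2[i + 1]) for i in range(0, 4, 2)]
assert level3 == [(2, 2, 1), (2, 1, 1)]     # the two children of the root
root = parent(*level3)                      # (0, 0, r) with r = h(H(.), H(.))
assert build(leaves) == root
assert H(root) == root[2]                   # no skipping: commitment equals r
```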
https://www.huy.rocks/everyday/04-04-2022-reading-notes-understand-complex-code-with-dry-run
# Reading Notes / Understand Complex Code with Dry Run

Posted On 04.04.2022

OK, this one is embarrassing. I still remember the first time I was taught to dry-run code. It was many years ago when I started learning how to program, but I've never actually tried it properly. Most of the time, I just let the compiler run and put a log between the lines to see the output values (what do we call it? log-driven debugging?). This is not a bad approach, but I find it hard to fully understand the code this way, for a few reasons:

• We can't just put a log on every line of code, for example, on an if statement
• Sometimes, it's not straightforward to test a particular function without running the whole program or the flow. Using a step debugger won't help in this case.

Until recently, a friend showed me the way she dry-runs her code without actually running it with the compiler. As she goes over every line of the code, she puts a comment on the side to trace the values of the variables. It helped detect bugs faster and also helped everyone understand the code better.

When working with some code that you do not immediately understand, it's good to apply this technique as well. The point is to slow down, take a closer look at every line of code, and see how the input values transform between the lines.

There are many ways to dry run, actually. You can make a trace table to track the values at every step (source: Using trace tables to check an algorithm's correctness).

But a faster way, I think, is to just write a comment on each line of code to track the values. For example, let's take a look at this function, and assume that we are not immediately sure what this function does, or what the values a and r are here:

```javascript
const swap = (a, i) => {
  [a[i-1], a[i]] = [a[i], a[i-1]]
};

const sort = (a) => {
  let r = false;
  while (!r) {
    r = true;
    for (let i in a) {
      if (a[i-1] > a[i]) {
        r = false;
        swap(a, i);
      }
    }
  }
};
```

The only thing we know is that it's a sort function, so let's give it an array, for example: [5,3,4,1]. Go through every line of code and write the values of the variables on the side. After a few iterations, the code should look like this:

```javascript
const swap = (a, i) => {
  // a = [5,3,4,1], i = 1
  [a[i-1], a[i]] = [a[i], a[i-1]]
  // [5,3] = [3,5]
  // it's a swap! wow!
};

const sort = (a) => {
  // a = [5,3,4,1]
  let r = false;
  while (!r) {
    // r = false
    r = true;
    for (let i in a) {
      // i = 1
      if (a[i-1] > a[i]) {
        // 5 > 3 = true
        r = false;
        swap(a, i); // swap(a, 1)
        // a = [3, 5, 4, 1]
      }
    }
  }
};
```

Keep going until you reach the end. You now know that this is an implementation of the bubble sort algorithm, that it modifies the input array in place (so there is no need to return anything), and that the swap is implemented with a neat array-destructuring trick.
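To make the trace-table option mentioned above concrete, here is one possible trace of the first pass of `sort([5,3,4,1])` (this table is my own illustration, not from the original post). Note that at `i = 0`, `a[i-1]` is `a[-1]`, which is `undefined` in JavaScript, so the comparison is false:

| step | i | comparison a[i-1] > a[i] | swap? | a after step | r |
| --- | --- | --- | --- | --- | --- |
| 1 | 0 | undefined > 5 → false | no | [5,3,4,1] | true |
| 2 | 1 | 5 > 3 → true | yes | [3,5,4,1] | false |
| 3 | 2 | 5 > 4 → true | yes | [3,4,5,1] | false |
| 4 | 3 | 5 > 1 → true | yes | [3,4,1,5] | false |

Since r ends the pass as false, the while loop runs again: two more passes produce [3,1,4,5] and then [1,3,4,5], and a final pass with no swaps leaves r as true, so the loop exits.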
https://www.semanticscholar.org/paper/Homotopy-Loday-Algebras-and-Symplectic-Peddie/ecda476e361c4a3cc313c78dc59b33c534ad51d5
• Corpus ID: 119152097

# Homotopy Loday Algebras and Symplectic $2$-Manifolds

@article{Peddie2018HomotopyLA,
  title={Homotopy Loday Algebras and Symplectic \$2\$-Manifolds},
  author={Matthew T. Peddie},
  journal={arXiv: Mathematical Physics},
  year={2018}
}

• M. Peddie • Published 9 April 2018 • Mathematics • arXiv: Mathematical Physics

Using the technique of higher derived brackets developed by Voronov, we construct a homotopy Loday algebra in the sense of Ammar and Poncin associated to any symplectic $2$-manifold. The algebra we obtain has a particularly nice structure, in that it accommodates the Dorfman bracket of a Courant algebroid as the binary operation in the hierarchy of operations, and the defect in the symmetry of each operation is measurable in a certain precise sense. We move to call such an algebra a homotopy…

## 3 Citations

### L-infinity bialgebroids and homotopy Poisson structures on supermanifolds

We generalize to the homotopy case a result of K. Mackenzie and P. Xu on the relation between Lie bialgebroids and Poisson geometry. For a homotopy Poisson structure on a supermanifold $M$, we show that

### $L_\infty$ and $A_\infty$ structures: then and now

Looking back over 55 years of higher homotopy structures, I reminisce as I recall the early days and ponder how they developed and how I now see them. From the history of $A_\infty$-structures and

### L-infinity and A-infinity structures

Looking back over 55 years of higher homotopy structures, I reminisce as I recall the early days and ponder how they developed and how I now see them. From the history of A∞-structures and later of

## References (showing 1–10 of 27)

### Non-Commutative Batalin-Vilkovisky Algebras, Homotopy Lie Algebras and the Courant Bracket

We consider two different constructions of higher brackets. First, based on a Grassmann-odd, nilpotent Δ operator, we define a non-commutative generalization of the higher Koszul brackets, which are

### L-infinity algebras and higher analogues of Dirac structures and Courant algebroids

We define a higher analogue of Dirac structures on a manifold M. Under a regularity assumption, higher Dirac structures can be described by a foliation and a (not necessarily closed, non-unique)

### On the structure of graded symplectic supermanifolds and Courant algebroids

This paper is devoted to a study of geometric structures expressible in terms of graded symplectic supermanifolds. We extend the classical BRST formalism to arbitrary pseudo-Euclidean vector bundles

### Courant Algebroids and Strongly Homotopy Lie Algebras

• Mathematics • 1998

Courant algebroids are structures which include as examples the doubles of Lie bialgebras and the direct sum of tangent and cotangent bundles with the bracket introduced by T. Courant for the study

### Derived Brackets

We survey the many instances of the derived bracket construction in differential geometry, Lie algebroid and Courant algebroid theories, and their properties. We recall and compare the constructions of

### Courant algebroids, derived brackets and even symplectic supermanifolds

In this dissertation we study Courant algebroids, objects that first appeared in the work of T. Courant on Dirac structures; they were later studied by Liu, Weinstein and Xu who used Courant

### Manin Triples for Lie Bialgebroids

• Mathematics • 1995

In his study of Dirac structures, a notion which includes both Poisson structures and closed 2-forms, T. Courant introduced a bracket on the direct sum of vector fields and 1-forms. This bracket does
http://math.stackexchange.com/questions/162174/finding-involute-of-a-curve
Finding involute of a curve

I have a homework problem where I am supposed to find the involute of two curves:

• $\alpha(t):=(t\cos(t), t\sin(t), \frac{2\sqrt{2}}{3} + \frac{3}{2})$
• $\beta(t):=(\cos^3t, \sin^3t, \cos2t), t \in [0, \frac{\pi}{2}]$

I tried using the standard formula for the involute of a curve $\beta(t)$:

$I(t)=\beta(t) - s(t) \frac{\beta'(t)}{|\beta'(t)|}$

where $s(t)=\int_{0}^{t} |\beta'(u)|\,du$, but in both cases the calculations get prohibitively complex. Am I doing something wrong, or is there some "trick" I am not aware of?

Answer: The only difficulty I see here is in the computation of the arclength $s(t)$. For (a) we have $|\alpha'(s)|=\sqrt{s^2+1}$, and although $\int_0^t\sqrt{s^2+1}\,ds$ is not the nicest integral in the world, it's certainly doable (or findable in a calculus book). For (b), $|\beta'(s)|^2=9\cos^4 s \sin^2 s+9\sin^4 s\cos^2 s+4 \sin^2 2s =9\cos^2 s\sin^2 s+4\sin^2 2s$, and then there's the double angle formula, which puts everything in terms of $\sin 2s$.
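Filling in the computation hinted at in the answer (my own working, not part of the original thread): for $\beta$, using $\sin 2s = 2\sin s\cos s$,

$\begin{aligned} |\beta'(s)|^2 &= 9\cos^2 s\,\sin^2 s + 4\sin^2 2s = \tfrac{9}{4}\sin^2 2s + 4\sin^2 2s = \tfrac{25}{4}\sin^2 2s, \\ |\beta'(s)| &= \tfrac{5}{2}\sin 2s \quad \text{for } s \in [0, \tfrac{\pi}{2}], \\ s(t) &= \int_0^t \tfrac{5}{2}\sin 2u \, du = \tfrac{5}{4}\,(1 - \cos 2t). \end{aligned}$

For $\alpha$, the standard antiderivative gives $\int_0^t \sqrt{u^2+1}\,du = \tfrac{1}{2}\left(t\sqrt{t^2+1} + \ln\left(t+\sqrt{t^2+1}\right)\right)$.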
https://thecuriousastronomer.wordpress.com/2014/05/07/the-vector-or-cross-product/
The vector (or cross) product

In this previous blog, I mentioned that there are two ways in which to multiply vectors: either the dot (scalar) product or the cross (vector) product. The dot product is pretty easy to do. If we have two vectors $\vec{a} \text{ and } \vec{b}$, the dot product is just given by

$\boxed{ \vec{a} \cdot \vec{b} = \lvert a \rvert \; \lvert b \rvert \; \cos(\theta) }$

where $\theta$ is the angle between them.

The vector (cross) product is a little more complicated. Let us suppose our vectors are $\vec{a} \text{ and } \vec{b}$, and that they can be written as $\vec{a} = a_{x} \hat{x} + a_{y} \hat{y} + a_{z} \hat{z}$ and $\vec{b} = b_{x} \hat{x} + b_{y} \hat{y} + b_{z} \hat{z}$. To find the vector product it is easiest to use matrices. This is the way I have always done it, and the way I teach it to my students, but if anyone reading this has a different method they wish to share that would be great.

Using a determinant to calculate the vector product, we write

$\vec{c} = \vec{a} \times \vec{b} = \left| \begin{array}{ccc} \hat{x} & \hat{y} & \hat{z} \\ a_{x} & a_{y} & a_{z} \\ b_{x} & b_{y} & b_{z} \\ \end{array} \right|$

where the expression on the right is a $3 \times 3$ determinant. To calculate each component we work out the determinants of three separate $2 \times 2$ matrices. For the $\hat{x}$ component, we cross out the top row and first column of the $3 \times 3$ matrix, and compute the determinant of the remaining $2 \times 2$ matrix. For our example, this will be $(a_{y}b_{z} - a_{z}b_{y})\hat{x}$.

Similarly, for the $\hat{y}$ component we cross out the top row and second column and work out the determinant of the remaining $2 \times 2$ matrix, which will be $(a_{x}b_{z} - a_{z}b_{x})\hat{y}$, but note that we take the negative of this.

Finally, for the $\hat{z}$ component we cross out the top row and third column, which gives $(a_{x}b_{y} - a_{y}b_{x})\hat{z}$.

Summarising, we can write

$\boxed{ \vec{c} = \vec{a} \times \vec{b} = (a_{y}b_{z} - a_{z}b_{y})\hat{x} - (a_{x}b_{z} - a_{z}b_{x})\hat{y} + (a_{x}b_{y} - a_{y}b_{x}) \hat{z} }$

This is the vector product of the two vectors.

In the simple case where $\vec{a} = a_{x}\hat{x}$ and $\vec{b} = b_{y}\hat{y}$, then $\vec{c} = \vec{a} \times \vec{b}$ will simply be

$\vec{c} = \vec{a} \times \vec{b} = \left| \begin{array}{ccc} \hat{x} & \hat{y} & \hat{z} \\ a_{x} & 0 & 0 \\ 0 & b_{y} & 0 \\ \end{array} \right|$

which comes to $(0)\hat{x} - (0)\hat{y} + (a_{x}b_{y} - 0)\hat{z} = (a_{x}b_{y})\hat{z}$.

If we want $\vec{b} \times \vec{a}$ then we must write

$\vec{d} = \vec{b} \times \vec{a} = \left| \begin{array}{ccc} \hat{x} & \hat{y} & \hat{z} \\ 0 & b_{y} & 0 \\ a_{x} & 0 & 0 \\ \end{array} \right|$

which comes to $\vec{d} = (0)\hat{x} - (0)\hat{y} + (0 - a_{x}b_{y})\hat{z} = -(a_{x}b_{y})\hat{z} = -\vec{c}$.

The cross (vector) product is therefore anticommutative: $\vec{a} \times \vec{b} = \vec{c}$, but $\vec{b} \times \vec{a} = -\vec{c}$.
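As a quick numerical check of the boxed formula (my own example, not from the original post), here is a short Python sketch:

```python
# Cross product via the cofactor formula derived above.
def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,        # x-hat component
            -(ax * bz - az * bx),     # y-hat component (note the minus sign)
            ax * by - ay * bx)        # z-hat component

a = (1.0, 2.0, 3.0)
b = (4.0, 5.0, 6.0)

c = cross(a, b)
d = cross(b, a)
print(c)   # (-3.0, 6.0, -3.0)
print(d)   # (3.0, -6.0, 3.0) = -c, showing the anticommutativity

# The result is perpendicular to both inputs: the dot products vanish.
assert sum(ai * ci for ai, ci in zip(a, c)) == 0
assert sum(bi * ci for bi, ci in zip(b, c)) == 0
```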
https://www.gamedev.net/forums/topic/228244-question-about-projections/
Question (HexDump): Hello, I'm coding a frustum class in order to cull objects that are not visible. Reading a tutorial (pretty good, I should say), there's something I don't understand: it states that if you have a box (for example) and you transform it by the projection matrix, you can test whether it is inside the frustum as if the frustum were a box too, instead of a cut pyramid. Why is this? Shouldn't the frustum have a frustum shape? Thanks in advance, HexDump.

Reply (S1CA): The shape of space after the projection transformation but before the perspective divide (where you divide the homogeneous coordinates by W) is effectively cuboid and is known as the "canonical clipping volume". The clipping against this new volume can be reduced to a few comparisons against W:

```
if ((x > -w && x < w) && (y > -w && y < w) && (z > 0 && z < w)) {
    // visible
} else {
    // not visible
}
```

Reply (HexDump): S1CA, where could I get info on this specific subject?

Reply: By design, the view frustum is transformed by the projection matrix into a 2x2x2 cube. However, in order to avoid divide-by-0, the culling and clipping are done with homogeneous coordinates as S1CA described.
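To illustrate S1CA's test in practice, here is a small Python sketch (my own illustration, using the 0 < z < w depth range from the reply): transform each corner of a box into clip space, then test the homogeneous coordinates. A box can be conservatively rejected when all of its corners fail the same plane test.

```python
# Clip-space culling sketch: a point (x, y, z, w) after the projection
# transform is inside the canonical clipping volume when
#   -w < x < w,  -w < y < w,  0 < z < w   (the depth range used above).

def transform(m, p):
    # Row-vector times 4x4 matrix (m is a list of 4 rows of 4 floats).
    return tuple(sum(p[i] * m[i][j] for i in range(4)) for j in range(4))

def point_visible(clip):
    x, y, z, w = clip
    return -w < x < w and -w < y < w and 0 < z < w

def box_outside(corners_clip):
    """Conservative test: the box is certainly invisible if all eight
    corners lie on the outside of one and the same clip plane."""
    plane_tests = [
        lambda c: c[0] < -c[3],   # left of   x = -w
        lambda c: c[0] >  c[3],   # right of  x = +w
        lambda c: c[1] < -c[3],   # below     y = -w
        lambda c: c[1] >  c[3],   # above     y = +w
        lambda c: c[2] <  0,      # in front of the near plane
        lambda c: c[2] >  c[3],   # beyond the far plane
    ]
    return any(all(t(c) for c in corners_clip) for t in plane_tests)

# Usage: corners_clip = [transform(world_view_proj, (x, y, z, 1.0))
#                        for (x, y, z) in box_corners]
```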
http://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=2677
Time Limit : 2 sec, Memory Limit : 131072 KB

## A - Breadth-First Search by Foxpower

### Problem Statement

Fox Ciel went to JAG Kingdom by bicycle, but she forgot where she parked her bicycle. So she needs to search for it in a bicycle-parking area before returning home. The parking area is formed as an unweighted rooted tree $T$ with $n$ vertices, numbered $1$ through $n$. Each vertex has a space for parking one or more bicycles. Ciel thought that she parked her bicycle near the vertex $1$, so she decided to search for it from there by breadth-first search. That is, she searches the vertices in increasing order of their distances from the vertex $1$. If multiple vertices have the same distance, she gives priority to the vertices in the order of searching at their parents. If multiple vertices have the same parent, she searches the vertex with the minimum number first. Unlike a computer, she can't go to the next vertex by random access. Thus, if she goes to the vertex $j$ after the vertex $i$, she needs to walk the distance between the vertices $i$ and $j$. BFS by fox power perhaps takes a long time, so she asks you to calculate the total moving distance in the worst case, starting from the vertex $1$.

### Input

The input is formatted as follows.

$n$
$p_2$ $p_3$ $p_4$ $\cdots$ $p_n$

The first line contains an integer $n$ ($1 \le n \le 10^5$), which is the number of vertices of the unweighted rooted tree $T$. The second line contains $n-1$ integers $p_i$ ($1 \le p_i < i$), which are the parents of the vertices $i$. The vertex $1$ is the root node, so $p_1$ does not exist.

### Output

Print the total moving distance in the worst case in one line.

### Sample Input 1

4
1 1 2

### Output for the Sample Input 1

6

### Sample Input 2

4
1 1 3

### Output for the Sample Input 2

4

### Sample Input 3

11
1 1 3 3 2 4 1 3 2 9

### Output for the Sample Input 3

25

Source: Japan Alumni Group Spring Contest 2014, Japan, 2014-04-13
http://acm-icpc.aitea.net/
http://jag2014spring.contest.atcoder.jp/
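The problem page gives no editorial; one possible approach (my own sketch, hand-checked against the three samples above, which give 6, 4 and 25 respectively) is to generate the BFS order exactly as specified (a plain FIFO queue with children stored in increasing number realises both tie-breaking rules) and then sum the tree distances between consecutive vertices in that order, using depths and lowest common ancestors via binary lifting:

```python
# One possible solution sketch (not the official editorial).
import sys
from collections import deque

def solve():
    data = sys.stdin.read().split()
    n = int(data[0])
    parent = [0, 0] + [int(x) for x in data[1:n]]   # parent[v] for v >= 2

    children = [[] for _ in range(n + 1)]
    for v in range(2, n + 1):
        children[parent[v]].append(v)               # already in increasing v

    # BFS from vertex 1; the FIFO queue realises both tie-breaking rules.
    order, depth = [], [0] * (n + 1)
    q = deque([1])
    while q:
        u = q.popleft()
        order.append(u)
        for w in children[u]:
            depth[w] = depth[u] + 1
            q.append(w)

    # Binary-lifting ancestor table for LCA queries.
    LOG = max(1, n.bit_length())
    up = [parent[:]] + [[0] * (n + 1) for _ in range(LOG - 1)]
    for k in range(1, LOG):
        for v in range(1, n + 1):
            up[k][v] = up[k - 1][up[k - 1][v]]

    def lca(u, v):
        if depth[u] < depth[v]:
            u, v = v, u
        diff = depth[u] - depth[v]
        for k in range(LOG):
            if (diff >> k) & 1:
                u = up[k][u]
        if u == v:
            return u
        for k in range(LOG - 1, -1, -1):
            if up[k][u] != up[k][v]:
                u, v = up[k][u], up[k][v]
        return up[0][u]

    # Total walk = sum of tree distances between consecutive BFS vertices.
    total = sum(depth[a] + depth[b] - 2 * depth[lca(a, b)]
                for a, b in zip(order, order[1:]))
    print(total)

solve()
```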