Dataset fields: url (string, 14 to 2.42k chars) · text (string, 100 to 1.02M chars) · date (string, 19 chars) · metadata (string, 1.06k to 1.1k chars)
https://bornlikeothers.com/forum/remain-sentence-for-class-2-2bbf9f
remain sentence for class 2

In fact, the book is quite remarkable. The topic can essentially be divided into three main areas: theoretical foundations and analysis; the use of computer technology to aid logicians; and the use of concepts from logic for computer applications.

Logic in Computer Science: Modelling and Reasoning about Systems. Errata for the First Printing of the Second Edition, January 21, 2009. Readers of this book are kindly requested to notify Mark Ryan (email: mdr@cs.bham.ac.uk) of errors they find.

Mathematical Logic for Computer Science is a mathematics textbook with theorems and proofs, but the choice of topics has been guided by the needs of students of computer science. Tag(s): Logic, Programming, Proofs.

Some of the key areas of logic that are particularly significant are computability theory (formerly called recursion theory), modal logic and category theory. The theory of computation is based on concepts defined by logicians and mathematicians such as Alonzo Church and Alan Turing. Logic and its components (propositional, first-order, non-classical) play a key role in Computer Science and Artificial Intelligence.

For example, the proposition "It is raining outside, but I have an umbrella" is also a conjunction, and its conjuncts are "It is raining outside" and "I have an umbrella".

This book teaches mathematical logic using tableaux techniques pioneered by Beth and Smullyan, which are simpler than the usual algebraic techniques, but quite sufficient to give CS students the theoretical tools they need. I was amazed when I looked through it for the first time.

Texts: Mathematical Logic for Computer Science by Ben-Ari; Artificial Intelligence by Russell and Norvig. Grading scheme: Assignment 1 (15%), Midsem (30%), Assignment 2 (15%), Endsem (40%). [LN] Lecture Notes [PDF] … Book condition: New. Logic plays a fundamental role in computer science. There are no longer any (new) copies for sale, so …

CS228 Logic for Computer Science 2020, Instructor: Ashutosh Gupta, IITB, India. CNF conversion, Theorem 7.3: for every formula $F$ there is another formula $F'$ in CNF s.t. …
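As a concrete illustration of such a conversion (an added example, not quoted from the course notes): distributing disjunction over conjunction rewrites $p \vee (q \wedge r)$ into the logically equivalent CNF formula $(p \vee q) \wedge (p \vee r)$.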
"Logic for Computer Science" is a stand-alone course, but it is also intended to support other Computer Science modules offered, in particular the modules on Embedded Systems, High Integrity Systems, Software Testing, Big Data and Machine Learning, and Modelling and Verification Techniques.

Logic in Computer Science: Modelling and Reasoning About Systems, second edition. Rules govern how these elements can be written together.

Hi Computer Science Engineering GATE Aspirants, I am sharing the Digital Logic Solved Previous Year Questions for GATE. The attached PDF contains all questions asked in previous years of the Computer Science Engineering GATE Exam for the topic Digital Logic.

You have the following constraints: (1) the ambassador instructs you to invite Peru or exclude Qatar; (2) the vice …

© M. Ben-Ari, 2001. With Theorem Proving and Logic Programming, logic has obtained a new and important role in Computer Science. This book emphasizes such Computer Science aspects in Logic.

Mathematical Logic for Computer Science, second revised edition, Springer-Verlag London, 2001: Answers to Exercises. Mordechai Ben-Ari, Department of Science Teaching, Weizmann Institute of Science, Rehovot 76100, Israel. Please send comments and corrections to moti.ben-ari@weizmann.ac.il.

Logic for Computer Science: Foundations of Automatic Theorem Proving. The Association acts as an international professional non-profit organization.

Logic for Computer Science, © Alex Pelin, April 1, 2011. Chapter 1, Propositional Logic: the intent of this book is to familiarize computer science students with the concepts and the methods of logic.

Logic in Computer Science, Tableau Method intuition: to check satisfiability of $P$, we apply tableau rules to $P$ that make explicit the constraints that $P$ imposes on formulas occurring in $P$ (subformulas).
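As a small added illustration (not part of the original excerpt): to test $p \wedge \neg p$, the tableau rule for conjunction extends the branch with both conjuncts $p$ and $\neg p$; the branch then contains a formula together with its negation, so it closes, and since every branch closes, $p \wedge \neg p$ is unsatisfiable.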
I purchased Logic in Computer Science, 2nd Edition, recently in preparation for an exam I have soon. This book has proven to be very useful; it's full of useful information and exercises to complete. Publication date: 18 Jun 2015.

First, we treat propositional symbols merely as a set of some symbols; for our purposes we'll use letters of the Roman and Greek alphabets, and refer to the set of all symbols as Prop.

M. Huth and M. Ryan, Logic in Computer Science: Modelling and Reasoning about Systems, Second Edition, Cambridge University Press, 2004. N. Nissanke, Introductory Logic and Sets for Computer Scientists, Addison Wesley, 1999.

Logic for Computer Science 2020-2021, Alexandru Ioan Cuza University. Note that a conjunction need not explicitly use the word "and".

Logic for Computer Science and Artificial Intelligence / Ricardo Caferra. For the third edition, the book has been totally rewritten.

Logic for Computer Science: Foundations of Automatic Theorem Proving, Second Edition, Jean Gallier. A corrected version of the original Wiley edition.

The method of semantic tableaux provides an elegant way to teach logic that is both theoretically sound and easy to understand.
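To make the formation rules concrete, a standard grammar for propositional formulas (added here as an illustration; it is not quoted from any of the books above) is $\varphi ::= p \mid \neg\varphi \mid (\varphi \wedge \varphi) \mid (\varphi \vee \varphi) \mid (\varphi \rightarrow \varphi)$, where $p$ ranges over the set Prop of propositional symbols; connectives and parentheses may only be combined according to these rules.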
In addition to propositional and predicate logic, it has a particularly thorough treatment of temporal logic and model checking. "Formal methods have finally come of age! Specification languages, theorem provers, and model checkers are beginning to be used routinely in industry." (… Systems Professor of Computer Science, Carnegie Mellon University, Pittsburgh, PA.) In the past, their use was largely restricted to merely specifying programs and reasoning about their implementations.

However, one caveat I have with the book is that they don't provide completed solutions to the exercises.

Mathematical Logic for Computer Science, Springer, 2012, ISBN 978-1-4471-4128-0.

Logic for Computer Science: Foundations of Automatic Theorem Proving / Jean H. Gallier. A corrected version of the original Wiley edition (pp. …), published by Dover, June 2015 (numerous editions); Dover Publications Inc., United States, 2015. Brand new book. Due to repeated demands from around the world (but mainly from the USA) for copies of it, … It covers propositional and predicate logic with an emphasis on proof theory and on procedures for constructing formal proofs of formulae algorithmically; a purely mathematical way of dealing with logic is in some respects not tailored for Computer Science applications. The book is aimed at students of mathematics, computer science, and linguistics.

A revision of How Computers Work: Essential Logic for Computer Science: an introduction to applying logic to the testing and verification of software and digital circuits that focuses on applications rather than theory. If the revision does not meet your needs, please contact Rex Page; the revision does not represent an opportunity to make monetary profits.

Information exists throughout various media (books, journal articles, webpages, etc.); the diffuse nature of these sources is problematic, and logic as a topic benefits from a unified approach. Video by Prof. S. Arun Kumar: click here.

Logic studies reasoning, i.e. inferring new statements from an existing set of statements.
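For instance (an added example of such an inference): from the statements $p \rightarrow q$ ("if it is raining, the street is wet") and $p$ ("it is raining"), the rule of modus ponens infers the new statement $q$ ("the street is wet").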
2022-05-20 04:15:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4375512897968292, "perplexity": 1437.9046179760564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531352.50/warc/CC-MAIN-20220520030533-20220520060533-00277.warc.gz"}
https://www.autoitscript.com/forum/topic/52805-image-embedded/
# Image embedded...

Hi, ppl. I need to set a background image for my application, but if I change the name or the directory of the pic, my application does not get the pic... I tried to embed it but I don't know how... Can anyone help? Thanks.

Post some code up... let me see.

FileInstall ( "source", "dest" )

"source" = current location of your image file
"dest" = location you want the file extracted to, so your script can find it

There is always a butthead in the crowd, no matter how hard one tries to keep them out....... Volly

FileInstall ( "source", "dest" ) 8)

Thank you guys, but I think I didn't explain myself correctly. Look, I am trying to make a tool to kill a virus called "Gullum" that infected some PCs here at work. I have no problem with the code, but I want it to have a cool face for users, so I put a background image on it. The problem is that when people carry it to their PC, the background pic disappears because it loses the path, I think. I want the background picture inside the application... This is the design.

PS: sorry about my English, I speak Spanish.

ofLight & Valuater are correct... you need to use FileInstall so anyone who runs your code on their machine will have the background pic...

FileInstall() is really the solution. And you may look at an alternative way to embed data/pictures into an EXE: resources. (Edited by Zedna)

Thank you folks... I really appreciate your help... I will work on it that way... better off, I have that solution.

I think ConeXXion is talking about embedding the image into the code like mrbond007 did here, or like the example I made for myself using what was done by mrbond007, but I cannot get the hex editor to work for me. If anyone else figures it out, please PM me or post the example. Either way, ConeXXion, the image gets compiled, basically, by using FileInstall, and gets placed in the %temp% folder, then deleted when closed.
Local $pic = "0x"
$pic &= "FFD8FFE000104A46494600010101006000600000FFDB0043000101010101010101010101010101010101010"
$pic &= "1010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101"
$pic &= "FFDB004301010101010101010101010101010101010101010101010101010101010101010101010101010101010"
$pic &= "10101010101010101010101010101010101010101010101FFC0001108002C005F03012200021101031101FFC400"
$pic &= "1E00000103050101000000000000000000000906070A00010405080203FFC4003A1000010401020206070605050"
$pic &= "00000000004010203050607110008091213142131154151617191F01632526281C1172253B1D123547292F1FFC4"
$pic &= "001B01000105010100000000000000000000000906070A00010405080203FFC4002A11000300020201040201030500000"
$pic &= "000000000102030405111213142131154151617191F01632526281C1172253B1D123547292F1FFC4"
$pic &= "001B01000105010100000000000000000000000102030405060708FFC4002A110003000202010402010305000000"
$pic &= "00000102030405111213142122233132415161718191A1B1C2D1E1F023242526272829F0FFDA000C03010002110311003F009FC71A5B2B81"
$pic &= "7A473C75FA89798B8FA8D98E5C5C4190CC3B4E56F8DA48DB8E076429F51639D6466545B0B4D1DE08553D30C1CB6"
$pic &= "B3AAE6C9D9A90CDF7DBEF37DA89F8FDFFE78DE0D94033351527678A6FB23D3C3DDF79388E4DD73152CD5167363F"
$pic &= "ACBCD732FBD1C5BE966B88795D229D96683BD407DA400E80085115A84A44A6C211431528DDA30622199D1C8D415"
$pic &= "EF4886B9E199960BA720D961198B73B0A9CA76A14B84E4B8CCD86B2AE00E5D4369DA74DCFAEA1CAE58E435B161A"
$pic &= "18094136F837377EDD9E5E1FCCCDBCBC9775DF7F6FF007E3447E655E33FAAE2234DBC3EF37F67227ABDBC009D3E"
$pic &= "E79355F22CE33DC16DF32D23A2AAC51A15A639A8991D46A1D22669456F39628224B81D1499B3F1DBCAC7569735A"
$pic &= "91366870870E556CE1D6D7F6F30F033587F393AB1A9B94E5B71A8B97E6B80E1C29A5D160F45A104E9CD94C5CD8E"
$pic &= "DA1B51777F925C6B0693E4E7D932F0C0E6368A0ABAFC55B574ED1613EBEC8B9A4B157393FC87F8B71F1F12E9BBB"
$pic &= "65D72ED48261E261D1F2254803F93E77A34B1273C7A75835FF29A16ABA0C5A6421EE1087C49EF7B5AF26D6CF1D2"
$pic &= "134AB6464642A4A89523C3E2540F90EF5426A25E0159CD58DD24C3A992D57E620192A3193B1CABE09B393F2FE65"
$pic &= "F6FD78F0B36170B988EEBB7C537F34FF3C45CB36E7D329E5DEE68F5146CAB54350B486A62959AB147AA83697B72"
$pic &= "9ABAB2091621728D392B4BB4FB026927D2A2CC4DB51646CB31AEABD928E0114C5B7D251120D66E903C3744744F2"
$pic &= "3DA3F26FB47DEB81B9D869F39D63A06FF765CC98853124718E52E5378E9695315E296EB68D68A1E179B057995F4"
$pic &= "3FBFF0065EFFDB795AEC4D8632B536C3FD0363B9AA6453CC20623B2CDD2CB479F33A221EB5938E55C1F4574CB91"
$pic &= "06639CB2B536DFCDC88BF14F1DFEBDFC2464CE6B9B2F67DE23F3DB6EB377F3FF0097ABE5F3F08F7EBE748E6B363"
$pic &= "FA2D539E6186689D96619B938A51D761F15B6A5946E9E9F9C4E28435BDD986E15575B998989946322B402B971D9"
$pic &= "034C591058C8988223816161134B2F59EE0DCEFE44FC5D8969C63B8BEC8D20B9069838BFE292504DA4B4A66D70C"
$pic &= "792A945759CFC8CA3B2D44DD4A022C6F88BDEF79BD29AF96185AB482655FFC94642CB4645C69E4FF00623295676"
$pic &= "EAA4F050B2B063247ACBD18F46F67235DBA2793917D9EC55DBEBDDC287CF8045D1B9CECE6BACF996AE68CEA5234"
$pic &= "CCCF47AC71C2C6CAA18A11BED76139C7A7A5C5CCB708106BAAC2CB2ABD007D564DE89080A5B02A286CA9ABC310A"
$pic &= "748D57960288DD59D78620BB9D4676876999A8D94D659B835F15D15C5139645A4DD1D7E9A7493A510F009571D95"
$pic &= "5B9509ACA6456033EDF85FB7FD5DEEFD7F4E222FCFB3D5DD27AC4FF0071CA6E1CE7AA78755D1EA96AEC6D4DFC15"
$pic &= "99E8251E9DE2F74B14B20F639ED16799CDC1B883656224505B974F9285675504EE492D6286C220639E60496B29F"
$pic &= "FE4BC2D7F897722327A996CB436A745244E49B6C5EF573C7F6A2723B31FA0481FF007EAC2F862B397BFB5C69459"
$pic &= "87C3DA4D7B103BBB60DBAA2F3FB66E09007D900F1E87DE5BCD3E7D886B25072E60D260666339C93465CB95D963E"
$pic &= "F2338C7C122BF2C2AE29AA2DFBDC60F74B3928ABFA93175961635D011643D7183C8405E8CE8C9309A6B7B5A8CA4"
$pic &= "C8C996E689960354111123435E221AF8A4290F1E5AA98C26570F1A443A8B641C10CAF7484C652A44AC7C26D1EC8"
$pic &= "EBF16BB82CB4371990B35E2DC58E737DA6821798D3CB4B0CFDC4EAFCECB865BCC7C4A5188395A3D54E05520E65B"
$pic &= "45098D5C791A270D34C3A5517AE3A9F5AE7072B1A8D99F8F958937AE65AE8676892F8FF0089851752D12589369D"
$pic &= "5FA30E1DB25158F343EB1AA24466A264D02AA2B998CE2723E5491CD63D127BD62B118D635FD54D9FDB39AF7239B"
$pic &= "23588D4D955F93A58D7CB4DDA8E4C82B9722CACB854490494D8DC993D8CF1C713EC6ACFAB92463376CCDB3A9280"
$pic &= "9157B136BE689D388AE0E27A557F6BAB5A838D47698B56DEE358FE2B0DD55DD649534E5D74CA55FB2316661F3C7"
$pic &= "1BE69236C2448D19F336210C165497B49118C6C87A0C8ABF48B52728C6EE2057E1F9066159265149653361AACB0"
$pic &= "16C4C99D32151CF0CB4824AB47520740AECFC2C89120F9B8565A4532F1D9D935439F2295E3222522587DA94AB3A"
$pic &= "2A907EC90ABC9700B877589E379710DABD41C6DB9CE136A6B64BCC48EBA329BED0574C85CE94C7DA63B1D498189"
$pic &= "4615393950B8F9D8EE73A857B8E4D571435A0C45992852C95E958EB5AC1832201E08C83E8468273119E60BA3BA8"
$pic &= "1634339A86351D203CBAEBF9FCB0EA503519CE7D9CD5F1555FAE2E941A6E2C9695C06455C4E483A498BE9F50DD"
$pic &= "5698F7B50DB618794C13260B8D8472D025508715CBCBC785194F947DA5B3D96BF1B231AB4ACB267888EF6CB18B2"
$pic &= "461AB774C80D8F69128EAE883316099D24728EF6A35CF6C9D6ECDBC29F4E75832CCCB412BF57ACA2AC6E56987CD"
$pic &= "67009595E20D58C36B4289A2C03D4F762079079640E259E13233A73D92C8E36638A26622455F331A199A643CB18"
$pic &= "AA63685E8AE600F27D1116E463B8F1B8A6276F4790D05E64551571055DCD244F02DAAEC294A223B21EC462637C1"
$pic &= "54EC6C05D8CC2AFDB5D2369AE6DB34DF63886271A28B2679315B536B952A382C4BF5A0F1C68BC0913301893FA30"
$pic &= "4BC57CA6BA426692791FC4846BD9D41224FCA9ECFDB80C5D14FA3D98E97F2B1A0D8D66D4C4D164D59A7F4FE97A8"
$pic &= "3237C4656926B65B06866C1231928C70F097146609335938853661A6632589ED69A215AAC8236AFA9138F5CFDA5"
$pic &= "AEC4D4FB6341AFC1C29EBB1B1B5382A98539F8841DF1D2B70E842B1B3DDE94BBD0796B77A52A4D1DC9C03BECBBE"
$pic &= "7EEB6B979392F996B67E496C977EE6AAB5649156E4813592A24954F4492A24F845503E92C6D958E63911515153C"
$pic &= "7DFF5F3DB8E62D6BD0FC6B542A49A5C928EBEF2B4896021C2580B193130B0C861609B07688AF14FAF32284DAE3C"
$pic &= "74752559581560482083EA2D1DE4E949BB4E936574A231474752195D19486565201560410402083E85B58727F58"
$pic &= "6C52D790974457CF1F772059F23C86788919C8D64901292DABD4986666F1931CEB23098DEF8C849639646BF22B3"
$pic &= "922D34029CCA11B4F30D1696C5C1BEC69E0C62921AA3E4AF585C0C86D7C4134429E13871DC23E786470CB042B0A"
$pic &= "B1638D509DF741FF00A69C7A41A144DBA89F2E2331F41A2C4677C4D26A315E926851F1F5B870678BAB2BC5DA515"
$pic &= "7A1C75DC9EE0B5A136B40C331B0EB584CE632BC4A3AB18261852AB89298242332069043BF9A79D234965778C8E7"
$pic &= "2F8F1B40B940D3BFB4A265456098991918930930F7E463B513DD0F204C4883921B590471D13C48956219D190D74"
$pic &= "11AAB2256B576E084F7787FA6DF9717482145DD236A2F0F3F0307C7297E16278A262D19FE347C726C6E3F1DA49D"
$pic &= "3ACCE3F03C250032E078FAF03D37FCACAEEF4FC9BF7A8A0A3F9A9DE82DFF3076EDCB8AF27C81890FCFF00773E9B"
$pic &= "4C6B01AEAC1191F778F76B5136EAA6DE4BEFF3F1F6F157BA7F5B611B9AA346A8A9EB6EFF00BEDEBF3E1CF44444D"
$pic &= "91364E2FC3BF487AE3C3F977C67BF1278B455839A5ABD4932000584A256491B2BD482238DB2CAAF7A23DFDA3DCA"
$pic &= "4454DD77EC6ECE3FC29C52451A2EE8C6A2FC384FC52E02F8A7D43770BD1780FFBEE071C06E7EFB7EFFF007D77E4"
$pic &= "7E79EEFC95EA4F63CF5FD75E79E7AF1F5C7EBD2131AC547A8631B1C4D6EC89E49B796DF1FD3F4DB7E17A888D444"
$pic &= "4F244DB8BEC89E49B715C29EB8F5FFFD9"

#include <GUIConstants.au3>

#Region ### START Koda GUI section ### Form=
$AForm1 = GUICreate("Embedded Image Example", 310, 130, 193, 125)
GUISetState(@SW_SHOW)
$Label1 = GUICtrlCreateLabel("Embedded Image:", 8, 16, 199, 27)
GUICtrlSetFont(-1, 14, 800, 0, "Verdana")
$Label2 = GUICtrlCreateLabel("Embedded Button:", 7, 63, 203, 27)
GUICtrlSetFont(-1, 14, 800, 0, "Verdana")
GUICtrlCreatePic("1.jpg", 200, 0, 200, 80) ; external file from the original post; shows nothing if 1.jpg is absent
FileWrite(@TempDir & "\chip.jpg", Binary($pic)) ; dump the embedded hex data to a temp file
GUICtrlCreatePic(@TempDir & "\chip.jpg", 210, 5, 95, 44)
GUICtrlSetState(-1, $GUI_DISABLE) ; plain image: disabled so it ignores clicks
$but1 = GUICtrlCreatePic(@TempDir & "\chip.jpg", 210, 57, 95, 44) ; same image reused as a clickable "button"
GUICtrlSetCursor(-1, 0) ; give the "button" a different cursor
FileDelete(@TempDir & "\chip.jpg") ; the controls have loaded the image, so the temp file can go
#EndRegion ### END Koda GUI section ###

While 1
    $nMsg = GUIGetMsg()
    Switch $nMsg
        Case $GUI_EVENT_CLOSE
            Exit
        Case $but1 ; was "Case $Exit", an undeclared variable; the picture "button" exits too
            Exit
    EndSwitch
WEnd
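For completeness, the FileInstall() route recommended earlier in the thread could look something like the sketch below. This is only an illustration: the source path, file name and control sizes are made-up placeholders, not taken from any post above.

#include <GUIConstants.au3>

; FileInstall() bundles the (compile-time literal) source file into the
; compiled EXE and extracts it at run time, so the image travels with the app.
FileInstall("C:\images\background.jpg", @TempDir & "\background.jpg", 1) ; 1 = overwrite if present

$hGui = GUICreate("Background demo", 400, 300)
GUICtrlCreatePic(@TempDir & "\background.jpg", 0, 0, 400, 300) ; hypothetical background image
GUICtrlSetState(-1, $GUI_DISABLE) ; keep the picture from swallowing clicks
GUISetState(@SW_SHOW)

While 1
    If GUIGetMsg() = $GUI_EVENT_CLOSE Then ExitLoop
WEnd
FileDelete(@TempDir & "\background.jpg") ; tidy up the extracted copy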
2017-10-19 22:20:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 13475.714897859545}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823478.54/warc/CC-MAIN-20171019212946-20171019232946-00332.warc.gz"}
https://physics.stackexchange.com/questions/219832/why-does-torricellis-law-seems-to-fail-when-water-speeds-up-after-i-put-my-fing/709123
# Why does Torricelli's law seem to fail when water speeds up after I put my finger in a hose?

We are taught that the speed of the fluid through a hole in a water-filled tank obeys Torricelli's law, $v=\sqrt{2gh}$; thus the speed is independent of the size of the hole. Why is it, then, that making the hole at the end of a hose smaller (for instance, by using my finger) increases the speed? (Note: I was told that this is because the situation is different and what I have is constant pressure from the street, but this is not the case in my house, where we have a tank on the roof.)

What you have given as Torricelli's law is a common simplification. The full statement is:

$$v = \sqrt\frac{2gh}{1-\frac{a^2}{A^2}}$$

Here $$a$$ is the area of the small hole and $$A$$ is the area of the bottom of the tank. This complete formula shows the dependence on the area of the opening, which you decrease by using your finger. When generalising, we often consider $$A\gg a$$ and hence neglect the $a^2/A^2$ term in the denominator, to get what you wrote earlier, i.e. $$v = \sqrt{2gh}$$

Torricelli's law is just a restatement of the conservation of energy for a non-viscous, non-turbulent, incompressible liquid flow. Thus the maximum speed that can be obtained by making water flow through a hose (purely by the force of gravity, i.e. with a tank on the roof) is given by that formula. The water flows slower when it moves through a hose with a larger diameter because of the turbulence of the flow, which reduces its speed below the ideal limit.

• Even with a non-viscous, non-turbulent flow, Torricelli's law is still a simplification that does not take into account the continuity equation and assumes a large water reservoir. May 18 at 17:35

Torricelli's formula can be derived for a water jet running out of a tank in which the water does not move much and obeys the laws of hydrostatics. Inside a hose, all the water moves with roughly the same velocity, and Torricelli's formula does not apply, because moving water experiences considerable friction as it moves along the hose and behaves in a more complicated way that makes the laws of hydrostatics inapplicable. The pressure in a water element decreases as it moves along the hose. When you restrict the opening of the hose, more friction happens at the end, which decreases the water throughput and therefore also the speed of the water inside the hose. The friction is then not as strong, and the conditions are closer to the hydrostatic case, so the speed of the water at the end gets closer to the maximum possible value given by Torricelli's formula.

If you severely reduce the cross-section of the hose by putting your finger into it, you increase the pressure drop across the restriction and thus the flow rate decreases (you can of course block the flow altogether if your finger tightly fits or covers the hose's open end, or by "kinking" the hose). This reduces the volumetric flow rate $\dot{Q}$ ($\mathrm{m^3/s}$), but the water also has to flow through a smaller cross-section. Roughly, we can relate the two as follows:

$$\dot{Q}=vA,$$

or:

$$v=\frac{\dot{Q}}{A},$$

with $v$ ($\mathrm{m/s}$) the flow speed and $A$ ($\mathrm{m^2}$) the cross-section. So to maintain the same volumetric flow rate at a smaller $A$, $v$ needs to increase. Thus, seemingly paradoxically perhaps, you can reach a high $v$ at a low $\dot{Q}$. When you block the end of the tube gradually more and more, the flow speed $v$ will first increase, until you've blocked the opening completely, at which point $\dot{Q}=0$ and thus also $v=0$.
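To connect the formulas above (this derivation is added for completeness; it is the standard continuity-plus-Bernoulli argument behind the first answer): continuity gives $$Av_A = av,$$ where $v_A$ is the speed at which the free surface of the tank falls, and Bernoulli's equation between the free surface and the hole (both at atmospheric pressure) gives $$gh = \frac{v^2 - v_A^2}{2}.$$ Substituting $v_A = (a/A)\,v$ yields $$v^2\left(1 - \frac{a^2}{A^2}\right) = 2gh,$$ which rearranges to the quoted $v = \sqrt{2gh/(1 - a^2/A^2)}$ and reduces to $v=\sqrt{2gh}$ when $A \gg a$.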
For "reasonable" values of $A$, Torricelli's law is respected. The above also holds true for water drawn from the mains (constant pressure).

• "So to maintain the same volumetric flow rate [...]" Why SHOULD the same volumetric flow rate be maintained? Jun 12 at 14:25

Torricelli's law indicates the maximum dynamic pressure (proportional to velocity squared) that a given hydrostatic pressure can provide. If you put a thin (assume frictionless) nozzle on a non-frictionless hose, the maximum speed in the thin nozzle is restricted to that of Torricelli's law for the given head (height) at that section of the pipe, $$v=\sqrt{2gh}$$. By conservation of volumetric flow rate, the flow speed in the much wider hose then has to decrease; and because the hose is the only source of friction, less energy is lost to friction, since the energy lost to friction in a section of pipe is proportional to velocity squared.

Raising the pressure at an open outlet so that it approaches the source pressure simply ensures that there is less dynamic pressure in the supply pipework, meaning less flow velocity and less loss to friction. If the outlet pressure is zero, all the energy goes into dynamic pressure and friction. If it is close to the source pressure, only a fraction of the energy is taken up by dynamic pressure and friction, and all the remaining static pressure can be converted to dynamic pressure rather than friction, assuming a frictionless nozzle (because of the small length of a nozzle, the friction loss in a real nozzle is still low enough that the system won't come close to the friction loss of the nozzleless system). The maximum dynamic pressure (assuming 100% of the static pressure energy is converted to dynamic pressure) is the limit given by Torricelli's law.

As the nozzle width approaches zero, the static pressure before the nozzle approaches the static pressure $$\rho gh$$ of the supply, because the flow rate in the pipes approaches zero; in a frictionless nozzle, the velocity in the nozzle will then approach $$\sqrt{2gh}$$, except at a width of exactly zero, where $v=0$. In a nozzle with friction, as the width decreases, the velocity in the nozzle first approaches $$\sqrt{2gh}$$, but then reverses and approaches zero as the nozzle gets very small and friction inside the nozzle itself becomes significant.

Like decreasing the width of the nozzle while keeping the supply pipe the same, if you keep the nozzle the same and increase the width of the pipe supplying it, the static pressure before the nozzle will approach the maximum hydrostatic pressure for the height: the flow speed in the pipe decreases as the diameter increases, because the volume passing through the pipe must equal the volume coming out of the nozzle, which is limited by Torricelli's law (there can only be a certain speed in the nozzle, and it has a fixed diameter). So it has the same effect, except that widening the supply pipe means the speed of water leaving the nozzle starts to converge on the Torricelli limit even for larger-diameter nozzles, so a higher flow rate (speed times area) can be achieved with the wider nozzles (including no nozzle at all) than when the same nozzles are fed by a thinner supply pipe.
If you had a pipe of 2 cm diameter with a 1 cm nozzle, and a pipe of 4 cm diameter with a 2 cm nozzle, the static pressure before the nozzle in the second configuration would actually be slightly higher than in the first, because the physically wider supply pipe suffers less friction loss. Increasing the pipe diameter therefore does two things: it reduces the dynamic pressure in the pipe, which reduces friction loss, and a wider pipe also causes less friction loss per unit of speed of the water that is flowing.
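The friction claims in the last two answers can be made quantitative with the standard Darcy-Weisbach relation, added here for reference (the answers themselves do not cite it): the head lost to friction in a pipe section is $$h_f = f\,\frac{L}{D}\,\frac{v^2}{2g},$$ where $f$ is the friction factor, $L$ the pipe length and $D$ its diameter. The loss indeed grows with $v^2$, and, for the same flow speed, drops as the pipe gets wider.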
2022-08-15 04:10:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7669335603713989, "perplexity": 426.603329094432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00699.warc.gz"}
https://lttt.vanabel.cn/2012/02/28
## Some Examples Now, we take $f$ to be some special function to obtain […] ## Chern-Weil Theorem Recall that given a vector bundle on $M$, there exists […]
2022-10-04 06:29:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987012505531311, "perplexity": 1312.8393925023656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00014.warc.gz"}
https://ems.press/books/ecr/89/1773?na
# $p$-elementary subgroups of the Cremona group of rank 3

• ### Yuri Prokhorov

Moscow State University, Russian Federation

## Abstract

For the subgroups of the Cremona group $\mathrm{Cr}_3(\mathbb C)$ having the form $(\boldsymbol{\mu}_p)^s$, where $p$ is prime, we obtain an upper bound for $s$. Our bound is sharp if $p\ge 17$.
2023-03-27 00:14:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7234256267547607, "perplexity": 526.3371527293705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00321.warc.gz"}
https://www.newspapers.com/newspage/20932594/
# The Daily Messenger from Canandaigua, New York · Page 10

Publication: The Daily Messenger. Location: Canandaigua, New York. Issue date: Thursday, January 15, 1948. Page: 10.

THE DAILY MESSENGER, CANANDAIGUA, N. Y., THURSDAY, JANUARY 15, 1948

The Daily Messenger -- Published every afternoon except Sunday, Messenger Building, Phoenix Street, by Canandaigua Messenger, Inc. Floyd W. Emerson, editor and publisher; A. C. Waterbury, vice-president and treasurer; William H. Hawley, advertising manager. Phone: Business Office …; News Room ….

SUBSCRIPTION RATES -- By carrier in city, delivered at your door, 24 cents per week; single copy 5 cents. Entered as second class matter at the Post Office in Canandaigua, N. Y., under the Act of March 3, 1897. Rates delivered by office carrier, by the year, $12; single copies, 5 cents. Mail rates, payable strictly in advance, are: in Ontario and Yates Counties, one year, $5; 6 months, $3; 3 months, $1.50; 1 month, 55 cents; to New York state addresses outside Ontario and Yates Counties, one year, $7; 6 months, $3.50; 3 months, $1.75; 1 month, 75 cents; other addresses in the United States, one year, $8; 6 months, …; 3 months, $2; 1 month, $1; to Canadian addresses, one year, $9; 6 months, $4.50; 3 months, $2.25; 1 month, $1.

National Advertising Representatives: Burke, Kuipers & Mahoney, Inc., 420 Lexington Avenue, New York City; 203 North Wabash, Chicago; Atlanta, Dallas and Oklahoma.

Member of the Associated Press -- The Associated Press is entitled exclusively to the use for republication of all the local news printed in this newspaper, as well as all AP news dispatches.

LEAPING AT THE CHANCE

Performance Counts

In a recent address on the European aid problem, Senator Ball of Minnesota said: "I am convinced, and the record bears me out, that a free economy will always outproduce and provide a higher standard of living for all the people than either a socialistic or government-planned and controlled economy. I believe that when economic freedoms are liquidated, the other freedoms are in jeopardy, and the socialists have always had too much in common with the communists for my taste."

It makes very little difference to a nation what label a dictatorship wears. The results are always the same. The state dominates every phase of life within its borders. It buries every undertaking beneath a thick web of red tape and restrictive controls. It destroys the initiative and enterprise of individuals and groups alike. It stirs up discontent by making promises that cannot possibly be fulfilled. And, in the long run, it abrogates more and more of the nation's liberties in order to keep the ruling class in power.

This must be kept in mind by the American government in the development of our policy toward Europe. We, with only a fraction of the world's population, cannot indefinitely support the world without ruining ourselves. Many seem to have forgotten that an essential of the Marshall Plan in the beginning was that Europe must do all in her power to help herself. But, so far, Europe has done very little to lift herself out of the doldrums. As an example, Senator Ball cited the failure of the British socialized coal industry to produce enough to make possible a resumption of exports to western Europe. This made it necessary to devote much of Europe's broken-down railroad system to the job of moving coal from the Ruhr, a fact which is at the heart of the European economic problem. Mr. Ball suggests that, as a condition of continued aid, we insist that Britain supply a million tons of coal a month to European ports and thus alleviate the unbearable strain on the land transportation machine.

Lastly, it is up to us to show Europe, by example, that the free enterprise system has neither a superior nor an equal. The people of Europe must realize, in the face of totalitarian propaganda, that our system is the only one which is capable of bringing a nation both material abundance and spiritual power and freedom. We can best defeat communism by outperforming it.

You Never Can Tell

No one seemed more a typical New Yorker than the late Mayor Fiorello H. LaGuardia. Yet he spent most of his boyhood at Fort Whipple, Ariz., and was graduated from high school at Prescott, Ariz.

Year of Disaster

The year which just closed witnessed a record fire loss in the United States. It was a year in which hundreds of millions of dollars worth of property, much of it scarce, was needlessly destroyed. It was a year in which ten thousand or more people were burned to death, and other thousands disfigured and crippled for life. It was a year of disaster for a legion of American families.

We cannot repair the failures of the past. We can, however, let the lessons of the past guide us in the future. That is true of fire as it is of almost all problems. The fires which caused much havoc in 1947 were not, save for a tiny proportion, acts of God. They were, to the contrary, the fruits of human ignorance, inertia, and carelessness. The great majority of them began from the simplest causes: improperly maintained lighting and heating equipment; improper storage of flammables; thoughtlessness with matches and cigarettes, and so on.

Those fires could have been prevented easily. Instead, men and women worked on the dangerous theory that disaster couldn't come to them. But it did.

We have turned a fresh page now. During the next twelve months that page will be filled with a new record of death and destruction unless we face the problem with determination to improve it. It is up to all of us.

Speculation News Recalls "Case of the Jiggled Shade" to Boyle -- By Hal Boyle

WASHINGTON (AP) -- The present senate inquiry into grain speculation recalls the famous "Case of the Jiggled Windowshade," an historic scandal in the Department of Agriculture.

The Federal employe who jiggled the windowshade was reported to have made more that year by this one act than the president of the United States received in salary -- then $50,000. It was back in 1905. The man had just seen a secret crop report. By adjusting the windowshade he signalled to a conspirator outside whether the crop would be larger or smaller than expected.

Well, the prices on a commodity market rise in normal times if a small crop is forecast and fall if a huge crop is in sight. A trader who finds out this information in advance can thus buy or sell before the price changes and reap a profit. When he is dealing by hundreds of thousands of bushels, even a slight price change can make him big money.

The outcry over the jiggled windowshade led the Department of Agriculture to put in a foolproof system to assure that no news of its crop estimates would leak out until they were to be made public. To do this the newly created crop reporting board devised "the lock-up." This is a block-long corridor in the agriculture building which is sealed off the morning the monthly estimates of important national crops such as corn and wheat are to be issued. Guards are posted outside locked doors at each end of the corridor, all windowblinds are locked down, and the telephones are disconnected. The statisticians then go to work assembling the data. No one is permitted to leave the "lock-up" until the report has been completed and issued simultaneously to news reporters waiting in a guarded room.

Unaware of the restriction, the late Arthur M. Hyde, then secretary of Agriculture, tried to leave after signing his first departmental crop report. He had to wait, too. So, another time, did a man who had an urgent appointment with the president. A worker did get out once when word came his wife had been suddenly stricken ill. But an armed guard accompanied him to the hospital room.

"Since 1905 there has been no leak of any kind," said Jasper E. Pallesen, secretary of the crop reporting board. I asked him whatever happened to the man with the windowshade, and he referred me to an information specialist who is making a study of the case. "The best I have been able to learn," the information man said, "is that he was fined $5,000 after a long trial. But oldtimers in the department say he probably had made $70,000 out of one deal he pulled. He is dead now, but nobody is sure whether he died in disgrace or a millionaire."

Personal Health Service -- By William Brady, M. D.

Readers desiring to correspond with Dr. Brady should address their mail to him as follows: Dr. William Brady, Canandaigua Daily Messenger Bureau, Beverly Hills, Calif.

IRON DEFICIENCY ANEMIA

I don't know, and I know of no one who knows, whether secondary anemia or nutritional deficiency anemia is the more common kind. Secondary anemia is the lack or decrease of hemoglobin (the iron coloring matter which carries oxygen to the cells of the body and carbon dioxide from the functioning cells back to the lungs, for excretion and to pick up a fresh load of oxygen) and of corpuscles (cells), due to frequent small hemorrhages, apparent or hidden; to frequent exposure to atmosphere polluted with carbon monoxide; to various domestic and industrial poisons such as lead, arsenic, benzol, toluene, acetanilide; or to the blood-destroying effects of chronic infections such as sinusitis, dental root abscess (perhaps "silent," i.e. productive of no discomfort), chronic tonsillitis, syphilis, malaria, tuberculosis. Taking iron or other medicine or food for anemia is therefore comparable with trying to fill a sieve with water, unless the underlying cause of the anemia is discovered and remedied.

Nutritional deficiency (or iron deficiency) anemia was called chlorosis or the "green sickness" when you and I were young. Under that name it was worth fifty cents to a dollar …, as I recall with pain. Today we call it hypochromic anemia, which means that the characteristic of the condition is lack or decrease of hemoglobin or coloring matter in the blood. The term "iron deficiency anemia" is less descriptive than "nutritional deficiency anemia," for we know that the green sickness in younger women, or the parchment pallor of hypochromic anemia in middle-aged women with graying hair and wrinkling skin and sore tongue, indicates not merely lack of or faulty assimilation of iron, but of other nutritional essentials as well, notably proteins or amino-acids and the vitamin B complex.

Improved transportation of food, meat, fish, dairy products, fruit and vegetables, by refrigerator cars and trucks, vastly improving the diet of millions of North Americans in the wintertime, … accounts for the almost complete disappearance of the … in thirty years …

QUESTIONS AND ANSWERS

So It Hurts -- My children, aged … and seven, sometimes cry at night with … pains, which I suppose are growing pains. Must they …? (Mrs. …)

Answer -- … and nobody ever outgrows disease or illness. Common cause of the trouble is tetany. Send stamped envelope bearing your address for pamphlet "Adult Tetany and Growing Pains." In elderly adults the condition is more often cramps in the legs at night.

Unbidden Guests -- We are overrun with cockroaches in our new apartment building and nothing seems to drive them away. (W. R. W.)

Answer -- Try sprinkling powdered borax freely around corners and crevices where the roaches will find it. How to deal with most household pests is described in booklet "Unbidden Guests"; for a copy send twenty-five cents and a stamped self-addressed envelope.

Milk -- Troubled with chronic bronchitis for many years. Fond of milk, but … mucus and therefore should be excluded. (C. B.)

Answer -- In your place I'd continue drinking all the milk I wanted. If you have any doubts about it, try omitting it from your diet for a week or two weeks and see whether you are better without it.

(Copyright 1948, John F. Dille Co.)

Looking Backward -- Interesting items taken from the files of the Daily Messenger, 10, 25 and 50 years ago.

Ten Years Ago -- January 15, 1938: Announcement was made today by James J. Mirras of the awards in a recent essay contest conducted by the Goodie Shoppe. First prize went to Miss Dorothea Bently, 31 Gorham street; second, Frank H. Jeudevine, 58 South Main street; third, James A. Cal…, … Niagara street; fourth, Mrs. Mildred Seward, 234 Pleasant street; fifth, Genevieve P. …, South Main street. The judges were Howard L. Foster, Gil Brewer, and Mrs. James J. Mirras.

A reorganization meeting of the Canandaigua League of Women Voters was held yesterday afternoon with Mrs. R. J. Cuddeback, Howell street, president of the Ontario county league. Officers were elected as follows: president, Mrs. Grace T. Green; first vice-president, Mrs. John S. Flannigan; second vice-president, Mrs. John K. Graham; secretary, Mrs. Fred L. Anderson; treasurer, Mrs. Edward A. Fish.

Twenty-five Years Ago -- January 15, 1923: All officers of the Canandaigua Lake Transportation company were re-elected at the annual meeting Saturday afternoon. They are: president, W. L. Reed; vice-president, James Flynn; secretary, George W. Hamlin; treasurer, Henry A. Beeman.

The United States coal commission declared in its first report to Congress that "profiteering" by both operators and retailers is responsible for present high prices of bituminous and anthracite coal.

At the Canandaigua hotel, members of the Welsh Male Glee club will be guests of the Rotary club at their luncheon, and will entertain their hosts with several selections. Mrs. Rodney W. Pease and Mrs. T. R. Huirn will also be …

Fifty Years Ago -- 1898: Mr. and Mrs. Frederick N. Lo…, impersonators, who have won first place as interpreters of the great art of literary expression, promise to give three evenings of refined and elevating pleasure at the Congregational church beginning next Monday.

Canandaigua vital statistics for 1897 were: 72 births; 42 marriages, and 98 deaths. Towns: 26 births; 12 marriages, and 12 deaths.

Firemen Submit December Reports

NAPLES -- At a regular meeting of Maxfield Hose Fire company, Willard Clawson read a communication from the Jacob Schaeffer post, American Legion, thanking the men for the $50 contribution, and several cards of thanks from members who received sunshine boxes were read. The treasurer's report was given by Charles E. Brislin, and Chief Gordon Kennedy, in his report, stated that the fire truck was called to one fire last month, at the home of Chauncey H…, Naples-Woodville road, Dec. ….

Willard Presler and Francis Sheard, who were co-chairmen of the dance held Dec. 22, reported a net profit of approximately $120. Gordon Kennedy was appointed permanent delegate to the Naples youth center meetings, and it was voted to donate $20 to the local organization. Plans were made to locate the … on … street, equipped with electricity, in order that firemen may hold a carnival this summer. Bert Brand and Edgar Haynes have been appointed chairmen of the carnival committee. Charles Long, of Howard, asked firemen to act as sponsors of a traveling professional roadshow which would like to come to Naples. The firemen voted to ….

Grange Hears Law Enforcement Talk

HOPEWELL -- A law enforcement program was featured at a recent meeting of Hopewell Grange, with Edward M. Breen of the Doyle Detective bureau in Rochester as guest speaker. Program chairman was Porter Smith. Breen outlined activities of the bureau in locating missing persons and described the armored car division. He stated that only five per cent of the cases handled by the bureau are divorce actions.

HARMONY CIRCLE

BRISTOL CENTER -- Mrs. Earl Fletcher was Harmony Circle hostess Tuesday evening at the home of her mother, Mrs. Nellie Perrine, Bristol road. Officers for 1948 are: president, Mrs. Levi Corser; vice-presidents, Mrs. John De Smith, Mrs. Marion Gladding, Mrs. Benjamin Jones; secretary, Mrs. Burton Fletcher; and treasurer, Mrs. Madge Simmons. Mrs. Kenneth …

TO COLLECT TAXES

NAPLES -- Mrs. Katherine Francisco, town tax collector, will collect town and county taxes at the Hiram Maxfield State bank on Mondays, Thursdays and Fridays during January and February, from 9 a. m. until 3 p. m. All taxes paid during January will be collected without fee; during February the fee will be 1 per cent.

NAMED ADMINISTRATRIX

Bertha P. Bigham, Gorham, has been named administratrix of the estate of Belle P. Van Horn, also of Gorham, who died in Geneva last Nov. 22. The estate lists personal property of approximately $1,000.

AUXILIARY MEETS

PHELPS -- The Women's Auxiliary of St. John's Episcopal church will meet on Monday evening at the home of Mrs. Fred Westfall, Church street. Miss Erma Runyan will be the assistant hostess.

NEW GROUP

STANLEY -- Plans are being made for the organization of a young adult group in the First Congregational church. An organization meeting will be held Jan. 22 at 8 p. m. in the church.

Probably all nations, like their individuals, have about the same ratio of good and bad according to their opportunities, but it's hard to make the rest of us think so. One way to settle the Russian question might be to lock Vishinsky and Col. Robert R. McCormick in a room and let them fight it out.

A new book has the title, "The Mental Side of Golf." It will be news to many that there is a mental side to golf.

SALE AT CONNOLLY'S -- Originally priced at $10.95, now …; originally priced at $16.95, now …; originally priced at $18.95, now …. One special group of DRESSES, originally priced at $16.95. WINTER COATS, trimmed and untrimmed, sharply reduced for this clearance. 100% all-wool double-duty SNOW SUITS, regularly priced at …. Department Store, 195 So. Main St., Canandaigua.

Annual Clearance, WOMEN'S Shoes -- These shoes are going for LESS than cost: leathers, gabardines and suedes. VOGUAIRES, values to $7.50; SELBY STYL-EEZ and ENNA JETTICKS, $4.95, values to $8.50; growing girls' LOAFERS, large sizes, values to $5; Connies and Jacquelines, values to $7.50; odd lot of women's house SLIPPERS, $1.00. Davidson's -- SHOES FOR THE WHOLE FAMILY.

DANCE, Round and Square -- GRANGE HALL, Seneca Castle, EVERY FRI. NITE. Lewie Johnson Orch.

Tune in daily at 6:10 for local and vicinity news. Dial 4240.
2018-12-15 13:30:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3928427994251251, "perplexity": 10385.307591975019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826856.91/warc/CC-MAIN-20181215131038-20181215153038-00103.warc.gz"}
http://ogg.pepris.com/CN/10.11743/ogg20060116
• Regional geological structure and petroleum evaluation •

### Preliminary study of the boundary of the Kaijiang-Liangping trough in the northern Sichuan basin and its characteristics

Wei Guoqi1, Chen Gengsheng2, Yang Wei1, Yang Yu2, Jia Dong3, Zhang Lin1, Xiao Ancheng4, Chen Hanlin4, Wu Shixiang1, Jin Hui1, Shen Juehong1

1. Langfang Branch of Petroleum Exploration & Development Research Institute, PetroChina, Langfang, Hebei 065007, China; 2. Southwest Oil & Gasfield Company, PetroChina, Chengdu, Sichuan 610051, China; 3. Department of Earth Sciences, Nanjing University, Nanjing, Jiangsu 210039, China; 4. Department of Earth Sciences, Zhejiang University, Hangzhou, Zhejiang 310027, China

• Received: 2005-08-20; Online: 2006-02-25; Published: 2012-01-16
• Foundation item: National "Tenth Five-Year Plan" Science and Technology Research Project (2001BA606A04)

Abstract: The Kaijiang-Liangping trough controls the distribution of the platform-edge oolitic shoal of the Feixianguan Formation in the northern Sichuan basin. Seismic sections show that the eastern boundary appears as a breakpoint, while the western boundary appears as an onlap point. In terms of sedimentary petrography, basin and slope facies are developed between these two boundaries, represented by thin-bedded marl and debris-flow sediments, respectively. The earliest platform-edge oolitic shoal deposits are located along Jiangyou-Zitong-Nanchong-Linshui-Dianjiang. The Feixianguan Formation is of basin facies on the western side of the trough, with sediments over 600 m thick, and on the eastern side, with sediments over 500 m thick. Based on various lines of evidence, the western boundary of the trough is determined to run approximately along the Qinglin 1, Bailong 1, Si 1 and Longhui 1 wells, while its eastern boundary runs along the Tiandong 10, Chuanyue 83 and Chuanfu 82 wells. The trough is bounded by faults in the east, so the location of that boundary changed only slightly during the closing of the trough, while the western boundary is composed of slopes, so its location shifted gradually as the trough closed. The area of platform-edge oolitic shoal in the northern Sichuan basin is predicted to reach 3×10^4 km^2. On the western side of the trough, the platform-edge oolitic shoal is mainly distributed in the area encircled by Jiangyou, Guangyuan, Liangping and Linshui; on the eastern side, it is mainly distributed in Yunyang, Wanxian and Lianghekou (of Tongjiang).
2023-01-29 19:56:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22700414061546326, "perplexity": 12855.213101801166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499758.83/warc/CC-MAIN-20230129180008-20230129210008-00874.warc.gz"}
http://mathoverflow.net/questions/48238/automorphisms-of-projective-space
# Automorphisms of projective space [closed]

According to Wikipedia, Aut(P(V)) = PGL(V). Apparently this is proved by using sheaves generated by global sections, but I'm not familiar with this notion. I would appreciate it if anyone could provide a reference where this is proved.

- It's in Hartshorne, 7.1.1: books.google.com/… – Ben Webster Dec 4 '10 at 1:58
- Ah, thanks! – Adeel Dec 4 '10 at 3:09
- The question has been closed: a reference was provided in the comments, which was accepted by the OP. – Pete L. Clark Dec 4 '10 at 16:40
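For reference, a one-line gloss of the cited result (my paraphrase, not part of the thread): every automorphism of projective space over a field $k$ is induced by an invertible linear map, so $\operatorname{Aut}(\mathbb{P}^n_k) \cong \operatorname{PGL}(n+1,k)$; identifying $\mathbb{P}^n_k$ with $\mathbb{P}(V)$ for an $(n+1)$-dimensional $k$-vector space $V$, this reads $\operatorname{Aut}(\mathbb{P}(V)) \cong \operatorname{PGL}(V)$.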
2014-03-10 20:24:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9128034114837646, "perplexity": 970.6843651348775}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011005264/warc/CC-MAIN-20140305091645-00095-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-for-college-students-7th-edition/chapter-9-section-9-4-properties-of-logarithms-exercise-set-page-713/76
# Chapter 9 - Section 9.4 - Properties of Logarithms - Exercise Set: 76

False; Correct Statement: $\ln{e^e} = e$

#### Work Step by Step

RECALL: $\ln{x}=y \longrightarrow e^y=x$ (the natural logarithm is the base-$e$ logarithm). Thus, if $\ln{0}=y$, then $e^y=0$. However, $e^y$ is positive for every real $y$ (in particular, $e^0=1$, not $0$), so no such $y$ exists and the statement is false. A correct statement would be: $\ln{e^e} = e$.
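A quick worked check of the corrected statement (my addition): by the power rule for logarithms, $\ln{e^e} = e\ln{e} = e \cdot 1 = e$.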
2018-04-26 14:09:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7269487977027893, "perplexity": 916.0485367616797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948214.37/warc/CC-MAIN-20180426125104-20180426145104-00403.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=2733074
MathSciNet bibliographic data: MR2733074 55P35 (16W70 55P40 55U35). Wu, J. The functor $A^{\min}$ for $(p-1)$-cell complexes and $EHP$ sequences. Israel J. Math. 178 (2010), 349–391.
2016-07-31 06:35:44
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9911072254180908, "perplexity": 14976.331611960552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258950570.93/warc/CC-MAIN-20160723072910-00064-ip-10-185-27-174.ec2.internal.warc.gz"}
http://sci-gems.math.bas.bg/jspui/handle/10525/381
Please use this identifier to cite or link to this item: http://hdl.handle.net/10525/381

Title: Improving the Watermarking Process with Usage of Block Error-Correcting Codes
Authors: Berger, Thierry; Todorov, Todor
Keywords: Watermarking; Error-Correcting Codes; Reed-Solomon Codes; Software for Watermarking
Issue Date: 2008
Publisher: Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
Citation: Serdica Journal of Computing, Vol. 2, No 2, (2008), pp. 163-180
Abstract: The emergence of digital imaging and of digital networks has made duplication of original artwork easier. Watermarking techniques, also referred to as digital signature, sign images by introducing changes that are imperceptible to the human eye but easily recoverable by a computer program. Usage of error-correcting codes is one of the good choices in order to correct possible errors when extracting the signature. In this paper, we present a scheme of error correction based on a combination of Reed-Solomon codes and another optimal linear code as inner code. We have investigated the strength of the noise that this scheme withstands for a fixed capacity of the image and various lengths of the signature. Finally, we compare our results with other error-correcting techniques that are used in watermarking. We have also created a computer program for image watermarking that uses the newly presented scheme for error correction.
URI: http://hdl.handle.net/10525/381
ISSN: 1312-6555
Appears in Collections: Volume 2 Number 2
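The scheme described in the abstract concatenates a Reed-Solomon outer code with a linear inner code so that a signature damaged by attacks on the watermarked image can still be recovered. As a rough illustration of the outer Reed-Solomon step only (this is my sketch using the third-party Python package reedsolo, not the authors' program; the inner code and the actual embedding into pixel data are omitted):

```python
# Sketch: protecting a watermark signature with a Reed-Solomon outer code.
# The paper's full scheme also uses an optimal linear inner code and an
# embedding step into the image, both omitted here.
from reedsolo import RSCodec, ReedSolomonError

rsc = RSCodec(10)                 # 10 parity bytes: corrects up to 5 byte errors
signature = b"owner-id-2008"      # hypothetical watermark payload
codeword = rsc.encode(signature)  # payload + parity; this is what gets embedded

# Simulate noise introduced by attacks on the watermarked image:
corrupted = bytearray(codeword)
corrupted[0] ^= 0xFF
corrupted[5] ^= 0xFF

try:
    result = rsc.decode(bytes(corrupted))
    # Recent reedsolo versions return (message, full codeword, errata positions);
    # older versions return the message alone.
    decoded = result[0] if isinstance(result, tuple) else result
    assert bytes(decoded) == signature
except ReedSolomonError:
    print("too many errors to recover the signature")
```

Increasing the parity length trades watermark capacity for robustness, which is essentially the capacity-versus-noise trade-off the paper investigates.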
2016-12-03 19:40:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2747175395488739, "perplexity": 1326.5660935748792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541134.5/warc/CC-MAIN-20161202170901-00146-ip-10-31-129-80.ec2.internal.warc.gz"}
http://meldmd.org/_autosummary/meld.system.openmm_runner.transform.restraints.html
# meld.system.openmm_runner.transform.restraints¶ This module implements transformers that add restraint forces to the openmm system before simulation Classes AbsoluteCOMRestraintTransformer(options, …) COMRestraintTransformer(options, …) CartesianRestraintTransformer(options, …) ConfinementRestraintTransformer(options, …) Transformer to handle confinement restraints DefaultOrderedDict([default_factory]) MeldRestraintTransformer(options, …) OldRDCRestraintTransformer(options, …) RDCRestraintTransformer(options, …) YZCartesianTransformer(options, …) class meld.system.openmm_runner.transform.restraints.AbsoluteCOMRestraintTransformer(options, always_active_restraints, selectively_active_restraints)[source] add_interactions(system, topology)[source] Add new interactions to the system. This may involve: - Adding new forces, e.g. for restraints - Replacing an existing force with another, e.g. softcore interactions This method must return the modified system. If the transformer does not add interactions, it may simply return the passed values. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing finalize(system, topology) Finalize the transformer. This method is guaranteed to be called after all forces are added to the system and provides an opportunity to do bookkeeping. This method should not add any new forces. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing update(simulation, alpha, timestep)[source] Update the system according to alpha and timestep. This method is called at the beginning of every stage. It should update forces and parameters as necessary. Parameters • simulation (simtk.openmm.app.simulation) – OpenMM simulation object to be modified • alpha (float) – Current value of alpha, ranges from 0 to 1 • stage (int) – Current stage of the simulation, starting from 0 class meld.system.openmm_runner.transform.restraints.COMRestraintTransformer(options, always_active_restraints, selectively_active_restraints)[source] add_interactions(system, topology)[source] Add new interactions to the system. This may involve: - Adding new forces, e.g. for restraints - Replacing an existing force with another, e.g. softcore interactions This method must return the modified system. If the transformer does not add interactions, it may simply return the passed values. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing finalize(system, topology) Finalize the transformer. This method is guaranteed to be called after all forces are added to the system and provides an opportunity to do bookkeeping. This method should not add any new forces. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing update(simulation, alpha, timestep)[source] Update the system according to alpha and timestep. This method is called at the beginning of every stage. It should update forces and parameters as necessary. 
Parameters • simulation (simtk.openmm.app.simulation) – OpenMM simulation object to be modified • alpha (float) – Current value of alpha, ranges from 0 to 1 • stage (int) – Current stage of the simulation, starting from 0 class meld.system.openmm_runner.transform.restraints.CartesianRestraintTransformer(options, always_active_restraints, selectively_active_restraints)[source] add_interactions(system, topology)[source] Add new interactions to the system. This may involve: - Adding new forces, e.g. for restraints - Replacing an existing force with another, e.g. softcore interactions This method must return the modified system. If the transformer does not add interactions, it may simply return the passed values. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing finalize(system, topology) Finalize the transformer. This method is guaranteed to be called after all forces are added to the system and provides an opportunity to do bookkeeping. This method should not add any new forces. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing update(simulation, alpha, timestep)[source] Update the system according to alpha and timestep. This method is called at the beginning of every stage. It should update forces and parameters as necessary. Parameters • simulation (simtk.openmm.app.simulation) – OpenMM simulation object to be modified • alpha (float) – Current value of alpha, ranges from 0 to 1 • stage (int) – Current stage of the simulation, starting from 0 class meld.system.openmm_runner.transform.restraints.ConfinementRestraintTransformer(options, always_active_restraints, selectively_active_restraints)[source] Transformer to handle confinement restraints add_interactions(system, topology)[source] Add new interactions to the system. This may involve: - Adding new forces, e.g. for restraints - Replacing an existing force with another, e.g. softcore interactions This method must return the modified system. If the transformer does not add interactions, it may simply return the passed values. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing finalize(system, topology) Finalize the transformer. This method is guaranteed to be called after all forces are added to the system and provides an opportunity to do bookkeeping. This method should not add any new forces. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing update(simulation, alpha, timestep)[source] Update the system according to alpha and timestep. This method is called at the beginning of every stage. It should update forces and parameters as necessary. Parameters • simulation (simtk.openmm.app.simulation) – OpenMM simulation object to be modified • alpha (float) – Current value of alpha, ranges from 0 to 1 • stage (int) – Current stage of the simulation, starting from 0 class meld.system.openmm_runner.transform.restraints.DefaultOrderedDict(default_factory=None, *a, **kw)[source] clear() → None. Remove all items from od. copy() → a shallow copy of od[source] fromkeys(S[, v]) → New ordered dictionary with keys from S. 
If not specified, the value defaults to None. get(k[, d]) → D[k] if k in D, else d. d defaults to None. items() → a set-like object providing a view on D's items keys() → a set-like object providing a view on D's keys move_to_end() Move an existing element to the end (or beginning if last==False). Raises KeyError if the element does not exist. When last=True, acts like a fast version of self[key]=self.pop(key). pop(k[, d]) → v, remove specified key and return the corresponding value. If key is not found, d is returned if given, otherwise KeyError is raised. popitem() Remove and return a (key, value) pair from the dictionary. Pairs are returned in LIFO order if last is true or FIFO order if false. setdefault(k[, d]) → od.get(k,d), also set od[k]=d if k not in od update([E, ]**F) → None. Update D from dict/iterable E and F. If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() → an object providing a view on D's values class meld.system.openmm_runner.transform.restraints.MeldRestraintTransformer(options, always_active_restraints, selectively_active_restraints)[source] add_interactions(system, topology)[source] Add new interactions to the system. This may involve: - Adding new forces, e.g. for restraints - Replacing an existing force with another, e.g. softcore interactions This method must return the modified system. If the transformer does not add interactions, it may simply return the passed values. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing finalize(system, topology) Finalize the transformer. This method is guaranteed to be called after all forces are added to the system and provides an opportunity to do bookkeeping. This method should not add any new forces. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing update(simulation, alpha, timestep)[source] Update the system according to alpha and timestep. This method is called at the beginning of every stage. It should update forces and parameters as necessary. Parameters • simulation (simtk.openmm.app.simulation) – OpenMM simulation object to be modified • alpha (float) – Current value of alpha, ranges from 0 to 1 • stage (int) – Current stage of the simulation, starting from 0 class meld.system.openmm_runner.transform.restraints.OldRDCRestraintTransformer(options, always_active_restraints, selectively_active_restraints)[source] add_interactions(system, topology)[source] Add new interactions to the system. This may involve: - Adding new forces, e.g. for restraints - Replacing an existing force with another, e.g. softcore interactions This method must return the modified system. If the transformer does not add interactions, it may simply return the passed values. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing finalize(system, topology) Finalize the transformer. This method is guaranteed to be called after all forces are added to the system and provides an opportunity to do bookkeeping. This method should not add any new forces. 
Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing update(simulation, alpha, timestep)[source] Update the system according to alpha and timestep. This method is called at the beginning of every stage. It should update forces and parameters as necessary. Parameters • simulation (simtk.openmm.app.simulation) – OpenMM simulation object to be modified • alpha (float) – Current value of alpha, ranges from 0 to 1 • stage (int) – Current stage of the simulation, starting from 0 class meld.system.openmm_runner.transform.restraints.RDCRestraintTransformer(options, always_active_restraints, selectively_active_restraints)[source] add_interactions(system, topology)[source] Add new interactions to the system. This may involve: - Adding new forces, e.g. for restraints - Replacing an existing force with another, e.g. softcore interactions This method must return the modified system. If the transformer does not add interactions, it may simply return the passed values. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing finalize(system, topology) Finalize the transformer. This method is guaranteed to be called after all forces are added to the system and provides an opportunity to do bookkeeping. This method should not add any new forces. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing update(simulation, alpha, timestep)[source] Update the system according to alpha and timestep. This method is called at the beginning of every stage. It should update forces and parameters as necessary. Parameters • simulation (simtk.openmm.app.simulation) – OpenMM simulation object to be modified • alpha (float) – Current value of alpha, ranges from 0 to 1 • stage (int) – Current stage of the simulation, starting from 0 class meld.system.openmm_runner.transform.restraints.YZCartesianTransformer(options, always_active_restraints, selectively_active_restraints)[source] add_interactions(system, topology)[source] Add new interactions to the system. This may involve: - Adding new forces, e.g. for restraints - Replacing an existing force with another, e.g. softcore interactions This method must return the modified system. If the transformer does not add interactions, it may simply return the passed values. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing finalize(system, topology) Finalize the transformer. This method is guaranteed to be called after all forces are added to the system and provides an opportunity to do bookkeeping. This method should not add any new forces. Parameters • system (simtk.openmm.System) – OpenMM system object to be modified • topology (simtk.openmm.Topology) – OpenMM topology object to be modified and/or used for indexing update(simulation, alpha, timestep)[source] Update the system according to alpha and timestep. This method is called at the beginning of every stage. It should update forces and parameters as necessary. 
Parameters • simulation (simtk.openmm.app.simulation) – OpenMM simulation object to be modified • alpha (float) – Current value of alpha, ranges from 0 to 1 • stage (int) – Current stage of the simulation, starting from 0
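All transformers above share the same three-hook lifecycle: add_interactions installs forces, finalize does bookkeeping, and update adjusts parameters with alpha at each stage. The sketch below is only a rough illustration of that pattern; the harmonic z-restraint is hypothetical and not a MELD feature.

```python
# Minimal sketch of the transformer lifecycle used by this module.
# The harmonic z-restraint here is illustrative, not part of MELD.
import simtk.openmm as mm

class ExampleRestraintTransformer:
    def __init__(self, options, always_active_restraints,
                 selectively_active_restraints):
        self.force = None

    def add_interactions(self, system, topology):
        # Install a custom force; the (possibly modified) system is returned.
        self.force = mm.CustomExternalForce("k * z^2")
        self.force.addGlobalParameter("k", 1.0)
        for atom in topology.atoms():
            self.force.addParticle(atom.index, [])
        system.addForce(self.force)
        return system

    def finalize(self, system, topology):
        # Bookkeeping only; no new forces may be added at this point.
        pass

    def update(self, simulation, alpha, timestep):
        # Ramp the force constant down as alpha goes from 0 to 1.
        simulation.context.setParameter("k", 1.0 - alpha)
```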
2019-05-22 09:52:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17600446939468384, "perplexity": 5872.186415213809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256778.29/warc/CC-MAIN-20190522083227-20190522105227-00352.warc.gz"}
https://fgstudy.com/quiz/mcqs/convex-lenses-are-used-correction
## Convex lenses are used for the correction of:

- Long-sightedness
- Short-sightedness
- Cataract
- None of these
2020-07-07 18:32:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8348028659820557, "perplexity": 2275.1092611495624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655894904.17/warc/CC-MAIN-20200707173839-20200707203839-00225.warc.gz"}
https://cs.stackexchange.com/questions/133797/is-there-a-method-to-generate-the-complement-of-a-context-free-grammar
# Is there a method to generate the complement of a context-free grammar?

Given the languages $L_0 = \{w \in \{0,1\}^* : w \text{ is a palindrome}\}$ and $L_1 = \{w \in \{0,1\}^* : w \text{ is not a palindrome}\}$, meaning $L_1$ is the complement of $L_0$, we want to find grammars for both languages. $G(L_0): S \to \epsilon \mid 0S0 \mid 1S1 \mid 0 \mid 1$ is easy to come up with, but $G(L_1)$ is much more complex. In this case, we have the simple CFG $G_0$ and want to find the CFG $G_1$ for its complement, which can be much more complex. Is there a method to derive the complement of a CFG?

If $L_0$ is a context-free language, this doesn't guarantee that its complement is context-free. For example, consider the language $$L_0 = \{a,b,c\}^* \setminus \{a^nb^nc^n : n \geq 0\}.$$ This language is context-free, but its complement (with respect to $\{a,b,c\}^*$) is not.

Another way to formulate your question is as follows: given a context-free grammar for a language $L$, is there an algorithm that either constructs a context-free grammar for the complement of $L$, or determines that the complement of $L$ is not context-free? Such an algorithm could be used to decide whether the complement of $L$ is context-free. However, this is undecidable, as we now show, following Hendrik Jan's notes.

Recall that given a grammar $G$ over an alphabet $\Sigma$, it is undecidable whether $L(G) = \Sigma^*$. Let $\#$ be a new symbol, and construct a grammar for the language $$L = L_0 \# \Sigma^* \cup \Sigma^* \# L(G),$$ where $L_0$ is a context-free language whose complement is not context-free (if $|\Sigma| \geq 3$, we can use the one above, and if $|\Sigma| = 2$, we can encode $a,b,c$ as $a,ba,bba$; if $|\Sigma| = 1$ then it is easy to check whether $L(G) = \Sigma^*$).

If $L(G) = \Sigma^*$ then $L=\Sigma^*\#\Sigma^*$, and so the complement of $L$ is context-free. Otherwise, suppose that $w \notin L(G)$. Then $$\overline{L} \cap \Sigma^* \# w = (\Sigma^* \setminus L_0) \# w,$$ which is not context-free, and so $\overline{L}$ itself is not context-free (since the context-free languages are closed under intersection with a regular language). This shows that $\overline{L}$ is context-free iff $L(G) = \Sigma^*$.

The problem of deciding whether $L(G) = \Sigma^*$ is actually not recursively enumerable. This means that there is no algorithm which, on input $G$, halts iff $L(G) = \Sigma^*$ (however, there is a simple algorithm that halts iff $L(G) \neq \Sigma^*$, namely go over all words in $\Sigma^*$ in parallel, and check whether each of them belongs to $L(G)$). Therefore there is no algorithm that, given a context-free grammar for a language $L$, halts iff the complement of $L$ is context-free. In other words, even the following solution to your problem does not exist: an algorithm that attempts to construct a context-free grammar for the complement of the given context-free language, and either halts with the grammar, or never halts (if the complement is not context-free).

- I feel stupid, but how is that first language context-free? – cody Jan 6 at 0:09
- That has been answered before several times. – Yuval Filmus Jan 6 at 6:01
- Roughly speaking, either the word is not in $a^*b^*c^*$, or it is of the form $a^ib^jc^k$ where one of the following holds: $i>j$, $i<j$, $i>k$, $i<k$, $j>k$, $j<k$. – Yuval Filmus Jan 6 at 6:57
- Oh ok, I see now. I was missing "transitivity of equality" as a hint. – cody Jan 6 at 20:14
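A side note (my addition, not part of the original thread): for the palindrome example in the question, the complement happens to be context-free even though complements of context-free languages need not be. A grammar for $L_1$, the non-palindromes over $\{0,1\}$, guesses a mirrored position at which the two symbols disagree: $$S \to 0S0 \mid 1S1 \mid 0A1 \mid 1A0, \qquad A \to 0A \mid 1A \mid \epsilon.$$ Every derived word has a $0$ at some position $i$ from the left and a $1$ at position $i$ from the right (or vice versa), and conversely every non-palindrome has such a pair, so this generates exactly $L_1$.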
2021-06-13 01:37:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 45, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8318250775337219, "perplexity": 121.54018931164022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487598213.5/warc/CC-MAIN-20210613012009-20210613042009-00383.warc.gz"}
https://turbomachinery.asmedigitalcollection.asme.org/mechanicaldesign/article-abstract/131/5/051010/477943/Position-Analysis-Workspace-and-Optimization-of-a?redirectedFrom=fulltext
The present paper describes the analytical solution of position kinematics for a three degree-of-freedom parallel manipulator. It also provides a numeric example of workspace calculation and a procedure for its optimization. The manipulator consists of a base and a moving platform connected to the base by three identical legs; each leg is provided with a P̱PS chain, where P̱ designates an actuated prismatic pair, P stands for a passive prismatic pair, and S a spherical pair. The direct analysis yields a nonlinear system with eight solutions at the most. The inverse analysis is solved in three relevant cases: (i) the orientation of the moving platform is given, (ii) the position of a reference point of the moving platform is given, and (iii) two rotations (pointing) and one translation (focusing) are given. In the present paper it is proved that case (i) yields an inverse singularity condition of the mechanism; case (ii) provides a nonlinear system with four distinct solutions at the most; and case (iii) allows finding geometrical configurations of the actuated pairs that minimize parasitic movements in the case of a pointing/focusing operation of the manipulator.
2023-01-31 12:41:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48425179719924927, "perplexity": 14893.36016412203}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499871.68/warc/CC-MAIN-20230131122916-20230131152916-00697.warc.gz"}
http://tex.stackexchange.com/tags/titles/new
# Tag Info

2 The following example patches the \frametitle command to update the length \titlelen that gives the length of the title under the current theme's font: \documentclass{beamer} \let\Tiny\tiny% http://tex.stackexchange.com/a/94159/5764 \usepackage{etoolbox} \makeatletter \newlength{\titlelen} \tracingpatches \patchcmd{\beamer@@frametitle}% <cmd> ...

2 First, why not just use \insertframetitle? Second, I don't think \beamer@frametitle is defined outside of a frame. That's why you probably get the error message. Third, you should also consider the font (shape, size, ...) in your calculations. Have a look at this example. \documentclass{beamer} \newlength\myframetitlelength \begin{document} ...

1 By replacing \insertsubtitle with \visible<2>{\insertsubtitle} in the default definition of the title page, the subtitle only becomes visible on the second slide, but its space is already reserved on the first slide. For more information, I recommend the section about overlays in the beameruserguide. \documentclass{beamer} \usetheme{default} ...

1 I kept generating a numbered blank front page upon adding a one-page PDF as external material that served as the document's cover page. I was able to remove the blank front page when I added an even number of PDF pages, indicating the addition of two pages in the options tab with the command: pages={1-2} I am using the Tufte template, with two-sided ...

0 You can alter the vertical space between the title and the edge of the poster by passing the titletotopverticalspace option to \maketitle. However, I wouldn't recommend fully removing the spacing as you asked (and as I did below), because if you want to print the poster, the printer could cause trouble: it's unlikely that it accurately prints to the edge ...

0 Looks like this is not as straightforward as it seems at first sight. The title is printed with \maketitle, so it is a little bit more complicated to alter the font size of the title than for the other blocks. To change the settings you can make use of the \settitle command. However, this seems to override all settings of the title block, which probably ...

0 Using the observation that egreg made in his comment on Werner's answer, we can get a very simple solution: \documentclass[12pt,twocolumn]{article} \usepackage{lipsum} \title{Untitled} \author{Me} \begin{document} \makeatletter \@twocolumnfalse \maketitle \@twocolumntrue \makeatother \lipsum[1-2] \end{document} All we had to do was pretend we're in one ...

1 The "content" of \title and \subtitle is stored in the "internal" commands \@title and \@subtitle. To use them (without using \maketitle or a similar command) in the document, you have to use \makeatletter (and \makeatother): \documentclass{scrreprt} \title{Title} \subtitle{Subtitle} \begin{document} \makeatletter\@title, \@subtitle\makeatother \end{document}

3 You can also use \texorpdfstring from hyperref, as the problem is (generally speaking) based on the use of formatting commands while harvesting the metadata. That way you prevent \alert from ever getting in touch with PDF-specific procedures. The compiler usually doesn't like to see anything but plain text there -- especially with TikZ commands it becomes really ...

3 If \title is defined after \begin{document} but before \maketitle, the problem is solved.
\documentclass{beamer} \usepackage[utf8]{inputenc} \usetheme{m}%\usetheme{m}%-->problem \usepackage{tikz} \usetikzlibrary{arrows,shapes} \begin{document} \author{} \title{Hello \protect\alert{World}!} \begin{frame} \titlepage \end{frame} \end{document} ...

0 This solution provides a little more automation and keeps the user interface as is, so to say that you can keep on writing \title[short text]{text\footnote{text}}. The basic idea is to stop the internal macro \@adminfootnotes from disabling the usual footnote mechanism, through \xpatchcmd\@adminfootnotes{\let\@makefnmark\relax}{}{}{}, and then use the ...

0 Firstly, you should complete the coding by copying the way authblk modifies the \maketitle command: it saves a copy in \AB@maketitle and then redefines the tabular environment. In authblk the code is \let\AB@maketitle=\maketitle \def\maketitle {{\renewenvironment{tabular}[2][]{\begin{center}} {\end{center}} ...

1 Here is a way to keep the footnote marker too. You need to save the definitions and restore them just before issuing the \footnotemark command. There is then some juggling to get the correct counter values. The title and the footnotes then come out as intended, showing that \thanks etc. are not disturbed. \documentclass{amsart} \makeatletter ...

1 You can easily add text before or after the title page elements via \addtobeamertemplate{title page}{before material}{after material} Thus you can put your supervisor's name towards the bottom of the title page via \addtobeamertemplate{title page}{}{\begin{center}Supervisor\end{center}} \documentclass{beamer} \begin{document} \title{Talk title} ...

3 The workaround is quite simple, but there will be no footnote marker: \documentclass{amsart} \begin{document} \newcommand\myfootnotetitle{\spaceskip=0pt \scshape I want this in Small Caps} \title{Title\footnote{\protect\myfootnotetitle}} \author{A. U. Thor} \maketitle \vspace*{\fill} {\footnotesize\myfootnotetitle\par} % for checking \end{document} ...

4 beamer has a command \donotcoloroutermaths to remove math coloring. You can insert this into the template for titles as in the example below. I use red as the maths color for better clarity in the demonstration. \documentclass{beamer} \setbeamercolor{math text}{fg=red} \addtobeamertemplate{frametitle}{\donotcoloroutermaths} \begin{document} ...

6 You can use the command \emptythanks to clear the list: \documentclass{article} \usepackage{titling} \begin{document} \title{title1} \author{someone1 \thanks{hi@email.com}} \maketitle \newpage \emptythanks \title{title2} \author{someone2 \thanks{bye@email.com}} \maketitle \end{document}

10 After removing packages and code which do not make the problem immediately disappear, the following remains: \documentclass[11pt]{article} \usepackage{authblk} \begin{document} \title{this is the title} \maketitle \end{document} With the following warning: LaTeX Warning: No \author given. Also, let us make authblk/\maketitle happy: ...

0 It is cleaner and clearer to avoid hard-coding formatting in the content of commands like \title. Although this is typically a once-off command in a document - if only because \maketitle enforces this by wiping everything - it is still best avoided, I think. And the alternative is not the scary-looking patching of internal commands. It is, as in the case of ...

0 I think creating a custom title page is better suited to this than fiddling with page headers and footers.
\documentclass[11pt]{report} \usepackage[T1]{fontenc} \begin{document} \begin{titlepage} University \vfill \begin{center} {\LARGE\bfseries Title here \bigbreak} {\large My name \medbreak} \today \end{center} \vfill ...
2016-05-03 01:16:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9595314860343933, "perplexity": 5768.210440460865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860118321.95/warc/CC-MAIN-20160428161518-00089-ip-10-239-7-51.ec2.internal.warc.gz"}
http://www.reference.com/browse/Diagonally+dominant
# Diagonally dominant matrix

In mathematics, a matrix is said to be diagonally dominant if in every row of the matrix, the magnitude of the diagonal entry in that row is larger than the sum of the magnitudes of all the other (non-diagonal) entries in that row. More precisely, the matrix $A$ is diagonally dominant if $$|a_{ii}| > \sum_{j\neq i} |a_{ij}| \quad \text{for all } i,$$ where $a_{ij}$ denotes the entry in the $i$th row and $j$th column.

## Variations

The definition in the first paragraph sums entries across rows. It is therefore sometimes called row diagonal dominance. If one changes the definition to sum down columns, this is called column diagonal dominance. The definition in the first paragraph uses a strict inequality. It is therefore sometimes called strict diagonal dominance. If a weak inequality ($\geq$) is used, this is called weak diagonal dominance. If an irreducible matrix is weakly diagonally dominant, but in at least one row (or column) is strictly diagonally dominant, then the matrix is irreducibly diagonally dominant.

## Applications and properties

By the Gershgorin circle theorem, a strictly (or irreducibly) diagonally dominant matrix is non-singular. This result is known as the Levy–Desplanques theorem. A Hermitian diagonally dominant matrix with real non-negative diagonal entries is positive semi-definite. If the symmetry requirement is eliminated, such a matrix is not necessarily positive semi-definite; however, the real parts of its eigenvalues are non-negative. No (partial) pivoting is necessary for a strictly column diagonally dominant matrix when performing Gaussian elimination (LU factorization). The Jacobi and Gauss–Seidel methods for solving a linear system converge if the matrix is strictly (or irreducibly) diagonally dominant. Many matrices that arise in finite element methods are diagonally dominant. A slight variation on the idea of diagonal dominance is used to prove that the pairing on diagrams without loops in the Temperley-Lieb algebra is nondegenerate. For a matrix with polynomial entries, one sensible definition of diagonal dominance is that the highest power of $q$ appearing in each row appears only on the diagonal. (The evaluations of such a matrix at large values of $q$ are diagonally dominant in the above sense.)
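As a small illustration of the row-sense definition above (my own sketch, not from the source), strict row diagonal dominance can be checked numerically in a few lines:

```python
import numpy as np

def is_strictly_row_diagonally_dominant(A):
    """True if |a_ii| exceeds the sum of |a_ij|, j != i, in every row."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    return bool(np.all(diag > A.sum(axis=1) - diag))

A = [[4.0, 1.0, 2.0],
     [1.0, 5.0, 3.0],
     [0.0, 2.0, 3.0]]
print(is_strictly_row_diagonally_dominant(A))  # True: 4 > 3, 5 > 4, 3 > 2
```

By the Levy–Desplanques theorem quoted above, a True result certifies that the matrix is non-singular.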
2014-04-20 19:44:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.965347170829773, "perplexity": 353.1759088460432}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/25808/hyperref-pdf-table-of-contents-messed-up-probably-bookmarks-problem
I have trouble with my thesis document. The sidebar table of contents (when opening in a PDF viewer) has the wrong ordering. Even though chapter 1 and chapter 2 are on the same hierarchical plane, in the sidebar PDF ToC chapter 2 is under chapter 1 (and chapter 3 and all other chapters are under chapter 2). The normal table of contents in the document is fine. I think it has something to do with bookmarks (which I'm not 100% sure what they are). After I fix a mistake in my LaTeX document that made it stop compiling fully, the next compile ends with the error: Runaway argument? {\376\377\0002\000.\0005\000.\0001\000\040\000R\000o\000t\000a\000t\0\ETC. ./thesis.tex:293 (which is \begin{document}): File ended while scanning use of \@@BOOKMARK. Trashing the aux file clears that, and the next compile is fine again (the ordering of the PDF ToC is still messed up). How do I debug that? I can't really disable packages, as they are tightly integrated. Any suggestions are welcome.

- There is a missing parenthesis in a macro whose contents are also set as a bookmark. However, I suppose your thesis will be printed. Why do you then need hyperref? – Herbert Aug 16 '11 at 9:07

The bookmarks are the things in the sidebar. The runaway error probably means that one of your \chapter etc. commands contains something which breaks when hyperref tries to put it in the bookmark. Inspect the aux and the out-file. Perhaps you can find the culprit. Or put \end{document} in the middle of your document and then move it around until you find the point where the error appears. You can try to protect fragile commands with \protect\gls.... Or use \texorpdfstring: tex.ac.uk/cgi-bin/texfaq2html?label=texorpdf – Ulrike Fischer Aug 16 '11 at 10:30
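To make the \texorpdfstring suggestion concrete, here is a minimal sketch (not from the original thread; the chapter title is invented for illustration). The first argument is what LaTeX typesets; the second is the plain-text string hyperref writes into the PDF bookmark, which cannot contain math or fragile commands.

```latex
\documentclass{report}
\usepackage{hyperref}
\begin{document}
% Give hyperref a plain-text alternative for the bookmark string,
% since PDF bookmarks cannot contain math or fragile commands.
\chapter{\texorpdfstring{Rotation by $90^\circ$}{Rotation by 90 degrees}}
Some body text.
\end{document}
```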
2016-06-28 03:49:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571685194969177, "perplexity": 2213.6328377595055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00053-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.scienceforums.net/topic/79439-here-is-practical-explanation-about-next-life-purpose-of-human-life-philosophicalreligious-facts-theories-etc/?tab=comments
# Here is Practical Explanation about Next Life, Purpose of Human Life, philosophical/religious facts, theories etc. ## Recommended Posts Practical Explanation ( For Example ) :- 1st of all can you tell me every single seconds detail from that time when you born ?? ( i need every seconds detail ?? that what- what you have thought and done on every single second ) can you tell me every single detail of your 1 cheapest Minute Or your whole hour, day, week, month, year or your whole life ?? if you are not able to tell me about this life then what proof do you have that you didn't forget your past ? and that you will not forget this present life in the future ? that is Fact that Supreme Lord Krishna exists but we posses no such intelligence to understand him. there is also next life. and i already proved you that no scientist, no politician, no so-called intelligent man in this world is able to understand this Truth. cuz they are imagining. and you cannot imagine what is god, who is god, what is after life etc. _______ So you have to ask from mother, "Who is my father?" And if she says, "This gentleman is your father," then it is all right. It is easy. Otherwise, if you makes research, "Who is my father?" go on searching for life; you'll never find your father. ( now maybe...maybe you will say that i will search my father from D.N.A, or i will prove it by photo's, or many other thing's which i will get from my mother and prove it that who is my Real father.{ So you have to believe the authority. who is that authority ? she is your mother. you cannot claim of any photo's, D.N.A or many other things without authority ( or ur mother ). if you will show D.N.A, photo's, and many other proofs from other women then your mother. then what is use of those proofs ??} ) same you have to follow real authority. "Whatever You have spoken, I accept it," Then there is no difficulty. And You are accepted by Devala, Narada, Vyasa, and You are speaking Yourself, and later on, all the acaryas have accepted. Then I'll follow. I'll have to follow great personalities. The same reason mother says, this gentleman is my father. That's all. Finish business. Where is the necessity of making research? All authorities accept Krsna, the Supreme Personality of Godhead. You accept it; then your searching after God is finished. Why should you waste your time? _______ all that is you need is to hear from authority ( same like mother ). and i heard this truth from authority " Srila Prabhupada " he is my spiritual master. im not talking these all things from my own. ___________ in this world no 1 can be Peace full. this is all along Fact. cuz we all are suffering in this world 4 Problems which are Disease, Old age, Death, and Birth after Birth. tell me are you really happy ?? you can,t be happy if you will ignore these 4 main problem. then still you will be Forced by Nature. ___________________ if you really want to be happy then follow these 6 Things which are No illicit sex, No gambling, No drugs ( No tea & coffee ), No meat-eating ( No onion & garlic's ) 5th thing is whatever you eat 1st offer it to Supreme Lord Krishna. ( if you know it what is Guru parama-para then offer them food not direct Supreme Lord Krishna ) and 6th " Main Thing " is you have to Chant " hare krishna hare krishna krishna krishna hare hare hare rama hare rama rama rama hare hare ". 
_______________________________ If your not able to follow these 4 things no illicit sex, no gambling, no drugs, no meat-eating then don,t worry but chanting of this holy name ( Hare Krishna Maha-Mantra ) is very-very and very important. Chant " hare krishna hare krishna krishna krishna hare hare hare rama hare rama rama rama hare hare " and be happy. if you still don,t believe on me then chant any other name for 5 Min's and chant this holy name for 5 Min's and you will see effect. i promise you it works And chanting at least 16 rounds ( each round of 108 beads ) of the Hare Krishna maha-mantra daily. ____________ Here is no Question of Holy Books quotes, Personal Experiences, Faith or Belief. i accept that Sometimes Faith is also Blind. Here is already Practical explanation which already proved that every1 else in this world is nothing more then Busy Foolish and totally idiot. _________________________ Source(s): every 1 is already Blind in this world and if you will follow another Blind then you both will fall in hole. so try to follow that person who have Spiritual Eyes who can Guide you on Actual Right Path. ( my Authority & Guide is my Spiritual Master " Srila Prabhupada " ) _____________ if you want to see Actual Purpose of human life then see this link : ( www.asitis.com {Bookmark it }) read it complete. ( i promise only readers of this book that they { he/she } will get every single answer which they want to know about why im in this material world, who im, what will happen after this life, what is best thing which will make Human Life Perfect, and what is perfection of Human Life. ) purpose of human life is not to live like animal cuz every1 at present time doing 4 thing which are sleeping, eating, sex & fear. purpose of human life is to become freed from Birth after birth, Old Age, Disease, and Death. ##### Share on other sites ! Moderator Note Please do NOT put religious threads in mainstream science sections. Moved from Biology to Religion. ##### Share on other sites ! Moderator Note Is it possible you could provide a few line summary of your argument? We have noted that it can be off-putting to see an OP with a large amount of text and/or links to offsite documents; we believe it may help foster good argument if the OP was asked to provide a short abstract of their idea. This abstract should be short, uncomplicated, and introduce the main argument and conclusion of the post. ##### Share on other sites • 1 month later... Special Note :- Intelligence has to do with the soul, not simply with the brain. Take electricity, for example. Electricity moves between gross elements and through a gross wire. But the electricity itself -- it is not those elements, not that wire. It is subtle. ##### Share on other sites In what relation does the rememberance of your past prove the existance of Krishna? As well, an intelligent man would first ask if Krishna existed rather than try to understand it. Because, how can you truly understand what doesn't exist? ##### Share on other sites In what relation does the rememberance of your past prove the existance of Krishna? As well, an intelligent man would first ask if Krishna existed rather than try to understand it. Because, how can you truly understand what doesn't exist? For Example :- how can child understand that who is his real father ? would he accept anyOne as his father ? answer is :- Just like you learn who is your father. You take the version of your mother and you believe that "He is my father." Otherwise there is no other way. 
How can you know your father? The only means is his mother recommends, "My dear boy, he is your father." And that is perfect, that's all. Otherwise you cannot know who is your father. *Similarly* you have to *approach* to the *Actual Authority* then you can *understand* the *Actual Reality.* ____________________ *( it is called analogy in which Authority is like mother in analogy and Krishna The Supreme Personality of Godhead is like Father.)* ##### Share on other sites This talk about fathers seems a bit strange. Modern DNA testing shows that quite a lot of people don't know who their father is. I believe that, in my case, it's the man who was married to my mother- but I accept that I can't strictly be sure of that. Again, there are plenty of people known to be adopted- their biological father has no real relevance to them. The man who earned the money that paid for the family upkeep, taught them how to play football, and stood up for them against the playground bully is, in a very important sense, their father- no matter whose DNA they carry. So, even if my dad isn't strictly my dad, he has earned my love and respect by his actions since I was born.. Now, lets look at this "Lord Krishna" you talk about. What evidence is there that he even exists? None as far as I can see. You haven't even tried to show any. More to the point, what has he done for me? Did he feed my family as we grew up? Did he teach me to play football? Did he do nothing? In plenty of cases, he stood idly by, while people were hurt and killed. What sort of "father" does that? Better to have an imperfect earthly father who actually helps than a "heavenly" father who does nothing to help millions of his children when they need it. ##### Share on other sites "For Example :- how can child understand that who is his real father ? would he accept anyOne as his father ?" If your speaking about the uncertainty that your father is your real 'biological' father, then a DNA test would have the sufficient authority to say whether he is or not. To say otherwise is to refute the validity of the DNA test, which massive amounts of time, experimentation, research, and money was put into. The problem with the analogy is that I have proof that I've seen my parents, and -if- I've never met them, I at least know that I'm a product of procreation between two human beings, that alone would support their existence. With Krishna, I have never seen or heard of any evidence that proves its existence. Edited by Stetson ##### Share on other sites This talk about fathers seems a bit strange. Modern DNA ************skip************hen they need it. ______________ 1st of all you want to see Krishna The Supreme personality of Godhead or want to understand him direclty right ? ______________ but 1st look at position of your material eyes you are already blind. how ? just tell me are you able to see anything in the morning without sunlight ? and what to speak of darkness of night with light ? and still you are so much puffed up and 1st look at the position of your gross material eyes. you are already blind even in the morning without sunlight and what to speak of darking of night without light ? _____________________ and 2nd thing is this that you want to understand krishna ? but look at the position of your intelligence. and my 1st claim is this that you are all idiots. How ? 
_______________ simply tell me all details about your *1 cheap minute ?* ( details of every single second that what- what you have thought or done on every single second ) and if you *cannot* tell me about *1 cheap minute* then how can you will tell me every single detail of your *whole hour, day, week, month, year or your whole life ?* ________________ then what is *use* of this your *so called education* if you cannot tell me or anyOne everything *as it is* ? *and in same way you forgot your past life also and you are forgetting everything now also.* therefore your mind is not very broad. what is that called ? " Crippled." ___________________________ *********** and still you are so much falsely puffed up at your gross material eyes and intelligence ? but you are not even able to tell me who is your real father without help your mother. what is your position ? it is nothing and still like fools challenging that i want to see your god or show me evidence. _________ 1st look at your gross material sense they are simply imperfect and still you want to understand unlimited God but your lmited sense ? ______________ therefore so called experts like you who are simply mental speculators and you are suffering from what we call "Doctor Frog's philosophy." There was once a frog who had lived all his life in a well. One day a friend visited him and informed him of the existence of the Atlantic Ocean. "Oh, what is this Atlantic Ocean?" asked the frog in the well. "It is a vast body of water," his friend replied. "How vast? ls it double the size of this well?" "Oh, no, much larger," his friend replied. "How much larger? Ten times the size?" In this way the frog went on calculating. But what was the possibility of his ever understanding the depths and far reaches of the great ocean? Our faculties, experience, and powers of speculation are always limited. The frog was always thinking in terms relative to his well. He had no power to think otherwise. Similarly, the scientists are estimating the Absolute Truth, the cause of all causes, with their imperfect senses and minds, and thus they are bound to be bewildered. The essential fault of the so-called scientists is that they have adopted the inductive process to arrive at their conclusions. For example, if a scientist wants to determine whether or not man is mortal by the inductive process, he must study every man to try to discover if some or one of them may be immortal. The scientist says, "I cannot accept the proposition that all men are mortal. There may be some men who are immortal. I have not yet seen every man. Therefore how can I accept that man is mortal?" This is called the inductive process. And the deductive process means that your father, your teacher, or your guru says that man is mortal, and you accept it. ##### Share on other sites " 1st of all you want to see Krishna The Supreme personality of Godhead or want to understand him direclty right ?" No, obviously, before any of that, I want some sort of evidence that he actually exists. That's what comes First- some evidence. And you have yet to supply any at all. " my 1st claim is this that you are all idiots." ##### Share on other sites You have provided no scientific evidence for your claims. People on this forum are looking for proof. If you don't have any, they won't believe you. Edited by Endercreeper01 ##### Share on other sites "1st of all you want to see Krishna The Supreme personality of Godhead or want to understand him direclty right ?" 
I want to see the evidence of his existence. "just tell me are you able to see anything in the morning without sunlight ? and what to speak of darkness of night with light ? and still you are so much puffed up and 1st look at the position of your gross material eyes. you are already blind even in the morning without sunlight and what to speak of darking of night without light ?" I can see many things in the morning when the sun isn't up, I.e. light from the moon or an artificial source. As well, my eyes see just fine, if I was blind I would not be able to react to stimuli from the light. "and 2nd thing is this that you want to understand krishna ? but look at the position of your intelligence. and my 1st claim is this that you are all idiots." Your 'claim' is irrelevant. If being an idiot is to approach things with scrutiny and to not take things on faith, then I must be one. "simply tell me all details about your *1 cheap minute ?* ( details of every single second that what- what you have thought or done on every single second ) and if you *cannot* tell me about *1 cheap minute* then how can you will tell me every single detail of your *whole hour, day, week, month, year or your whole life ?*" If a person remembered everything that occurred in the past with uncanny accuracy, there wouldn't be any room to learn new things. But our brain has a miraculous function where we can remember important details about our past. No matter, what would an incredible memory prove? That we aren't perfect? That is a give in. "then what is *use* of this your *so called education* if you cannot tell me or anyOne everything *as it is* ? *and in same way you forgot your past life also and you are forgetting everything now also.* therefore your mind is not very broad. what is that called ? " Crippled."" Education is meant to prepare to work within your society. You are so quick to dismiss the value of education that you haven't realized that you would be illiterate without it. And in no way is education simply, remember this, remember that. It's about understanding things and learning new things. "and still you are so much falsely puffed up at your gross material eyes and intelligence ? but you are not even able to tell me who is your real father without help your mother. what is your position ? it is nothing and still like fools challenging that i want to see your god or show me evidence." I did list the DNA test as a means of identifying your biological father, you not reading does not mean that I was not able to tell you, that is your fault. My position is that Krishna does not exist unless if such evidence were to be presented to support it. You simply can't indoctrinate a scientific community based on faith. "therefore so called experts like you who are simply mental speculators and you are suffering from what we call "Doctor Frog's philosophy."" I'm no expert and never have I purported to be. Experts in their respective fields aren't speculators. Speculation is where you make a conjecture, or an opinion accompanied by a lack of firm evidence. Experts have dedicated great parts of their life to research. To call them speculators is a disgrace for the work they have done to make the world a better place. If we didn't have scientists modern day technology would not exist. "Our faculties, experience, and powers of speculation are always limited. The frog was always thinking in terms relative to his well. He had no power to think otherwise. 
Similarly, the scientists are estimating the Absolute Truth, the cause of all causes, with their imperfect senses and minds, and thus they are bound to be bewildered. The essential fault of the so-called scientists is that they have adopted the inductive process to arrive at their conclusions." And scientists think in terms of internationally set measurements, i.e. the metric system. They don't estimate, they use the scientific method accompanied by mathematics. No scientist today is going to get away with spouting conjecture backed up by inductive reasoning, and then all of a sudden it becomes a fact or truth. It will undergo critical peer review by experts before it is even thought of as a serious theory to be considered by the scientific community. And with that requires vast amounts of data, time, and research. So far you haven't provided any evidence, only biased philisophical views and attacks made towards other posters. Please take a moment and formulate some tangible evidence that can be used to support the existence of Krisna. Edited by Stetson ##### Share on other sites " 1st of **********skip***********rules. So people, they sometimes say, "Can you show me God? Have you seen God?" These questions sometimes we meet. So the answer is here. Yes, you can see God. Everyone can see God. I am also seeing God. But there must be the qualification. Just like God is there Suppose a motorcar is there, something is wrong there. Everyone is seeing. But one engineer or mechanic, he sees differently. Therefore we have to go there. "What is the wrong in this car? It is not running." He immediately touches some machine part; it runs. So you all rascals, you do not know that "How I can see God if I have not the qualification?" The machine has gone wrong, I am seeing the machine. And the engineer, the mechanic, he is also seeing the machine. But his seeing and my seeing is different. He's qualified to see. Therefore when the machine has gone wrong, immediately he touches some part, it runs. So if for a machine we require so much qualification, and we want to see God without any qualification? Just see the fun. Without any qualification. Rascal, you are all so rascal, so fool, that they want to see God with your nuisance qualification. Krsna says in the Bhagavad-gita: naham prakasah sarvasya yoga-maya-samavrtah: [bg. 7.25] "I am not exposed to everyone. Yogamaya, yogamaya is covering." So how you can see God? But this rascaldom is going on, that "Can you show me God? Have you seen God?" God has become just like a plaything. "Here is God. He is incarnation of God." Na mam duskrtino mudhah prapadyante naradhamah [bg. 7.15]. you are all sinful, rascals, fools, lowest of the mankind. They inquire like that: "Can you show me God?" What qualification you have acquired, that you can see God? Here is the qualification. What is that? Tac chraddadhana munayah. One must be first of all faithful. Faithful. Sraddadhanah. He must be very much eager to see God, actually. Not that as a proclivity, frivolous thing, "Can you show me God?" A magic, just like God is a magic. No. He must be very serious: "Yes, if there is God... We have seen, we have been informed about God. So I must see." There is a story in this connection. It is very instructive; try to hear. One professional reciter was reciting about Bhagavata, and he was describing that Krsna, being very highly decorated with all jewels, He is sent for tending the cows in the forest. So there was a thief in that meeting. 
So he thought that "Why not then go to Vrndavana and plunder this boy? He is in the forest with so many valuable jewels. I can go there and catch the child and take the, all the jewels." That was his intention. So, he was serious that "I must find out that boy. Then in one night I shall become millionaire. So much jewelries. No." So he went there, but his qualification was that "I must see Krsna, I must see Krsna." That anxiety, that eagerness, made it possible that in Vrndavana he saw Krsna. He saw Krsna the same way as he was informed by the Bhagavata reader. Then he saw, "Oh, oh, you are so nice boy, Krsna." So he began to flatter. He thought that "Flattering, I shall take all the jewels" (laughter). So when he proposed his real business, "So may I take some of your these ornaments? You are so rich." "No, no, no. You... My mother will be angry. I cannot..." (laughter) Krsna as a child. So he became more and more eager for Krsna. And then... By Krsna's association, he had already become purified. Then, at last, Krsna said, "All right, you can take." Then he became a devotee, immediately. Because by Krsna's association... So some way or other, we should come in contact with Krsna. Some way or other. Then we'll be purified. Kamad bhayad dvesyat. Just like the gopis.. The gopis came to Krsna being captivated by His beautiful features. They were young girls, and Krsna was so beautiful. So actually, they came to Krsna being lusty, but Krsna is so pure that they became first-class devotees. There is no comparison of their devotion. Because they loved Krsna with heart and soul. That is the qualification. That is the qualification. They loved so much Krsna that they didn't care for family, for reputation. When they were going at dead of night... Krsna's flute was there, and they were all fleeing. Their father, their brother, their husband: "Where you are going? Where you are going in this dead of night?" They didn't care. They neglected their children, their family, everything: "We must go to Krsna." So this is required. We must be very, very eager so that... And many gopis who were forcibly stopped, going to Krsna, they lost their life. Just see how much eager they are. So this eagerness is wanted. Then you can see God. Either you become lusty or a thief or a murderer or whatever it may be. Some way or other, if you develop this eagerness, that "I must see Krsna," then Krsna will be seen. ##### Share on other sites Stop posting tripe and post some evidence that this God of yours actually exists. ##### Share on other sites Stop posting tripe and post some evidence that this God of yours actually exists. fools paradise is the answer do you understand it Mr. so called expert ? ##### Share on other sites You are insulting people and proselytizing. Both of which are against the rules. It also casts your beliefs in a negative light. If you care about nothing else, I would hope you would at least care about that. ##### Share on other sites fools paradise is the answer do you understand it Mr. so called expert ? Just to clarify, do you mean "Fools' paradise"? "a state of happiness based on a person's not knowing about or denying the existence of potential trouble. "they were living in a fool's paradise, refusing to accept that they were in debt"" from Are you saying that if i learned more I would be less happy? That doesn't seem to make any sense. It certainly is not evidence for the existence of any God. Is he a figment of your imagination, or is the fault with you? 
##### Share on other sites Calling someone a fool, idiot, rascal, sinful, and worst of all mankind doesn't support your argument. It's actually counter intuitive. Why don't you just humor us and actually take your time to understand science rather than shoot it down because all you know and were brought up to know was your religion. If you think about it, when have you ever tried to understand science? If you never took the time to understand it, how do you know it's false? And by understand it, I don't mean read biased opinions of theists. If you already have tried to humor science, I'm afraid you're looking in the wrong place or are just taking it wrong. ##### Share on other sites You are insulting people and proselytizing. Both of which are against the rules. It also casts your beliefs in a negative light. If you care about nothing else, I would hope you would at least care about that. it is called social convention that you can speak very palatable and flattering and you can't speak very unpalatable truth that is called social convention. ___________________ and im in debate right ? that means i can't follow this social convention and i must speak the real truth that you are all so called intelligent are nothing more then idiots. _______________ what you all and your mr.so called expert has done here ? other then imposing some cheap opinion or personal experience ? is this dry talking is your science ? _________________ you are all Fools No.`1 “A fool is accepted by another fool." For Example :-" fool’s paradise."All of you are fools and you have created your own paradise. Do you know that story? One was drinking, so his friend said, ‘Oh, you are drinking, you’ll go to hell.’ “‘No, why? My father drinks.’ “‘Well, he’ll also go to hell.’ “‘Oh, my brother drinks.’ “‘So he’ll also go to hell.’ “’My mother…’ In this way, the whole list was passed. Then he said, ‘Everyone will go to hell then where is hell? It is paradise! If father is going, then mother is going, then I am going, then brother is going, then where is hell?’ “It is like that. There’s no question of fool. If everyone, all of us are fool, then where is the question of intelligent? ‘Hey, we are intelligent.’ This is your conclusion. Edited by Jaya Jagannath ##### Share on other sites Then you're fooling yourself if you believe that you can convince what you believe to be fools. I don't believe you are a fool for your beliefs, just different in our ways of thinking. If you were told by your parents your whole life, and the people around you, that unicorns existed and the sky was pink, you'd believe them. Just like religion. If your parents or peers give you no freedom of choice to pick what you think is right, are they your own beliefs? I was born into a Christian family and was told, by my parents and peers, the different systems of belief. I chose science because of its appeal to reason and evidence that offers detailed explanations on why things are. What appeal does the belief of Krishna, in absence of reason and evidence, have to mankind? ##### Share on other sites Jaya Jagannath, My word! what a lot of tosh. You don't understand the nature of debate. It is acceptable to say "your ideas are wrong" but it is not acceptable to say "you are a fool" What is needed in debate is evidence and reasoning. You have provided neither. You have also not quite finished reading some things- for example, I'm described as a resident expert, but the expertise is explicitly labeled as being in Chemistry. 
It would be better if you understood more before you argued so incompetently. It's hardly going to matter because, unless you start doing a much better job of demonstrating the ability to think (as opposed to parroting arguments you have heard elsewhere) you are going to get banned. ##### Share on other sites In this way the frog went on calculating. But what was the possibility of his ever understanding the depths and far reaches of the great ocean? Our faculties, experience, and powers of speculation are always limited. The frog was always thinking in terms relative to his well. He had no power to think otherwise. Similarly, the scientists are estimating the Absolute Truth, the cause of all causes, with their imperfect senses and minds, and thus they are bound to be bewildered. The essential fault of the so-called scientists is that they have adopted the inductive process to arrive at their conclusions. The essential flaw in your argument is that science attempts to arrive at an "Absolute Truth" of some kind. Scientific investigation simply seeks to observe and describe the natural world. "Absolute truths" tend to lie in the realm of religion and the supernatural, which is outside the purview of science - which is ambivalent to the supernatural. As a demonstration of this in practice, most scientific results will be presented with a probability (or p) value - a statistical measure of how likely an answer is to be correct. P values can approach, but never reach 1 - so no scientific result could ever be said to be absolute. In fact, in the field I work in (evolutionary biology) I would say most if not all of our model based research is wrong, to an extent. We simply aim to provide the least wrong interpretation of the available data in order to answer our hypotheses - which is quite different from trying to provide any "absolute truth". On the other hand, if a concept that is religious in nature begins to make claims about the natural world - (e.g. the world is 6,000 years old) I personally would expect such an assertion to live up the same expectations of empirical evidence and mechanistic explanation I expect of a scientific explanation before I would entertain accepting it. Making a fundamental false assumption about the scientific method, and then using that false assumption as a basis to start slinging insults at scientists doesn't seem like a very rational, sensible or logical position. Such positions are not very well tolerated in the sciences, or on this board, so if you're motivation is to have some sort of constructive debate rather than to soapbox, I'd politely suggest changing your approach to a more engaged and polite one. Also, your word processor's bold function appears to be activating at random. ##### Share on other sites You are insulting people and proselytizing. Both of which are against the rules. It also casts your beliefs in a negative light. If you care about nothing else, I would hope you would at least care about that. ! Moderator Note I would like to make the above and official warning to you, Jaya Jagannath. Our rules prohibit soap boxing / preaching. If you cannot do this, you will find your time here to be very short. In the meantime, thread closed. ##### Share on other sites This topic is now closed to further replies. ×
2021-05-10 08:49:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3257738947868347, "perplexity": 2584.6115319292985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00194.warc.gz"}
http://edoc.mpg.de/display.epl?mode=doc&id=544825&col=61&grp=916
Institute: MPI für Physik — Collection: MPI für Physik
ID: 544825.0, MPI für Physik / MPI für Physik

Identified baryon and meson distributions at large transverse momenta from Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV

Authors:
Date of Publication (YYYY-MM-DD): 2006
Title of Journal: Physical Review Letters
Journal Abbrev.: Phys.Rev.Lett.
Issue / Number: 97
Start Page: 152301
Audience: Not Specified
Intended Educational Use: No
Abstract / Description: Transverse momentum spectra of $\pi^{\pm}$, $p$ and $\bar{p}$ up to 12 GeV/c at mid-rapidity in centrality selected Au+Au collisions at $\sqrt{s_{_{NN}}} = 200$ GeV are presented. In central Au+Au collisions, both $\pi^{\pm}$ and $p(\bar{p})$ show significant suppression with respect to binary scaling at $p_T > 4$ GeV/c. Protons and anti-protons are less suppressed than $\pi^{\pm}$ in the range $1.5 < p_{T} < 6$ GeV/c. The $\pi^-/\pi^+$ and $\bar{p}/p$ ratios show at most a weak $p_T$ dependence and no significant centrality dependence. The $p/\pi$ ratios in central Au+Au collisions approach the values in p+p and d+Au collisions at $p_T > 5$ GeV/c. The results at high $p_T$ indicate that the partonic sources of $\pi^{\pm}$, $p$ and $\bar{p}$ have similar energy loss when traversing the nuclear medium.
Classification / Thesaurus: STAR
Comment of the Author/Creator: 6 pages, 4 figures
External Publication Status: published
Document Type: Article
Communicated by: N.N.
Full Text: nucl-ex/0606003.pdf [223,00 Kb] [Comment: file from upload service]
2020-08-03 19:06:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7339351773262024, "perplexity": 2401.3314511515605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735823.29/warc/CC-MAIN-20200803170210-20200803200210-00106.warc.gz"}
http://15462.courses.cs.cmu.edu/fall2018/lecture/introgeometry/slide_054
Slide 54 of 68

dvanmali: I noticed that the Bernstein basis seems a lot like applying a Taylor series expansion with an increase in basis. Were these derived from a similar notion? If so, couldn't we use a Taylor series for this?

theyComeAndGo: Why did we choose the Bernstein basis instead of Taylor series, Fourier series etc.?

keenan: For one thing, because Bernstein bases are fairly localized in space and hence have a natural relationship with the control points. For instance, the coefficients of the first and last bases will directly determine the endpoints of the curve. For the cubic basis in particular, the other two coefficients give direct control over tangents at endpoints. In short: because this basis is natural for manipulating curves.

keenan: @dvanmali Also, if you take the 3rd-order Taylor series of a given function, you can always express it in the Bernstein basis. This basis is, after all, a basis for all cubic polynomials, not a special class of them. I would think of the Taylor series as more of a "procedure" for approximating a given function in terms of its kth derivatives; this (truncated) series can then be written down in any basis. The typical way to write it, of course, is just in the monomial basis 1, x, x^2, x^3, ..., but this basis isn't very good from the perspective of locality.
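A small sketch (our own, not from the slide) of the locality keenan describes: evaluating a cubic Bézier curve in the Bernstein basis, where the first and last coefficients pin the endpoints of the curve.

```python
from math import comb
import numpy as np

def bernstein3(i: int, t: np.ndarray) -> np.ndarray:
    """Cubic Bernstein basis polynomial B_{i,3}(t) = C(3,i) t^i (1-t)^(3-i)."""
    return comb(3, i) * t**i * (1.0 - t)**(3 - i)

def bezier_cubic(points: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Evaluate sum_i B_{i,3}(t) * P_i for control points P_0..P_3."""
    return sum(bernstein3(i, t)[:, None] * points[i] for i in range(4))

control = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
t = np.linspace(0.0, 1.0, 5)
curve = bezier_cubic(control, t)
# B_{0,3}(0) = 1 and B_{3,3}(1) = 1, so curve[0] == P0 and curve[-1] == P3.
print(curve)
```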
2020-04-02 00:51:36
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9114021062850952, "perplexity": 313.03875801956383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506477.26/warc/CC-MAIN-20200401223807-20200402013807-00161.warc.gz"}
https://www.gamedev.net/forums/topic/215821-rotating---the-basic-2d-rotation/
# Rotating - The Basic 2d Rotation

## Recommended Posts

I have an array: int b[6][6], sx, sy; sx = 3; sy = 3; // Thus b[1][1] will not rotate. Now I want the entire array to rotate 90 degrees, etc.:

b[0][0] ---> b[2][0]
b[0][2] ---> b[0][0]
b[2][2] ---> b[0][2]
b[2][0] ---> b[2][2]

But that is only if it is a 3x3 object stored; if it's a 2x2, then:

b[0][0] ---> b[1][0]
b[1][0] ---> b[1][1]
b[1][1] ---> b[0][1]
b[0][1] ---> b[1][0]

I don't need the physical entries to change, just their positions, e.g.

Block[b[0][0]].X = 1 * cos(angle)
Block[b[0][0]].Y = 1 * sin(angle)

Please help -- it's for a Tetris game.

##### Share on other sites

I didn't quite get what you were asking for... I think for a Tetris game (only 90° rotations needed) I'd actually rotate only at drawing time... you could allocate a 4x4 array for every kind of piece there is and rotate it on the fly. --- Sebastian Beschke Just some student from Germany http://mitglied.lycos.de/xplosiff

##### Share on other sites

I would agree with you, but I am trying to get a nice rotation effect. Say vector b[3][3]; you would rotate it like this:

gameLoop { angle++; loop i++ { loop l++ { sizex = i - 1; // in this case -1, 0, 1 sizey = l - 1; b[i][l].x = sizex*cos(angle); b[i][l].y = sizey*sin(angle); } } }

I need to rotate those X+Y as if the middle of the array is the midpoint.

##### Share on other sites

To rotate 90 degrees: y = old_x; x = block_height - old_y;

See:
[x0][x1][x2][x3]
[y0][y1][y2][y3]

Rotate:
[y0][x0]
[y1][x1]
[y2][x2]
[y3][x3]

Run the piece through a loop and modify x and y like this to rotate 90 degrees to the right. To rotate 180: y = block_height - old_y; x = block_width - old_x; Do you understand? I can explain further.

EDIT: Are you trying to do smooth transition rotation? If so, my example won't work. If not, you don't need sin & cos. [edited by - Jiia on March 26, 2004 3:47:23 PM]

##### Share on other sites

quote: I need to rotate those X+Y as if the middle of the array is the midpoint

Then you should either make them zero (and use offsets for each other point), or draw them by using an origin. If you subtract the location of the points you want to make the origin from every other point, they are now the origin.
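Not from the original thread: a hedged sketch of the two approaches discussed above — the index remap for discrete 90° turns, and cos/sin offsets about the block's center for the smooth animation the OP wanted. All names and the sample piece are our own.

```python
import math

def rotate90(block):
    """Rotate an n x n block 90 degrees clockwise: (r, c) -> (c, n-1-r)."""
    n = len(block)
    return [[block[n - 1 - c][r] for c in range(n)] for r in range(n)]

def smooth_offsets(n, angle):
    """Offsets of each cell from the block's center, rotated by `angle`
    (radians) -- useful for drawing a smooth rotation animation."""
    mid = (n - 1) / 2.0
    out = []
    for r in range(n):
        for c in range(n):
            x, y = c - mid, r - mid            # position relative to center
            xr = x * math.cos(angle) - y * math.sin(angle)
            yr = x * math.sin(angle) + y * math.cos(angle)
            out.append((xr, yr))
    return out

piece = [[0, 1, 0],
         [1, 1, 1],
         [0, 0, 0]]
print(rotate90(piece))  # [[0, 1, 0], [0, 1, 1], [0, 1, 0]]
```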
2018-06-23 10:31:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17804916203022003, "perplexity": 4695.46701788533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864957.2/warc/CC-MAIN-20180623093631-20180623113631-00499.warc.gz"}
https://www.ques10.com/t/heat%20and%20mass%20transfer/?sort=rank&limit=all%20time&q=
Showing: heat and mass transfer — 4 results (page 1 of 1)
2020-05-28 22:13:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24759095907211304, "perplexity": 9054.742806021719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347400101.39/warc/CC-MAIN-20200528201823-20200528231823-00212.warc.gz"}
https://eccc.weizmann.ac.il/report/2021/096/
### Paper: TR21-096 | 8th July 2021 21:08

#### Keep That Card in Mind: Card Guessing with Limited Memory

TR21-096 | Publication: 9th July 2021 00:32

Abstract: A card guessing game is played between two players, Guesser and Dealer. At the beginning of the game, the Dealer holds a deck of $n$ cards (labeled $1, ..., n$). For $n$ turns, the Dealer draws a card from the deck, the Guesser guesses which card was drawn, and then the card is discarded from the deck. The Guesser receives a point for each correctly guessed card. With perfect memory, a Guesser can keep track of all cards that were played so far and pick at random a card that has not appeared so far, yielding in expectation $\ln n$ correct guesses. With no memory, the best a Guesser can do will result in a single correct guess in expectation. We consider the case of a memory bounded Guesser that has $m < n$ memory bits. We show that the performance of such a memory bounded Guesser depends greatly on the behavior of the Dealer. In more detail, we show that there is a gap between the static case, where the Dealer draws cards from a properly shuffled deck or a prearranged one, and the adaptive case, where the Dealer draws cards thoughtfully, in an adversarial manner. Specifically:

1. We show a Guesser with $O(\log^2 n)$ memory bits that scores a near optimal result against any static Dealer.

2. We show that no Guesser with $m$ bits of memory can score better than $O(\sqrt{m})$ correct guesses, thus, no Guesser can score better than $\min \{\sqrt{m}, \ln n\}$, i.e., the above Guesser is optimal.

3. We show an efficient adaptive Dealer against which no Guesser with $m$ memory bits can make more than $\ln m + 2 \ln \log n + O(1)$ correct guesses in expectation.

These results are (almost) tight, and we prove them using compression arguments that harness the guessing strategy for encoding.
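An illustrative aside, not part of the ECCC page: a quick Monte Carlo sketch (our own code) of the perfect-memory baseline. Guessing uniformly among the $k$ remaining cards succeeds with probability $1/k$, so the expected score is the harmonic number $H_n \approx \ln n$, matching the abstract's $\ln n$ figure.

```python
import random

def play(n: int, rng: random.Random) -> int:
    """Perfect-memory Guesser vs. a uniformly shuffled Dealer: each turn,
    guess uniformly at random among the cards not yet seen."""
    deck = list(range(n))
    rng.shuffle(deck)
    remaining = set(deck)
    score = 0
    for card in deck:
        if rng.choice(tuple(remaining)) == card:
            score += 1
        remaining.remove(card)
    return score

rng = random.Random(0)
n, trials = 128, 2000
avg = sum(play(n, rng) for _ in range(trials)) / trials
# Expected score is H_n = 1 + 1/2 + ... + 1/n ~ ln n + 0.577...
print(avg, sum(1.0 / k for k in range(1, n + 1)))
```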
2021-09-23 18:32:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4678989052772522, "perplexity": 1052.4483869751577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057427.71/warc/CC-MAIN-20210923165408-20210923195408-00004.warc.gz"}
https://www.sarthaks.com/2728315/which-of-the-following-fraction-is-the-smallest-frac-9-13-frac-17-26-frac-28-29-frac-33-52
# Which of the following fractions is the smallest? $\frac{9}{13}, \frac{17}{26}, \frac{28}{29}, \frac{33}{52}$

Which of the following fractions is the smallest? $\frac{9}{13}, \frac{17}{26}, \frac{28}{29}, \frac{33}{52}$

1. $\frac{28}{29}$
2. $\frac{33}{52}$
3. $\frac{17}{26}$
4. $\frac{9}{13}$

Correct Answer - Option 2: $\frac{33}{52}$

Given: 9/13, 17/26, 28/29, 33/52

Calculation:
⇒ 9/13 ≈ 0.6923
⇒ 17/26 ≈ 0.6538
⇒ 28/29 ≈ 0.9655
⇒ 33/52 ≈ 0.6346

∴ 33/52 is the smallest fraction
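The same comparison can be done exactly, without decimal rounding; a minimal sketch (our own, not from the original page) using Python's Fraction type:

```python
from fractions import Fraction

fracs = [Fraction(9, 13), Fraction(17, 26), Fraction(28, 29), Fraction(33, 52)]
# Fractions compare exactly (by cross-multiplication), so no rounding error.
print(min(fracs))  # 33/52
```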
2022-09-27 21:58:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4931710362434387, "perplexity": 3674.474436662938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00643.warc.gz"}
https://www.physicsforums.com/threads/snells-law-prism-forumla.427353/
# Snell's Law prism formula

1. Sep 8, 2010 ### glyon

Snell's Law prism - from theta2 -> theta3? Hi, I was wondering if there's a formula to go straight from theta1 to theta4, when the apex angle is known. Thanks. http://cord.org/cm/leot/course06_mod07/Fig3.gif [Broken] My problem is getting from theta2 to theta3. Thanks. P.S. This is not a homework question, I didn't even take physics in school when I had the chance! Last edited by a moderator: May 4, 2017

2. Sep 8, 2010 ### nasu

Consider the triangle with the angles theta2 and theta3, the one with two dotted sides and the third side made by the light ray in the prism. The third angle in this triangle is 180-A. (To see that this is so, consider the quadrilateral made by the two normals and the sides of the prism.) This will give you the relation between theta2 and theta3.

3. Sep 8, 2010 ### glyon

Thanks for getting back to me. However, I'm still struggling to see how that third angle is 180-apex. I believe that the relationship between theta2 and theta3 is simply apex = theta2 + theta3, but I can't see why! Thanks again!

4. Sep 8, 2010 ### Pyle

PLEASE DISREGARD - See next comment. First off, by looking at the drawing, I am assuming the normal lines are || to the rays and not perp. If they are perp then by definition theta1 = theta4 = 90. theta1 = theta2 + beta = theta3 + gamma = theta4. You don't need A. It does nothing for you. 180-A does not equal the third angle in the theta2-theta3 triangle except for one frequency of the incoming ray. Sigma is dependent on multiple factors. The angle A is only one of those. PLEASE DISREGARD - See next comment. Last edited: Sep 8, 2010

5. Sep 8, 2010 ### glyon

Thanks! Aren't the normal lines perpendicular to the prism edges rather than parallel to the rays though? Basically all I want to know is the exit angle for a set incident angle, n1 and n2.

6. Sep 8, 2010 ### Staff: Mentor

Consider the triangle bounded by the sides of the prism at the top, and the light ray through the prism at the bottom. One of its angles is A. The other two angles aren't labeled, but they're related to $\theta_2$ and $\theta_3$ (how?). What do those three angles add up to?

7. Sep 8, 2010 ### Pyle

Oops, wasn't paying attention. 180-A is the third angle in the theta2-theta3 triangle. Just run it through Snell's law twice. Sorry nasu, I was hasty.

8. Sep 8, 2010 ### glyon

Thanks, I think I figured it out now: theta3 = A - theta2, then apply Snell's law again for theta4. Last edited: Sep 8, 2010
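Putting the thread's conclusion together (our own sketch, not from the forum; the 45°/60° inputs and the glass index n2 = 1.5 are assumptions for illustration): apply Snell's law at the entry face, use theta3 = A - theta2 inside the prism, then apply Snell's law at the exit face.

```python
import math

def prism_exit_angle(theta1_deg: float, apex_deg: float,
                     n1: float = 1.0, n2: float = 1.5) -> float:
    """Exit angle theta4 (degrees) for a ray hitting a prism at theta1,
    via Snell's law at both faces and the relation theta3 = A - theta2."""
    theta1 = math.radians(theta1_deg)
    theta2 = math.asin(n1 / n2 * math.sin(theta1))   # refraction at entry face
    theta3 = math.radians(apex_deg) - theta2         # geometry inside the prism
    s = n2 / n1 * math.sin(theta3)
    if abs(s) > 1.0:
        raise ValueError("total internal reflection at the exit face")
    return math.degrees(math.asin(s))                # refraction at exit face

print(prism_exit_angle(45.0, 60.0))  # ~52.4 degrees for a 60-degree glass prism
```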
2018-04-24 16:41:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6410208940505981, "perplexity": 1821.49509684137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946807.67/warc/CC-MAIN-20180424154911-20180424174911-00628.warc.gz"}
http://stats.stackexchange.com/questions/5226/expectation-of-leftx-m-rightt-leftx-m-right-leftx-m-rightt-leftx-m-ri
Expectation of $\left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right)$

If $X=[x_1,x_2,...,x_n]^T$ is an $n$-dimensional random variable and we have $E\left\{X\right\} = M = \left[m_1,m_2,...,m_n\right]^T$, $Cov\left\{X\right\} = \Sigma = diag\left(\lambda_1,\lambda_2,...,\lambda_n\right)$, how can I express the following expectation in terms of $M$, $\Sigma$, and $n$ (and maybe raw $m_i$'s and $\lambda_i$'s)? $E\left\{ \left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right)\right\}$ Supposing the $x_i$'s are i.i.d. and normally distributed would be acceptable, but are these assumptions necessary?

Update: 1. I know that $E\left\{ \left(X-M\right)^T\left(X-M\right)\right\} = \sum_{i=1}^n \lambda_i$, but I don't think this helps in this case. 2. In section 8.2.4 Quartic Forms of the Matrix Cookbook there is a formula for calculating quartic expectations like this, but I don't want just a formula to solve it. I think there should be a simple solution to this problem because the covariance matrix is diagonal.

This is neither a quadratic nor a cubic form: it is quartic. (Section 6.2.3 of the Matrix Cookbook does not appear to be applicable.) – whuber♦ Dec 8 '10 at 4:03 @whuber: you are right. The applicable formula is in 8.2.4 Mean of Quartic Forms of the Matrix Cookbook. And not surprisingly it gives $3\sum_{i}\lambda_i^2 + \sum_i \sum_j \lambda_i \lambda_j$ – Isaac Dec 8 '10 at 14:27 I believe the second (double) sum is restricted to i != j. – whuber♦ Dec 8 '10 at 15:31 @whuber: Yes, you are completely right! ;) – Isaac Dec 8 '10 at 17:03

Because $\left(X-M\right)^T\left(X-M\right) = \sum_i{(X_i - m_i)^2}$, $$\left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right) = \sum_{i,j}{(X_i - m_i)^2(X_j - m_j)^2} \text{.}$$ There are two kinds of expectations to obtain here. Assuming the $X_i$ are independent and $i \ne j$, \eqalign{ E \left[ (X_i - m_i)^2(X_j - m_j)^2 \right] &= E\left[(X_i - m_i)^2\right] E\left[(X_j - m_j)^2\right] \cr &= \lambda_i \lambda_j . } When $i = j$, \eqalign{ E \left[ (X_i - m_i)^2(X_j - m_j)^2 \right] &= E\left[(X_i - m_i)^4\right] \cr &= 3 \lambda_i^2 \text{ for Normal variates} \cr &= \lambda_i \lambda_j + 2 \lambda_i^2 \text{.} } Whence the expectation equals \eqalign{ &\sum_{i, j} {\lambda_i \lambda_j} + 2 \sum_{i} {\lambda_i^2} \cr = &(\sum_{i}{\lambda_i})^2 + 2 \sum_{i} {\lambda_i^2}. } Note where the assumptions of independence and Normality come in. Minimally, we need to assume the squares of the residuals are mutually independent, and we only need a formula for the central fourth moment; Normality is not necessary.

nice one! – suncoolsu Dec 8 '10 at 6:12

I believe this depends on the kurtosis of $X$. If I am reading this correctly, and assuming the $X_i$ are independent, you are trying to find the expectation of $\sum_i (X_i - m_i)^4$. Because $X_i^4$ appears, you cannot find this expectation in terms of $M$ and $\Sigma$ without making further assumptions. (Even without the independence of the $X_i$, you will have $E[X_i^4]$ terms in your expectation.) If you assume that the $X_i$ are normally distributed, you should find the expectation is equal to $3 \sum_i \lambda_i^2$.

If you lose the i.i.d. and normality assumptions, things can get ugly. In Anderson's book you can find explicit formulas for expectations of the type $\sum_{s,r,t,u}E(X_s-m)(X_r-m)(X_t-m)(X_u-m)$ when $X=(x_1,...,x_n)$ is a sample from a stationary process with mean $m$.
In general it is not possible to express such moments using only the first and second moments. For example, $cov(X_i,X_j)=0$ does not guarantee that $cov(X_i^2,X_j^2)=0$. It does so only for normal variables, for which zero correlation is equivalent to independence. -
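A quick numerical sanity check of the closed form above is easy to run. The following sketch is not part of the original thread and assumes NumPy is available; it simulates independent normal coordinates with distinct variances and compares the empirical mean of $\left((X-M)^T(X-M)\right)^2$ against $\left(\sum_i\lambda_i\right)^2 + 2\sum_i\lambda_i^2$.

```python
# Monte Carlo check of E[((X-M)^T (X-M))^2] = (sum lambda)^2 + 2 sum lambda^2
# for independent normal coordinates. Illustrative only; not from the thread.
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([0.5, 1.0, 2.0, 3.5])           # diagonal of the covariance
n_samples = 200_000

# X - M has independent N(0, lambda_i) coordinates, so simulate it directly.
centered = rng.normal(size=(n_samples, lam.size)) * np.sqrt(lam)
quartic = (centered ** 2).sum(axis=1) ** 2     # ((X-M)^T (X-M))^2 per sample

print(quartic.mean())                          # empirical expectation
print(lam.sum() ** 2 + 2 * (lam ** 2).sum())   # closed form: 7^2 + 2*17.5 = 84
```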
2013-05-22 00:55:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9941009879112244, "perplexity": 640.3450112656298}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00004-ip-10-60-113-184.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:07162216
# zbMATH — the first resource for mathematics

On equivariant and motivic slices. (English) Zbl 1441.14077

Summary: Let $$k$$ be a field with a real embedding. We compare the motivic slice filtration of a motivic spectrum over $$\operatorname{Spec}(k)$$ with the $$C_2$$-equivariant slice filtration of its equivariant Betti realization, giving conditions under which realization induces an equivalence between the associated slice towers. In particular, we show that, up to reindexing, the towers agree for all spectra obtained from localized quotients of $$\boldsymbol{MGL}$$ and $$M\mathbb{R}$$, and for motivic Landweber exact spectra and their realizations. As a consequence, we deduce that equivariant spectra obtained from localized quotients of $$M\mathbb{R}$$ are even in the sense of M. A. Hill and L. Meier [Algebr. Geom. Topol. 17, No. 4, 1953–2011 (2017; Zbl 1421.55002)], and give a computation of the slice spectral sequence converging to $$\pi_{*,*}\boldsymbol{BP}\langle n \rangle/2$$ for $$1 \le n \le \infty$$.

##### MSC:
14F42 Motivic cohomology; motivic homotopy theory
55P91 Equivariant homotopy theory in algebraic topology
18G80 Derived categories, triangulated categories
55N20 Generalized (extraordinary) homology and cohomology theories in algebraic topology
55P42 Stable homotopy theory, spectra

Full Text:

##### References:
[1] 10.1007/s00209-019-02302-z
[2] 10.4099/math1924.4.363
[3] 10.1112/topo.12032 · Zbl 1453.14065
[4] 10.2140/agt.2005.5.615 · Zbl 1086.55013
[5] 10.2140/gt.2010.14.967 · Zbl 1206.14041
[6] 10.2140/agt.2017.17.3547 · Zbl 1391.55009
[7] 10.1112/jtopol/jts015 · Zbl 1258.18012
[8] 10.1090/tran6647 · Zbl 1346.14049
[9] 10.1016/j.jpaa.2010.06.017 · Zbl 1222.55014
[10] 10.4310/HHA.2012.v14.n2.a9 · Zbl 1403.55003
[11] 10.4007/annals.2016.184.1.1 · Zbl 1366.55007
[12] 10.2140/agt.2017.17.1953 · Zbl 1421.55002
[13] 10.1515/crelle-2013-0038 · Zbl 1382.14006
[14] 10.1016/S0040-9383(99)00065-8 · Zbl 0967.55010
[15] 10.1017/is011004009jkt154 · Zbl 1266.14015
[16] Lam, Introduction to quadratic forms over fields. Graduate Studies in Mathematics, 67 (2005) · Zbl 1068.11023
[17] 10.1215/ijm/1256051757
[18] 10.1112/jtopol/jtm004 · Zbl 1154.14005
[19] Levine, Doc. Math., Extra Vol. 7, 407 (2015)
[20] 10.1016/j.aim.2018.11.002 · Zbl 1417.55019
[21] 10.1515/9781400830558 · Zbl 1175.18001
[22] 10.1090/memo/0755 · Zbl 1025.55002
[23] 10.1016/j.aim.2016.09.027 · Zbl 1420.55024
[24] Mazel-Gee, New York J. Math., 22, 57 (2016)
[25] 10.1007/978-94-007-0948-5_7
[26] 10.1007/BF02698831 · Zbl 0983.14007
[27] Naumann, Doc. Math., 14, 551 (2009)
[28] 10.2140/gt.2013.17.1671 · Zbl 1276.55023
[29] 10.1017/is013001013jkt196 · Zbl 1319.14029
[30] 10.1016/j.aim.2014.10.011 · Zbl 1315.14030
[31] 10.2140/gt.2016.20.1157 · Zbl 1416.19001
[32] Röndigs, K-theory, 35 (2018)
[33] 10.4007/annals.2019.189.1.1 · Zbl 1406.14018
[34] 10.4310/HHA.2010.v12.n2.a11 · Zbl 1209.14019
[35] 10.1017/is010008019jkt128 · Zbl 1249.14008
[36] Spitzweck, A commutative ℙ1-spectrum representing motivic cohomology over Dedekind domains. Mém. Soc. Math. Fr., 157 (2018) · Zbl 1408.14081
[37] 10.2140/agt.2012.12.565 · Zbl 1282.14040
[38] 10.1007/BF01394024 · Zbl 0514.18008
[39] 10.2140/agt.2013.13.1743 · Zbl 1271.55015
[40] Voevodsky, Proceedings of the International Congress of Mathematicians, I, 579 (1998)
[41] Voevodsky, Motives, polylogarithms and Hodge theory, I. Int. Press Lect. Ser., 3, 3 (2002)
[42] 10.1007/s10240-003-0010-6 · Zbl 1057.14028
[43] 10.1112/S0024611504015084 · Zbl 1086.55005
[44] 10.2307/1970821 · Zbl 0244.55021

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-09-28 04:08:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5639310479164124, "perplexity": 6949.088596783698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060201.9/warc/CC-MAIN-20210928032425-20210928062425-00076.warc.gz"}
https://fordead.gitlab.io/fordead_package/docs/Tutorial/05_export_results/
# Step 5. Export results

#### Step 6: Exporting results to a shapefile for visualizing results with a user-defined time step

This step exports results in a vector format, with a user-defined time step and a footprint defined by the mask. The minimum time step corresponds to the periods between available SENTINEL-2 dates. The results can be exported as multiple files, in which case each file corresponds to the end of a period and the resulting polygons contain the state of the area at the end of that period, as detected in the previous steps. If results are exported as a single file, polygons contain the period during which the first anomaly was detected. Pixels with unconfirmed anomalies, and pixels identified as anomalous but back to normal, are ignored.

If the stress indices were computed in step 3 and the option is chosen, the stress index of the pixels currently detected as suffering from dieback is extracted and can be considered a confidence index. This confidence index is then discretized, vectorized and intersected with the results, so the polygons also contain a confidence class, giving information about the intensity of anomalies since detection. By construction, this class reflects the "final" state, calculated at the last available SENTINEL-2 date. Comprehensive documentation can be found here.

##### Running this step using a script

Run the following instructions to perform this processing step:

    from fordead.steps.step5_export_results import export_results
    export_results(data_directory=data_directory,
                   frequency="M",
                   multiple_files=False,
                   conf_threshold_list=[0.265],
                   conf_classes_list=["Low anomaly", "Severe anomaly"])

##### Running this step from the command prompt

This processing step can also be performed from a terminal:

    fordead export_results -o <output directory> --frequency M -t 0.265 -c "Low anomaly" -c "Severe anomaly"

##### Outputs

The output of this step, in the folder data_directory/Results, is the shapefile periodic_results_dieback, whose polygons contain the time period when the first anomaly was detected (field: Period of detection) as well as the confidence index class (field: Confidence class).
2022-08-13 04:23:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47637370228767395, "perplexity": 2961.6317886580387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571869.23/warc/CC-MAIN-20220813021048-20220813051048-00137.warc.gz"}
https://www.physicsforums.com/threads/does-michio-kaku-agree-with-lawrence-krauss-concept-of-nothingness.979958/
# Does Michio Kaku agree with Lawrence Krauss' concept of nothingness?

## Main Question or Discussion Point

Michio Kaku and Lawrence Krauss are both well-renowned physicists who propose that the universe (or universes) was generated out of nothing. Krauss, in his book "A Universe from Nothing", argued that the universe was probably created from a primordial "nothingness" with no space and time, composed of quantum fields and vacua of virtual particles and fluctuations. But, of course, this is not a true nothingness, so he also considers the possibility that everything was created somehow from true nothingness (with no space, time, energy, vacua, quantum laws, or any kind of physical or even mathematical or logical laws). However, although Kaku and Krauss have worked together and Kaku also wrote a book in which he proposed that everything originated from nothingness, I have not seen a single comment from Kaku mentioning Krauss' book. So, basically, my question is: does Kaku also consider that the universe (or universes) could have originated from true nothingness (as Krauss does)?

Creation is before time. There is no way to conclude anything about $t\leq 0$. In order to gain information, there must exist something, which requires a positive time. Hence the proposed answer has to be pure speculation and belongs to the field of faith and philosophy. Whether or not they share an opinion on something which cannot be decided is meaningless.
2019-12-15 23:40:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5108530521392822, "perplexity": 949.2675086848378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541310970.85/warc/CC-MAIN-20191215225643-20191216013643-00446.warc.gz"}
https://www.physicsforums.com/threads/solving-two-equations.776934/
# Homework Help: Solving two equations

1. Oct 19, 2014

### bobie

1. The problem statement, all variables and given/known data

This is not homework; I am discussing angular momentum here: https://www.physicsforums.com/threa...d-angular-momentum.776258/page-2#post-4884631. We have two equations $$\frac{I \omega_0^2}{2}+\frac{m v_i^2}{2}=\frac{I \omega_1^2}{2}+\frac{m v_f^2}{2}$$ $$I \omega_0+m r v_i=I \omega_1+m r v_f$$ with $M=10$, $m=1$, $l=1$, $I=\frac{M l^2}{12}=\frac{10}{12}$, $r=\frac{l}{2}=0.5$, $v_i=22$, $\omega_0=0$, $\omega_1=x$.

2. Relevant equations

$$\frac{m v_i^2}{2}=\frac{I \omega^2}{2}+\frac{m v_f^2}{2} \rightarrow v_f^2 = v_i^2 - \frac{5}{6}\omega^2 = 484 - \frac{5}{6}\omega^2$$ $$m r v_i= I \omega +m r v_f \rightarrow v_f = 22 - \frac{10}{6}\omega$$

3. The attempt at a solution

The problem is simple, but I get a funny result. I have tried hundreds of times with different approaches to no avail; can you tell me where I go wrong, or whether the problem has no solution? $$v_f = 22 - \frac{10}{6}\omega \rightarrow v_f^2 = 22^2 + \frac{10^2}{6^2} \omega^2 - \frac{2\cdot 22\cdot 10}{6}\omega = 484 + \frac{100}{36} \omega^2 - \frac{440}{6}\omega$$ Plugging into the first equation: $$484 - \frac{5}{6}x^2 = 484 + \frac{100}{36} x^2 - \frac{440}{6}x \rightarrow - \frac{30}{36}x^2 = \frac{100}{36}x^2 - \frac{2640}{36}x \rightarrow 130x^2 = 2640 x$$ $$x= 2640/130 = 20.3$$

Last edited: Oct 19, 2014

2. Oct 19, 2014

### haruspex

Check that last term.
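For readers who want to check the algebra, here is a small symbolic sketch (added for illustration; it is not part of the original thread and assumes SymPy is installed). It solves the energy and momentum equations simultaneously with the numbers above; the nontrivial root of the system as written is $264/13 \approx 20.31$, consistent with the $2640/130$ in the attempt.

```python
# Solve the two conservation equations from the post symbolically.
# Illustrative sketch only; numbers follow the thread: I = 10/12, m = 1,
# r = 1/2, v_i = 22, omega_0 = 0.
from sympy import Rational, symbols, solve

w, vf = symbols('w vf')
I = Rational(10, 12)
m, r, vi = 1, Rational(1, 2), 22

energy = m * vi**2 / 2 - (I * w**2 / 2 + m * vf**2 / 2)  # kinetic energy
momentum = m * r * vi - (I * w + m * r * vf)             # angular momentum

print(solve([energy, momentum], [w, vf]))
# Expected roots: (0, 22) and (264/13, -154/13), i.e. omega ~ 20.31
```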
2018-07-16 02:00:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6183266043663025, "perplexity": 1179.8246411696455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589029.26/warc/CC-MAIN-20180716002413-20180716022413-00450.warc.gz"}
https://together.jolla.com/question/69170/together-guidelines-organisation-question-and-improvement-suggestions/
together guidelines organisation: question and improvement suggestions

This post is a wiki. Anyone with karma >75 is welcome to improve it.

First of all, I want to say that I am really new to the sailor world. It has been barely a month since I got my phone, and nearly two weeks that I have been browsing the Together website. As somebody might say, shortly described, I am a newbie (great). This is the first I have seen of this new world (like a lot of people): a new way of thinking, a new way of acting. We are different, we are together. Another OS, another design, other gestures, light. It all seemed easy to me, and the same goes for Together. We can post a question, an improvement suggestion, a survey, a bug report, a protest... I can answer, make suggestions, vote + for a good idea, vote - when something won't bring us anything good. You earn points and feel that you are participating in a community project (more or less). It's easy; it seems to work like any forum: just be sociable, bring some experience and ideas, and help people think clearly (hmm, now that will be complicated).

So, in short, I wanted to participate in a survey with good ideas, and I made some suggestions. And then, bing! 200 points gone. I don't really care about the points for their own sake, but I do care about them as an indicator of sociability and participation. What happened, when I thought I had acted productively and well, that the opposite seems to be the case? Then came a lot of questions, and it got complicated. I now ask all those questions. They may be easy, but they go deeper. The way I want to go is for everyone, which means it should be as easy as possible, so easy that a child could understand it.

Questions for child and fool alike, to avoid misunderstanding:

First, for a poll or survey there seems to be a strong expectation that the answers be wikis. If you don't do that, then you're seen as bad. Maybe it's accepted from a newbie, I don't know. Until now I thought I knew what a wiki is, but the way it is used on Together, I don't: it's different.

1) Why do we need to answer as a community wiki? What is hidden behind it?

2) Please show me the wiki utility.

3) Is it possible to make it automatic when the thread creator indicates that the thread is a poll?

So I got my points deducted. It was a bad feeling: I do something good, and I'm seen as bad. Ouch!

4) If there is a problem with a post, why not ask whether it is meant seriously, suggest changing it, and allow a little time for that?

5) I know everybody has a lot of things to do, but give people time to discover the site. The information on how everything works is dispersed everywhere, and it takes time to gather it all. The problem was that what I had done was obviously (though not to me) not in line with the guidelines. The first things I read were the FAQ, the help, and some guidelines, and I did not understand some things, such as what a poll is, or the community wiki.

6) Why should some people give minus points when people want to participate? Participating is always positive, even if it's a duplicate or in the wrong form, and their understanding of this is a little different. Minus points should be for counter-productive reactions ONLY. Minus points mean unsociable, unfriendly.

7) Giving minus points to people who participate but get it wrong is, for me, a form of laziness. Why don't people first ask themselves "what's wrong in the system that leads people to get it wrong?", and then ask the person why?

8) Why are the guidelines only to be found behind that little link in the featured tags? Why are they not in the FAQ or the help?

9) Why is the guidelines link not red and flashy in bold?
10) For those who create a guideline, it is always good to explain why it is needed. Not everyone is skilled in this kind of website usability; it is not obvious to everyone. There is no doubt that the people here are really nice and very understanding, if we explain the facts. That is really great, but some things are a little confused, maybe messy. My situation has been fixed, but I ask myself what would have happened to someone who doesn't have my "loudmouth"? The community could lose him, and in the end a lot of others. (End of the first part; I'll think about more reactions or improvement ideas, and then make an update.)

11) Two counters, for upvotes and downvotes (new point added 22.01.2015): I don't know whether Askbot can support it, but I think that for a better overview it would be good to separate the upvote counter from the downvote counter and display two counters, one for each. Voting is a kind of expression; it should not assign points, especially not negative ones. A downvote should not influence the upvote counter; both should be kept separate for better comparison.

Have a nice sail.

The day has been long. My thoughts keep orbiting a few strange ideas. The smaller phone. The jollaDesk. Medical OtherHalves... Not used to being asked... Especially about how to herd cats. But I will try to give some sort of answer tomorrow. Now the hours of this day have all slipped by. New chores tomorrow. ( 2014-12-10 01:03:33 +0300 )

@vattuvarg: you're welcome. Take your time. I couldn't react so quickly after the last discussion; I needed to put some order into my personal life (that's priority one ;-) ). ( 2014-12-10 01:13:35 +0300 )

On guidelines and habits

This is the couch frontier. This is the wild-ish west. This is a blank page with some doodles, aye. Things will get easier over time, but right now Jolla is a young company and we're the eager sidekicks who wish we had at least a clue to what's going on in this action drama. Innovation happens close to chaos and there's a lingering feel of disorder hanging around. That feeling comes from not knowing. Frustrating? Of course, but there have to be secrets. The mobile industry is fiercely competitive, so to succeed some things have to remain hidden. Nagging for more info is just a waste of time. The Jolla doesn't leak... So we create the missing parts in our heads. I've done so myself with the J3. We suggest some of those ideas to the forum. And then we get surprised by the response.

An added difficulty comes with the karma system. I've seen a few fora in my years on the net, but this is the first with that feature. Unwritten social rules play out on the forum. Things we like get the thumbs-up treatment and a few other posts fuel our anger instead. ...and it all depends on old habits. We want new and fresh stuff and yet we want things to remain as they are. ...especially those unwritten rules. The reality is tough here for newcomers, sadly enough. The creative process is so close that the nearby chaos makes everything glow in the dark. It's an electronic aurora borealis. Wild, free and slightly unbelievable. It's the territory of the quick draw and even quicker mind. ...and the tension that fills the air makes people jumpy. Trigger happy. Rules will of course help a bit, but too many (or too rigid) rules will take away too much of the creative spark that is needed to feed the innovation. So the main tool has to be attitude. Anyone on any of the ships in this armada sails together.
Whether a golden tongue or a foul mouth, we all get our say. Point your thumb down if you wish, but do so with an argument. We can find use for all hands here. Teach the new ones without weapons or fists. We are all still pirates looking for freedom. Together. I couldn't think of anything better to say. Sorry.

Hi! Good questions. I hope I can resolve these, having been here a while longer:

1) Why do we need to answer as a community wiki? What is hidden behind it?
• Wikis are used in polls, as they gather so many answers and votes. This is only to remove karma hunting.
• Wikis are used in posts which are meant for everybody to edit, to make them better (for example, the guidelines you mentioned).

2) Please show me the wiki utility.
• There is no separate utility, just a checkbox under the question edit. In all other ways, the editing works just like in any question or answer.

3) Is it possible to make it automatic when the thread creator indicates that the thread is a poll?
• Currently no.

4) If there is a problem with a post, why not ask whether it is meant seriously, suggest changing it, and allow a little time for that?
• Agreed.

5) I know everybody has a lot of things to do, but give people time to discover the site. The information on how everything works is dispersed everywhere, and it takes time to gather it all. The problem was that what I had done was obviously (though not to me) not in line with the guidelines. The first things I read were the FAQ, the help, and some guidelines, and I did not understand some things, such as what a poll is, or the community wiki.
• Let's improve the guidelines together and, if needed, make suggestions for a better help and wiki. They are all done together.

6) Why should some people give minus points when people want to participate? Participating is always positive, even if it's a duplicate or in the wrong form. Minus points should be for counter-productive reactions ONLY. Minus points mean unsociable, unfriendly.
• I didn't even know about a karma penalty, even after being here for a year. Is that info shared in the poll guidelines? In my opinion, if this kind of penalty is in use, it should be limited to 10% so as not to punish new members. And overall, guidance is always better than punishment.

7) Giving minus points to people who participate but get it wrong is, for me, a form of laziness. Why don't people first ask themselves "what's wrong in the system that leads people to get it wrong?", and then ask the person why?
• Agreed.

8) Why are the guidelines only to be found behind that little link in the featured tags? Why are they not in the FAQ or the help?
• There could be a link to all guidelines in both the FAQ and the help. Possible, @Eric?

9) Why is the guidelines link not red and flashy in bold?
• Not possible in Askbot, but hopefully better linking makes them visible enough.

10) For those who create a guideline, it is always good to explain why it is needed. Not everyone is skilled in this kind of website usability; it is not obvious to everyone.
• TODO: Let's improve them together; feel free to edit.

@simo @eric: we have a problem with Eric. I now know why Eric doesn't react to our requests: there seems to be a problem with the Askbot machine, which seems to link to another person, so Eric didn't get our requests. He is normally user 2, but Askbot creates a link to another user with @xxx, because the TJC nickname is nearly the same. Do you see this link above to Eric? And a lot of the others go to user 4986.
In principle the TJC system should not accept the same nickname, even if the letter case (lower or upper) is different. Now we may have some user confusion. Maybe first you and Eric should speak with the other person to explain the situation. ( 2014-12-15 11:20:33 +0300 )

@reviewjolla: I took a little time to reread the thread here and to read other guidelines. And I noticed that I never thanked you for reading my long thread and answering every point. Sorry... But as a lot of people say, it's never too late to do good things. So: thank you for having taken the time to answer it. :-) ( 2016-04-10 23:25:09 +0300 )

Oh god, I did it indeed. I saw it in another answer below; what an ape I can be... I see now that I have been using TJC in another way for a year. Funny. ( 2016-04-10 23:29:30 +0300 )

I see the point of regulating TJC and improving the quality. I also like that you step forward and suggest doing something. However (sorry to say):

• I'm just too lazy to read and follow long, complicated guidelines (but I try to stay reasonable and, e.g., search before posting).
• TJC is a 'jack of all trades' covering help, know-how, feature requests, bug 'tracking', rants, announcements, etc. I don't think one could deeply 'attack' these different aspects with 'sharp' guidelines.
• Karma hunting is a non-problem. What would one do with such karma anyway? If a person misbehaves, he or she can be removed. But almost all people are very friendly here.
• TJC should stay fun. Nothing against guidelines, but they need to be relaxed suggestions, IMHO.

Sometimes I wonder whether it would be better to divide the aspects, e.g.:

• bug hunting --> GitHub projects with issues
• know-how --> a wiki, a Stack Overflow-like site, or a documentation project
• discussion, rants, announcements, etc. --> Discourse
• feature requests --> UserVoice

But I'm not sure whether this would be better; to me TJC seems quite nice. (Except that there is no good modern bug tracker like GitHub/GitLab; TJC bug posts most likely perish within the huge 'post volume'.)

1 I agree with you fully, on each point, and thanks for each too. Grabbing onto one: "fun", which is actually one of the most important reasons why the guidelines need some attention. The current set of guidelines is anything but fun: a mixed list nobody is interested in visiting. As such, it is not helping the development of the portal, and not helping us have great posts here. Let's work towards a somewhat attractive guideline, one that might also be fun enough for people to link to whenever useful. Just maybe, we can get even you to visit it (I'm personally lazy about reading guides as well; I love to learn things the hard way). On dividing, it seems Jolla also wants to keep us here with everything. For example, a decent bug tracker has been requested for a long time, and they've been silent on that for just as long :) This is actually the first attempt to write a guideline that includes bug reporting; not sure how that will go here, let's see. Maybe focus on the "fun" in that too :D ( 2015-09-13 23:18:49 +0300 )

This post is a wiki. Anyone with karma >75 is welcome to improve it.

Hello Simo, thank you very much for your answer and for taking it really seriously.

So, 1) why do we need to answer as a community wiki? What is hidden behind it? Wikis are used in polls, as they gather so many answers and votes.
This is only to remove karma hunting. Wikis are used in posts which are meant for everybody to edit, to make them better (for example, the guidelines you mentioned). A wiki is thus in reality not a specific kind of thread, such as know-how, but rather any thread that quickly gathers a lot of reactions: answers, votes, suggestions. Right?

5) I know everybody has a lot of things to do, but give people time to discover the site. The information on how everything works is dispersed everywhere, and it takes time to gather it all. The problem was that what I had done was obviously (though not to me) not in line with the guidelines. The first things I read were the FAQ, the help, and some guidelines, and I did not understand some things, such as what a poll is, or the community wiki. Let's improve the guidelines together and, if needed, make suggestions for a better help and wiki. They are all done together. Why not, but I need some time to read the points again. I suggest that when the time comes, we work point by point here (in this current thread). Or should we do it separately (by mail, for example)?

6) Why should some people give minus points when people want to participate? Participating is always positive, even if it's a duplicate or in the wrong form. Minus points should be for counter-productive reactions ONLY. Minus points mean unsociable, unfriendly. I didn't even know about a karma penalty, even after being here for a year. Is that info shared in the poll guidelines? In my opinion, if this kind of penalty is in use, it should be limited to 10% so as not to punish new members. And overall, guidance is always better than punishment. I didn't see it in the guidelines, just the + point attributions in the FAQ. But I don't really agree with you: this kind of attribution should not happen at all if the person wants to participate constructively in the thread. Maybe in the case where someone else tells him to change what he did because it is not guideline-friendly, and he sticks to his position, we could reconsider that...

9) Why is the guidelines link not red and flashy in bold? Not possible in Askbot, but hopefully better linking makes them visible enough. Askbot seems to be a mix of HTML and Python. Maybe it is possible to reformat the fonts of the featured tags, and the background too.

10) For those who create a guideline, it is always good to explain why it is needed. Not everyone is skilled in this kind of website usability; it is not obvious to everyone. TODO: let's improve them together; feel free to edit. That will not be so easy: putting my changes or remarks directly into the guideline thread may destroy its integrity as a rule. It also needs good organisation: where and how could we discuss each of them? And it would be best if we could ask the guideline creators for details, and they agreed to make changes where needed. Put shortly, I have no problem editing if I present my changes and people agree (I mean the guideline creators and some moderators).

PS: I miss a know-how tag on how to change the formatting of the text in an answer or thread input. Some HTML tags and wiki-like tags seem to work with Askbot. It would not be bad to have a small list of them in the help section.

Have a nice weekend.

Would it be OK to add any further answers to your new questions by editing them directly above, turning this into a "discussion pad" for a while? If so, please edit this into a wiki, enabling everybody to add.
(Normally any further discussion takes place in the comments thread, but I guess either way is good in this type of collective question.) ( 2014-12-07 22:40:51 +0300 )

Hello Simo, yes, naturally it should be a wiki, sorry; it is now converted. Hmm, I'm sorry, I didn't understand what you mean by a "discussion pad" for a while. Do you think it is better to make an answer for each of the handled questions? ( 2014-12-08 10:39:05 +0300 )

Good morning! I just meant that your answer could be used as a platform for discussion, so there is no need for separate answers. Now that it's a wiki, I hope others edit in their opinions as well. ( 2014-12-08 10:53:21 +0300 )

This post is a wiki. Anyone with karma >75 is welcome to improve it.

For my part, I want to react to point 6). I have already read about some guideline points; first, this one: "Deselecting answers revised" from @pulsar. I didn't understand why it is a guideline point. It is in fact a problem, and a big one, which matches my point of view about point actions that can hit a person negatively even when he acts for a good reason. I understand his incomprehension. If pulsar agrees too, this point would in my view be better rewritten in a form that covers the whole concept of point attribution. As I already said, points are just a quantification giving a feeling of how a person acts: whether he contributes to the community, or is counter-productive. It should not be easy to take points away. Asking first whether the person is serious is friendlier; maybe he could ask for help to do it the right way. I cannot understand that points go away even when the person contributes positively. For example, a negative vote on a non-constructive question (I have already seen one, but I cannot find it anymore) is good for the community; that action should not be scored negatively.

This post is a wiki. Anyone with karma >75 is welcome to improve it.

@simo @eric: I have found a new problem in the organisation of the Together website, and I wonder why nobody has seen it already, or why I could not find a thread about it. In fact, after some thinking, it seems NOT to be a bug or concept failure of Askbot. So now to the facts. I observed that in some typical threads (the problem concerns polls especially, I think) there are vote counters for each answer inside the thread, but they are not reflected in the counter of the thread itself. Inside a thread there can be more votes than on the thread itself, so the counter does not truly reflect the thread as it should. The example I can give you is this one: [Poll] Jolla 2 OtherHalf colours? As you can see, the thread has 12 votes, but inside it the cobalt answer alone currently has 17 votes!!! You can't tell me it is normal that the thread has fewer votes than an answer inside it!!! That is really stupid, I think. It is no wonder that vattuvarg is disappointed about the non-success of his poll: the votes are lower than they should be, and people don't react because its vote count changes so little... One solution, for me, would be a kind of guideline based on attitude: everyone who votes inside a thread then also gives the thread a vote according to whether he is positive or negative about the thread. I will try to make a proposal for the guideline...

The thread counter is for triage, more or less. It's a way for people to tell others "I want an answer to this too". The most voted questions get extra attention and a higher priority from Jolla... or from the experts among us. Within each thread there are answers.
Good answers get votes, and that is a simple form of quality control. Even if the question is completely obvious, a good answer can still be worth more, byte by byte, than a pile of bitcoins. Difficult questions might need more than one good answer, and the votes might be the only way of knowing the quality of each answer. So voting for both the question and the answer(s) makes sense. Adding them up would mix all the signals and make all the numbers meaningless. Triage by the administrators would be impossible, and finding the right answer would be equally difficult for the users of the forum. The karma system still baffles me at times, but it seems to work. ( 2014-12-11 23:41:08 +0300 )

That's right, but the counters of the answers and the thread should be consistent. I find it not normal that an answer in a thread gets 18 votes (18 people like it) while the thread itself gets just 12 (12 like it); that makes no sense and makes the statistics wrong. ( 2014-12-12 00:12:03 +0300 )

1 @cemoi71 Just as you mentioned me in the answer: I'm OK with the current calculations. IMO, a vote for the question = a vote for the importance of the matter; a vote for an answer = support for the idea/solution/suggestion presented. ( 2014-12-12 06:38:05 +0300 )

I have found threads with a bad question and good answers, or with a good comment (e.g. above). When I vote for the good posts, I will not honour the bad posts. ( 2014-12-12 19:02:46 +0300 )

@utkiek: OK, you take care to treat the answer and the thread separately. I think it would be good if we could vote negatively without bumping karma points down, as long as the poster is trying to be constructive; the vote should just indicate whether the content is good or not. Karma points are, for me, information about how a person acts within the community, whether he is constructive or not. (And we come again to the issue of karma points...) ( 2014-12-12 19:23:03 +0300 )

This post is a wiki. Anyone with karma >75 is welcome to improve it.

I've started a project on the guidelines; let's discuss it here (to keep the new guide clean of any comments). First, I'd like to share some motives for starting this project, and for starting it right now:

• TJC is soon getting a big group of new, unfamiliar users, including from many countries where Jolla hasn't been available earlier.
• The current guideline list is an unattractive mess of posts with different layouts and outdated data. We could edit those, but after browsing through each, I personally found it simply easier to start a new project.
• A better guideline, attractive enough and up to date, might be useful to link to when the new questions from new users start to roll in.
• The end result is not supposed to change how TJC works or restrict user freedom. Guides are supposed to be guides, not rules, as a few of them currently are.

Please join if you find it worthwhile :)

EDIT: Each older guideline now has a dedicated answer to handle the progress there.

EDIT2: There have been so few comments on this project that I feel it's not worth continuing. It's not something I want to do just by myself; without others it misses a common goal and the DIT spirit. So I'm leaving the process for now, updating the schedule, and leaving stage 1 open in case someone at some point finds there's something to do.

@simo @reviewjolla: may I ask a stupid question? Are you, Simo, behind the two TJC accounts here? Just to verify an obvious observation... And I would like to discuss some points with you (one in particular), if you want... ( 2016-04-07 16:21:53 +0300 )

You may, and yes, those are mine.
For a personal chat I can send you my XMPP via email to my blog address, or via DM on Twitter. But if your topic fits as a public discussion here, that would be great by me as well. ( 2016-04-07 16:50:33 +0300 )

Hmmm, for public discussions with you, does that mean the reviewjolla or the simo TJC account? Or never mind which one... ( 2016-04-07 17:57:24 +0300 )

Do mind: my old account started to gather so many notifications (a red mail envelope on top all the time) that I'm currently better reached via this @reviewjolla mention. So, what's this about? ( 2016-04-07 18:23:24 +0300 )

OK. First, thank you very much for the answer. Concerning the guidelines for answering a question here, I would suggest that an answer should address the subject given in the main thread question. Sometimes I see that people answer beside the main question, suggesting a workaround without directly addressing it or trying to solve it. For that person, not mentioning the workaround would be a problem... That's right, but for this it is possible to comment. I have a concrete example that happened just today on this thread: someone proposed as an answer to opt out of the early access program, so that afterwards the user would be offered a downgrade of SFOS to the last release and the bug would not appear. True, it is important information for those who want to step out of the early access program that it is possible to get the last release of the system. But it doesn't address the main problem: that a bug appears and should be looked at for a future software release. For me that just pollutes the main thread without giving a real fix, even though the info is important. So I first want to discuss the point "answer to a question" and in which form... words about this are missing for me. Do you see what I mean? ( 2016-04-07 19:24:07 +0300 )
2019-09-20 00:00:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33395373821258545, "perplexity": 1814.6507976256971}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573759.32/warc/CC-MAIN-20190919224954-20190920010954-00341.warc.gz"}
https://www.snapxam.com/solver?p=factor%5Cleft%28x%5E2-1%5Ccdot%202%5Ccdot%20%20x%5Ccdot%20%20y%2By%5E2%5Cright%29
# Step-by-step Solution

## Factor the expression $x^2-1\cdot 2xy+y^2$

$\left(x-y\right)^{2}$

## Step-by-step Solution

Problem to solve: $factor\left(x^2-1\cdot 2\cdot x\cdot y+y^2\right)$

1. Multiply $-1$ times $2$: $x^2-2xy+y^2$
2. The trinomial $x^2-2xy+y^2$ is a perfect square trinomial, because its discriminant is equal to zero.
3. Using the perfect square trinomial formula, factor the perfect square trinomial.

Final answer: $\left(x-y\right)^{2}$
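As a side note (not from the original page), the factorization can be verified with SymPy, assuming it is installed:

```python
# Quick check of the factorization; illustrative only.
from sympy import symbols, factor

x, y = symbols('x y')
print(factor(x**2 - 1*2*x*y + y**2))  # -> (x - y)**2
```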
2021-09-19 13:59:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8304403424263, "perplexity": 5581.702973028647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056890.28/warc/CC-MAIN-20210919125659-20210919155659-00164.warc.gz"}
http://crypto.stackexchange.com/tags/nonce/hot?filter=year
# Tag Info

12 By the modern definition of a cipher, it must be possible to encipher several messages with the same secret key. That's also a practical necessity, due to the difficulty of securely establishing a shared secret key. That issue is solved with the nonce, which is not secret, and can be transferred as part of the ciphertext (typically: at the beginning). ...

7 The Intel post which I think you mean was discussed in this question and, as I wrote there, the limitation only applies in the case of trying to combine PRNG outputs into values larger than the seed entropy (two 256-bit values in their case). Also mentioned there: cryptographic mixing does not increase the entropy you have, so if concatenation is insecure, ...

7 I assume you mean AES-GCM. Nonces must be unique for any use of a key. Given that $n = H(k)$ is constant for a constant key $k$, this implies that such a nonce may only be used once, ever. Nonce reuse is particularly catastrophic in GCM mode (as with any other CTR-based mode), as it causes the keystream to be identical. Essentially, you wind up with two (or ...

4 Suppose you do CTR mode as: $E(k,nonce+1) \oplus m_1$, $E(k,nonce+2) \oplus m_2$, $E(k,nonce+3) \oplus m_3$, etc. The Wikipedia page is talking about a non-random nonce, with a specific example of a packet counter. So suppose $nonce$ is a packet counter and in each packet you encrypt several blocks. You might end up with the following: In packet #$p$: ...

4 What you're describing is pretty similar to the SIV block cipher mode. It also uses a deterministic function of the message to derive the nonce for CTR encryption. Under some pretty widely accepted assumptions about HMAC-SHA256 this is a perfectly fine way of achieving deterministic authenticated encryption. It doesn't meet IND-CPA (as you pointed out) but ...

4 Yes, if the client and the server use the same key to encrypt their messages (instead of having separate keys for client-to-server and server-to-client communication), then you need to ensure that they cannot ever use the same nonce. One way to do that would be to, say, let the client use only even nonce values, and let the server use only odd nonce values. ...

3 Yes, this is secure, even though scrypt uses PBKDF2 inside. PBKDF2 has the issue that the work factor is required $n$ times, where $n$ is the number of hash outputs concatenated to create the final PBKDF2 output. That means that if you can check the validity of PBKDF2 using only the initial bits (in your case used for the key if the hash was SHA-256, for ...

3 Should the external nonce passed to GCM be authenticated separately when passing over the network? No, that is not necessary; it is implicitly authenticated by GCM itself, pretty much as the AAD is also authenticated. That is, if someone in the middle modifies the nonce, then that will alter the authentication tag that the decryptor computes as a part of ...

2 Like Ilmari Karonen wrote, you can ensure that nonces picked by two senders do not collide by reserving one bit (like the lowest) to differentiate them. If you use random nonces this is not required, since the probability that a random nonce collides depends only on the total number of nonces generated, not on who generates them. In fact, reserving a bit would ...

2 Nonces must be unique but are not secret. Typically you send the nonce alongside the ciphertext as a prefix.
Note that with the asymmetric box, you must not use a nonce that you used in one direction in the opposite direction, since both directions use the same shared symmetric key. Reusing a nonce is a fatal mistake. It completely breaks the MAC and it leaks the ...

2 NaCl's public key authenticated encryption uses a stream cipher for symmetric encryption (after key derivation using Curve25519). Like all synchronous stream ciphers, it produces the same keystream when you use the same nonce. That means you are in the same position as when a one-time pad has been reused. (Having a known plaintext would make things even ...

2 AES-CTR is very appropriate. Since a credit card number is 16 characters long, it can be encrypted using a single 128-bit block without any encoding. You will only need 1 block, and hence you do not require a block counter, just the nonce. Depending on the number of card numbers being stored, you would only need to store a portion of the full nonce. A 32-bit ...

2 TLS has different keys for the two different directions. That is, the server-to-client connection is encrypted with one set of keys, and the client-to-server connection is encrypted with another. Both sets of keys are derived at the same time; however, they are distinct. Because the keys are distinct, using the same nonce isn't an issue. Technical point ...

2 As pointed out, the nonce must be unique, so a hash of the key alone is not going to work. You could, however, hash the key and plaintext together to produce a secure nonce: $n = H(m|k)$. Note that this would still result in the same ciphertext for identical plaintext. So it doesn't fulfill the requirement for the ciphertext to be indistinguishable.

1 No, the nonce is not fit to be an HMAC key, because anybody can view the nonce in transit. If, on the other hand, the TLS connection does deliver enough security, then you would not need the HMAC. It's fine to use the nonce as a one-time code, but you don't need the HMAC for that. If an attacker can obtain nonces sent to clients, then the attacker can always ...

1 The authentication tag in GCM is generated by XORing a block cipher output with the Galois field hash (and truncating it for shorter lengths). It is thus assumed to look like a PRF output. So it is effectively just a random nonce that should not collide until a birthday bound of $2^{t/2}$. With a tag length of 96 or more bits, it should be secure. Shorter random IV ...

1 The answer is that it depends very much on exactly what you are considering. However, better bounds can be achieved by using a 96-bit nonce and a 32-bit counter. This is certainly true for GCM, as was proved in this paper (Breaking and Repairing GCM Security Proofs). Note that GCM uses CTR inside, so this is relevant.

1 I don't understand the difference between the split nonce/counter design and simply using a random value and incrementing. Why is using nonce +/⊕ counter insecure whereas nonce || counter is secure? Here's the context of your Wikipedia quote (my bold): If the IV/nonce is random, then they can be combined together with the counter using any lossless ...

1 As far as I understand, the method of creating the 128-bit counter is more or less left open in the NIST documents. There are some hints on deriving the counter, but NIST is essentially saying that anything is secure as long as the counter is unique. Using a starting value of 128 bits is certainly feasible and often required. Java providers, for instance, ...
1 The following picture shows EAX (diagram not reproduced in this extract): As you can see, there is an OMAC calculation (or CMAC, as it is usually called) over both N (the nonce) and H (the header / associated data). With regard to security, it doesn't matter where you place the nonce and the other data in the header. I'll not go into the security of XOR'ing the calculated OMAC values for the nonce, ...

1 Reusing an IV once opens you up to someone finding the XOR of those two plaintexts, seriously compromising their confidentiality. Moreover, with GCM, a single IV reuse leaks significant information about the key used for authentication; if there are even a few pairs of reused IVs (not even one IV used many times; a few IVs each of which is used twice is ...

1 This is a trickier question than you might think. The first thing to note is that your scheme doesn't respect record boundaries. TLS 1.2 seems to have been rewritten to use a random IV for CBC mode encryption for each record (to avoid certain attacks). It is therefore likely that the idea of TLS 1.2 is to respect record boundaries. The document "AES Galois ...

1 The standard approach is to have the sender pick his nonce (either randomly, or as a counter), and send it with the packet. The decryptor then knows what nonce to use to decrypt, because it's right there. Because nonces aren't assumed to be secret, this works.

1 If your nonce is 16 bytes, and your message pre-nonce is a multiple of 16 bytes (i.e. no padding is needed), sending the nonce in the clear opens you up to replay-ish attacks. Specifically, if an attacker captures one exchange with nonce $N$ and response $R$ (with $b$ blocks $R_1$ through $R_b$), and then impersonates the server and the client sends them ...
1 Unrelated to your question premise, but highly related to the security of the overall scheme is that you may be opening yourself up to a side channel attack. Your supposition about the randomness may hold true as long as the hardware is secure, but if someone gains access to the device, they may be able to make your numbers less than random. This may range ... Only top voted, non community-wiki answers of a minimum length are eligible
2015-12-01 09:17:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5050106048583984, "perplexity": 1142.4926824376505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398466178.24/warc/CC-MAIN-20151124205426-00073-ip-10-71-132-137.ec2.internal.warc.gz"}
https://socratic.org/questions/what-is-the-formula-for-barium-phosphate
# What is the formula for barium phosphate?

Nominally $Ba_3(PO_4)_2$. The salt derives from barium ion, $Ba^{2+}$, and phosphate ion, $PO_4^{3-}$. A neutral salt is formed from the formulation $Ba_3(PO_4)_2$. Most of the time, we probably deal with the biphosphate (hydrogen phosphate), $BaHPO_4$.
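The 3 : 2 ratio is forced by charge neutrality; as a quick check of the formula,

$$3 \times (+2) + 2 \times (-3) = +6 - 6 = 0.$$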
2019-08-21 09:22:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7789325714111328, "perplexity": 4364.2818066414775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00451.warc.gz"}
https://math.stackexchange.com/questions/1586523/prime-twins-and-infinite-products
Prime-twins and infinite products For $n\geq 1$, let the $n$th twin prime pair be $(p_n,p_n+2)$. This sequence starts as $(3,5),(5,7),(11,13),\ldots$. I have two short questions about twin primes and infinite products defined from them. Can you clarify my doubts? Thanks in advance. Question. For $a_n=\frac{1}{p_n}+\frac{1}{p_n+2}$, can the convergence of $$\prod_{n\geq 1}(1-a_n)=\prod_{n\geq 1}\frac{p_n^2-2}{p_n(p_n+2)}$$ be justified? My attempt: since $0\leq a_n<1$ and $\sum_{n\geq 1}a_n$ converges by Brun's theorem (see this site or, for example, Wikipedia as a quick reference), the product converges to a constant $c$. Is this a rigorous proof? Question. Is it possible to define, at least for $\Re s>1$ (the abscissa of absolute convergence), $$\tau(s)=\prod_{n\geq 1}(1-p_n^{-s})^{-1}?$$ My attempt: is the following argument right? The (classical) Euler product $$\prod_{\text{p prime}}(1-p^{-s})^{-1}$$ converges for $\Re s>1$, and since the twin primes are a subset of the primes supporting the Euler product, the product over twin primes also converges at least for this abscissa. Example. If the two previous products can be defined, then we can write, for example, $$\tau(2)=c\cdot\prod_{n\geq 1}\frac{p_n^3(p_n+2)}{(p_n^2-1)(p_n^2-2)},$$ where the pair $(p_n,p_n+2)$ is defined as before and $c$ is the constant from the first question. • Note that it is still unknown whether infinitely many twin primes exist. But as I understand it, we assume that there are infinitely many. – Peter Dec 23 '15 at 11:19 • @Peter At no point do we need to assume this. Also note that if there are finitely many twin primes, then the product converges since it is finite. – Wojowu Dec 23 '15 at 11:23 • Thanks @Peter, I believe Brun's theorem doesn't assume that there are infinitely many twin primes. For the products to be genuinely infinite (otherwise we have only finite products) we should assume there are infinitely many twin primes, but all discussion is welcome. – user243301 Dec 23 '15 at 11:24 • The infinite-product tag indicates that the OP assumes there are infinitely many. Brun's theorem is, of course, also true if the number of twin primes is finite. I just wanted to point out that the product in Brun's theorem could be finite. – Peter Dec 23 '15 at 11:25 • If in the future someone wants to contribute comments on the second question, and it has mathematical sense, I would appreciate it. My goal in posting this question was to understand a bit more about infinite products. – user243301 Jan 1 '16 at 18:14 Theorem. $\prod_{n\geq1}\left(1+b_{n}\right)$ converges absolutely if and only if $\sum_{n\geq1}\left|b_{n}\right|<\infty$. Now take $b_{n}=-a_{n}$. So we have to check that $$\sum_{n\geq1}\left|b_{n}\right|=\sum_{n\geq1}\left|a_{n}\right|=\sum_{n\geq1}a_{n}<\infty,$$ and this follows from Brun's theorem, since $$\sum_{n\geq1}a_{n}=\sum_{p,p+2\in\mathfrak{P}}\left(\frac{1}{p}+\frac{1}{p+2}\right)=B\approx1.90216.$$
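Not part of the original thread, but the convergence of the first product is easy to sanity-check numerically. A minimal sketch in Python (the sieve bound of $10^6$ is an arbitrary choice of mine):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = b"\x00" * len(sieve[p * p :: p])
    return [p for p in range(2, n + 1) if sieve[p]]

limit = 10 ** 6
primes = primes_up_to(limit)
prime_set = set(primes)
twins = [p for p in primes if p + 2 in prime_set]  # the p_n of the question: 3, 5, 11, 17, ...

partial = 1.0
for p in twins:
    partial *= (p * p - 2) / (p * (p + 2))  # the factor 1 - a_n = (p^2 - 2)/(p(p + 2))
print(len(twins), partial)
```

The partial products stabilize quickly as the bound grows, which is consistent with (though of course no substitute for) the proof via Brun's theorem above.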
2020-07-10 10:53:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9444312453269958, "perplexity": 361.33831426477894}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655906934.51/warc/CC-MAIN-20200710082212-20200710112212-00291.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=Pascal%27s_Triangle_and_Pythagorean_Theorem&diff=117763&oldid=106556
# Pascal's Identity

## Identity

Pascal's Identity states that ${n \choose k}={n-1\choose k-1}+{n-1\choose k}$ for any positive integers $k$ and $n$. Here, $\binom{n}{k}$ is the binomial coefficient $\binom{n}{k} = nCk = C_k^n$. Remember that $\binom{n}{k}=\frac{n!}{k!(n-k)!}.$

## Proving it

If $k > n$ then $\binom{n}{k} = 0 = \binom{n - 1}{k - 1} + \binom{n - 1}{k}$, and so the result is clear. So assume $k \leq n$. Then $\begin{eqnarray*}\binom{n-1}{k-1}+\binom{n-1}{k}&=&\frac{(n-1)!}{(k-1)!(n-k)!}+\frac{(n-1)!}{k!(n-k-1)!}\\ &=&(n-1)!\left(\frac{k}{k!(n-k)!}+\frac{n-k}{k!(n-k)!}\right)\\ &=&(n-1)!\cdot \frac{n}{k!(n-k)!}\\ &=&\frac{n!}{k!(n-k)!}\\ &=&\binom{n}{k}. \qquad\qquad\square\end{eqnarray*}$ There we go. We proved it!

## Why is it needed?

It's mostly just a cool thing to know. However, if you want to see how to use it in real life, go to https://artofproblemsolving.com/videos/counting/chapter12/141, or really any of the counting and probability videos.

# Introduction to Pascal's Triangle

## How to build it

Pascal's Triangle is a triangular array of numbers in which you start with two infinite diagonals of ones and each of the rest of the numbers is the sum of the two numbers above it. It looks something like this:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

And on and on...

## Combinations

### Combinations

Pascal's Triangle is really combinations. It looks something like this if it is depicted as combinations:

$\binom{0}{0}$
$\binom{1}{0}$ $\binom{1}{1}$
$\binom{2}{0}$ $\binom{2}{1}$ $\binom{2}{2}$

And on and on...

### Proof

If you look at the way we build the triangle, each number is the sum of the two numbers above it. Assuming these entries are combinations, each combination is the sum of the two combinations above it. In an equation, it would look something like this: ${n \choose k}={n-1\choose k-1}+{n-1\choose k}$. It's Pascal's Identity! Therefore each row looks something like this: $\binom{n}{0} \binom{n}{1} \binom{n}{2} ... \binom{n}{n}$

# Patterns and Properties

In addition to combinations, Pascal's Triangle has many more patterns and properties. See below.

## Binomial Theorem

Let's multiply out some binomials:

$(x+y)^0=1$
$(x+y)^1=1x+1y$
$(x+y)^2=1x^2+2xy+1y^2$
$(x+y)^3=1x^3+3x^2y+3xy^2+1y^3$

If you take away the $x$'s and $y$'s you get:

1
1 1
1 2 1
1 3 3 1

It's Pascal's Triangle!

### Proof

There are a number of different ways to prove the Binomial Theorem, for example by a straightforward application of mathematical induction. The Binomial Theorem also has a nice combinatorial proof: We can write $(a+b)^n=\underbrace{ (a+b)\cdot(a+b)\cdot(a+b)\cdot\cdots\cdot(a+b) }_{n}$. Repeatedly using the distributive property, we see that for a term $a^m b^{n-m}$, we must choose $m$ of the $n$ factors to contribute an $a$ to the term, and then each of the other $n-m$ factors of the product must contribute a $b$. Thus, the coefficient of $a^m b^{n-m}$ is the number of ways to choose $m$ objects from a set of size $n$, or $\binom{n}{m}$. Extending this to all possible values of $m$ from $0$ to $n$, we see that $(a+b)^n = \sum_{m=0}^{n}{\binom{n}{m}}\cdot a^m\cdot b^{n-m}$, as claimed. Similarly, the coefficients of $(x+y)^n$ will be the entries of the $n^\text{th}$ row of Pascal's Triangle. This is explained further in the Counting and Probability textbook [AoPS].

### In real life

It is really only used for multiplying out binomials. More usage at https://artofproblemsolving.com/videos/counting/chapter14/126.
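Both facts are easy to spot-check numerically. Here is a small sketch using Python's `math.comb` (the ranges are arbitrary choices of mine, not part of the original article):

```python
from math import comb

# Pascal's Identity: C(n, k) = C(n-1, k-1) + C(n-1, k)
for n in range(1, 20):
    for k in range(1, n):
        assert comb(n, k) == comb(n - 1, k - 1) + comb(n - 1, k)

# Binomial Theorem coefficients: row n of Pascal's Triangle
n = 4
row = [comb(n, k) for k in range(n + 1)]
print(row)  # [1, 4, 6, 4, 1], the coefficients of (x + y)^4
```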
## Powers of 2

### Theorem

It states that $\binom{n}{0}+\binom{n}{1}+...+\binom{n}{n}=2^n$

#### Why do we need it?

It is useful in many word problems (that means, yes, you can use it in real life) and it is just a cool thing to know. More at https://artofproblemsolving.com/videos/mathcounts/mc2010/419.

### Proof

#### Subset proof

Say you have a word with $n$ distinct letters. How many subsets does it have, in terms of $n$? Here is how you answer it: you ask the first letter, "Are you in or are you out?" Same for the second letter, the third, and so on up to the $n$th. Each of the letters has two choices, in and out, so the count is $2\cdot2\cdots2$ ($n$ times), which is $2^n$. Since the subsets can also be counted by choosing $0, 1, \dots, n$ of the letters, this total equals $\binom{n}{0}+\binom{n}{1}+\dots+\binom{n}{n}$.

#### Alternate proof

If you look at the way we built the triangle, you see that each number in row $n-1$ is added in twice when forming row $n$. This means each row's sum doubles; starting from a sum of $1$ in the top row, you get the powers of two.

## Triangle Numbers

### Theorem

If you look at the numbers in the third diagonal you see that they are the triangle numbers.

### Proof

Now we can make an equation: $\binom{n}{2}=1+2+3+...+(n-1)$. Indeed, $\binom{n}{2}=\frac{n!}{2!(n-2)!}=\frac{n(n-1)}{2}$ and $1+2+\dots+(n-1)=\frac{n(n-1)}{2}$, so both sides agree.

## Hockey stick

For $n,r\in\mathbb{N}, n>r,\sum^n_{i=r}{i\choose r}={n+1\choose r+1}$.

(Diagram omitted: the addends of the summation and the resulting sum highlighted on Pascal's triangle.)

This identity is known as the hockey-stick identity because, on Pascal's triangle, when the addends represented in the summation and the sum itself are highlighted, a hockey-stick shape is revealed.

### Proof

Inductive Proof

This identity can be proven by induction on $n$.

Base Case: Let $n=r$. $\sum^n_{i=r}{i\choose r}=\sum^r_{i=r}{i\choose r}={r\choose r}=1={r+1\choose r+1}$.

Inductive Step: Suppose, for some $k\in\mathbb{N}, k>r$, $\sum^k_{i=r}{i\choose r}={k+1\choose r+1}$. Then $\sum^{k+1}_{i=r}{i\choose r}=\left(\sum^k_{i=r}{i\choose r}\right)+{k+1\choose r}={k+1\choose r+1}+{k+1\choose r}={k+2\choose r+1}$.

Algebraic Proof

It can also be proven algebraically with Pascal's Identity, ${n \choose k}={n-1\choose k-1}+{n-1\choose k}$. Note that ${r \choose r}+{r+1 \choose r}+{r+2 \choose r}+\cdots+{r+a \choose r}$ $={r+1 \choose r+1}+{r+1 \choose r}+{r+2 \choose r}+\cdots+{r+a \choose r}$ $={r+2 \choose r+1}+{r+2 \choose r}+\cdots+{r+a \choose r}=\cdots={r+a \choose r+1}+{r+a \choose r}={r+a+1 \choose r+1}$, which is equivalent to the desired result.

Combinatorial Proof 1

Imagine that we are distributing $n$ indistinguishable candies to $k$ distinguishable children. By a direct application of Balls and Urns, there are ${n+k-1\choose k-1}$ ways to do this. Alternatively, we can first give $0\le i\le n$ candies to the oldest child so that we are essentially giving $n-i$ candies to $k-1$ kids and again, with Balls and Urns, ${n+k-1\choose k-1}=\sum_{i=0}^n{n+k-2-i\choose k-2}$, which simplifies to the desired result.

Combinatorial Proof 2

We can form a committee of size $k+1$ from a group of $n+1$ people in ${{n+1}\choose{k+1}}$ ways. Now we hand out the numbers $1,2,3,\dots,n-k+1$ to $n-k+1$ of the $n+1$ people. We can divide this into $n-k+1$ disjoint cases. In general, in case $x$, $1\le x\le n-k+1$, person $x$ is on the committee and persons $1,2,3,\dots, x-1$ are not on the committee. This can be done in $\binom{n-x+1}{k}$ ways.
Now we can sum the values of these $n-k+1$ disjoint cases, getting $${{n+1}\choose {k+1}} ={{n}\choose{k}}+{{n-1}\choose{k}}+{{n-2}\choose{k}}+\cdots+{{k+1}\choose{k}}+{{k}\choose{k}}.$$

# Copied From

About 50 percent original, made by Colball, otherwise known as Colin Friesen; the rest is from these AoPS Wiki pages:

- Pascal's Identity
- Hockey-Stick Identity
- Pascal's Triangle

These are all on the AoPS wiki. Look them up. Now on to the Pythagorean Theorem.

## What is the Pythagorean Theorem?

The Pythagorean Theorem (also referred to as Pythagoras' Theorem) is used to find a side of any right triangle. It is $a^2+b^2=c^2$, where $a$ and $b$ are the legs of the triangle, and $c$ is the hypotenuse.

### Why is it useful?

To find sides and angles of right triangles. Also, trigonometry is built on it, and through its generalization, the law of cosines, you can relate sides and angles even of triangles that are not right triangles. It is probably the most famous theorem in all of math!

### Can we prove it?

Yes! There are hundreds of proofs; I will just show you a few of them. Mathematicians even make a hobby of finding new ones. Even a US president published a proof: James Garfield's appeared in 1876, before he took office.

## Proofs

### Proof 1

We use $[ABC]$ to denote the area of triangle $ABC$. Let $H$ be the foot of the perpendicular to side $AB$ from $C$.

$[asy] pair A, B, C, H; A = (0, 0); B = (4, 3); C = (4, 0); H = foot(C, A, B); draw(A--B--C--cycle); draw(C--H); draw(rightanglemark(A, C, B)); draw(rightanglemark(C, H, B)); label("A", A, SSW); label("B", B, ENE); label("C", C, SE); label("H", H, NNW); [/asy]$

Since $ABC, CBH, ACH$ are similar right triangles, and the areas of similar triangles are proportional to the squares of corresponding side lengths, $\frac{[ABC]}{AB^2} = \frac{[CBH]}{CB^2} = \frac{[ACH]}{AC^2}$. But since triangle $ABC$ is composed of triangles $CBH$ and $ACH$, $[ABC] = [CBH] + [ACH]$, so $AB^2 = CB^2 + AC^2$.

### Proof 2

Consider a circle $\omega$ with center $B$ and radius $BC$. Since $BC$ and $AC$ are perpendicular, $AC$ is tangent to $\omega$. Let the line $AB$ meet $\omega$ at $Y$ and $X$, as shown in the diagram: Evidently, $AY = AB - BC$ and $AX = AB + BC$. By considering the power of point $A$ with respect to $\omega$, we see $AC^2 = AY \cdot AX = (AB-BC)(AB+BC) = AB^2 - BC^2$.

### Proof 3

$ABCD$ and $EFGH$ are squares.

$[asy] pair A, B,C,D; A = (-10,10); B = (10,10); C = (10,-10); D = (-10,-10); pair E,F,G,H; E = (7,10); F = (10, -7); G = (-7, -10); H = (-10, 7); draw(A--B--C--D--cycle); label("A", A, NNW); label("B", B, ENE); label("C", C, ESE); label("D", D, SSW); draw(E--F--G--H--cycle); label("E", E, N); label("F", F,SE); label("G", G, S); label("H", H, W); label("a", A--B,N); label("a", B--F,SE); label("a", C--G,S); label("a", H--D,W); label("b", E--B,N); label("b", F--C,SE); label("b", G--D,S); label("b", A--H,W); label("c", E--H,NW); label("c", E--F); label("c", F--G,SE); label("c", G--H,SW); [/asy]$

$(a+b)^2=c^2+4\left(\frac{1}{2}ab\right)\implies a^2+2ab+b^2=c^2+2ab\implies a^2 + b^2=c^2$.

## Pythagorean Triples

Pythagorean Triples are triples of positive integers $a$, $b$, and $c$ with $a^2+b^2=c^2$.
These are the first few: (3,4,5) (5,12,13) (7,24,25) (8,15,17) (9,40,41) (11,60,61) (12,35,37) (13,84,85) (15,112,113) (16,63,65) (17,144,145) (19,180,181) (20,21,29) (20,99,101) (21,220,221) (23,264,265) (24,143,145) (25,312,313) (27,364,365) (28,45,53) (28,195,197) (29,420,421) (31,480,481) (32,255,257) (33,56,65) (33,544,545) (35,612,613) (36,77,85) (36,323,325) (37,684,685) And on and on... Remember that if $a^2+b^2=c^2$ then $(xa)^2+(xb)^2=(xc)^2$, so multiples such as $(6,8,10)$ and $(10,24,26)$ are not included.

## Special Right Triangles

### 45-45-90

#### Theorem

Say you have a right triangle with angles 45, 45, and 90. Then if a leg has length $x$, the hypotenuse has length $x\sqrt2$.

#### Proof

If this triangle has two equal angles then it has two equal sides. Therefore we can make an equation: $x^2+x^2=s^2$, where $s$ is the hypotenuse. $x^2+x^2=s^2 \Rightarrow 2x^2=s^2 \Rightarrow s=\sqrt{2}\,x$. We proved it!

### 30-60-90

#### Theorem

If the angles of a right triangle are 30, 60, and 90, and the short leg is $x$, then the hypotenuse is $2x$ and the other leg is $x\sqrt{3}$.

#### Proof

A 30-60-90 triangle is half of an equilateral triangle, so the hypotenuse is twice the short leg: $2x$. Letting $t$ be the other leg, we can make an equation: $x^2+t^2=(2x)^2 \Rightarrow t^2=(2x)^2-x^2 \Rightarrow t^2=3x^2 \Rightarrow t=\sqrt{3}\,x$. We proved it!

## Pythagorean Theorem related word problems

### Problem 1

Hawick is 15 miles south of Abbotsford, and Kelso is 17 miles east of Abbotsford. What is the distance from Hawick to Kelso?

#### Solution

$15^2+17^2=x^2 \Rightarrow 514=x^2 \Rightarrow x=\sqrt{514}\approx 22.7$ miles.

### Problem 2

A zip line starts on a platform that is 40 feet above the ground. The anchor for the zip line is 198 horizontal feet from the base of the platform. How long is the zip line?

#### Solution

$40^2+198^2=x^2 \Rightarrow 40804=x^2 \Rightarrow x=202$ feet.
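As a numeric companion to this section, the sketch below generates primitive triples with Euclid's formula ($a=m^2-n^2$, $b=2mn$, $c=m^2+n^2$ for coprime $m>n$ of opposite parity, a standard method the article itself doesn't mention) and checks the two word problems with `math.hypot`:

```python
from math import gcd, hypot

def primitive_triples(limit):
    """Yield primitive Pythagorean triples via Euclid's formula, for m < limit."""
    for m in range(2, limit):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                yield tuple(sorted((a, b))) + (c,)

for t in sorted(primitive_triples(6)):
    assert t[0] ** 2 + t[1] ** 2 == t[2] ** 2
    print(t)  # (3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (9, 40, 41), (20, 21, 29)

# The word problems:
print(hypot(15, 17))   # Hawick to Kelso: sqrt(514) ≈ 22.67 miles
print(hypot(40, 198))  # zip line: exactly 202 feet
```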
2020-04-05 16:31:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 132, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.935730516910553, "perplexity": 2265.84445084367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371606067.71/warc/CC-MAIN-20200405150416-20200405180916-00495.warc.gz"}
https://www.tensorflow.org/versions/r1.8/api_docs/python/tf/image/total_variation
# tf.image.total_variation

tf.image.total_variation( images, name=None )

See the guide: Images > Denoising

Calculate and return the total variation for one or more images. The total variation is the sum of the absolute differences for neighboring pixel-values in the input images. This measures how much noise is in the images. It can be used as a loss function during optimization so as to suppress noise in images. If you have a batch of images, then you should calculate the scalar loss value as the sum:

loss = tf.reduce_sum(tf.image.total_variation(images))

This implements the anisotropic 2-D version of the formula described here: https://en.wikipedia.org/wiki/Total_variation_denoising

#### Args:

- images: 4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels].
- name: A name for the operation (optional).

#### Raises:

- ValueError: if images.shape is not a 3-D or 4-D vector.

#### Returns:

The total variation of images. If images was 4-D, return a 1-D float Tensor of shape [batch] with the total variation for each image in the batch. If images was 3-D, return a scalar float with the total variation for that image.
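A minimal end-to-end sketch, written against the TF 1.x API this page documents (the random batch is purely illustrative):

```python
import tensorflow as tf

images = tf.random_uniform([4, 64, 64, 3])  # illustrative batch of 4 RGB images
tv = tf.image.total_variation(images)       # 1-D Tensor of shape [4]
loss = tf.reduce_sum(tv)                    # scalar loss, as the guide suggests

with tf.Session() as sess:
    tv_vals, loss_val = sess.run([tv, loss])
    print(tv_vals, loss_val)
```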
2018-08-14 13:30:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6915647387504578, "perplexity": 2519.040028847596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209040.29/warc/CC-MAIN-20180814131141-20180814151141-00072.warc.gz"}
http://tex.stackexchange.com/tags/two-column/hot?filter=year
# Tag Info

[+7] Here is a solution: we set \setcounter{topnumber}{1} and then restore its value. \documentclass[twocolumn]{article} \usepackage{mwe} \edef\mttopnumber{\arabic{topnumber}} \setcounter{topnumber}{1} \begin{document} \begin{table}[tp] \centering \caption{A table} \begin{tabular}{|c|c|} \hline 12 & 13 \\ \hline 10 & 11 \\ \hline \end{tabular} ...

[+6] Use a list: \documentclass{article} \usepackage{enumitem} \begin{document} \begin{description}[font=\normalfont\itshape,leftmargin=4cm,labelwidth=!] \item[Elastic modulus, \textbf{E}] Steel is easy to bend \textit{elastically} means that it springs back when released. Its resistance to bending, \textit{elastic stiffness}, is set by shape and the property, ...

[+6] With longtable: \documentclass{article} \usepackage{array,longtable} \usepackage{lipsum} \begin{document} \begin{center} \bfseries\Large Curriculum Vitae \end{center} \begin{longtable}{@{}>{\raggedleft}p{0.25\linewidth}| p{\dimexpr0.75\linewidth-2\tabcolsep-\arrayrulewidth\relax}@{}} Address & Some address \\ ...

[+6] You can use \begin{table*} ... \end{table*} to let a table span two columns. And to be honest, not much of your code is "needed to illustrate the current problem". For example, it has nothing to do with the SelfArx class you're using (which I first had to look for), and most of the other code you posted. Here's a much more minimal version of your code ...

[+5] Add the following lines to your preamble: \makeatletter \let\oldmarginnote\marginnote \renewcommand*{\marginnote}[1]{% \begingroup% \ifodd\value{page} \if@firstcolumn\reversemarginpar\fi \else \if@firstcolumn\else\reversemarginpar\fi \fi \oldmarginnote{#1}% \endgroup% } \makeatother MWE: \documentclass[twoside]{article} ...

[+5] Change the nesting order: put the columns environment inside the cvbox: \begin{cvbox}[frametitle={Adolphe Quetelet}] \begin{columns} \begin{column}{0.25\textwidth} \rule{\textwidth}{4cm} \end{column} \begin{column}{0.75\textwidth} (sample text, translated from German:) After his father died early in 1803, Adolphe Quetelet already had to see to building up his own ...

[+5] If tcolorbox is acceptable as an mdframed alternative, the next code shows a possible solution with a sidebyside box: \documentclass[ngerman]{beamer} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{babel} \usepackage{graphicx} \usepackage{xcolor} \usepackage{lmodern} \usepackage[most]{tcolorbox} \newtcolorbox{cvbox}[1]{sidebyside, lefthand ...

[+5] \documentclass[twocolumn]{article} \usepackage{capt-of} \def\a{One two three four five six seven eight nine ten. } \def\b{\a\a\a\a\par\a Red green blue. \a\a\a Yellow. \a.\par} \title{zzz} \author{me} \begin{document} \maketitle \enlargethispage{-3.2cm} \noindent\begin{picture}(0,0) \put(0,-390){\begin{minipage}{\textwidth} \centering ...

[+5] This is a known bug in LaTeX: add \RequirePackage{fixltx2e} at the start of the document.

[+5] There are many ways of doing it. A sample with paracol: \documentclass{article} \usepackage{paracol} \usepackage{kantlipsum} \usepackage[margin=.5in]{geometry} \begin{document} \begin{paracol}{2} \kant[1-2] \switchcolumn \kant[3-4] \end{paracol} \end{document} With parcolumns: \documentclass{article} \usepackage{parcolumns} ...

[+5] You have specified that the first column (which only has a word or two) is X, which allows line breaking, and the second two columns are l, which are single-line. You don't appear to have any data in the third column at all, so perhaps {lX} instead of {Xll}.

[+5] You can use a list like itemize: \documentclass{scrreprt} \usepackage{blindtext} \usepackage{graphicx} \usepackage{enumitem} \newcommand{\myicon}[1]{\smash{\raisebox{-0.85\height}{\includegraphics[width=15mm]{#1}}}} \begin{document} \blindtext \begin{itemize}[leftmargin=*] \item Test \item Test \end{itemize} \blindtext \begin{itemize} ...

[+5] A weird approach with a figure* environment which spans both columns and nested tabular environments to get the alignment. The filling depends on the size of the table and the image, of course. (The \hlines are just for checking, not for real output.) \documentclass[twocolumn]{article} \usepackage{tabularx} \usepackage[demo]{graphicx} \usepackage{caption} ...

[+4] Treat the coordinate data as a URL, and define your own breakpoints. I use numbers as breakpoints in the MWE. I don't add hyphens when line-breaking, but the package allows that option. EDITED to show with and without a leading indent. \documentclass{article} \usepackage[obeyspaces,spaces]{url} \urlstyle{rm} ...

[+4] You should take into account "How to keep a constant baselineskip when using minipages (or \parboxes)?" and use \parbox[t]. Here's an implementation that also takes into account the possibility that the left box has more lines than the right box. \documentclass[10pt]{article} \usepackage{lipsum} \usepackage{forloop} \usepackage{geometry} ...

[+4] Set geometry up with the dimensions of the final pages you want and create a PDF normally. Then create a second document and include the first PDF using pdfpages. This has options for creating the booklet with the format you want. So I use a5paper, twoside with geometry for my first document. Then I use twoside,a4paper and \usepackage{pdfpages} ...

[+4] You could work out exactly which constraint is forcing the float to the next page, but such problems are exactly why [!] was added to LaTeX; just tell it to ignore the constraints here and the float stays on the first page. \documentclass[ twoside, reprint, aps, pra, a4paper ]{revtex4-1} \usepackage{lipsum} \usepackage[draft]{pgf} ...

[+4] The \begin{article}...\end{article} environment wrapper inside the body is needed for a twocolumn setup (strangely enough). The document class can be found at the PNAS author site. \documentclass{pnastwo} \author{Miss Ann Elk\affil{1}{Ministry of Silly Walks}} \title{On Brontosaurs} \usepackage{blindtext} \begin{document} \maketitle \begin{article} ...

[+4] The error you get is due to the fact that you're embedding a floating environment like figure in a minipage, and this is not allowed. To achieve what you want, when you are in a two-column document, simply issuing the command \onecolumn puts you in one-column mode. Also, to have centered contents inside the figure, use \centering instead of a center ...

[+4] You set \setlength\columnsep{40pt} on line 5, so you get 40pt of space between the columns.

[+4] If I understand your problem correctly, you should use the cuted package (from the sttools bundle) and its strip environment. Here is an example: \documentclass[twocolumn]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{ebgaramond} \usepackage[frenchb]{babel} \usepackage{cuted, xcolor} \usepackage{graphicx, caption, threeparttable} ...

[+4] Floats in two-column documents don't really work or stick to columns the way the author intends them to. As such, you typically need to do some legwork to bend things your way. Below I've pushed the second "float" a little further in the code and delayed its output using afterpage. This allows it to be placed into the second page before all the roll-over text ...

[+4] min should be \min (never use math italic for multi-letter identifiers) and b^{'} should be b', and there is no need for \substack if you only have one line in the subscript. But other than that, the equation fits in a two-column IEEE document: \documentclass{IEEEtran} \usepackage{amsmath} \begin{document} \noindent X\dotfill X \label{eq:3} \min ...

[+4] \documentclass[fleqn,10pt,twocolumn]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath,array,graphicx} \usepackage{dcolumn,booktabs} \begin{document} \begin{table}[tbp]\centering \caption{Average measures of strains' halos during the first five days} \noindent X\dotfill X \small \setlength\tabcolsep{3pt} \begin{tabular}{@{}r ...

[+4] The package stfloats allows bottom two-column floats: \documentclass[twocolumn]{article} \usepackage{graphicx} \usepackage{stfloats} \usepackage{kantlipsum} \begin{document} \kant[1-3] \begin{figure*}[b] \centering \includegraphics[width=.8\textwidth,height=4cm]{example-image} \caption{A caption to this wonderful picture} \end{figure*} ...

[+4] afterpage doesn't support twocolumn (I never thought anyone was going to use it at all :-) and making it do so would be quite a bit of work. If you use \onecolumn at the point where LaTeX would have broken the text page had there been no table, you can then add the longtable in one-column mode, then issue \twocolumn and resume the text. This is more hand ...

[+4] It's not perfect (I personally don't like how the code cuts mid-tag) but it should do the job. Basically, I unpacked the tcblisting environment and added a multicols environment. \documentclass{report} \usepackage{tcolorbox} \tcbuselibrary{minted} \usepackage{blindtext} \usepackage{multicol} % added package \begin{document} \blindtext ...

[+4] Set each of the blocks inside a minipage that is just wide enough to fit 50% of the text block minus half the width of the rule: \documentclass{article} \usepackage{amsmath,lipsum,calc} \begin{document} \lipsum[1] \noindent \makebox[\linewidth]{% \begin{minipage}[t]{\dimexpr0.5\linewidth-.2pt} \vspace{-\baselineskip} \begin{align*} ...

[+3] You have to use figure* here. Also reduce the width of the minipages to \begin{minipage}{0.48\textwidth}, and in \includegraphics the width can be \linewidth. Further, you may need the [t] alignment specifier for the minipages and an \hfill between them. There is no need to use \captionof; use \caption straight away. ...
2015-11-26 00:40:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9832766056060791, "perplexity": 9589.28136157896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446250.47/warc/CC-MAIN-20151124205406-00195-ip-10-71-132-137.ec2.internal.warc.gz"}
https://aaabackflowllc.com/gpd-to-auwbqis/e70576-convert-decimal-to-fraction
# How to Convert a Decimal to a Fraction

Decimals and fractions represent the same thing: a number that is not exactly a whole number. To convert a decimal to a fraction, follow these steps:

Step 1: Write down the decimal divided by 1, like this: decimal/1.

Step 2: Multiply both top and bottom by 10 for every number after the decimal point. (For example, if there are two numbers after the decimal point, multiply by 100; if there are three, by 1,000.)

Step 3: Simplify (reduce) the fraction: find the Greatest Common Factor (GCF) of the numerator and denominator and divide both by it.

Example: convert 0.05 to a fraction. Step 1: $0.05 = 0.05/1$. Step 2: there are two digits after the decimal point, so multiply top and bottom by 100 to get $5/100$. Step 3: reduce to $1/20$.

Example: convert .108 to a fraction. After putting the decimal over 1, we end up with $.108/1$. Since .108 has three digits after the decimal place, we need to multiply the entire fraction by $10 \times 10 \times 10$, or 1000, giving $108/1000$, which reduces to $27/250$.

More examples: 0.45 is 45 hundredths, i.e. $45/100 = 9/20$. The decimal .55 is equal to the fraction $11/20$. Converting 0.625 gives $625/1000$; reducing, we get $5/8$. For 0.124, the greatest common divisor is $\gcd(124,1000)=4$, so $0.124 = (124/4)/(1000/4) = 31/250$.

When there is a whole number part, put the whole number aside and bring it back at the end; the result is a mixed fraction, i.e. a whole number plus a fractional part. For example, $1.625 = 1625/1000$; the GCF of 1625 and 1000 is 125, so $1.625 = 13/8 = 1\,5/8$. Likewise, $2.625 = 2625/1000 = 21/8 = 2\,5/8$.

For repeating decimals, use algebra instead:

1. Create an equation such that $x$ equals the repeating decimal number.
2. Count the number of repeating decimal places, $y$, and create a second equation by multiplying both sides of the first equation by $10^y$.
3. Subtract the first equation from the second, solve for $x$, and reduce the fraction if needed.

Example #1: convert 0.333333... to a fraction. $x = 0.333333...$, so $10x = 3.333333...$, hence $10x - x = 9x = 3$ and $x = 1/3$.

Example #2: find the fraction represented by the repeating decimal $0.\overline{7}$. Let $n$ stand for $0.77777...$; then $10n$ stands for $7.77777...$. Since $10n$ and $n$ have the same fractional part, their difference $9n = 7$ is an integer, so $n = 7/9$.

Example #3: convert $2.\overline{666}$. Taking the repeating group to be 666, there are 3 repeating decimals, so $y = 3$ and we multiply both sides by $10^3 = 1000$: from $1000x = 2666.\overline{666}$ and $x = 2.\overline{666}$ we get $999x = 2664$. The GCF of 2664 and 999 is 333, so $x = 2664/999 = 8/3$.

(Identifying the repeating group is the only subtle step: in 0.66666... the single digit 6 repeats; in 0.857142857142... the six-digit group 857142 repeats; in 1.8333... only the final 3 repeats.)

For a negative decimal: remove the negative sign from the decimal number, perform the conversion on the positive value, then apply the negative sign to the fraction answer.

To go the other way and convert a fraction to a decimal, divide the numerator by the denominator (the line in a fraction that separates the numerator and denominator can be rewritten using the division symbol). The fraction 15/8: divide 15 by 8, and you end up with the decimal 1.875. The fraction 5/9: divide 5 by 9 to get the decimal $0.555...$.

How to convert a percent to a decimal: divide by 100 and remove the percent sign %. Example: 15.6% becomes $15.6/100 = 0.156$.

There are two steps to convert decimal betting odds into a fraction. Step 1: subtract 1 and use 1 as the denominator; for example, decimal odds of 3.40 give $(3.40-1)/1 = 2.40/1$, i.e. $12/5$. Step 2: simplify the resulting fraction.

A note on tape-measure work (converting a value in decimal feet or inches to feet, inch, and fraction format): inch rulers are divided into halves, quarters, eighths, and so on, so converting a decimal to an inch fraction is not as simple as finding the nearest fraction. Instead, find the nearest fraction whose denominator is a power of 2 (1/2, 1/4, 1/8, 1/16, ...), also known as a dyadic fraction or dyadic rational number. Example: for the decimal 6.6543", a precision (denominator) of 16 gives the usable fraction 6 5/8", while a precision of 64 gives 6 21/32". Precision is often set at 16, but if you need it more precise you could change it to a larger denominator like 64 or 128.

(A related routine for converting a decimal integer to binary: divide the number by 2, get the remainder for the binary digit, and repeat the steps until the quotient is equal to 0.)

Reference: "Repeating Decimal," Wikipedia, The Free Encyclopedia. Last visited 18 July 2016.
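These steps are exactly what Python's standard fractions module automates. A small sketch (the sample values are my own, not from the original page):

```python
from fractions import Fraction

# Terminating decimals: exact conversion from the string form.
print(Fraction("0.05"))    # 1/20
print(Fraction("0.108"))   # 27/250
print(Fraction("1.625"))   # 13/8, i.e. the mixed fraction 1 5/8

# Repeating decimals: approximate with a float, then find the nearest simple fraction.
print(Fraction(0.3333333333).limit_denominator())  # 1/3
print(Fraction(2.6666666666).limit_denominator())  # 8/3
```

Passing a string keeps the conversion exact, while `limit_denominator` mirrors the "nearest usable fraction" idea from the tape-measure discussion above.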
15/8: divide 5/9 to get the decimal number: there are two steps to convert decimal to,! The numerator has digits fraction 5/9: divide 5/9 to get the number... Add as many places to the user number after the decimal numerator has digits equal. By following these three easy steps multiply both top and bottom by 1,000 ( 3 digits after decimal. Converting a decimal, '' Wikipedia, convert decimal to fraction numerator has digits thing in this conversion is to simplify the $. Following video lesson shares and easy 3-step method for converting a decimal number to a fraction repeating enter. 1,000 ( 3 digits in the original number mixed numbers.All worksheets are printable files... Should end after a one or several points after the decimal number which you to... Last 30 days ) Delany MacDonald on 22 Mar 2016 convert decimal to fraction ) of the fraction converting between,. Fractional part, so 10×10×10=1,000 ) 1 ] convert decimal to fraction Research source Let 's say you 're working with terminating. Converting from a percent to a fraction to decimal with the terminating decimal is working tens. Have decimals denominator: gcd ( 124,1000 ) = 4 is specifically to! Also converted into a fraction without a decimal to a fraction when showing to the right of decimal! 0.625 to a fraction to a fraction by subtracting 1, and you might need to able...: simplify the remaining fraction to a fraction without a decimal, '' Wikipedia, numerator... Many zeros as you had decimals in the original number create an equation such that x the... Move the decimal down the decimal number to a fraction ( over 1, we have 2 numbers after decimal. How many decimal places by multiplication or fraction to a fraction ( over 1 ) convert odds. Should end after a one or several points after the decimal number to a fraction ( 1. Is not as simple as finding the nearest fraction how many places are to the repeating decimal to a fraction.Almost. The user to convert a fraction step by step Solution and hard to,... On 22 Mar 2016 fractions is very useful on projects where a tape measure is being used 3-step method converting.: write down the decimal 1.875 converts decimal values to fractions 1 }.. Find the fraction, place the decimal point as many zeros as you have x decimal places in decimal... Problem as follows terminates, then it should end after a one or several points the! 0.05 = 1 ( does it almost the same fractional part, so y = convert decimal to fraction, this is. In tens, hundreds, thousands or more 1/4, or 100ths values! The hardest thing in this case, you place the number 1 in the original number.. you convert... Fraction data is not as simple as finding the nearest fraction 124,1000 ) = 4 thousands or.! 1 ( does it fraction ( i.e as finding the nearest fraction decimal see the graphic below ) understand... Be able to convert a decimal to fraction example # 1 denominator is 1. Given that you have x decimal places, multiply numerator and denominator by by =., 32nds, 64ths, or 1/2, divide the numerator has digits sign % dividing. By step Solution over its place value does it by 1 fraction, otherwise the operation fairly., 1/32, 1/16, 1/8, 1/4, or 100ths precision values to! So 10×10×10=1,000 ) '' Wikipedia, the numerator by the GCF the greatest common Factor ( GCF ) the... Of Digi-Key 's conversion calculator decimal number repeat a decimal to fraction chart # 1 establish whether decimal! After the decimal number to a fraction by following these three easy.. 
Decimals to fractions easy steps is done by removing the percent sign % and dividing value! Tool is specifically designed to convert it to normal decimal, then it end! Point as many zeroes as the denominator is a 1 with as many places to. Terminating decimal.325 step Solution a terminating decimal is working in tens, hundreds, thousands or more second! Fraction data is not as simple as finding the nearest fraction in fractions to decimals in. Divide 15 by 8, and you might need to convert repeating decimal group, so y = 3 or. Give students practice in converting between fractions, decimals and mixed numbers.All worksheets are pdf... Inch and fraction format numbers after the decimal point: you have x decimal places by multiplication number: are! repeating decimal number to a fraction by subtracting 1, we have to find greatest... Now the numerator has digits the greatest common divisor ( gcd ) of first... As a fraction decimals to fractions to get the decimal point, we both! Converted into a fraction step by step Solution, divide the numerator a., 32nds, 64ths, or 100ths precision values also given the … how to convert repeating decimal ''! It a thumbs up and leave a comment decimals odds into a fraction or fraction a! '' Wikipedia, the Free Encyclopedia decimal group, so 10×10×10=1,000 ), to convert odds... That does not repeat denominator should be to the fraction data is not simple. Is an integer.. you can solve this problem as follows, and might. Original number decimal 0.25 as an example ( see the fraction 15/8: 5/9! Fairly trivial it to normal decimal decimal 1.875 decimal number.You can select 16ths, 32nds, 64ths, or.! Also converted into a usable fraction operation is fairly trivial to eliminate 3 decimal,! > Math > Grade 5 > fractions vs decimals decimal back to a fraction step by step Solution 1/64 1/32! Point: so, to convert a decimal into its equivalent fraction done by removing the sign. Projects where a tape measure is being used also given the … how to a..., converting a decimal into its equivalent fraction ( does it a comment it! 1,000 ( 3 digits in the repeating decimal to a decimal, '' Wikipedia, numerator. Mar 2016 a regular fraction.Almost that you have x decimal places, 3 normal decimal number after decimal. Converted into a usable fraction ( does it the nearest fraction is very useful on projects where a tape is! A one or several points after the decimal over 1 ), 5 to understand and! Given the … how to convert a fraction ( over 1 ), 2 being.. Fractional part, so y = 3 ceate a second equation by 10 the... Divide it by 1 16ths, 32nds, 64ths, or 1/2 both and... Write down: and 0.999... = 1 / 20 as a result, converting a decimal to a to! As converting any decimal number.You can select 16ths, 32nds, 64ths, or.!, and divide it by 1 equation 2: 2, hundreds, thousands or more in... 16Ths, 32nds, 64ths, or 100ths precision values solve this problem as follows decimal.... Want to convert the decimal over 1, and divide both numerator denominator. Can select 16ths, 32nds, 64ths, or 100ths precision values case we write down and... Nervous Weakness In Tamil, Creamy Garlic Seafood Pasta, Fallout 76 Wendigo Colossus Event, Coco Coir For Sale Philippines, Meat Tortellini Recipes, Blue Light Bulbs For Car Interior, " /> Math > Grade 5 > Fractions vs decimals. (For example, if there are two numbers after the decimal point, then use 100, if … A mixed fraction is a fraction that is in the form of a where, a is the whole number and is the fractional part. 
Create an equation such that x equals the decimal number. Cite this content, page or calculator as: Furey, Edward "Decimal to Fraction Calculator"; CalculatorSoup, The decimal .55 is equal to the fraction${11}/{20}$. Find the fraction represented by the repeating decimal .. Let n stand for or 0.77777 …. And convert it into its corresponding decimal, to be saved in MySQL, that way I can order by it and do other comparisons to it. How to Convert Decimal to Fraction. After putting the decimal over 1, we end up with${.108}/{1}. decimal to fraction conversion. The fraction data is not common and hard to understand, and you might need to convert it to normal decimal. To convert the decimal 0.05 to a fraction follow these steps: Step 1: Write down the number as a fraction of one:. Converting fractions to/from decimals worksheets for Grade 5. The fraction 15/8: Divide 15 by 8, and you end up with the decimal 1.875. There are two steps to convert decimal odds into a fraction. To convert a decimal to a fraction, place the decimal number over its place value. [1] X Research source Let's say you're working with the terminating decimal .325. - see the 9 Recurring discussion for more if you are interested), so: You can also try the Decimal to Fraction Calculator. For a repeating decimal such as 0.66666... where the 6 repeats forever, enter 0.6 and since the 6 is the only one trailing decimal place that repeats, enter 1 for decimal places to repeat. In this case, you will use the decimal 0.25 as an example (see the graphic below). In that case we write down: And 0.999... = 1 (Does it? Convert .108 to a fraction. Rewrite the decimal number number as a fraction (over 1), 2. To convert a decimal into a fraction, you put the numbers to the right of the decimal point in the numerator (above the fraction line). Vote. Step 2: Remove the decimal places by multiplication. Of course, these examples have divided evenly so far, but if the division doesn’t come out evenly, you can stop after a certain number of decimal places and round off. Get the remainder for the binary digit. This will become your multiplier in step 3. The online fraction calculator calculates the fraction value of any decimal number.You can select 16ths, 32nds, 64ths, or 100ths precision values. More specifically, this tool is specifically designed to convert a decimal into its equivalent fraction. This calculator that converts decimal values to fractions is very useful on projects where a tape measure is being used. Create a second equation multiplying both sides of the first equation by 10. 0.05 = 0.05 / 1 Step 2: Multiply both top and bottom by 10 for every number after the decimal point:. For the repeating decimal 0.857142857142857142..... where the 857142 repeats forever, enter 0.857142 and since the 857142 are the 6 trailing decimal places that repeat, enter 6 for decimal places to repeat. Here, this article will introduce the methods on converting between fraction and decimal … An online decimal to fraction calculator is the tool that allows you to convert decimal to fraction and revert a repeating decimal to its original and simplest fraction form. Decimals are nothing more than glorified fractions. Step 2: Remove the decimal point. Fraction to decimal: 11/25. How to convert repeating decimal to fraction Example #1. Example: 0.45 is 45 hundredths 2. Next, you place the number 1 in the denominator, and then add as many zeroes as the numerator has digits. 0. 
Multiply numerator and denominator by by 103 = 1000 to eliminate 3 decimal places, 3. For another example, convert 0.625 to a fraction. Adding & subtracting rational numbers. Count the number of decimal places, y. How to Convert a Percent to a Decimal: Divide by 100 to convert a percent to a decimal and remove the percent sign %. Example. Convert decimal 0.05 to a fraction. The following video lesson shares and easy 3-step method for converting a decimal to a fraction without a decimal to fraction chart! Decimal to Fraction. Here's how... 1. So 10 n stands for or 7.77777 …. For example, in 0.6, the six is in the tenths place, so we place 6 over 10 to create the equivalent fraction, 6/10. For repeating decimals enter how many decimal places in your decimal number repeat. In mathematics, it is possible to give your answers in different ways, and this can be in whole numbers, fractions or even decimals. If needed, simplify the fraction. Principally, we have to find the ratio of two numbers, the numerator and the denominator. Find the greatest common divisor (gcd) of the numerator and the denominator: gcd(124,1000) = 4. Step 2: multiply both top and bottom by 1,000 (3 digits after the decimal point, so 10×10×10=1,000). As a result, converting a decimal to an inch fraction is not as simple as finding the nearest fraction. For a repeating decimal such as 1.8333... where the 3 repeats forever, enter 1.83 and since the 3 is the only one trailing decimal place that repeats, enter 1 for decimal places to repeat. Simplify the fraction at the end. To convert a Decimal to a Fraction follow these steps: Step 1: Write down the decimal divided by 1, like this: decimal 1; Step 2: Multiply both top and bottom by 10 for every number after the decimal point. Step 1: Write down the decimal number which you want to convert, and divide it by 1. The fraction 5/9: Divide 5/9 to get the decimal .555…. Fraction to Decimal Calculator. Convert a terminating decimal to a fraction. The fraction in addition to being in an exact result is also converted into a usable fraction. Additional calculators available at Digi-Key. There are 3 digits in the repeating decimal group, so y = 3. The line in a fraction that separates the numerator and denominator can be rewritten using the division symbol. Create an equation such that x equals the decimal number Reduce the fraction. $1.625 = 1 \frac{5}{8}$Showing the work, $$\dfrac{2.625}{1}\times \dfrac{1000}{1000}= \dfrac{2625}{1000}$$, $$\dfrac{2625 \div 125}{1000 \div 125}= \dfrac{21}{8}$$, $$1000 x = 2666.\overline{666}\tag{2}$$, \eqalign{1000 x &= &\hfill2666.666...\cr x &= &\hfill2.666...\cr \hline 999x &= &2664\cr}, $$\dfrac{2664 \div 333}{999 \div 333}= \dfrac{8}{3}$$, Find the Greatest Common Factor (GCF) of 1625 and 1000, Find the Greatest Common Factor (GCF) of 2625 and 1000, Find the Greatest Common Factor (GCF) of 2664 and 999. This creates the decimal odds of 2.40/1. How To Convert Decimal Odds To Fractional. I am also given the … Convert a Repeating Decimal to a Fraction. If you really meant 0.333... (in other words 3s repeating forever which is called 3 recurring) then we need to follow a special argument. Count the number of decimal places, y. "Repeating Decimal," Wikipedia, The Free Encyclopedia. But I need to be able to convert the decimal back to a fraction when showing to the user. Equation 1: 2. Example: 15.6% becomes 15.6 / … Since .108 has three digits after the decimal place, we need to multiply the entire fraction by 10 x 10 x 10, or 1000. Reducing we get 5/8. 
To convert a negative decimal: remove the negative sign from the decimal number, perform the conversion on the positive value, then apply the negative sign to the fraction answer.

The opposite direction is easier. The line in a fraction that separates the numerator and denominator can be rewritten using the division symbol, so to convert a fraction to a decimal, divide the numerator by the denominator. The fraction 15/8 gives 15 ÷ 8 = 1.875, the fraction 11/25 gives 0.44, and the fraction 5/9 gives the repeating decimal 0.555…. If the division doesn't come out evenly, you can stop after a certain number of decimal places and round off.

Repeating decimals need their own argument. Let n stand for 0.77777…; then 10n stands for 7.77777…. Since 10n and n have the same fractional part, their difference 9n is an integer, namely 7, so n = 7/9. This observation is the basis of the general method below.
How to Convert a Repeating Decimal to a Fraction

1. Create an equation such that x equals the repeating decimal number.
2. Count the number of repeating decimal digits, y, and create a second equation by multiplying both sides of the first equation by 10^y.
3. Subtract the first equation from the second; the repeating tails cancel, and you can solve for x.
4. Reduce the fraction.

Example #1: convert 0.333… (3 recurring, i.e. 3s repeating forever) to a fraction.

x = 0.333333…
10x = 3.333333…
10x − x = 9x = 3
x = 3/9 = 1/3

Example #2: convert 2.666… to a fraction. Take "666" as the repeating group, so there are 3 repeating digits, y = 3, and the second equation multiplies both sides by 10³ = 1000:

$$x = 2.\overline{666}\tag{1}$$

$$1000x = 2666.\overline{666}\tag{2}$$

Subtracting (1) from (2):

$$1000x - x = 999x = 2664$$

Find the GCF of 2664 and 999, which is 333, and reduce:

$$\dfrac{2664 \div 333}{999 \div 333}= \dfrac{8}{3}$$

(The same argument shows that 0.999… = 1; see any discussion of 9 recurring.)

When using a calculator, enter how many trailing decimal places repeat: for 0.363636… enter 0.36 with 2 repeating places; for 0.66666… enter 0.6 with 1; for 1.8333… enter 1.83 with 1; for 0.857142857142… enter 0.857142 with 6.

Reference: "Repeating Decimal," Wikipedia, The Free Encyclopedia. Last visited 18 July 2016.
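A hedged sketch of the subtract-and-divide method in code: for a number with p non-repeating decimal digits followed by a repeating group of y digits, the steps above amount to (digits read through one full group − digits before the group) / (10^p · (10^y − 1)). The helper below is illustrative only, reusing the hypothetical Fraction struct from the earlier sketch:

#include <cstdio>
#include <numeric>   // std::gcd (C++17)

struct Fraction { long long num, den; };

// p = non-repeating decimal digits, y = length of the repeating group.
// For 2.666...: with_group = 2666 (digits through one group), before = 2,
// p = 0, y = 3  ->  (2666 - 2) / 999 = 2664/999 = 8/3.
Fraction from_repeating(long long with_group, long long before, int p, int y) {
    long long pow_p = 1, pow_y = 1;
    for (int i = 0; i < p; ++i) pow_p *= 10;
    for (int i = 0; i < y; ++i) pow_y *= 10;
    long long num = with_group - before;      // the subtraction step
    long long den = pow_p * (pow_y - 1);      // e.g. 999, 990, 9900, ...
    long long g = std::gcd(num, den);
    return { num / g, den / g };
}

int main() {
    Fraction a = from_repeating(3, 0, 0, 1);       // 0.333... -> 1/3
    Fraction b = from_repeating(2666, 2, 0, 3);    // 2.666... -> 8/3
    std::printf("%lld/%lld and %lld/%lld\n", a.num, a.den, b.num, b.den);
}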
How to Convert a Decimal to an Inch Fraction

Typical inch fractions look like 1/64, 1/32, 1/16, 1/8, 1/4, or 1/2. As a result, converting a decimal to an inch fraction is not as simple as finding the nearest fraction: instead, it is necessary to find the nearest fraction whose denominator is a power of 2, also known as a dyadic fraction or dyadic rational number. This is very useful on projects where a tape measure is being used, and the same idea converts a value in decimal feet to feet, inch and fraction format.

The precision (denominator) option on such calculators is typically set at 16, but if you need the result more precise you can change it to 32, 64, 128, etc. For example:

Decimal 6.6543", precision = 16: exact fraction = 6 6543/10000, usable fraction = 6 5/8"
Decimal 6.6543", precision = 64: exact fraction = 6 6543/10000, usable fraction = 6 21/32"

Online tools such as Digi-Key's conversion calculator and CalculatorSoup's converter offer 16ths, 32nds, 64ths, or 100ths precision and report the fraction in addition to the exact result, converted into a usable fraction.
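One way to compute the "usable fraction" shown above, assuming round-to-nearest at a power-of-two precision (the function name and Fraction struct are again illustrative, not from the calculators mentioned):

#include <cmath>
#include <cstdio>
#include <numeric>   // std::gcd (C++17)

struct Fraction { long long num, den; };

// Round a decimal inch value to the nearest multiple of 1/denom
// (denom = 16, 32, 64, ...), then reduce the result.
Fraction nearest_inch_fraction(double inches, long long denom) {
    long long num = std::llround(inches * denom);  // 6.6543 * 64 -> 426
    long long g = std::gcd(num, denom);
    return { num / g, denom / g };
}

int main() {
    Fraction f = nearest_inch_fraction(6.6543, 64);   // 426/64 -> 213/32
    std::printf("%lld/%lld = 6 21/32\"\n", f.num, f.den);
}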
Some decimals are familiar to us, so we can instantly see them as fractions (0.5 is 1/2, for example). For less obvious decimals such as 0.45, 0.62 or 0.384, use the place-value method: 0.45 is 45 hundredths, so 0.45 = 45/100 = 9/20.

How to Convert Decimal Odds to Fractional Odds

There are two steps to convert decimal odds into a fraction:

Step 1: Subtract 1 from the decimal odds and use 1 as the denominator. Example: 3.40 − 1 = 2.40, which creates the fractional odds 2.40/1.
Step 2: Scale to whole numbers and reduce: 2.40/1 = 240/100 = 12/5.

Cite this content, page or calculator as: Furey, Edward. "Decimal to Fraction Calculator"; CalculatorSoup, https://www.calculatorsoup.com - Online Calculators.
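The odds conversion is just the terminating-decimal routine applied to (odds − 1); a tiny self-contained sketch under the assumption of at most two decimal places (names are mine):

#include <cmath>
#include <cstdio>
#include <numeric>   // std::gcd (C++17)

// Convert decimal betting odds to fractional odds, assuming at most
// two decimal places (hence the fixed factor of 100).
void decimal_odds_to_fractional(double odds) {
    long long num = std::llround((odds - 1.0) * 100); // 3.40 -> 240
    long long den = 100;
    long long g = std::gcd(num, den);
    std::printf("%lld/%lld\n", num / g, den / g);     // prints 12/5
}

int main() { decimal_odds_to_fractional(3.40); }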
2021-04-22 11:31:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7912309765815735, "perplexity": 795.5133032415865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00251.warc.gz"}
https://senseis.xmp.net/?EyeLiberties
# Eye liberties

Keywords: Tactics

Chinese: 眼气 (yǎn qì)
Japanese: 呼吸点 (kokyuten)
Korean: -

A group involved in a capturing race may have an eye. The liberties provided by that eye are called eye liberties. Here, "liberties" is used in its capture-metric sense.

Small eyes (one to three points) have as many eye liberties as their number of empty spaces, less the number of opposing stones inside. See small eye liberties.

Big eyes (four to seven points) have more eye liberties than their number of empty spaces. See big eye liberties.

The player who knows the sequence of liberties as a function of eye size (1, 2, 3, 5, 8, 12, 17) has an advantage over a player who has to read out the capturing race. The general formula for an eye of size n (for 2 <= n <= 7) is

L(n) = n(n-3)/2 + 3

Subtract the number of enemy stones inside to determine the number of eye liberties remaining.

Instead of using that formula it might be easier for some players to remember: "3 is 3, 4 is 5, 5 is 8, 6 is 12". The first number is the size of the eye in question; the second number is the number of liberties it is worth before(!) an enemy stone is placed inside. To memorize this, read it out loud repeatedly.

Another possibly easier way to remember the number of eye liberties in an empty big eye is that the extra liberties grow like the triangular numbers:

The smallest big eye (4 points) has 1 extra liberty, total 4 + 1 = 5.
The next big eye (5 points) has 1 + 2 = 3 extra liberties, total 5 + 3 = 8.
The next big eye (6 points) has 1 + 2 + 3 = 6 extra liberties, total 6 + 6 = 12.
The largest big eye (7 points) has 1 + 2 + 3 + 4 = 10 extra liberties, total 7 + 10 = 17.
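Since the formula and the mnemonic are easy to check mechanically, here is a small illustrative helper (the function name is mine, not from Sensei's Library):

#include <cassert>

// Eye liberties for an eye of size n (1 <= n <= 7) containing `enemy`
// opposing stones: small eyes are worth their empty spaces, big eyes
// follow L(n) = n(n-3)/2 + 3; subtract enemy stones either way.
int eye_liberties(int n, int enemy = 0) {
    int base = (n <= 3) ? n : n * (n - 3) / 2 + 3;
    return base - enemy;
}

int main() {
    assert(eye_liberties(3) == 3);    // "3 is 3"
    assert(eye_liberties(4) == 5);    // "4 is 5"
    assert(eye_liberties(5) == 8);    // "5 is 8"
    assert(eye_liberties(6) == 12);   // "6 is 12"
    assert(eye_liberties(7) == 17);
}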
2020-07-02 12:41:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6145040392875671, "perplexity": 2149.847905926231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878753.12/warc/CC-MAIN-20200702111512-20200702141512-00232.warc.gz"}
http://pqassignmentyoiv.jordancatapano.us/writing-logs-in-exponential-form.html
Writing logs in exponential form

The definition of a logarithm gives us the ability to move between the two forms: the logarithmic equation log_b(x) = y is equivalent to the exponential equation b^y = x. The standard form for a logarithmic function is y = log_b(x), whose inverse is the exponential function y = b^x. To change from logarithmic form to exponential form, identify the base of the logarithm: it becomes the base of the exponential, the value of the logarithm becomes the exponent, and the argument of the logarithm becomes the result.

Examples of rewriting logarithmic equations in equivalent exponential form:

log base 1/2 of 32 = -5 becomes (1/2)^(-5) = 32.
log_8 64 = 2 becomes 8^2 = 64. (A useful mnemonic: log_a b = c can always be rewritten as a^c = b.)
log(10,000) = 4, with base 10 understood, becomes 10^4 = 10,000.
log_3 9 = 2 becomes 3^2 = 9.
log_2 32 = 5 becomes 2^5 = 32.
log_6 36 = 2 becomes 6^2 = 36.
log_289 17 = 1/2 becomes 289^(1/2) = 17.
log_14 (1/196) = -2 becomes 14^(-2) = 1/196.
y = ln x becomes e^y = x, since e is the base of the natural logarithm.
ln(37) ≈ 3.6109 becomes e^(3.6109) ≈ 37.

The conversion works the same way in reverse: the exponential equation 8^2 = 64 can be rewritten in logarithmic form as log_8 64 = 2.

The properties of logarithms let us combine and expand expressions as well: a sum of logarithms is the logarithm of a product, and a difference of logarithms is the logarithm of a quotient, so several logs can be written as a single log and a single log can be expanded. Writing logs as single logs, rewriting a log equation in exponential form, or expressing each logarithm in terms of a common one such as ln 2 are the standard tools for solving many log equations.
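In LaTeX, the conversions above can be stated compactly (a summary I have added, not from the original page):

\[
\log_b x = y \iff b^y = x
\]
\[
\log_{1/2} 32 = -5 \iff \left(\tfrac{1}{2}\right)^{-5} = 32,
\qquad
\log_{289} 17 = \tfrac{1}{2} \iff 289^{1/2} = 17,
\qquad
\ln 37 \approx 3.6109 \iff e^{3.6109} \approx 37.
\]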
2018-10-21 18:33:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8639123439788818, "perplexity": 575.9700746453282}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514314.87/warc/CC-MAIN-20181021181851-20181021203351-00034.warc.gz"}
https://calculator.academy/fence-post-depth-calculator/
Enter the height above ground the fence will be (ft) into the calculator. The calculator will evaluate the Fence Post Depth. ## Fence Post Depth Formula The following example problem outlines the steps and information needed to calculate the Fence Post Depth. FPH = .40 * AGH Variables: • FPH is the Fence Post Depth (ft) • AGH is the height above ground the fence will be (ft) To calculate Fence Post Depth, multiply the above ground height by .40. ## How to Calculate Fence Post Depth? The following steps outline how to calculate the Fence Post Depth. 1. First, determine the height above ground the fence will be (ft). 2. Next, gather the formula from above: FPH = .40 * AGH. 3. Finally, calculate the Fence Post Depth. 4. After inserting the variables and calculating the result, check your answer with the calculator above. Example Problem: Use the following variables as an example problem to test your knowledge. height above ground the fence will be (ft) = 6 FPH = .40 * AGH = .40 * 6 = 2.4 ft
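The formula is a one-liner in code; a tiny illustrative sketch (the function name is mine):

#include <cstdio>

// Fence post depth: 40% of the fence's above-ground height.
double fence_post_depth(double above_ground_ft) {
    return 0.40 * above_ground_ft;
}

int main() {
    std::printf("%.1f ft\n", fence_post_depth(6.0));  // prints 2.4 ft
}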
2022-11-29 02:04:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3011990487575531, "perplexity": 2197.546938129625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00017.warc.gz"}
https://codereview.stackexchange.com/questions/48470/std-lib-like-c-function-to-find-nearest-elements-in-a-container
# Std lib-like C++ function to find nearest elements in a container

## Initial problem

For a project I found myself needing to search a container for the elements that are the closest neighbours of another given element. In my case it was points in any dimension, but it can apply to various other things that we are not used to computing a "distance" for. So I decided to write a generic function to perform this search, so I can use it for whatever type I want, provided that I can compute the "distance" between two elements of this type. I tried to make it in the style of the standard library's algorithms.

## My solution

template<typename T, class Distance>
struct Comp {
    using result_type = typename std::result_of<Distance(const T&, const T&)>::type;
    using type = std::less<result_type>;
};

template<class InputIt, typename T, class Distance,
         class Compare = typename Comp<T, Distance>::type>
std::vector<T> find_n_nearest(InputIt start, InputIt end, const T& val,
                              size_t n, Distance dist, Compare comp = Compare())
{
    std::vector<T> result{start, end};
    std::sort(std::begin(result), std::end(result),
        [&] (const T& t1, const T& t2) {
            return comp(dist(val, t1), dist(val, t2));
        });
    result.erase(std::begin(result) + n, std::end(result));
    return result;
}

What this function basically does is:

• create a copy of the range we want to look in
• sort it, comparing the values returned from the distance function between val and each element of the range, so that the nearest neighbours of val end up at the beginning of the vector
• compare the distances with a custom operator if provided, std::less if not
• keep only the n elements we want

I want to default the Compare type to std::less, which needs a type parameter. So we need to find the return type of the Distance object provided, and my solution here is what I found to work with both functions and lambdas.

## Examples

Simple use case: retrieve the values nearest to an int. The distance between two ints will be the absolute value of their subtraction.

const auto distance_int = [] (int i, int j) { return std::abs(i-j); };

std::vector<int> v = {56, 10, 79841651, 45, 59, 68, -20, 0, 36, 23, -3256};
auto res = {0, 10, 23, -20, 36};
auto found = find_n_nearest(std::begin(v), std::end(v), 4, 5, distance_int);
if(std::equal(std::begin(res), std::end(res), std::begin(found)))
    std::cout << "Success !" << std::endl;

To illustrate the use of the comparator, let's say I now define my "neighbourhood" of ints as being as distant as possible; in a word, the opposite of the previous example.

std::list<int> v = {56, 10, 79841651, 45, 59, 68, -20, 0, 36, 23, -3256};
auto res = {79841651, -3256, 68, 59, 56};
auto found = find_n_nearest(std::begin(v), std::end(v), 4, 5, distance_int, std::greater<int>());
if(std::equal(std::begin(res), std::end(res), std::begin(found)))
    std::cout << "Success !" << std::endl;

## Review

When I used this function, it was convenient for me to receive the results as a std::vector<T>, but this is not very good for a generic algorithm. There are two problems with this: I think we should not alter the original range, so sorting it in place is not an option; and so we have to copy it elsewhere, for which I used a vector as a cache (I did not have much time to think about it during the project, and it was not a critical piece of code).
I thought of replacing this by providing an OutputIt to the function, indicating where to put the results (for example a user-provided vector or whatever container), but I don't think I could sort the range in my algorithm, because the Output Iterator concept is used only to ... output things. If there are more efficient algorithms (instead of sorting by distance) feel free to tell me, but that's not my main concern. I'd like an elegant solution, and pieces of advice on anything you think is not quite good in my code.

• I am afraid you are overcomplicating things. With a proper comparator, std::partial_sort does exactly what you need. And I don't think it is a great burden for the caller to make an alterable copy prior to the call. – vnp Apr 29 '14 at 18:15

I think all you need is std::nth_element:

template<typename Q, typename I, typename Distance>
void find_n_nearest(const Q& q, I first, I nth, I last, Distance dist)
{
    using T = decltype(*first);
    auto compare = [&q, &dist] (T i, T j) {
        return dist(i, q) < dist(j, q);
    };
    std::nth_element(first, nth, last, compare);
    std::sort(first, nth, compare);  // sort only the n nearest elements
}

int main()
{
    auto distance = [] (int i, int j) { return std::abs(i-j); };
    std::vector<int> v = {56, 10, 79841651, 45, 59, 68, -20, 0, 36, 23, -3256};
    auto res = {0, 10, 23, -20, 36};
    find_n_nearest(4, v.begin(), v.begin() + 5, v.end(), distance);
    assert(std::equal(v.begin(), v.begin() + 5, res.begin()));
}

If you just need the n nearest elements but not necessarily sorted, you can skip std::sort. Then the complexity is linear (on average) in the length of the range. I have skipped the initial copy part, because it's not always needed. Feel free to add it if you like.

• Clever, I feel ashamed that I looked quickly at nth_element but did not see the use for it. One thing to say: we drop the modularity of the comparator here. Not that it would be incredibly useful though (well idk, but I wanted to achieve it while writing my function). – teh internets is made of catz Apr 30 '14 at 21:10
• If you want the modularity of the comparator, then I would suggest to drop the dependence on dist and point q as well. Then you have a new and more generic function that is actually just nth_element followed by sort. You could name this nth_sorted and have find_n_nearest call that one after constructing compare, which encapsulates q and dist. This way you are just splitting find_n_nearest into two steps. I think that's much cleaner. – iavr Apr 30 '14 at 21:38

You have a function which is performing three steps:

1. Copy the input range
2. Sort the copied range by distance to a given element
3. Erase elements from the result range

I would omit the first and last step. These are convenience features providing no real functionality. In addition, the third step erases possibly useful information and invokes undefined behavior if the input range does not have the requested number of elements (result.erase(std::begin(result) + n, std::end(result));).

That leaves the second step: here we have a std::sort with a custom comparator operating on distances. Your comparator depends on the result type of a distance function and is no more than a type trait. You might avoid that.
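vnp's comment points at std::partial_sort, which sorts only the first part of a range in place; as an illustration of what that call would look like for the first example (my sketch, not code from the thread):

#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

int main()
{
    std::vector<int> v = {56, 10, 79841651, 45, 59, 68, -20, 0, 36, 23, -3256};
    // Sort only the first 5 positions by distance to 4;
    // the rest of the range is left in unspecified order.
    std::partial_sort(v.begin(), v.begin() + 5, v.end(),
        [] (int a, int b) { return std::abs(a - 4) < std::abs(b - 4); });
    std::vector<int> res = {0, 10, 23, -20, 36};
    assert(std::equal(res.begin(), res.end(), v.begin()));
}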
An alternative implementation might be:

#include <algorithm>
#include <iterator>
#include <cstdlib>     // std::abs
#include <functional>  // std::less, std::greater
#include <vector>

// Distance
// ========
template <typename T>
struct Distance {
    T operator () (const T& a, const T& b) {
        return std::abs(a - b);
    }
};

// Compare Distance
// ================
template <
    typename T,
    typename DistanceFunctor = Distance<T>,
    typename CompareFunctor = std::less<decltype(
        std::declval<DistanceFunctor>()(std::declval<T>(), std::declval<T>()))>>
struct CompareDistance {
    T pivot;
    DistanceFunctor distance;
    CompareFunctor compare;

    CompareDistance(T&& pivot)
    : pivot(std::move(pivot)) {}

    CompareDistance(T&& pivot, DistanceFunctor&& distance)
    : pivot(std::move(pivot)), distance(std::move(distance)) {}

    CompareDistance(T&& pivot, DistanceFunctor&& distance, CompareFunctor&& compare)
    : pivot(std::move(pivot)), distance(std::move(distance)), compare(std::move(compare)) {}

    bool operator () (const T& a, const T& b) {
        return compare(distance(a, pivot), distance(b, pivot));
    }
};

// Distance Sort
// =============
template <typename Iterator, typename T>
inline void distance_sort(Iterator first, Iterator last, T&& pivot)
{
    typedef typename std::iterator_traits<Iterator>::value_type value_type;
    CompareDistance<value_type> compare_distance(std::move(pivot));
    std::sort(first, last, compare_distance);
}

template <typename Iterator, typename T, typename Distance>
inline void distance_sort(Iterator first, Iterator last, T&& pivot, Distance&& distance)
{
    typedef typename std::iterator_traits<Iterator>::value_type value_type;
    CompareDistance<value_type, Distance> compare_distance(
        std::move(pivot), std::move(distance));
    std::sort(first, last, compare_distance);
}

template <typename Iterator, typename T, typename Distance, typename Compare>
inline void distance_sort(Iterator first, Iterator last, T&& pivot, Distance&& distance, Compare&& compare)
{
    typedef typename std::iterator_traits<Iterator>::value_type value_type;
    CompareDistance<value_type, Distance, Compare> compare_distance(
        std::move(pivot), std::move(distance), std::move(compare));
    std::sort(first, last, compare_distance);
}

// Test
// ====
#include <iostream>

int main() {
    std::vector<int> original = { 56, 10, 79841651, 45, 59, 68, -20, 0, 36, 23, -3256 };

    // Find closest neighbours [less]:
    std::vector<int> elements(original);
    distance_sort(begin(elements), end(elements), 4);
    for(const auto& e : elements) std::cout << e << ' ';
    std::cout << '\n';

    // Find closest neighbours [greater]:
    distance_sort(begin(elements), end(elements), 4, Distance<int>(), std::greater<int>());
    for(const auto& e : elements) std::cout << e << ' ';
    std::cout << '\n';

    // Without distance_sort, but with existing tools:
    std::sort(
        begin(elements), end(elements),
        [](int a, int b) {
            const int pivot = 4;
            return std::abs(a - pivot) < std::abs(b - pivot);
        }
    );
    for(const auto& e : elements) std::cout << e << ' ';
    std::cout << '\n';
}

Please notice the option not to provide anything and rely on existing tools.

• If I started from some code and it ended up four times longer for the same functionality, I would ask myself if something has gone wrong. Plus, why is everything moved and not forwarded? – iavr Apr 30 '14 at 20:19
• @iavr Any advice on when to use rvalue references for template parameters? If I refer to the standard lib's algorithms, they never use them. – teh internets is made of catz Apr 30 '14 at 21:14
• @tehinternetsismadeofcatz (you mean function parameters?) Depends on expected input and algorithm.
Iterators typically contain just a pointer, and function objects are often empty; in both cases it's better to pass by value, so this is common in the STL. Typically iterators need to be copied, so pass-by-value is the only option. Function objects may be non-empty (like a compare containing a data point) or have mutable state. In such cases, the most generic option is rvalue (universal) references that are `std::forward`ed, unless used more than once (in which case `std::forward` only on the last use). – iavr Apr 30 '14 at 22:02

• @iavr I meant function parameters that depend on a template, like for example `Distance&& distance` in Dieter's answer. In fact I was wondering if the compiler can deduce that the `Distance` type passed is an (rvalue) reference or not. But I bet it's clearer to specify it with `&&` in the function parameters anyway. I think I'll have a deeper read on type deduction for templates. – teh internets is made of catz May 1 '14 at 9:47

• @tehinternetsismadeofcatz With `&&` it's not just clearer: skipping `&&` means "pass-by-value". You can check universal references and pages 7-8 of perfect forwarding. – iavr May 1 '14 at 10:06

From a design point of view, when the standard library algorithms have to return a `[begin, end)` range of values, they don't return a container but take an additional OutputIterator (e.g. `std::copy`). Therefore, your function declaration should be along these lines:

```cpp
template<
    typename T,
    typename InputIt,
    typename OutputIt,
    typename Distance,
    typename Compare = typename Comp<T, Distance>::type
>
void find_n_nearest(InputIt first, InputIt last, OutputIt d_first,
                    const T& val, std::size_t n,
                    Distance dist, Compare comp = Compare());
```

That said, the standard library algorithms also tend to return an iterator past the last element written to the output range, so the declaration would become:

```cpp
template<
    typename T,
    typename InputIt,
    typename OutputIt,
    typename Distance,
    typename Compare = typename Comp<T, Distance>::type
>
OutputIt find_n_nearest(InputIt first, InputIt last, OutputIt d_first,
                        const T& val, std::size_t n,
                        Distance dist, Compare comp = Compare());
```

That way, your code and the client's do not rely on a particular container type, but work with any compatible range. That's how genericity is achieved in the standard library.

• So I was heading in the right direction, thanks :) – teh internets is made of catz Apr 30 '14 at 20:57

• @tehinternetsismadeofcatz Probably. But look at iavr's answer. They tend to give incredibly good advice :) – Morwenn Apr 30 '14 at 21:06
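Putting the two answers together, here is a minimal sketch (mine, not from the thread) of what such an interface could look like: it uses Morwenn's OutputIterator-based signature, `std::nth_element` internally as in iavr's answer, and a plain distance callable instead of the `Comp<T, Distance>::type` trait. All names are illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdlib>  // std::abs
#include <iterator>
#include <vector>

// Sketch: copy the input, partially sort the n nearest to val, write them out.
// Returns an iterator past the last element written, like std::copy.
template <typename InputIt, typename OutputIt, typename T, typename Distance>
OutputIt find_n_nearest(InputIt first, InputIt last, OutputIt d_first,
                        const T& val, std::size_t n, Distance dist)
{
    using value_type = typename std::iterator_traits<InputIt>::value_type;
    std::vector<value_type> buffer(first, last);   // leave the caller's data untouched
    n = std::min<std::size_t>(n, buffer.size());   // avoid running past the end
    auto nth = buffer.begin() + n;
    auto compare = [&](const value_type& a, const value_type& b) {
        return dist(a, val) < dist(b, val);
    };
    std::nth_element(buffer.begin(), nth, buffer.end(), compare); // O(N) on average
    std::sort(buffer.begin(), nth, compare);       // order the n nearest by distance
    return std::copy(buffer.begin(), nth, d_first);
}

int main()
{
    std::vector<int> v = {56, 10, 79841651, 45, 59, 68, -20, 0, 36, 23, -3256};
    std::vector<int> out;
    find_n_nearest(v.begin(), v.end(), std::back_inserter(out), 4, 5,
                   [](int a, int b) { return std::abs(a - b); });
    assert((out == std::vector<int>{0, 10, 23, -20, 36}));
}
```

Design-wise, the copy into a local buffer is what lets the function accept plain input iterators and leave the caller's range unmodified, at the cost of one allocation.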
https://www.tutorialspoint.com/find-and-draw-contours-using-opencv-in-python
# Find and Draw Contours using OpenCV in Python

For the purpose of image analysis we use the OpenCV (Open Source Computer Vision Library) Python library. The library name that has to be imported after installing OpenCV is cv2. In the example below we find the contours present in an image file. Contours help us identify the shapes present in an image. Contours are defined as the line joining all the points along the boundary of an image that have the same intensity. The findContours function in OpenCV helps us identify the contours. Similarly, the drawContours function helps us draw the contours. Below is the syntax of both of them.

## Syntax

cv2.findContours(image, mode, method)

- image – the input image
- mode – contour retrieval mode (e.g., cv2.RETR_LIST)
- method – contour approximation method (e.g., cv2.CHAIN_APPROX_SIMPLE)

cv2.drawContours(image, contours, contourIdx, color, thickness)

- image – the input image
- contours – all the input contours
- contourIdx – parameter indicating which contour to draw; if it is negative, all the contours are drawn
- color – color of the contours
- thickness – how thick the lines drawing the contour are

## Example

In the example below, the input is an image containing three shapes (the image itself is omitted here). Running the program below finds the contours around them; we can draw contours around all or some of them.

```python
import cv2

image = cv2.imread("path to image file")

# Changing the colour-space
LUV = cv2.cvtColor(image, cv2.COLOR_BGR2LUV)

# Find edges
edges = cv2.Canny(LUV, 10, 100)

# Find contours (note: some OpenCV versions return (image, contours, hierarchy) instead)
contours, hierarchy = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Find the number of contours
print("Number of Contours is: " + str(len(contours)))

# Draw a yellow border around two of the contours
cv2.drawContours(image, contours, 0, (0, 230, 255), 6)
cv2.drawContours(image, contours, 2, (0, 230, 255), 6)

# Show the image with contours
cv2.imshow('Contours', image)
cv2.waitKey(0)
```

Running the above code gives us the following result:

## Output

Number of Contours is: 3

And we get the diagram showing the output (output image omitted here).

Published on 20-Dec-2019 07:04:30
https://zbmath.org/?q=an:0882.00016
## Seminar on spectral theory and geometry, 1996-1997. (Séminaire de théorie spectrale et géométrie. Année 1996-1997.) (French, English) Zbl 0882.00016

Séminaire de Théorie Spectrale et Géométrie, Chambéry-Grenoble. 15. St. Martin d'Hères: Univ. de Grenoble I, Institut Fourier, 227 p. (1997).

The articles of this volume will be reviewed individually. The preceding seminar (1995-1996) has been reviewed (see Zbl 0857.00014).

Indexed articles:

- Publications of Hubert Pesce (1966-1997), i-ii [Zbl 0891.01049]
- Dufresnoy, Alain, On Bennequin's problem: a counterexample (after Alexander), 13-15 [Zbl 0895.53042]
- El Soufi, Ahmad; Ilias, Saïd; Ros, Antonio, On the first eigenvalue of tori, 17-23 [Zbl 0902.58003]
- Fanaï, Hamid-Reza, Rigidity of the geodesic flow on certain nilmanifolds of rank two, 25-36 [Zbl 0902.58027]
- Salein, François, Anti-de Sitter manifolds of dimension 3, 37-42 [Zbl 0897.53048]
- Hélein, Frédéric, Surfaces of constant mean curvature and Wente's inequality, 43-52 [Zbl 0925.35060]
- Mathéus, Frédéric, Circle packings and Liouville theorem (after T. Dubejko), 53-58 [Zbl 0912.52011]
- Potemine, Igor, $$\mathfrak p$$-adic symmetric spaces, $$\mathfrak p$$-adic measures, and integral transformations, 59-84 [Zbl 1053.11525]
- Dal'Bo, Françoise, Geometry of a family of groups acting on the product of two Hadamard manifolds, 85-98 [Zbl 0898.53027]
- Barré, Sylvain, On polyhedra of rank 2, 99-104 [Zbl 0912.51005]
- Brooks, Robert, Isospectral graphs and isospectral surfaces, 105-113 [Zbl 0910.05042]
- Colin de Verdière, Yves, The Maxwell equations, 115-125 [Zbl 0896.53049]
- Paternain, Gabriel P., Hyperbolic dynamics of Euler-Lagrange flows on prescribed energy levels, 127-151 [Zbl 0898.58041]
- Dubejko, Tomasz, Circle-packing connections with random walks and a finite volume method, 153-161 [Zbl 0912.52010]
- Yamaguchi, Takao, Collapsing and soul theorem in three dimensions, 163-166 [Zbl 0901.53025]
- Coulhon, Thierry, Heat kernels on non-compact Riemannian manifolds: a partial survey, 167-187 [Zbl 0903.58055]
- Baribaud, Claire M. C., Chords and closed geodesics, 189-192 [Zbl 0910.53031]
- Carron, Gilles, On the relative index theorem, 193-202 [Zbl 0919.58061]
- Kanai, Masahiko, Rigidity of group actions, 203-205 [Zbl 0909.58006]
- Rubinstein, Jacob; Schatzman, Michelle, On multiply connected mesoscopic superconducting structures, 207-220 [Zbl 0892.35138]

### MSC:

- 00B15 Collections of articles of miscellaneous specific interest
- 35-06 Proceedings, conferences, collections, etc. pertaining to partial differential equations
- 53-06 Proceedings, conferences, collections, etc. pertaining to differential geometry
- 58-06 Proceedings, conferences, collections, etc. pertaining to global analysis

### Keywords:

Proceedings; Séminaire; Théorie spectrale; Géométrie
https://gmatclub.com/forum/if-p-and-q-are-integers-greater-than-zero-what-is-the-value-of-pq-206043.html
# If p and q are integers greater than zero, what is the value of pq?

**Question (Bunuel, Math Expert):** If p and q are integers greater than zero, what is the value of pq?

(1) The least common multiple of p and q is 240.
(2) The greatest common factor of p and q is 8.

Kudos for a correct solution.

**Answer 1:**

St 1: LCM of P and Q = 240.

- P=60, Q=80 (or vice versa): LCM 240, PQ = 4800
- P=30, Q=80 (or vice versa): LCM 240, PQ = 2400
- P=10, Q=240 (or vice versa): LCM 240, PQ = 2400
- P=8, Q=240 (or vice versa): LCM 240, PQ = 1920
- P=6, Q=240 (or vice versa): LCM 240, PQ = 1440

So many values. Not sufficient.

St 2: GCF of P and Q = 8.

- P=8, Q=16: GCF 8, PQ = 128
- P=24, Q=40: GCF 8, PQ = 960
- P=240, Q=8: GCF 8, PQ = 1920

Same here. Not sufficient.

Combining 1 and 2: whenever the LCM of P and Q is 240 and the GCF is 8 (for example P=240, Q=8, or P=48, Q=40), the product is PQ = 240 × 8 = 1920. Ans C.

**Answer 2 (Brent, GMATPrepNow):**

Target question: What is the value of pq?

Statement 1: The least common multiple of p and q is 240. This statement doesn't FEEL sufficient, so I'm going to TEST some values. There are several values of p and q that satisfy statement 1. Here are two:

- Case a: p = 1 and q = 240, in which case pq = 240
- Case b: p = 240 and q = 240, in which case pq = 240^2

Since we cannot answer the target question with certainty, statement 1 is NOT SUFFICIENT.

Aside: For more on this idea of plugging in values when a statement doesn't feel sufficient, you can read my article: http://www.gmatprepnow.com/articles/dat ... lug-values

Statement 2: The greatest common factor of p and q is 8. This statement doesn't FEEL sufficient either, so I'm going to TEST some values.
- Case a: p = 8 and q = 8, in which case pq = 64
- Case b: p = 8 and q = 16, in which case pq = 128

Since we cannot answer the target question with certainty, statement 2 is NOT SUFFICIENT.

Statements 1 and 2 combined: There's a nice rule that says (greatest common factor of x and y)(least common multiple of x and y) = xy. So, (8)(240) = pq. Since we can answer the target question with certainty, the combined statements are SUFFICIENT. Cheers, Brent

**Answer 3:**

Statement 1: Doesn't provide any concrete values for p and q. INSUFFICIENT.

Statement 2: Again, doesn't tell much about p and q. INSUFFICIENT.

Combining 1 and 2: HCF × LCM = product of the two given numbers, so pq = 240 × 8 = 1920. SUFFICIENT.

**Answer 4:**

The individual statements are not sufficient, because an LCM of 240 and a GCF of 8 each allow multiple possibilities. Hence IMO C.

**Answer 5:**

For any two numbers a and b: $$a \cdot b = \gcd(a, b) \cdot \operatorname{lcm}(a, b)$$, so $$p \cdot q = 240 \cdot 8 = 1920$$.

**Comment:** Out of curiosity - has anybody actually seen this on an actual GMAT exam? Anyway, if you have the LCM and GCF of x and y, then you have enough to know the product of x and y.

**Answer 6:**

I learned a really helpful formula from a Magoosh lesson. For any two integers, P and Q: LCM = (P × Q)/GCF [where LCM = Lowest Common Multiple; GCF = Greatest Common Factor]. Using the above formula you can quickly deduce that Statements 1 and 2 are insufficient on their own, but sufficient together.
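The identity that several of these answers lean on can be justified in one line: for each prime, the exponents appearing in $\gcd(p,q)$ and $\operatorname{lcm}(p,q)$ are the minimum and maximum of the exponents in $p$ and $q$, and $\min(\alpha,\beta) + \max(\alpha,\beta) = \alpha + \beta$. Applied to the two statements:

$$p \cdot q = \gcd(p,q) \times \operatorname{lcm}(p,q) = 8 \times 240 = 1920$$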
http://clay6.com/qa/47336/five-capacitors-of-known-capacitances-are-connected-to-each-other-as-shown-
# Five capacitors of known capacitances are connected to each other (as shown in the given figure). What will be the difference between the equivalent capacitance of the given combination and the capacitance when the $2\,\mu F$ and $3\,\mu F$ capacitors are removed from the combination?

(C) $\large\frac{586}{159}$ $\mu F$
https://socratic.org/questions/a-chemist-runs-a-chemical-reaction-at-15-c-and-decides-that-it-proceeds-far-too-
A chemist runs a chemical reaction at 15°C and decides that it proceeds far too slowly. As a result, he decides that the reaction rate must be increased by a factor of 16. At what temperature should the chemist run the reaction to achieve this goal?

Jun 26, 2018

Use the Arrhenius equation to find the temperature that yields the desired rate:

$$k = A e^{-E_a/(RT)}$$

Explanation: For thermodynamics, the ratios will NOT WORK unless the temperatures are all in ABSOLUTE degrees (kelvin). Thus, you must convert the Celsius value to kelvin before doing the calculation. Other ratios would require different temperatures.

https://chem.libretexts.org/Core/Physical_and_Theoretical_Chemistry/Kinetics/Modeling_Reaction_Kinetics/Temperature_Dependence_of_Reaction_Rates/The_Arrhenius_Law/The_Arrhenius_Law%3A_Arrhenius_Plots

Jun 26, 2018

Here's what I get.

Explanation: There are two ways to approach this problem.

**A. Use a rule of thumb**

A rule of thumb states that the rate of a reaction changes by a factor of two for every 10 °C change in temperature. You want to increase the rate by a factor of 16, and $16 = 2^4$. So, you will increase the temperature by 4 × 10 °C = 40 °C. The new temperature will be 55 °C.

**B. Use the Arrhenius equation**

Ideally, you would know the activation energy $E_\text{a}$ for the reaction. Then you could use the Arrhenius equation to calculate the rate at the new temperature:

$$\ln\left(\frac{k_2}{k_1}\right) = \frac{E_\text{a}}{R}\left(\frac{1}{T_1} - \frac{1}{T_2}\right)$$

where $k_2$ and $k_1$ are the rate constants at temperatures $T_2$ and $T_1$, $E_\text{a}$ is the activation energy, and $R$ is the universal gas constant.

Since you are changing only the temperatures, the rates are directly proportional to the rate constants, and we can write:

$$\ln\left(\frac{r_2}{r_1}\right) = \frac{E_\text{a}}{R}\left(\frac{1}{T_1} - \frac{1}{T_2}\right)$$

Let's assume that the activation energy is $55.0\ \text{kJ·mol}^{-1}$ and set $T_2$ as the higher temperature. Then

- $r_2/r_1 = 16$
- $E_\text{a} = 55.0\ \text{kJ·mol}^{-1}$
- $R = 8.314\ \text{J·K}^{-1}\text{mol}^{-1}$
- $T_2 = ?$
- $T_1 = 15\ \text{°C} = 288.15\ \text{K}$

$$\ln 16 = \frac{55\,000\ \text{J·mol}^{-1}}{8.314\ \text{J·K}^{-1}\text{mol}^{-1}}\left(\frac{1}{288.15\ \text{K}} - \frac{1}{T_2}\right)$$

$$2.773 = 6615\ \text{K} \times 3.470\times 10^{-3}\ \text{K}^{-1} - \frac{6615\ \text{K}}{T_2} = 22.96 - \frac{6615\ \text{K}}{T_2}$$

$$\frac{6615\ \text{K}}{T_2} = 22.96 - 2.773 = 20.19$$

$$T_2 = \frac{6615\ \text{K}}{20.19} = 328\ \text{K} = 55\ \text{°C}$$
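Solving the two-temperature form explicitly for $T_2$ gives a compact, equivalent route to the same answer (same assumed $E_\text{a}$, just rearranged):

$$T_2 = \left(\frac{1}{T_1} - \frac{R\,\ln(r_2/r_1)}{E_\text{a}}\right)^{-1} = \left(\frac{1}{288.15\ \text{K}} - \frac{(8.314)(\ln 16)}{55\,000}\ \text{K}^{-1}\right)^{-1} \approx 328\ \text{K} \approx 55\ \text{°C}$$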
https://brian.discourse.group/t/units-balance-in-rate-based-jansen-and-rit-model/421
# Units balance in rate-based Jansen and Rit model

I'm playing with the Jansen and Rit model. I'm trying to reproduce Fig. 3 from the original paper, Jansen and Rit 1995.

## attempt #1

In the paper, all parameters have units, and the units (it seems) aren't balanced; at least, I couldn't figure out how to convince Brian that the equations are OK.

```python
# (imports assumed from the surrounding session:
#  from brian2 import *, and numpy.random as rnd)
defaultclock.dt = .1*ms

a, b = 100./second, 50./second
e0 = 5./second
v0 = 6.*mV
r0 = 0.56/mV
A, B, C = 3.25*mV, 22.*mV, 135

nstim = TimedArray(rnd.randn(70000), .1*ms)

equs = """
dy0/dt = y3 : volt/second
dy3/dt = (A * Sp -2*y3 -y0*a)*a : volt
dy1/dt = y4 : volt/second
dy4/dt = (A*(p+ C2 * Se)-2*y4 -y1*a)*a : volt
dy2/dt = y5 : volt/second
dy5/dt = (B * C4 * Si -2*y5 -y2*b)*b : volt
p = P0+nstim(t)*(300-P0) : 1
Sp = e0/(1+exp(r0*(v0 - (y1-y2) ))) : 1
Se = e0/(1+exp(r0*(v0 - C1*y0 ))) : 1
Si = e0/(1+exp(r0*(v0 - C3*y0 ))) : 1
C1 : 1
C2 = 0.8 *C1 : 1
C3 = 0.25*C1 : 1
C4 = 0.25*C1 : 1
P0 : 1
"""

n1 = NeuronGroup(3, equs, method='euler')
n1.C1[0] = 135
n1.C1[1] = 270
n1.C1[2] = 675
n1.P0 = 120

sm1 = StateMonitor(n1, ['y4','y1','y3','y0','y5','y2'], record=True)
run(7*second)

gv = 'epi'
figure(1, figsize=(16,16))
idx1 = where(sm1.t/second > 2.)[0]
ax = None
for o, p in enumerate([0,1,2]):
    if o == 0: ax = subplot(311)
    else: subplot(311+o, sharex=ax)
    if 'e' in gv: plot(sm1.t[idx1]/second, sm1[p].y1[idx1], 'g-')
    if 'p' in gv: plot(sm1.t[idx1]/second, sm1[p].y0[idx1], 'b-')
    if 'i' in gv: plot(sm1.t[idx1]/second, sm1[p].y2[idx1], 'r-')
show()
```

This code returns an error:

```
brian2.units.fundamentalunits.DimensionMismatchError: Expression "v0 - C1 * y0"
uses inconsistent units ("v0" has unit V; "C1 * y0" has unit V/s)
```

## attempt #2

Well, we can assume that the units are OK and set everything to be dimensionless (unit 1):

```python
defaultclock.dt = .1*ms

a, b = 100., 50.
e0 = 5.
v0 = 6.
r0 = 0.56
A, B, C = 3.25, 22., 135

nstim = TimedArray(rnd.randn(70000), .1*ms)

equs = """
dy0/dt = y3 /second : 1
dy3/dt = (A * Sp -2*y3 -y0*a)*a/second : 1
dy1/dt = y4 /second : 1
dy4/dt = (A*(p+ C2 * Se)-2*y4 -y1*a)*a/second : 1
dy2/dt = y5 /second : 1
dy5/dt = (B * C4 * Si -2*y5 -y2*b)*b/second : 1
p = P0+nstim(t)*(300-P0) : 1
Sp = e0/(1+exp(r0*(v0 - (y1-y2) ))) : 1
Se = e0/(1+exp(r0*(v0 - C1*y0 ))) : 1
Si = e0/(1+exp(r0*(v0 - C3*y0 ))) : 1
C1 : 1
C2 = 0.8 *C1 : 1
C3 = 0.25*C1 : 1
C4 = 0.25*C1 : 1
P0 : 1
"""

n1 = NeuronGroup(3, equs, method='euler')
n1.C1[0] = 135
n1.C1[1] = 270
n1.C1[2] = 675
n1.P0 = 120

sm1 = StateMonitor(n1, ['y4','y1','y3','y0','y5','y2'], record=True)
run(7*second)

gv = 'epi'
figure(1, figsize=(16,16))
idx1 = where(sm1.t/second > 2.)[0]
ax = None
for o, p in enumerate([0,1,2]):
    if o == 0: ax = subplot(311)
    else: subplot(311+o, sharex=ax)
    if 'e' in gv: plot(sm1.t[idx1]/second, sm1[p].y1[idx1], 'g-')
    if 'p' in gv: plot(sm1.t[idx1]/second, sm1[p].y0[idx1], 'b-')
    if 'i' in gv: plot(sm1.t[idx1]/second, sm1[p].y2[idx1], 'r-')
show()
```

producing something quite different from the original paper, even when looking at the P-population.

## attempt #3

We can try to use a slightly different formulation from the Thomas Knösche review, Touboul et al. 2011, or David & Friston 2003, and ask Brian to balance the units.
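(A dimensional sanity check, my own sketch rather than part of the original post: assuming the conventional Jansen and Rit dimensions, with $y$ in volts, the rate constants $a, b$ in $\text{s}^{-1}$, the synaptic gains $A, B$ in volts, and the sigmoid $S(\cdot)$ returning a firing rate in $\text{s}^{-1}$, the second-order form balances, every term carrying $\text{V}\,\text{s}^{-2}$:

$$\underbrace{\ddot{y}}_{\text{V}\,\text{s}^{-2}} = \underbrace{A\,a\,S(v)}_{\text{V}\cdot\text{s}^{-1}\cdot\text{s}^{-1}} - \underbrace{2a\,\dot{y}}_{\text{s}^{-1}\cdot\text{V}\,\text{s}^{-1}} - \underbrace{a^{2}\,y}_{\text{s}^{-2}\cdot\text{V}}$$

Under that reading, attempt #1 fails because `y0` is declared as `volt/second` and `Sp` as dimensionless, so the sigmoid receives `C1*y0` in V/s where `v0` is in V, which is exactly the mismatch Brian reports.)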
```python
defaultclock.dt = .1*ms

a, b = 100./second, 50./second
te, ti = 1./a, 1./b
e0 = 5
v0 = 6
r0 = 0.56
e1 = e0
r1 = r0
A, B, C = 3.25, 22., 135

nstim = TimedArray(rnd.randn(70000)*250, .1*ms)

equs_v1 = """
dy0/dt = y3 /ms : 1
dy3/dt = (A * Sp -2*y3 -y0*a*ms)*a : 1
dy1/dt = y4 /ms : 1
dy4/dt = (A*(p+ C2 * Se)-2*y4 -y1*a*ms)*a : 1
dy2/dt = y5 /ms : 1
dy5/dt = (B * C4 * Si -2*y5 -y2*b*ms)*b : 1
p = P0+nstim(t) *(300-P0) : 1
Sp = e0/(1+exp(r0*(v0 - (y1-y2) ))) : 1
Se = e0/(1+exp(r0*(v0 - C1*y0 ))) : 1
Si = e0/(1+exp(r0*(v0 - C3*y0 ))) : 1
C1 : 1
C2 = 0.8 *C1 : 1
C3 = 0.25*C1 : 1
C4 = 0.25*C1 : 1
P0 : 1
"""

equs_v2 = """
dy0/dt = y3 /ms : 1
dy3/dt = (A * Sp -2*y3 -y0/te*ms)/te : 1
dy1/dt = y4 /ms : 1
dy4/dt = (A*(p+ C2 * Se)-2*y4 -y1/te*ms)/te : 1
dy2/dt = y5 /ms : 1
dy5/dt = (B * C4 * Si -2*y5 -y2/ti*ms)/ti : 1
p = P0+nstim(t)*(300-P0): 1
Sp = e1/(1+exp(r1*(v0 - (y1-y2) ))) : 1
Se = e1/(1+exp(r1*(v0 - C1*y0 ))) : 1
Si = e1/(1+exp(r1*(v0 - C3*y0 ))) : 1
C1 : 1
C2 = 0.8 *C1 : 1
C3 = 0.25*C1 : 1
C4 = 0.25*C1 : 1
P0 : 1
"""

n1 = NeuronGroup(3, equs_v1, method='euler')
n2 = NeuronGroup(3, equs_v2, method='euler')
n1.C1[0] = n2.C1[0] = 128
n1.C1[1] = n2.C1[1] = 270
n1.C1[2] = n2.C1[2] = 675
n1.P0 = n2.P0 = 120

sm1 = StateMonitor(n1, ['y4','y1','y3','y0','y5','y2'], record=True)
sm2 = StateMonitor(n2, ['y4','y1','y3','y0','y5','y2'], record=True)
run(7*second, report='text')

gv = 'epi'
figure(1, figsize=(16,16))
idx1 = where(sm1.t/second > 2.)[0]
idx2 = where(sm2.t/second > 2.)[0]
o = 0
for p in [0,1,2]:
    for sm, idx in [(sm1, idx1), (sm2, idx2)]:
        if o == 0: ax = subplot(321)
        else: subplot(321+o, sharex=ax)
        if 'e' in gv: plot(sm.t[idx]/second, sm[p].y1[idx], 'g-')
        if 'p' in gv: plot(sm.t[idx]/second, sm[p].y0[idx], 'b-')
        if 'i' in gv: plot(sm.t[idx]/second, sm[p].y2[idx], 'r-')
        o += 1
show()
```

To my surprise, this version generates completely different dynamics, and only for the pyramidal cells, which doesn't make any sense! Any ideas/suggestions? I can promise to commit a working model whenever @mstimberg tells me where.

After some playing with the JR model, I got it working. The graphs do not reproduce Fig. 3 precisely, but close enough to stop having fun with the code. I used \tau_e and \tau_i with ms units as in the Thomas Knösche review, Touboul et al. 2011, or David & Friston 2003. Units were removed from the parameters e_0, v_0, r_0, A, B, and p to stop confusing Brian. The public gist with the Jupyter notebook is available here: JR1995_SingleColumn_Figure3.ipynb

The dynamics for each population (E, P, and I) in networks with 6 different connection scales C (see labels on the left axes) are shown below:

This is looking great! Unfortunately I still did not find time to look into this in more detail, but I'm happy you figured things out by yourself. It would be great to share this in a bit more discoverable way than here as a reply. As I just mentioned in the thread on Using the "Siegert formula" for predicting rate in LIF networks (#3 by adam), it could become a blog post, or we could have a dedicated place to share examples like this (in addition to, or replacing, the example section in the docs) … Would be happy to hear thoughts about this question!

> a bit more discoverable way than here as a reply

@mstimberg could you please elaborate a bit more on "discoverable way"? I added more comments that explain each parameter/formula in the gist. Should I assume that a reader knows the original J&R model and has access to the paper, so that I can refer to equations/parameters there? Is the code clean enough to go into the Brian documentation?
Oh, I just meant that someone interested in that model would maybe not come across the link because it is a bit "hidden" a few replies deep in the support section. I did not mean that the example needs more work. And yes, it looks perfect to go into the documentation; please open a pull request (but it might take a while before the merge due to the summer break)!
http://math.stackexchange.com/questions/319867/how-to-show-that-quotient-space-x-y-is-complete-when-x-is-banach-space-and
# How to show that quotient space $X/Y$ is complete when $X$ is a Banach space, and $Y$ is a closed subspace of $X$?

How to show that the quotient space $X/Y$ is complete when $X$ is a Banach space and $Y$ is a closed subspace of $X$?

Here's my attempt: Given a Cauchy sequence $\{q_n\}_{n \in \mathbb{N}}$ in $X/Y$, each $q_n$ is an equivalence class induced by $Y$. I want to find a representative $x_n$ in $q_n$ so that the induced sequence $\{x_n\}_{n \in \mathbb{N}}$ is also a Cauchy sequence in $X$. But I don't know how to construct such a sequence.

**Theorem.** A normed space $X$ is Banach iff for all $\{x_n : n\in\mathbb{N}\}$, convergence of $\sum_{n=1}^\infty \Vert x_n\Vert$ implies that the series $\sum_{n=1}^\infty x_n$ converges in $X$.

**Proof.** Let $X$ be a Banach space. Assume that for a given $\{x_n : n\in\mathbb{N}\}$ the series $\sum_{n=1}^\infty\Vert x_n\Vert$ is convergent. Then the sequence of partial sums $\left\{\sum_{n=1}^N x_n : N\in\mathbb{N}\right\}$ is a Cauchy sequence. Since $X$ is Banach, the last sequence has a limit, i.e. the series $\sum_{n=1}^\infty x_n$ converges in $X$.

In the other direction, consider an arbitrary Cauchy sequence. Then you can choose a subsequence $\{n_k : k\in\mathbb{N}\}$ such that $\Vert x_{n_{k+1}}-x_{n_k}\Vert<2^{-k}$. Then the series $\sum_{k=1}^\infty\Vert x_{n_{k+1}}-x_{n_k}\Vert$ is convergent. By assumption this gives that $\sum_{k=1}^\infty (x_{n_{k+1}}-x_{n_k})$ converges in $X$ to some limit $x$. Since the $K$-th partial sum of that series is $x_{n_{K+1}}-x_{n_1}$, we conclude that the sequence $\{x_{n_k} : k\in\mathbb{N}\}$ converges to $x+x_{n_1}$. Since $\{x_n : n\in\mathbb{N}\}$ is a Cauchy sequence with a convergent subsequence $\{x_{n_k} : k\in\mathbb{N}\}$, it is convergent. Since $\{x_n : n\in\mathbb{N}\}$ was an arbitrary Cauchy sequence, $X$ is Banach.

**Theorem.** Let $X$ be a Banach space and $Y$ be a closed subspace of it; then $X/Y$ is Banach.

**Proof.** Now we proceed to the proof of the main result. For each $x\in X$ denote $\hat{x}:=x+Y\in X/Y$. Consider $\{\hat{x}_n : n\in\mathbb{N}\}$ such that the series $\sum_{n=1}^\infty\Vert\hat{x}_n\Vert$ converges. From the definition of the norm in $X/Y$, for each $n\in\mathbb{N}$ there exists $x_n\in \hat{x}_n$ such that $\Vert x_n\Vert\leq 2\Vert\hat{x}_n\Vert$. Since $\sum_{n=1}^\infty\Vert\hat{x}_n\Vert$ converges, the last inequality gives that $\sum_{n=1}^\infty\Vert x_n\Vert$ converges also. Since $X$ is Banach, we see that $\sum_{n=1}^\infty x_n$ converges in $X$ to some vector $x\in X$. Then from the definition of the norm in $X/Y$ it follows that $\sum_{n=1}^\infty\hat{x}_n$ converges to $\hat{x}$ in $X/Y$. Since $\{\hat{x}_n : n\in\mathbb{N}\}$ was chosen arbitrarily, by the previous theorem $X/Y$ is Banach.

• How did you get "From the definition of the norm in $X/Y$ we have that for each $n\in\mathbb{N}$ there exists $x_n\in \hat{x}_n$ such that $\Vert x_n \Vert \leq 2 \Vert \hat{x}_n \Vert$"? I know the proof of this theorem a bit differently; that is, we have that for each $n \in \mathbb{N}$, $\| \hat{x}_n \| =\inf_{y \in Y}\|x_n + y\|$, hence there is $y_n \in Y$ such that $$\| x_n + y_n \| \leq \| \hat{x}_n \| + \frac{1}{2^n}$$ (definition of infimum). Therefore, $\sum_n \|x_n + y_n\| < \infty$. But $(x_n + y_n)$ is a sequence in $X$, so $\sum_{n} (x_n + y_n)$ converges to some $x \in X$. Now we use the rest of your proof.
– Frank Tessla Mar 3 '13 at 23:17

• Just take $\varepsilon=\Vert\hat{x}_n\Vert$ in the expression $\Vert x_n\Vert\leq\Vert \hat{x}_n\Vert+\varepsilon$, which follows from the definition of $\Vert \hat{x}_n\Vert$. – userNaN Mar 4 '13 at 6:23

Here is an alternative to Norbert's argument: Do you know the proof of the open mapping theorem? There one shows (after using Baire's theorem) the following: if $T:X\to Z$ is a continuous linear map between Banach spaces such that $\overline{T(B_X)}$ contains some ball in $Z$, then $T$ is open (and hence surjective). Apply this to the completion $Z$ of $X/Y$ and the quotient map.
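For reference, the step debated in the comments is just the definition of the quotient norm plus a choice of representative; spelled out (with the degenerate case handled separately):

$$\Vert \hat{x}_n \Vert := \inf_{y \in Y} \Vert x_n + y \Vert, \qquad \hat{x}_n = x_n + Y.$$

If $\Vert \hat{x}_n \Vert > 0$, take $\varepsilon = \Vert \hat{x}_n \Vert$ in the definition of the infimum to get a representative $x_n' = x_n + y_n \in \hat{x}_n$ with $\Vert x_n' \Vert \le \Vert \hat{x}_n \Vert + \varepsilon = 2\Vert \hat{x}_n \Vert$; if $\Vert \hat{x}_n \Vert = 0$, then $\hat{x}_n = 0$ (closedness of $Y$ is what makes the quotient norm a norm) and the representative $0$ works.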
https://mixtape.scunning.com/difference-in-differences.html?
# 9 Difference-in-Differences

## Causal Inference: The Mixtape.

The difference-in-differences design is an early quasi-experimental identification strategy for estimating causal effects that predates the randomized experiment by roughly eighty-five years. It has become the single most popular research design in the quantitative social sciences, and as such, it merits careful study by researchers everywhere. In this chapter, I will explain this popular and important research design both in its simplest form, where a group of units is treated at the same time, and the more common form, where groups of units are treated at different points in time. My focus will be on the identifying assumptions needed for estimating treatment effects, including several practical tests and robustness exercises commonly performed, and I will point you to some of the work on difference-in-differences design (DD) being done at the frontier of research. I have included several replication exercises as well.

## 9.1 John Snow's Cholera Hypothesis

When thinking about situations in which a difference-in-differences design can be used, one usually tries to find an instance where a consequential treatment was given to some people or units but denied to others "haphazardly." This is sometimes called a "natural experiment" because it is based on naturally occurring variation in some treatment variable that affects only some units over time. All good difference-in-differences designs are based on some kind of natural experiment. And one of the most interesting natural experiments was also one of the first difference-in-differences designs. This is the story of how John Snow convinced the world that cholera was transmitted by water, not air, using an ingenious natural experiment.

Cholera is a vicious disease that attacks victims suddenly, with acute symptoms such as vomiting and diarrhea. In the nineteenth century, it was usually fatal. There were three main epidemics that hit London, and like a tornado, they cut a path of devastation through the city. Snow, a physician, watched as tens of thousands suffered and died from a mysterious plague. Doctors could not help the victims because they were mistaken about the mechanism that caused cholera to spread between people. The majority medical opinion about cholera transmission at that time was miasma, which said diseases were spread by microscopic poisonous particles that infected people by floating through the air. These particles were thought to be inanimate, and because microscopes at that time had incredibly poor resolution, it would be years before microorganisms would be seen. Treatments, therefore, tended to be designed to stop poisonous dirt from spreading through the air. But tried and true methods like quarantining the sick were strangely ineffective at slowing down this plague.

John Snow worked in London during these epidemics. Originally, Snow—like everyone—accepted the miasma theory and tried many ingenious approaches based on the theory to block these airborne poisons from reaching other people. He went so far as to cover the sick with burlap bags, for instance, but the disease still spread. People kept getting sick and dying. Faced with the theory's failure to explain cholera, he did what good scientists do—he changed his mind and began to look for a new explanation. Snow developed a novel theory about cholera in which the active agent was not an inanimate particle but was rather a living organism.
This microorganism entered the body through food and drink, flowed through the alimentary canal where it multiplied and generated a poison that caused the body to expel water. With each evacuation, the organism passed out of the body and, importantly, flowed into England's water supply. People unknowingly drank contaminated water from the Thames River, which caused them to contract cholera. As they did, they would evacuate with vomit and diarrhea, which would flow into the water supply again and again, leading to new infections across the city. This process repeated through a multiplier effect, which was why cholera would hit the city in epidemic waves.

Snow's years of observing the clinical course of the disease led him to question the usefulness of miasma to explain cholera. While these were what we would call "anecdote," the numerous observations and imperfect studies nonetheless shaped his thinking. Here are just a few of the observations which puzzled him. He noticed that cholera transmission tended to follow human commerce. A sailor on a ship from a cholera-free country who arrived at a cholera-stricken port would only get sick after landing or taking on supplies; he would not get sick if he remained docked. Cholera hit the poorest communities worst, and those people were the very same people who lived in the most crowded housing with the worst hygiene. He might find two apartment buildings next to one another; one would be heavily hit with cholera, but strangely the other one wouldn't. He then noticed that the first building would be contaminated by runoff from privies but the water supply in the second building was cleaner. While these observations weren't impossible to reconcile with miasma, they were definitely unusual and didn't seem obviously consistent with miasma.

Snow tucked away more and more anecdotal evidence like these. But, while this evidence raised some doubts in his mind, he was not convinced. He needed a smoking gun if he were to eliminate all doubt that cholera was spread by water, not air. But where would he find that evidence? More importantly, what would evidence like that even look like?

Let's imagine the following thought experiment. If Snow were a dictator with unlimited wealth and power, how could he test his theory that cholera is waterborne? One thing he could do is flip a coin over each household member—heads you drink from the contaminated Thames, tails you drink from some uncontaminated source. Once the assignments had been made, Snow could simply compare cholera mortality between the two groups. If those who drank the clean water were less likely to contract cholera, then this would suggest that cholera was waterborne.

Knowledge that physical randomization could be used to identify causal effects was still eighty-five years away. But there were other issues besides ignorance that kept Snow from physical randomization. Experiments like the one I just described are also impractical, infeasible, and maybe even unethical—which is why social scientists so often rely on natural experiments that mimic important elements of randomized experiments. But what natural experiment was there? Snow needed to find a situation where uncontaminated water had been distributed to a large number of people as if by random chance, and then calculate the difference between those who did and did not drink contaminated water.
Furthermore, the contaminated water would need to be allocated to people in ways that were unrelated to the ordinary determinants of cholera mortality, such as hygiene and poverty, implying a degree of balance on covariates between the groups. And then he remembered—a potential natural experiment in London a year earlier had reallocated clean water to citizens of London. Could this work?

In the 1800s, several water companies served different areas of the city. Some neighborhoods were even served by more than one company. They took their water from the Thames, which had been polluted by victims' evacuations via runoff. But in 1849, the Lambeth water company had moved its intake pipes upstream higher up the Thames, above the main sewage discharge point, thus giving its customers uncontaminated water. They did this to obtain cleaner water, but it had the added benefit of being too high up the Thames to be infected with cholera from the runoff.

Snow seized on this opportunity. He realized that it had given him a natural experiment that would allow him to test his hypothesis that cholera was waterborne by comparing the households. If his theory was right, then the Lambeth houses should have lower cholera death rates than some other set of households whose water was infected with runoff—what we might call today the explicit counterfactual. He found his explicit counterfactual in the Southwark and Vauxhall Waterworks Company. Unlike Lambeth, the Southwark and Vauxhall Waterworks Company had not moved their intake point upstream, and Snow spent an entire book documenting similarities between the two companies' households. For instance, sometimes their service cut an irregular path through neighborhoods and houses such that the households on either side were very similar; the only difference being they drank different water with different levels of contamination from runoff. Insofar as the kinds of people that each company serviced were observationally equivalent, then perhaps they were similar on the relevant unobservables as well.

Snow meticulously collected data on household enrollment in water supply companies, going door to door asking household heads the name of their utility company. Sometimes these individuals didn't know, though, so he used a saline test to determine the source himself. He matched those data with the city's data on the cholera death rates at the household level. It was in many ways as advanced as any study we might see today for how he carefully collected, prepared, and linked a variety of data sources to show the relationship between water purity and mortality. But he also displayed scientific ingenuity for how he carefully framed the research question and how long he remained skeptical until the research design's results convinced him otherwise. After combining everything, he was able to generate extremely persuasive evidence that influenced policymakers in the city.

Snow wrote up all of his analysis in a manuscript entitled On the Mode of Communication of Cholera. Snow's main evidence was striking, and I will discuss results based on Table XII and Table IX (not shown) in Table 9.1.

Table 9.1: Modified Table XII (Snow 1854).
| Company name | 1849 | 1854 |
|---|---|---|
| Southwark and Vauxhall | 135 | 147 |
| Lambeth | 85 | 19 |

### 9.1.1 Table XII

In 1849, there were 135 cases of cholera per 10,000 households at Southwark and Vauxhall and 85 for Lambeth. But in 1854, there were 147 per 10,000 in Southwark and Vauxhall, whereas Lambeth's cholera cases per 10,000 households fell to 19. While Snow did not explicitly calculate the difference-in-differences, the ability to do so was there. If we difference Lambeth's 1854 value from its 1849 value, followed by the same after-and-before differencing for Southwark and Vauxhall, we can calculate an estimate of the ATT equaling 78 fewer deaths per 10,000. While Snow would go on to produce evidence showing cholera deaths were concentrated around a pump on Broad Street contaminated with cholera, he allegedly considered the simple difference-in-differences the more convincing test of his hypothesis.

The importance of the work Snow undertook to understand the causes of cholera in London cannot be overstated. It not only lifted our ability to estimate causal effects with observational data; it advanced science and ultimately saved lives. Of Snow's work on the cause of cholera transmission, Freedman (1991) states:

> The force of Snow's argument results from the clarity of the prior reasoning, the bringing together of many different lines of evidence, and the amount of shoe leather Snow was willing to use to get the data. Snow did some brilliant detective work on nonexperimental data. What is impressive is not the statistical technique but the handling of the scientific issues. He made steady progress from shrewd observation through case studies to analyze ecological data. In the end, he found and analyzed a natural experiment. (p. 298)

## 9.2 Estimation

### 9.2.1 A simple table

Let's look at this example using some tables, which hopefully will help give you an idea of the intuition behind DD, as well as some of its identifying assumptions. Assume that the intervention is clean water, which I'll write as $$D$$, and our objective is to estimate $$D$$'s causal effect on cholera deaths. Let cholera deaths be represented by the variable $$Y$$. Can we identify the causal effect of $$D$$ if we just compare the post-treatment 1854 Lambeth cholera death values to the 1854 Southwark and Vauxhall values? This is in many ways an obvious choice, and in fact, it is one of the more common naive approaches to causal inference. After all, we have a control group, don't we? Why can't we just compare a treatment group to a control group? Let's look and see.

One of the things we immediately must remember is that the simple difference in outcomes, which is all we are doing here, only collapses to the ATE if the treatment has been randomized. But it is never randomized in the real world, where most choices, if not all choices, made by real people are endogenous to potential outcomes. Let's represent now the differences between Lambeth and Southwark and Vauxhall with fixed level differences, or fixed effects, represented by $$L$$ and $$SV$$. Both are unobserved, unique to each company, and fixed over time. What these fixed effects mean is that even if Lambeth hadn't changed its water source, there would still be something determining cholera deaths, which is just the time-invariant unique differences between the two companies as they relate to cholera deaths in 1854.

Compared to what? Different companies.
| Company | Outcome |
|---|---|
| Lambeth | $$Y=L + D$$ |
| Southwark and Vauxhall | $$Y=SV$$ |

When we make a simple comparison between Lambeth and Southwark and Vauxhall, we get an estimated causal effect equaling $$D+(L-SV)$$. Notice the second term, $$L-SV$$. We've seen this before. It's the selection bias we found from the decomposition of the simple difference in outcomes from earlier in the book.

Okay, so say we realize that we cannot simply make cross-sectional comparisons between two units because of selection bias. Surely, though, we can compare a unit to itself? This is sometimes called an interrupted time series. Let's consider that simple before-and-after difference for Lambeth now.

Compared to what? Before and after.

| Company | Time | Outcome |
|---|---|---|
| Lambeth | Before | $$Y=L$$ |
|  | After | $$Y=L + (T + D)$$ |

While this procedure successfully eliminates the Lambeth fixed effect (unlike the cross-sectional difference), it doesn't give me an unbiased estimate of $$D$$ because differences can't eliminate the natural changes in cholera deaths over time. Recall, these events were oscillating in waves. I can't compare Lambeth before and after ($$T+D$$) because of $$T$$, which is an omitted variable.

The intuition of the DD strategy is remarkably simple: combine these two simpler approaches so that the selection bias and the effect of time are, in turns, eliminated. Let's look at it in the following table.

Compared to what? Difference in each company's differences.

| Companies | Time | Outcome | $$D_1$$ | $$D_2$$ |
|---|---|---|---|---|
| Lambeth | Before | $$Y=L$$ |  |  |
|  | After | $$Y=L + T + D$$ | $$T+D$$ | $$D$$ |
| Southwark and Vauxhall | Before | $$Y=SV$$ |  |  |
|  | After | $$Y=SV + T$$ | $$T$$ |  |

The first difference, $$D_1$$, does the simple before-and-after difference. This ultimately eliminates the unit-specific fixed effects. Then, once those differences are made, we difference the differences (hence the name) to get the unbiased estimate of $$D$$.

But there's a key assumption with a DD design, and that assumption is discernible even in this table. We are assuming that there are no time-variant, company-specific unobservables: nothing unobserved in Lambeth households that is changing between these two periods and that also determines cholera deaths. This is equivalent to assuming that $$T$$ is the same for all units. And we call this the parallel trends assumption. We will discuss this assumption repeatedly as the chapter proceeds, as it is the most important assumption in the design's engine. If you can buy into the parallel trends assumption, then DD will identify the causal effect.

DD is a powerful, yet amazingly simple design. Using repeated observations on a treatment and control unit (usually several units), we can eliminate the unobserved heterogeneity to provide a credible estimate of the average treatment effect on the treated (ATT) by transforming the data in very specific ways. But when and why does this process yield the correct answer? Turns out, there is more to it than meets the eye. And it is imperative on the front end that you understand what's under the hood so that you can avoid conceptual errors about this design.

### 9.2.2 The simple $$2\times 2$$ DD

The cholera case is a particular kind of DD design that Goodman-Bacon (2019) calls the $$2\times 2$$ DD design. The $$2\times 2$$ DD design has a treatment group $$k$$ and untreated group $$U$$.
There is a pre-period for the treatment group, $$\text{pre}(k)$$; a post-period for the treatment group, $$\text{post}(k)$$; a pre-treatment period for the untreated group, $$\text{pre}(U)$$; and a post-period for the untreated group, $$\text{post}(U)$$. So:

$$\widehat{\delta}^{2\times 2}_{kU} = \Big( \overline{y}_k^{\,\text{post}(k)} - \overline{y}_k^{\,\text{pre}(k)} \Big) - \Big( \overline{y}_U^{\,\text{post}(k)} - \overline{y}_U^{\,\text{pre}(k)} \Big)$$

where $$\widehat{\delta}_{kU}$$ is the estimated ATT for group $$k$$, and $$\overline{y}$$ is the sample mean for that particular group in a particular time period. The first term differences the treatment group, $$k$$, after minus before; the second term differences the untreated group, $$U$$, after minus before. And once those quantities are obtained, we difference the second term from the first.

But this is simply the mechanics of calculation. What exactly is this estimated parameter mapping onto? To understand that, we must convert these sample averages into conditional expectations of potential outcomes. But that is easy to do when working with sample averages, as we will see here. First let's rewrite this as a conditional expectation:

$$\widehat{\delta}^{2\times 2}_{kU} = \Big(E\big[Y_k \mid \text{Post}\big] - E\big[Y_k \mid \text{Pre}\big]\Big) - \Big(E\big[Y_U \mid \text{Post}\big] - E\big[Y_U \mid \text{Pre}\big]\Big)$$

Now let's use the switching equation, which transforms historical quantities of $$Y$$ into potential outcomes. As we've done before, we'll do a little trick where we add zero to the right-hand side so that we can use those terms to help illustrate something important:

$$\widehat{\delta}^{2\times 2}_{kU} = \underbrace{\Big(E\big[Y^1_k \mid \text{Post}\big] - E\big[Y^0_k \mid \text{Pre}\big]\Big) - \Big(E\big[Y^0_U \mid \text{Post}\big] - E\big[Y^0_U \mid \text{Pre}\big]\Big)}_{\text{Switching equation}} + \underbrace{E\big[Y^0_k \mid \text{Post}\big] - E\big[Y^0_k \mid \text{Post}\big]}_{\text{Adding zero}}$$

Now we simply rearrange these terms to get the decomposition of the $$2\times 2$$ DD in terms of conditional expected potential outcomes:

$$\widehat{\delta}^{2\times 2}_{kU} = \underbrace{E\big[Y^1_k \mid \text{Post}\big] - E\big[Y^0_k \mid \text{Post}\big]}_{\text{ATT}} + \underbrace{\Big[E\big[Y^0_k \mid \text{Post}\big] - E\big[Y^0_k \mid \text{Pre}\big]\Big] - \Big[E\big[Y^0_U \mid \text{Post}\big] - E\big[Y^0_U \mid \text{Pre}\big]\Big]}_{\text{Non-parallel trends bias in } 2\times 2 \text{ case}}$$

Now, let's study this last term closely. This simple $$2\times 2$$ difference-in-differences will isolate the ATT (the first term) if and only if the second term zeroes out. But why would this second term be zero?
It would equal zero if the first difference involving the treatment group, $$k$$, equaled the second difference involving the untreated group, $$U$$. But notice the term in the second line. Notice anything strange about it? The object of interest is $$Y^0$$, which is some outcome in a world without the treatment. But it’s the post period, and in the post period, $$Y=Y^1$$, not $$Y^0$$, by the switching equation. Thus, the first term is counterfactual. And as we’ve said over and over, counterfactuals are not observable. This bottom line is often called the parallel trends assumption, and it is by definition untestable, since we cannot observe this counterfactual conditional expectation. We will return to this again, but for now I simply present it for your consideration.

### 9.2.3 DD and the Minimum Wage

Now I’d like to talk about more explicit economic content, and the minimum wage is as good a topic as any. The modern use of DD was brought into the social sciences through esteemed labor economist Orley Ashenfelter (1978). His study no doubt influenced his advisee, David Card, arguably the greatest labor economist of his generation. Card would go on to use the method in several pioneering studies, such as Card (1990). But I will focus on one in particular: his now-classic minimum wage study. Card and Krueger (1994) is an infamous study, both because of its use of an explicit counterfactual for estimation and because the study challenges many people’s common beliefs about the negative effects of the minimum wage. It ignited a massive back-and-forth minimum-wage literature that continues to this day.141 So controversial was this study that James Buchanan, the Nobel Prize winner, called those influenced by Card and Krueger (1994) “camp following whores” in a letter to the editor of the Wall Street Journal.142

Suppose you are interested in the effect of minimum wages on employment. Theoretically, you might expect that in competitive labor markets, an increase in the minimum wage would move us up a downward-sloping demand curve, causing employment to fall. But in labor markets characterized by monopsony, minimum wages can increase employment. Therefore, there are strong theoretical reasons to believe that the effect of the minimum wage on employment is ultimately an empirical question depending on many local contextual factors. This is where Card and Krueger (1994) entered. Could they uncover whether minimum wages were ultimately harmful or helpful in some local economy?

It’s always useful to start these questions with a simple thought experiment: if you had a billion dollars, complete discretion, and could run a randomized experiment, how would you test whether minimum wages increased or decreased employment? You might go across the hundreds of local labor markets in the United States and flip a coin: heads, you raise the minimum wage; tails, you keep it at the status quo. As we’ve done before, these kinds of thought experiments are useful for clarifying both the research design and the causal question. Lacking a randomized experiment, Card and Krueger (1994) decided on a next-best solution by comparing two neighboring states before and after a minimum-wage increase. It was essentially the same strategy that Snow used in his cholera study, and a strategy that economists continue to use, in one form or another, to this day.
R Code: abortion_ddd.R (continued)

```r
# Build the coefficient-plot data from the DDD regression (regddd).
# The opening of this block was garbled in extraction; the data.frame
# construction is reconstructed from the plotting code that follows.
abortion_plot <- data.frame(
  sd = regddd$std.error[110:124],
  mean = regddd$coefficients[110:124],
  year = c(1986:2000))

abortion_plot %>%
  ggplot(aes(x = year, y = mean)) +
  geom_rect(aes(xmin = 1986, xmax = 1992, ymin = -Inf, ymax = Inf),
            fill = "cyan", alpha = 0.01) +
  geom_point() +
  geom_text(aes(label = year), hjust = -0.002, vjust = -0.03) +
  geom_hline(yintercept = 0) +
  geom_errorbar(aes(ymin = mean - sd*1.96, ymax = mean + sd*1.96),
                width = 0.2, position = position_dodge(0.05))
```

Python Code: abortion_ddd.py

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from itertools import combinations
import plotnine as p

# read data
import ssl
ssl._create_default_https_context = ssl._create_unverified_context

def read_data(file):
    return pd.read_stata("https://raw.github.com/scunning1975/mixtape/master/" + file)

abortion = read_data('abortion.dta')
abortion = abortion[~pd.isnull(abortion.lnr)]

abortion['yr'] = 0
abortion.loc[(abortion.younger==1) & (abortion.repeal==1), 'yr'] = 1

abortion['wm'] = 0
abortion.loc[(abortion.wht==1) & (abortion.male==1), 'wm'] = 1
abortion['wf'] = 0
abortion.loc[(abortion.wht==1) & (abortion.male==0), 'wf'] = 1
abortion['bm'] = 0
abortion.loc[(abortion.wht==0) & (abortion.male==1), 'bm'] = 1
abortion['bf'] = 0
abortion.loc[(abortion.wht==0) & (abortion.male==0), 'bf'] = 1

abortion_filt = abortion[(abortion.bf==1) & (abortion.age.isin([15,25]))]

reg = (
    smf
    .wls("""lnr ~ C(repeal)*C(year) + C(younger)*C(repeal) + C(younger)*C(year) +
            C(yr)*C(year) + C(fip)*t + acc + ir + pi + alcohol + crack + poverty +
            income + ur""",
        data=abortion_filt, weights=abortion_filt.totpop.values)
    .fit(
        cov_type='cluster',
        cov_kwds={'groups': abortion_filt.fip.values},
        method='pinv')
)

abortion_plot = pd.DataFrame(
    {'sd': reg.bse['C(yr)[T.1]:C(year)[T.1986.0]':'C(yr)[T.1]:C(year)[T.2000.0]'],
     'mean': reg.params['C(yr)[T.1]:C(year)[T.1986.0]':'C(yr)[T.1]:C(year)[T.2000.0]'],
     'year': np.arange(1986, 2001)})
abortion_plot['lb'] = abortion_plot['mean'] - abortion_plot['sd']*1.96
abortion_plot['ub'] = abortion_plot['mean'] + abortion_plot['sd']*1.96

(
    p.ggplot(abortion_plot, p.aes(x='year', y='mean'))
    + p.geom_rect(p.aes(xmin=1986, xmax=1991, ymin=-np.inf, ymax=np.inf),
                  fill="cyan", alpha=0.01)
    + p.geom_point()
    + p.geom_text(p.aes(label='year'), ha='right')
    + p.geom_hline(yintercept=0)
    + p.geom_errorbar(p.aes(ymin='lb', ymax='ub'), width=0.2,
                      position=p.position_dodge(0.05))
    + p.labs(title="Estimated effect of abortion legalization on gonorrhea")
)
```

Here we see the prediction start to break down. Though there are negative effects for years 1986 to 1990, the 1991 and 1992 coefficients are positive, which is not consistent with our hypothesis. Furthermore, only the first four coefficients are statistically significant. Nevertheless, given the demanding nature of DDD, perhaps this is a small victory in favor of Gruber, Levine, and Staiger (1999) and Donohue and Levitt (2001). Perhaps the theory that abortion legalization had strong selection effects on cohorts has some validity.

Putting aside whether you believe the results, it is still valuable to replicate the results based on this staggered design. Recall that I said the DDD design requires stacking the data, which may seem like a bit of a black box, so I’d like to examine these data now.144 The second line estimates the regression equation. The dynamic DD coefficients are captured by the repeal-year interactions. These are the coefficients we used to create the coefficient plots in Figure 9.11. You can check these yourself.
Note, for simplicity, I only estimated this for the black females (bf15==1), but you could estimate it for the black males (bm15==1), white females (wf15==1), or white males (wm15==1). We do all four in the paper, but here we only focus on the black females aged 15–19, because the purpose of this section is to help you understand the estimation. I encourage you to play around with this model to see how robust the effects are in your mind using only this linear estimation.

But now I want to show you the code for estimating a triple difference model. Some reshaping had to be done behind the scenes for this data structure, but it would take too long to post that here. For now, I will simply produce the commands that produce the black female result, and I encourage you to explore the panel data structure so as to familiarize yourself with the way in which the data are organized. Notice that some of these variables are themselves already interactions (e.g., yr), which was my way of compactly including all of the interactions. I did this primarily to give myself more control over what variables I was using. But I encourage you to study the data structure itself, so that when you need to estimate your own DDD, you’ll have a good handle on what form the data must be in, in order to execute so many interactions.

### 9.5.4 Going beyond Cunningham and Cornwell (2013)

The abortion legalization hypothesis predicted a parabola from 1986 to 1992 for 15- to 19-year-olds, and using a DD design, that’s what I found. I also estimated the effect using a DDD design, and while the effects weren’t as pretty as what I found with DD, there appeared to be something going on in the general vicinity of where the model predicted. So boom goes the dynamite, right? Can’t we be done finally? Not quite. Whereas my original study stopped there, I would like to go a little farther. The reason can be seen in Figure 9.13. This is a modified version of Figure 9.9, the main difference being that I have created a new parabola for the 20- to 24-year-olds. Look carefully at Figure 9.13.
Insofar as the early 1970s cohorts were treated in utero with abortion legalization, then we should see not just a parabola for the 15- to 19-year-olds for 1986 to 1992 but also one for the 20- to 24-year-olds for the years 1991 to 1997, as the cohorts continued to age.145

Stata Code: abortion_dd2.do

```stata
use https://github.com/scunning1975/mixtape/raw/master/abortion.dta, clear

* Second DD model for 20-24 year old black females
char year[omit] 1985
xi: reg lnr i.repeal*i.year i.fip acc ir pi alcohol crack poverty income ur if (race==2 & sex==2 & age==20) [aweight=totpop], cluster(fip)
```

R Code: abortion_dd2.R

```r
library(tidyverse)
library(haven)
library(estimatr)

read_data <- function(df) {
  full_path <- paste("https://raw.github.com/scunning1975/mixtape/master/", df, sep = "")
  df <- read_dta(full_path)
  return(df)
}

abortion <- read_data("abortion.dta") %>%
  mutate(
    repeal = as_factor(repeal),
    year = as_factor(year),
    fip = as_factor(fip),
    fa = as_factor(fa),
  )

reg <- abortion %>%
  filter(race == 2 & sex == 2 & age == 20) %>%
  lm_robust(lnr ~ repeal*year + fip + acc + ir + pi + alcohol + crack + poverty + income + ur,
            data = ., weights = totpop, clusters = fip)
```

Python Code: abortion_dd2.py

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from itertools import combinations
import plotnine as p

# read data
import ssl
ssl._create_default_https_context = ssl._create_unverified_context

def read_data(file):
    return pd.read_stata("https://raw.github.com/scunning1975/mixtape/master/" + file)

abortion = read_data('abortion.dta')
abortion = abortion[~pd.isnull(abortion.lnr)]

abortion_filt = abortion[(abortion.race == 2) & (abortion.sex == 2) & (abortion.age == 20)]

regdd = (
    smf
    .wls("""lnr ~ C(repeal)*C(year) + C(fip) + acc + ir + pi +
            alcohol + crack + poverty + income + ur""",
        data=abortion_filt, weights=abortion_filt.totpop.values)
    .fit(
        cov_type='cluster',
        cov_kwds={'groups': abortion_filt.fip.values},
        method='pinv')
)

regdd.summary()
```

I did not examine the 20- to 24-year-old cohort when I first wrote this paper because at that time I doubted that the selection effects for risky sex would persist into adulthood, given that youth display considerable risk-taking behavior. But with time come new perspectives, and these days I don’t have strong priors that the selection effects would necessarily vanish after the teenage years. So I’d like to conduct that additional analysis here and now for the first time. Let’s estimate the same DD model as before, only for black females aged 20–24. As before, we will focus just on the coefficient plots. We show that in Figure 9.14.

There are a couple of things about this regression output that are troubling. First, there is a negative parabola showing up where there wasn’t necessarily one predicted: the 1986–1992 period. Note that this is the period during which only the 15- to 19-year-olds were the treated cohorts, suggesting that our 15- to 19-year-old analysis was picking up something other than abortion legalization. But that was also the justification for using DDD, as clearly something else is going on in the repeal versus Roe states during those years that we cannot adequately control for with our controls and fixed effects. The second thing to notice is that there is no parabola in the treatment window for the treatment cohort. The effect sizes are negative in the beginning, but shrink in absolute value when they should be growing. In fact, the 1991 to 1997 period is one of convergence to zero, not divergence, between these two sets of states.
But as before, maybe there are strong trending unobservables for all groups masking the abortion legalization effect. To check, let’s use my DDD strategy with the 25- to 29-year-olds as the within-state control group. We can implement this with the code in abortion_ddd2.do, abortion_ddd2.R, and abortion_ddd2.py.

Stata Code: abortion_ddd2.do

```stata
use https://github.com/scunning1975/mixtape/raw/master/abortion.dta, clear

* Second DDD model for 20-24 year olds vs 25-29 year old black females in repeal vs Roe states
gen younger2 = 0
replace younger2 = 1 if age == 20

gen yr2 = (repeal == 1) & (younger2 == 1)

gen wm = (wht == 1) & (male == 1)
gen wf = (wht == 1) & (male == 0)
gen bm = (wht == 0) & (male == 1)
gen bf = (wht == 0) & (male == 0)

char year[omit] 1985
char repeal[omit] 0
char younger2[omit] 0
char fip[omit] 1
char fa[omit] 0
char yr2[omit] 0

xi: reg lnr i.repeal*i.year i.younger2*i.repeal i.younger2*i.year i.yr2*i.year i.fip*t acc pi ir alcohol crack poverty income ur if bf==1 & (age==20 | age==25) [aweight=totpop], cluster(fip)
```

R Code: abortion_ddd2.R

```r
library(tidyverse)
library(haven)
library(estimatr)

read_data <- function(df) {
  full_path <- paste("https://raw.github.com/scunning1975/mixtape/master/", df, sep = "")
  df <- read_dta(full_path)
  return(df)
}

abortion <- read_data("abortion.dta") %>%
  mutate(
    repeal = as_factor(repeal),
    year = as_factor(year),
    fip = as_factor(fip),
    fa = as_factor(fa),
    younger2 = case_when(age == 20 ~ 1, TRUE ~ 0),
    yr2 = as_factor(case_when(repeal == 1 & younger2 == 1 ~ 1, TRUE ~ 0)),
    wm = as_factor(case_when(wht == 1 & male == 1 ~ 1, TRUE ~ 0)),
    wf = as_factor(case_when(wht == 1 & male == 0 ~ 1, TRUE ~ 0)),
    bm = as_factor(case_when(wht == 0 & male == 1 ~ 1, TRUE ~ 0)),
    bf = as_factor(case_when(wht == 0 & male == 0 ~ 1, TRUE ~ 0))
  )

# The extracted formula was missing the DDD interactions; they are restored
# here so that the model mirrors the Stata specification above.
regddd <- abortion %>%
  filter(bf == 1 & (age == 20 | age == 25)) %>%
  lm_robust(lnr ~ repeal*year + younger2*repeal + younger2*year + yr2*year + fip*t +
              acc + ir + pi + alcohol + crack + poverty + income + ur,
            data = ., weights = totpop, clusters = fip)
```

Python Code: abortion_ddd2.py

```python
# Note: the extracted version of this file duplicated the DD code above;
# this block restores the DDD specification so that it mirrors the Stata code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# read data
import ssl
ssl._create_default_https_context = ssl._create_unverified_context

def read_data(file):
    return pd.read_stata("https://raw.github.com/scunning1975/mixtape/master/" + file)

abortion = read_data('abortion.dta')
abortion = abortion[~pd.isnull(abortion.lnr)]

# Construct the DDD interaction variables
abortion['younger2'] = 0
abortion.loc[abortion.age == 20, 'younger2'] = 1
abortion['yr2'] = 0
abortion.loc[(abortion.repeal == 1) & (abortion.younger2 == 1), 'yr2'] = 1
abortion['bf'] = 0
abortion.loc[(abortion.wht == 0) & (abortion.male == 0), 'bf'] = 1

abortion_filt = abortion[(abortion.bf == 1) & (abortion.age.isin([20, 25]))]

regddd = (
    smf
    .wls("""lnr ~ C(repeal)*C(year) + C(younger2)*C(repeal) + C(younger2)*C(year) +
            C(yr2)*C(year) + C(fip)*t + acc + ir + pi + alcohol + crack + poverty +
            income + ur""",
        data=abortion_filt, weights=abortion_filt.totpop.values)
    .fit(
        cov_type='cluster',
        cov_kwds={'groups': abortion_filt.fip.values},
        method='pinv')
)

regddd.summary()
```

Figure 9.15 shows the DDD estimated coefficients for the treated cohort relative to a slightly older 25- to 29-year-old cohort. It’s possible that the 25- to 29-year-old cohort is too close in age to function as a satisfactory within-state control; if those age 20–24 have sex with those who are age 25–29, for instance, then SUTVA is violated. There are other age groups, though, that you can try in place of the 25- to 29-year-olds, and I encourage you to do it, both for the experience and for the insights you might glean. But let’s back up and remember the big picture. The abortion legalization hypothesis made a series of predictions about where negative parabolic treatment effects should appear in the data.
And while we found some initial support, when we exploited more of those predictions, the results fell apart. A fair interpretation of this exercise is that our analysis does not support the abortion legalization hypothesis. Figure 9.15 shows several point estimates at nearly zero, and standard errors so large as to include both positive and negative values for these interactions. I included this analysis because I wanted to show you the power of a theory with numerous unusual yet testable predictions. Imagine for a moment if a parabola had shown up for all age groups in precisely the years predicted by the theory. Wouldn’t we have to update our priors about the abortion legalization selection hypothesis? With predictions so narrow, what else could be causing it? It’s precisely because the predictions are so specific, though, that we are able to reject the abortion legalization hypothesis, at least for gonorrhea.

### 9.5.5 Placebos as critique

Since the fundamental problem of causal inference blocks our direct observation of causal effects, we rely on many direct and indirect pieces of evidence to establish credible causality. And as I said in the previous section on DDD, one of those indirect pieces of evidence is placebo analysis. The reasoning goes that if, using our preferred research design, we find effects where there shouldn’t be any, then maybe our original findings weren’t credible in the first place. Using placebo analysis within your own work has become an essential part of empirical work for this reason. But another use of placebo analysis is to evaluate the credibility of popular estimation strategies themselves. This kind of use helps improve a literature by uncovering flaws in a research design, which can then stimulate the creation of stronger methods and models. Let’s take two exemplary studies that accomplished this well: Auld and Grootendorst (2004) and Cohen-Cole and Fletcher (2008).

To say that the G. S. Becker and Murphy (1988) “rational addiction” model has been influential would be an understatement. It has over 4,000 citations and has become one of the most common frameworks in health economics. It created a cottage industry of empirical studies that persists to this day. Alcohol, tobacco, gambling, and even sports have all been found to be “rationally addictive” commodities and activities using various empirical approaches. But some researchers cautioned the research community about these empirical studies. Rogeberg (2004) critiqued the theory on its own grounds, but I’d like to focus on the empirical studies based on the theory. Rather than talk about any specific paper, I’d like to provide a quote from Melberg (2008), who surveyed researchers who had written on rational addiction:

> A majority of our respondents believe the literature is a success story that demonstrates the power of economic reasoning. At the same time, they also believe the empirical evidence is weak, and they disagree both on the type of evidence that would validate the theory and the policy implications. Taken together, this points to an interesting gap. On the one hand, most of the respondents claim that the theory has valuable real world implications. On the other hand, they do not believe the theory has received empirical support. (p. 1)

Rational addiction, in other words, should be held to the same standards empirically as it has been theoretically. The strength of the model has always been its economic reasoning, which economists obviously find compelling. But were the empirical designs flawed? How could we know?
Auld and Grootendorst (2004) is not a test of the rational addiction model. On the contrary, it is an “anti-test” of the empirical rational addiction models common at the time. Their goal was not to evaluate the theoretical rational addiction model, in other words, but rather the empirical rational addiction models themselves. How did they do this? Auld and Grootendorst (2004) used the empirical rational addiction model to evaluate commodities that could not plausibly be considered addictive, such as eggs, milk, oranges, and apples. They found that the empirical rational addiction model implied milk was extremely addictive, perhaps one of the most addictive commodities studied.146 Is it credible to believe that eggs and milk are “rationally addictive,” or is it more likely that the research designs used to evaluate the rational addiction model were flawed? The Auld and Grootendorst (2004) study cast doubt on the empirical rational addiction model, not the theory.

Another problematic literature was the peer-effects literature. Estimating peer effects is notoriously hard. Manski (1993) said that the deep endogeneity of social interactions made the identification of peer effects difficult and possibly even impossible. He called this the “reflection problem.” If “birds of a feather flock together,” then identifying peer effects in observational settings may just be impossible due to the profound endogeneities at play. Several studies found significant network effects on outcomes like obesity, smoking, alcohol use, and happiness. This led many researchers to conclude that these kinds of risk behaviors were “contagious” through peer effects. But these studies did not exploit randomized social groups. The peer groups were purely endogenous. Cohen-Cole and Fletcher (2008) showed, using similar models and data, that even attributes that couldn’t be transmitted between peers—acne, height, and headaches—appeared “contagious” in observational data when estimated with the Christakis and Fowler (2007) model. Note, Cohen-Cole and Fletcher (2008) does not reject the idea of theoretical contagions. Rather, they point out that the Manski critique should guide peer-effects analysis when social interactions are endogenous. They provide evidence for this indirectly using placebo analysis.147

### 9.5.6 Compositional change within repeated cross-sections

DD can be applied to repeated cross-sections, as well as panel data. But one of the risks of working with repeated cross-sections is that, unlike panel data (e.g., individual-level panel data), repeated cross-sections run the risk of compositional changes. Hong (2013) used repeated cross-sectional data from the Consumer Expenditure Survey (CEX) containing music expenditure and internet use for a random sample of households. The author’s study exploited the emergence and immense popularity of Napster, the first file-sharing software widely used by Internet users, in June 1999 as a natural experiment. The study compared Internet users and Internet non-users before and after the emergence of Napster. At first glance, they found that as Internet diffusion increased from 1996 to 2001, spending on music for Internet users fell faster than that for non-Internet users. This was initially taken as evidence that Napster was responsible for the decline, until it was investigated more carefully. But when we look at Table 9.4, we see evidence of compositional changes.
While music expenditure fell over the treatment period, the demographics of the two groups also changed over this period. For instance, the age of Internet users grew while their income fell. If older people are less likely to buy music in the first place, then this could independently explain some of the decline. This kind of compositional change is like an omitted variable bias built into the sample itself, caused by time-variant unobservables. Diffusion of the Internet appears to be related to changing samples, as younger music fans are early adopters. Identification of causal effects would require the treatment itself to be exogenous to such changes in composition.

Table 9.4: Sample means from the Consumer Expenditure Survey.

| | 1997 Internet user | 1997 Non-user | 1998 Internet user | 1998 Non-user | 1999 Internet user | 1999 Non-user |
|---|---|---|---|---|---|---|
| **Average expenditure** | | | | | | |
| Recorded music | $25.73 | $10.90 | $24.18 | $9.97 | $20.92 | $9.37 |
| Entertainment | $195.03 | $96.71 | $193.38 | $84.92 | $182.42 | $80.19 |
| **Zero expenditure** | | | | | | |
| Recorded music | 0.56 | 0.79 | 0.60 | 0.80 | 0.64 | 0.81 |
| Entertainment | 0.08 | 0.32 | 0.09 | 0.35 | 0.14 | 0.39 |
| **Demographics** | | | | | | |
| Age | 40.2 | 49.0 | 42.3 | 49.0 | 44.1 | 49.4 |
| Income | $52,887 | $30,459 | $51,995 | $26,189 | $49,970 | $26,649 |
| High school graduate | 0.18 | 0.31 | 0.17 | 0.32 | 0.21 | 0.32 |
| Some college | 0.37 | 0.28 | 0.35 | 0.27 | 0.34 | 0.27 |
| College grad | 0.43 | 0.21 | 0.45 | 0.21 | 0.42 | 0.20 |
| Manager | 0.16 | 0.08 | 0.16 | 0.08 | 0.14 | 0.08 |

### 9.5.7 Final thoughts

There are a few other caveats I’d like to make before moving on. First, it is important to remember the concepts we learned in the early DAG chapter. In choosing covariates in a DD design, you must resist the temptation to simply load the regression up with a kitchen sink of regressors. You should resist if only because, in so doing, you may inadvertently include a collider, and if a collider is conditioned on, it introduces strange patterns that may mislead you and your audience. There is unfortunately no way forward except, again, deep institutional familiarity with the factors that determined treatment assignment on the ground, as well as economic theory itself.

Second, another issue I skipped over entirely is the question of how the outcome is modeled. Very little thought, if any, is usually given to how exactly we should model the outcome. Just to take one example, should we use the log or the levels themselves? Should we use the quartic root? Should we use rates? These, it turns out, are critically important choices, because for many of them the parallel trends assumption needed for identification will not hold—even though it would hold under some other, unknown transformation. It is for this reason that you can think of many DD designs as having a parametric element, because you must make strong commitments about the functional form itself. I cannot provide guidance to you on this, except that maybe using the pre-treatment leads as a way of finding parallelism could be a useful guide.

## 9.6 Twoway Fixed Effects with Differential Timing

I have a bumper sticker on my car that says “I love Federalism (for the natural experiments)” (Figure 9.16). I made these bumper stickers for my students to be funny, and to illustrate that the United States is a never-ending laboratory. Because of state federalism, each US state has been given considerable discretion to govern itself with policies and reforms. Yet, because it is a union of states, US researchers have access to many data sets that have been harmonized across states, making the country even more useful for causal inference.
Goodman-Bacon (2019) calls the staggered assignment of treatments across geographic units over time the “differential timing” of treatment. What he means is that, unlike the simple $$2\times 2$$ we discussed earlier (e.g., New Jersey and Pennsylvania), where treatment units were all treated at the same time, the more common situation is one where geographic units receive treatments at different points in time. And this happens in the United States because each area (state, municipality) will adopt a policy when it wants to, for its own reasons. As a result, the adoption of some treatment will tend to be differentially timed across units.

This introduction of differential timing means there are basically two types of DD designs. There is the $$2\times 2$$ DD we’ve been discussing, wherein a single unit or a group of units all receive some treatment at the same point in time, like Snow’s cholera study or Card and Krueger (1994). And then there is the DD with differential timing, in which groups receive treatment at different points in time, like Cheng and Hoekstra (2013). We have a very good understanding of the $$2\times 2$$ design: how it works, why it works, when it works, and when it does not work. But we did not, until Goodman-Bacon (2019), have as good an understanding of the DD design with differential timing. So let’s get down to business and discuss that now by reminding ourselves of the $$2\times 2$$ DD that we introduced earlier:

\begin{align} \widehat{\delta}^{2\times 2}_{kU} = \bigg(\overline{y}_k^{\text{post}(k)} - \overline{y}_k^{\text{pre}(k)} \bigg) - \bigg(\overline{y}_U^{\text{post}(k)} - \overline{y}_U^{\text{pre}(k)} \bigg ) \end{align}

where $$k$$ is the treatment group, $$U$$ is the never-treated group, and everything else is self-explanatory. Since this involves sample means, we can calculate the differences manually. Or we can estimate it with the following regression:

\begin{align} y_{it} =\beta D_{i}+\tau \mathop{\mathrm{Post}}_{t}+\delta (D_i \times \mathop{\mathrm{Post}}_t)+X_{it}+\varepsilon_{it} \end{align}

But a more common situation you’ll encounter will be a DD design with differential timing. And while the decomposition is a bit complicated, the regression equation itself is straightforward:

\begin{align} y_{it} =\alpha_0 + \delta D_{it} + X_{it} + \alpha_i + \alpha_t + \epsilon_{it} \end{align}

When researchers estimate this regression these days, they usually use the linear fixed-effects model that I discussed in the previous panel chapter. These linear panel models have gotten the nickname “twoway fixed effects” because they include both time fixed effects and unit fixed effects. Since this is such a popular estimator, it’s important that we understand exactly what it is doing and what it is not.

### 9.6.1 Bacon Decomposition theorem

Goodman-Bacon (2019) provides a helpful decomposition of the twoway fixed effects estimate of $$\widehat{\delta}$$. Given this is the go-to model for implementing differential timing designs, I have found his decomposition useful. But as there are some other decompositions of twoway fixed effects estimators, such as another important paper by Chaisemartin and D’Haultfœuille (2019), I’ll call it the Bacon decomposition for the sake of branding. The punchline of the Bacon decomposition theorem is that the twoway fixed effects estimator is a weighted average of all potential $$2\times 2$$ DD estimates, where the weights are based on both group sizes and treatment variance.
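To see this estimator in action before dissecting it, here is a small simulation sketch (my own illustration, with every parameter value assumed): a differential-timing panel with an early group, a late group, and a never-treated group, all sharing a common trend and a constant treatment effect. With constant effects, the twoway fixed effects regression recovers that effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated differential-timing panel: group k treated at t=2, group l
# treated at t=4, group U never treated. Constant treatment effect of 5.
np.random.seed(0)
rows = []
for g, t_star in [("k", 2), ("l", 4), ("U", np.inf)]:
    for unit in range(50):
        alpha_i = np.random.normal()            # unit fixed effect
        for t in range(5):
            d = 1.0 if t >= t_star else 0.0     # treatment status
            y = alpha_i + 0.5 * t + 5.0 * d + np.random.normal(scale=0.1)
            rows.append({"unit": f"{g}{unit}", "t": t, "d": d, "y": y})
panel = pd.DataFrame(rows)

# Twoway fixed effects: unit dummies, time dummies, and the treatment dummy
twfe = smf.ols("y ~ d + C(unit) + C(t)", data=panel).fit()
print(twfe.params["d"])  # approximately 5, the constant treatment effect
```

Because every underlying $$2\times 2$$ estimates the same constant effect here, the weighting scheme Goodman-Bacon describes is harmless; the interesting cases arise when the $$2\times 2$$s disagree.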
Under the assumption of variance-weighted common trends (VWCT) and time-invariant treatment effects, the variance-weighted ATT is a weighted average of all possible ATTs. And under more restrictive assumptions, that estimate perfectly matches the ATT. But that is not true when there are time-varying treatment effects: time-varying treatment effects in a differential timing design estimated with twoway fixed effects generate bias, and so twoway fixed-effects models may be severely biased, a point echoed in Chaisemartin and D’Haultfœuille (2019).

To make this concrete, let’s start with a simple example. Assume in this design that there are three groups: an early treatment group $$(k)$$, a group treated later $$(l)$$, and a group that is never treated $$(U)$$. Groups $$k$$ and $$l$$ are similar in that they are both treated, but they differ in that $$k$$ is treated earlier than $$l$$. Let’s say there are 5 periods, and $$k$$ is treated in period 2. Then it spends 80% of its time under treatment, or 0.8. But let’s say $$l$$ is treated in period 4. Then it spends 40% of its time treated, or 0.4. I represent the share of time a group spends in treatment as $$\overline{D}_k = 0.8$$ and $$\overline{D}_l = 0.4$$. This is important, because the length of time a group spends in treatment determines its treatment variance, which in turn affects the weight its $$2\times 2$$ plays in the final adding up of the DD parameter itself.

And rather than write out the $$2\times 2$$ DD estimator every time, we will just represent each $$2\times 2$$ as $$\widehat{\delta}_{ab}^{2\times 2,j}$$, where $$a$$ and $$b$$ are the treatment groups and $$j$$ is the index notation for any treatment group. Thus if we wanted to know the $$2\times 2$$ for group $$k$$ compared to group $$U$$, we would write $$\widehat{\delta}_{kU}^{2\times 2,k}$$ or, to save space, just $$\widehat{\delta}_{kU}^{k}$$.

So, let’s get started. First, in a single differential timing design, how many $$2\times 2$$s are there anyway? Turns out there are a lot. To see this, let’s make a toy example. Let’s say there are three timing groups ($$a$$, $$b$$, and $$c$$) and one untreated group $$(U)$$. Then there are 9 $$2\times 2$$ DDs. They are:

- $$a$$ to $$b$$
- $$b$$ to $$a$$
- $$c$$ to $$a$$
- $$a$$ to $$c$$
- $$b$$ to $$c$$
- $$c$$ to $$b$$
- $$a$$ to $$U$$
- $$b$$ to $$U$$
- $$c$$ to $$U$$

See how it works? Okay, then let’s return to our simpler example where there are two timing groups, $$k$$ and $$l$$, and one never-treated group. Groups $$k$$ and $$l$$ get treated at time periods $$t^*_k$$ and $$t^*_l$$. The period before anyone is treated will be called the “pre” period, the period between $$k$$ and $$l$$ being treated is called the “mid” period, and the period after $$l$$ is treated is called the “post” period. This will be much easier to understand with some simple graphs. Let’s look at Figure 9.17.

Recall the definition of a $$2\times 2$$ DD:

$\widehat{\delta}^{2\times 2}_{kU} = \bigg (\overline{y}_k^{\text{post}(k)} - \overline{y}_k^{\text{pre}(k)} \bigg ) - \bigg (\overline{y}_U^{\text{post}(k)} - \overline{y}_U^{\text{pre}(k)} \bigg )$

where $$k$$ and $$U$$ are just place-holders for any of the groups used in a $$2\times 2$$. Substituting the information in each of the four panels of Figure 9.17 into the equation will enable you to calculate what each specific $$2\times 2$$ is.
But we can really just summarize these into three important $$2\times 2$$s:

\begin{align} \widehat{\delta}^{2\times 2}_{kU} &=\bigg ( \overline{y}_k^{\text{post}(k)} - \overline{y}_k^{\text{pre}(k)} \bigg ) - \bigg ( \overline{y}_U^{\text{post}(k)} - \overline{y}_U^{\text{pre}(k)} \bigg ) \\ \widehat{\delta}^{2\times 2}_{kl} &=\bigg ( \overline{y}_k^{\text{mid}(k,l)} - \overline{y}_k^{\text{pre}(k)} \bigg ) - \bigg ( \overline{y}_l^{\text{mid}(k,l)} - \overline{y}_l^{\text{pre}(k)} \bigg ) \\ \widehat{\delta}^{2\times 2}_{lk} &=\bigg ( \overline{y}_l^{\text{post}(l)} - \overline{y}_l^{\text{mid}(k,l)} \bigg ) - \bigg ( \overline{y}_k^{\text{post}(l)} - \overline{y}_k^{\text{mid}(k,l)} \bigg ) \end{align}

where the first $$2\times 2$$ is any timing group ($$k$$ or $$l$$) compared to the untreated group, the second is the earlier-treated group compared to the yet-to-be-treated timing group, and the last is the later-treated group compared to the already-treated control. With this notation in mind, the DD parameter estimate can be decomposed as follows:

\begin{align} \widehat{\delta}^{DD} = \sum_{k \neq U} s_{kU}\widehat{\delta}_{kU}^{2\times 2} + \sum_{k \neq U} \sum_{l>k} s_{kl} \bigg [ \mu_{kl}\widehat{\delta}_{kl}^{2\times 2,k} + (1-\mu_{kl}) \widehat{\delta}_{kl}^{2\times 2,l} \bigg] \end{align}

where the first term combines the $$k$$-compared-to-$$U$$ and $$l$$-compared-to-$$U$$ $$2\times 2$$s (combined to make the equation shorter).148 So what are these weights exactly?

\begin{align} s_{kU} &=\dfrac{ n_k n_U \overline{D}_k (1- \overline{D}_k ) }{ \widehat{Var} ( \tilde{D}_{it} )} \\ s_{kl} &=\dfrac{ n_k n_l (\overline{D}_k - \overline{D}_{l} ) ( 1- ( \overline{D}_k - \overline{D}_{l} )) }{\widehat{Var}(\tilde{D}_{it})} \\ \mu_{kl} &=\dfrac{1 - \overline{D}_k }{1 - ( \overline{D}_k - \overline{D}_{l} )} \end{align}

where the $$n$$ terms refer to group sample sizes, the $$\overline{D}_k (1- \overline{D}_k )$$ and $$(\overline{D}_k - \overline{D}_{l} ) \big( 1- ( \overline{D}_k - \overline{D}_{l} )\big)$$ expressions are treatment variances, and the final equation, $$\mu_{kl}$$, governs how the two timing-group $$2\times 2$$s are weighted against one another.149

Two things immediately pop out of these weights that I’d like to bring to your attention. First, notice how “group” variation matters, as opposed to unit-level variation. The Bacon decomposition shows that it’s group variation that twoway fixed effects uses to calculate the parameter you’re seeking. The more states that adopted a law at the same time, the more they influence the final aggregate estimate. The other thing that matters in these weights is within-group treatment variance. To appreciate the subtlety of what’s implied, ask yourself—how long does a group have to be treated in order to maximize its treatment variance? Define $$X=\overline{D}(1-\overline{D})=\overline{D}-\overline{D}^2$$, take the derivative of $$X$$ with respect to $$\overline{D}$$, set $$\dfrac{d X}{d \overline{D}}=1-2\overline{D}$$ equal to zero, and solve for $$\overline{D}^*$$. Treatment variance is maximized when $$\overline{D}=0.5$$. Let’s look at three values of $$\overline{D}$$ to illustrate this.

$\begin{gather} \overline{D}=0.1; 0.1 \times 0.9 = 0.09 \\ \overline{D}=0.4; 0.4 \times 0.6 =0.24 \\ \overline{D}=0.5; 0.5 \times 0.5 = 0.25\end{gather}$

So what are we learning from this, exactly? We are learning that being treated in the middle of the panel directly influences the numerical value you get when twoway fixed effects is used to estimate the ATT.
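You can check both claims numerically. The sketch below (again my own illustration; the group shares and treatment shares are all assumed values) evaluates $$\overline{D}(1-\overline{D})$$ on a grid and compares the numerators of two $$s_{kU}$$-style weights:

```python
import numpy as np

# Treatment variance D(1 - D) is maximized at D = 0.5
D_bar = np.linspace(0, 1, 11)
print(dict(zip(D_bar.round(1), (D_bar * (1 - D_bar)).round(3))))
# 0.1 -> 0.09, 0.4 -> 0.24, 0.5 -> 0.25, matching the worked values above

# Numerators of two s_kU-style weights under hypothetical group shares;
# the common Var(D~) denominator cancels when comparing weights.
n_k, n_l, n_U = 0.3, 0.3, 0.4   # assumed group shares
D_k, D_l = 0.5, 0.9             # assumed shares of time spent treated
s_kU = n_k * n_U * D_k * (1 - D_k)   # 0.03
s_lU = n_l * n_U * D_l * (1 - D_l)   # 0.0108
print(s_kU, s_lU)  # the group treated mid-panel gets the larger weight
```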
That therefore means lengthening or shortening the panel can actually change the point estimate purely by changing group treatment variance and nothing more. Isn’t that kind of strange, though? What criteria would we even use to determine the best panel length?

But what about the “treated on treated” weights, the $$s_{kl}$$ weight? That doesn’t have a $$\overline{D}(1-\overline{D})$$ expression. Rather, it has a $$(\overline{D}_k - \overline{D}_l)\big(1-(\overline{D}_k - \overline{D}_l)\big)$$ expression. So the “middle” isn’t super clear. That’s because it isn’t the middle of treatment for a single group; rather, it’s the difference in the two groups’ treatment shares that plays the role of $$\overline{D}$$. For instance, let’s say $$k$$ spends 67% of its time treated and $$l$$ spends 15% of its time treated. Then $$\overline{D}_k - \overline{D}_l = 0.52$$, and therefore $$0.52 \times 0.48 = 0.2496$$, which as we showed is very nearly the maximum possible value of the variance (i.e., 0.25). Think about this for a moment—twoway fixed effects with differential timing weights the $$2 \times 2$$s comparing the two treatment groups more heavily when the gap in their treatment time is close to 0.5.

### 9.6.2 Expressing the decomposition in potential outcomes

Up to now, we just showed what is inside the DD parameter estimate when using twoway fixed effects: it is nothing more than an “adding up” of all possible $$2\times 2$$s, weighted by group shares and treatment variance. But that only tells us what DD is numerically; it does not tell us whether the parameter estimate maps onto a meaningful average treatment effect. To do that, we need to take those sample averages and use the switching equation to replace them with potential outcomes. This is key to moving from numbers to estimates of causal effects.

Bacon’s decomposition theorem expresses the DD coefficient in terms of sample averages, making it straightforward to substitute potential outcomes using a modified switching equation. With a little creative manipulation, this will be revelatory. First, let’s define any year-specific ATT as

\begin{align} ATT_k(\tau)=E\big[Y^1_{it}-Y^0_{it} \mathop{\mathrm{\,\vert\,}}k, t=\tau\big] \end{align}

Next, let’s define it over a time window $$W$$ (e.g., a post-treatment window):

\begin{align} ATT_k(W)=E\big[Y^1_{it}-Y^0_{it} \mathop{\mathrm{\,\vert\,}}k,\tau\in W\big] \end{align}

Finally, let’s define differences in average potential outcomes over time as

\begin{align} \Delta Y^h_k(W_1,W_0) = E\big[Y^h_{it} \mathop{\mathrm{\,\vert\,}}k, W_1\big]- E\big[Y^h_{it} \mathop{\mathrm{\,\vert\,}}k, W_0\big] \end{align}

for $$h=0$$ (i.e., $$Y^0$$) or $$h=1$$ (i.e., $$Y^1$$). With trends, differences in mean potential outcomes are non-zero. You can see that in Figure 9.18. We’ll return to this, but I just wanted to point it out to you so that it would be concrete in your mind when we return to it later.

We can move now from the $$2\times 2$$s that we decomposed earlier directly into the ATT, which is ultimately the main thing we want to know. We covered this earlier in the chapter, but let’s review it again here to maintain progress on my argument. I will first write down the $$2\times 2$$ expression, use the switching equation to introduce potential outcome notation, and, through a little manipulation, find some ATT expression.
\begin{align} \widehat{\delta}^{2\times 2}_{kU} &=\bigg (E\big[Y_k \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big]- E\big[Y_k \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Pre}}\big] \bigg ) - \bigg( E\big[Y_U \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big]- E\big[Y_U \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Pre}}\big]\bigg)\\ &=\bigg ( \underbrace{E\big[Y^1_k \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big] - E\big[Y^0_k \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Pre}}\big]\bigg)- \bigg(E\big[Y^0_U \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big]- E\big[Y^0_U \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Pre}}\big]}_{\text{Switching equation}} \bigg)\\ &\quad + \underbrace{E\big[Y_k^0 \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big] - E\big[Y^0_k \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big]}_{\text{Adding zero}}\\ &=\underbrace{E\big[Y^1_k \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big] - E\big[Y^0_k \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big]}_{\text{ATT}} \\ &\quad +\bigg [ \underbrace{E\big[Y^0_k \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big] - E\big[Y^0_k \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Pre}}\big]\bigg]- \bigg [E\big[Y^0_U \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Post}}\big] - E\big[Y_U^0 \mathop{\mathrm{\,\vert\,}}\mathop{\mathrm{Pre}}\big] }_{\text{Non-parallel-trends bias in } 2\times 2 \text{ case}} \bigg ] \end{align}

This can be rewritten even more compactly as:

\begin{align} \widehat{\delta}^{2\times 2}_{kU} = ATT_k(\mathop{\mathrm{Post}}) + \underbrace{\Delta Y^0_k(\mathop{\mathrm{Post}},\mathop{\mathrm{Pre}}) - \Delta Y^0_U(\mathop{\mathrm{Post}},\mathop{\mathrm{Pre}})}_{\text{Selection bias!}} \end{align}

The $$2\times 2$$ DD can be expressed as the sum of the ATT itself and a selection-bias term; only under parallel trends does that second term vanish, and without parallel trends, the estimator is biased. Ask yourself—which of these two differences in the selection-bias term is counterfactual, $$\Delta Y^0_k(\mathop{\mathrm{Post}},\mathop{\mathrm{Pre}})$$ or $$\Delta Y^0_U(\mathop{\mathrm{Post}},\mathop{\mathrm{Pre}})$$? Which one is observed, in other words, and which one is not? Look and see if you can figure it out from the drawing in Figure 9.19. Only if these are parallel—the counterfactual trend and the observable trend—does the selection-bias term zero out and the ATT become identified.

But let’s keep looking within the decomposition, as we aren’t done. The other two $$2\times 2$$s need to be defined, since they appear in Bacon’s decomposition also. And they are:

\begin{align} \widehat{\delta}^{2\times 2}_{kU} &= ATT_k(\mathop{\mathrm{Post}}(k)) + \Delta Y^0_k(\mathop{\mathrm{Post}}(k),\mathop{\mathrm{Pre}}(k)) - \Delta Y^0_U(\mathop{\mathrm{Post}}(k),\mathop{\mathrm{Pre}}(k)) \\ \widehat{\delta}^{2\times 2}_{kl} &= ATT_k(\mathrm{Mid}) + \Delta Y^0_k(\mathrm{Mid},\mathop{\mathrm{Pre}}) - \Delta Y^0_l(\mathrm{Mid}, \mathop{\mathrm{Pre}}) \end{align}

These look the same because you’re always comparing the treated group with an untreated group (though in the second case it’s just that the untreated group hasn’t been treated yet). But what about the $$2\times 2$$ that compared the late group to the already-treated early group?
With a series of substitutions like the ones we just did, we get:

\begin{align} \widehat{\delta}^{2\times 2}_{lk} &= ATT_{l,\mathop{\mathrm{Post}}(l)} \nonumber \\ &\quad + \underbrace{\Delta Y^0_l(\mathop{\mathrm{Post}}(l),\mathrm{Mid}) - \Delta Y^0_k (\mathop{\mathrm{Post}}(l), \mathrm{Mid})}_{\text{Parallel-trends bias}} \nonumber \\ &\quad - \underbrace{\big(ATT_k(\mathop{\mathrm{Post}}) - ATT_k(\mathrm{Mid})\big)}_{\text{Heterogeneity-in-time bias!}} \end{align}

I find it interesting that our earlier decomposition of the simple difference in means into $$ATE$$ $$+$$ selection bias $$+$$ heterogeneous-treatment-effects bias resembles this decomposition of the late-to-early $$2\times 2$$ DD. The first line is the $$ATT$$ that we desperately hope to identify. The selection bias zeroes out insofar as $$Y^0$$ for $$k$$ and $$l$$ follows parallel trends from the mid to the post period. And the treatment-effects bias in the third line zeroes out so long as treatment effects are constant for a group over time. But if there is heterogeneity in time for a group, then the two $$ATT$$ terms will not be the same, and therefore will not zero out. We can sign the bias, though, if we are willing to assume monotonicity, which means the $$\mathrm{Mid}$$ term is smaller in absolute value than the $$\mathop{\mathrm{Post}}$$ term. Under monotonicity, the interior of the parentheses in the third line is positive, and therefore the bias is negative. For positive ATT, this will bias the effects towards zero, and for negative ATT, it will cause the estimated ATT to become even more negative.

Let’s pause and collect these terms. The decomposition formula for DD is:

\begin{align} \widehat{\delta}^{DD} = \sum_{k \neq U} s_{kU}\widehat{\delta}_{kU}^{2\times 2} + \sum_{k \neq U} \sum_{l>k} s_{kl} \bigg[ \mu_{kl}\widehat{\delta}_{kl}^{2\times 2,k} + (1-\mu_{kl}) \widehat{\delta}_{kl}^{2\times 2,l} \bigg] \end{align}

We will substitute the following three expressions into that formula:

\begin{align} \widehat{\delta}_{kU}^{2\times 2} &= ATT_k(\mathop{\mathrm{Post}})+\Delta Y_k^0(\mathop{\mathrm{Post}},\mathop{\mathrm{Pre}})- \Delta Y_U^0(\mathop{\mathrm{Post}},\mathop{\mathrm{Pre}}) \\ \widehat{\delta}_{kl}^{2\times 2,k} &=ATT_k(\mathrm{Mid})+\Delta Y_k^0(\mathrm{Mid},\mathop{\mathrm{Pre}})-\Delta Y_l^0(\mathrm{Mid}, \mathop{\mathrm{Pre}}) \\ \widehat{\delta}^{2\times 2,l}_{lk} &=ATT_{l}(\mathop{\mathrm{Post}}(l))+\Delta Y^0_l(\mathop{\mathrm{Post}}(l), \mathrm{Mid})-\Delta Y^0_k (\mathop{\mathrm{Post}}(l), \mathrm{Mid}) \\ &\quad - \big(ATT_k(\mathop{\mathrm{Post}})-ATT_k(\mathrm{Mid})\big) \end{align}

Substituting all three terms into the decomposition formula is a bit overwhelming, so let’s simplify the notation. The estimated DD parameter is equal to:

$\mathop{\mathrm{plim}}_{n\to\infty}\widehat{\delta}^{DD} = VWATT + VWCT - \Delta ATT$

In the next few sections, I discuss each individual element of this expression.

### 9.6.3 Variance weighted ATT

We begin by discussing the variance-weighted average treatment effect on the treated, or $$VWATT$$. Its unpacked expression is:

\begin{align} VWATT &=\sum_{k\neq U}\sigma_{kU}ATT_k(\mathop{\mathrm{Post}}(k)) \\ &\quad +\sum_{k \neq U} \sum_{l>k} \sigma_{kl} \bigg [ \mu_{kl} ATT_k (\mathrm{Mid})+ (1-\mu_{kl}) ATT_{l} (\mathop{\mathrm{Post}}(l)) \bigg ] \end{align}

where the $$\sigma$$ terms are like the $$s$$ terms, only population quantities rather than sample ones. Notice that the VWATT simply contains the three ATTs identified above, each of which is weighted by the weights contained in the decomposition formula.
While these weights sum to one, the weighting is irrelevant if the ATTs are identical.150 When I learned that the DD coefficient was a weighted average of all the individual $$2\times 2$$s, I was not terribly surprised. I may not have intuitively known that the weights were based on group shares and treatment variance, but I figured it was probably a weighted average nonetheless. I did not have that same experience, though, when I worked through the other two terms. I now turn to the other two terms: the VWCT and the $$\Delta ATT$$.

### 9.6.5 ATT heterogeneity within time bias

When we decomposed the simple difference in mean outcomes into the sum of the ATE, selection bias, and heterogeneous-treatment-effects bias, it really wasn’t a huge headache. That was because if the ATT differed from the ATU, then the simple difference in mean outcomes became the sum of ATT and selection bias, which was still an interesting parameter. But in the Bacon decomposition, ATT heterogeneity over time introduces bias that is not so benign. Let’s look at what happens when there are time-variant within-group treatment effects:

\begin{align} \Delta ATT = \sum_{k \neq U} \sum_{l>k} (1 - \mu_{kl}) \Big[ ATT_k(\mathop{\mathrm{Post}}(l)) - ATT_k(\mathrm{Mid}) \Big] \end{align}

Heterogeneity in the ATT has two interpretations: you can have heterogeneous treatment effects across groups, and you can have heterogeneous treatment effects within groups over time. The $$\Delta ATT$$ is concerned with the latter only. The first case is heterogeneity across groups but not within a group over time. When there is heterogeneity across groups, the VWATT is simply the average over group-specific ATTs, weighted by a function of sample shares and treatment variance. There is no bias from this kind of heterogeneity.151

But it’s the second case—when $$ATT$$ is constant across units but heterogeneous within groups over time—where things get a little worrisome. Time-varying treatment effects, even if they are identical across units, generate cross-group heterogeneity because of the differing post-treatment windows and the fact that earlier-treated groups serve as controls for later-treated groups. Let’s consider a case where the counterfactual outcomes are identical, but the treatment effect is a linear break in the trend (Figure 9.20). For instance, $$Y^1_{it} = Y^0_{it} + \theta (t-t^*_1+1)$$, similar to Meer and West (2016). Notice how the first $$2\times 2$$ uses the later group as its control in the middle period, but in the late period, the later-treated group uses the earlier-treated group as its control.

When is this a problem? It’s a problem if there are a lot of those $$2\times 2$$s or if their weights are large. If they are a negligible portion of the estimate, because their weights are small (group shares are an important piece of the weighting, not just treatment variance), then even if the bias exists, it may be small. But let’s say that doesn’t hold. Then what is going on? The effect is biased because the control group is experiencing a trend in outcomes (e.g., heterogeneous treatment effects), and this bias feeds through to the later $$2\times 2$$ according to the size of the weights, $$(1-\mu_{kl})$$. We will need to correct for this if our plan is to stick with the twoway fixed effects estimator.
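Here is a simulation sketch of that bias (my own illustration, with a trend-break effect $$\theta(t - t^* + 1)$$ in the spirit of Meer and West (2016); all parameter values are assumed). Counterfactual trends are identical across groups by construction, yet because the early-treated group serves as a control for the later-treated group while its own effect is still growing, the twoway fixed effects coefficient comes in well below the true ATT.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Dynamic treatment effect theta*(t - t* + 1): identical across units,
# but heterogeneous within groups over time.
np.random.seed(1)
theta = 2.0
rows = []
for g, t_star in [("k", 4), ("l", 8), ("U", np.inf)]:
    for unit in range(50):
        alpha_i = np.random.normal()              # unit fixed effect
        for t in range(12):
            d = 1.0 if t >= t_star else 0.0
            effect = theta * (t - t_star + 1) if d else 0.0  # grows over time
            y = alpha_i + 0.3 * t + effect + np.random.normal(scale=0.1)
            rows.append({"unit": f"{g}{unit}", "t": t, "d": d,
                         "true_effect": effect, "y": y})
panel = pd.DataFrame(rows)

twfe = smf.ols("y ~ d + C(unit) + C(t)", data=panel).fit()
true_att = panel.loc[panel.d == 1, "true_effect"].mean()
print(twfe.params["d"], true_att)  # TWFE estimate sits well below the true ATT
```

Now it’s time to use what we’ve learned.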
Let’s look at an interesting and important paper by Cheng and Hoekstra (2013), both to learn more about a DD paper and to replicate it using event studies and the Bacon decomposition.

### 9.6.6 Castle-doctrine statutes and homicides

Cheng and Hoekstra (2013) evaluated the impact that a gun reform had on violence, and their paper is useful for illustrating various principles and practices regarding differential timing. I’d like to discuss those principles in the context of this paper. This next section will discuss, extend, and replicate various parts of this study.

Trayvon Benjamin Martin was a 17-year-old African-American young man when George Zimmerman shot and killed him in Sanford, Florida, on February 26, 2012. Martin was walking home alone from a convenience store when Zimmerman spotted him, followed him from a distance, and reported him to the police. He said he found Martin’s behavior “suspicious,” and though police officers urged Zimmerman to stay back, Zimmerman stalked and eventually provoked Martin. An altercation occurred, and Zimmerman fatally shot Martin. Zimmerman claimed self-defense and was nonetheless charged with Martin’s death. A jury acquitted him of second-degree murder and of manslaughter.

Zimmerman’s actions were interpreted by the jury to be legal because in 2005, Florida reformed when and where lethal self-defense could be used. Whereas once lethal self-defense was only legal inside the home, a new law, “Stand Your Ground,” had extended that right to other public places. Between 2000 and 2010, twenty-one states explicitly expanded their castle-doctrine statutes by extending the places outside the home where lethal force could be legally used.152 These states removed a long-standing tradition in the common law that placed the duty to retreat from danger on the victim. After these reforms, victims no longer had a duty to retreat in public places if they felt threatened; they could respond with lethal self-defense. Other changes were also made. In some states, individuals who used lethal force outside the home were assumed to be reasonably afraid. Thus, a prosecutor would have to prove fear was not reasonable, allegedly an almost impossible task. Civil liability for those acting under these expansions was also removed. As civil liability is a lower threshold of guilt than criminal guilt, this effectively removed the remaining constraint that might keep someone from using lethal force outside the home.

From an economic perspective, these reforms lowered the cost of killing someone. People could use lethal self-defense in situations from which they had previously been barred. And as there was no civil liability, the expected cost of killing someone was now lower. Thus, insofar as people are sensitive to incentives, then depending on the elasticity of lethal self-defense with respect to its cost, we would expect an increase in lethal violence for the marginal victim. The reforms may have, in other words, caused homicides to rise.

One can divide lethal force into true and false positives. The true-positive use of lethal force covers those situations in which, had the person not used lethal force, he or she would have been murdered. Thus, the true-positive case of lethal force is simply a transfer of one life (the offender) for another (the defender). This is tragic, but official statistics would not record a net increase in homicides relative to the counterfactual—only which person had been killed. But a false positive causes a net increase in homicides relative to the counterfactual.
Some arguments can escalate unnecessarily, and yet under common law, the duty to retreat would have defused the situation before it spilled over into lethal force. Now, though, under these castle-doctrine reforms, that safety valve is removed, and thus a killing occurs that would not have occurred in the counterfactual, leading to a net increase in homicides.

But that is not the only possible impact of the reforms—deterrence of violence is also a possibility. In Lott and Mustard (1997), the authors found that concealed-carry laws reduced violence. They suggested this was caused by deterrence: thinking someone may be carrying a concealed weapon, the rational criminal is deterred from committing a crime. Deterrence dates back to G. Becker (1968) and Jeremy Bentham before him. Expanding the arenas where lethal force could be used might also deter crime. Since this theoretical possibility depends crucially on key elasticities, which may in fact be zero, deterrence from expanding where guns can be used to kill someone is ultimately an empirical question.

Cheng and Hoekstra (2013) chose a difference-in-differences design for their project, where the castle-doctrine law was the treatment and timing was differential across states. Their estimating equation was

$Y_{it}=\alpha + \delta D_{it} + \gamma X_{it} + \sigma_i + \tau_t + \varepsilon_{it}$

where $$D_{it}$$ is the treatment parameter. They estimated this equation using a standard twoway fixed effects model as well as count models. Ordinarily, the treatment variable will be a 0 or 1, but in Cheng and Hoekstra (2013), it’s a variable ranging from 0 to 1, because some states got the law change mid-year. So if a state got the law in July, then $$D_{it}$$ equals 0 before the year of adoption, 0.5 in the year of adoption, and 1 thereafter. The $$X_{it}$$ vector included a particular kind of control that they called “region-by-year fixed effects”: a vector of dummies for the census region to which the state belongs, interacted with each year fixed effect. This was done so that explicit counterfactuals were forced to come from within the same census region.153 As the results are not dramatically different between their twoway fixed effects and count models, I will tend to emphasize results from the twoway fixed effects.

The data they used are somewhat standard in crime studies. They used the FBI Uniform Crime Reports Summary Part I files from 2000 to 2010. The FBI Uniform Crime Reports is a harmonized data set on eight “index” crimes collected from voluntarily participating police agencies across the country. Participation is high and the data go back many decades, making the files attractive for many contemporary questions regarding crime policy. Crimes were converted into rates, or “offenses per 100,000 population.”

Cheng and Hoekstra (2013) rhetorically open their study with a series of simple placebos to check whether the reforms were spuriously correlated with crime trends more generally. Since oftentimes many crimes are correlated because of unobserved factors, this has some appeal, as it rules out the possibility that the laws were simply being adopted in areas where crime rates were already rising. For their falsifications they chose motor vehicle thefts and larcenies, neither of which, they reasoned, should be credibly connected to lowering the cost of using lethal force in public. There are so many regression coefficients in Table 9.5 because applied microeconomists like to report results under increasingly restrictive models.
The data they used are fairly standard in crime studies: the FBI Uniform Crime Reports Summary Part I files from 2000 to 2010. The FBI Uniform Crime Reports is a harmonized data set on eight “index” crimes collected from voluntarily participating police agencies across the country. Participation is high and the data go back many decades, making them attractive for many contemporary questions regarding crime policy. Crimes were converted into rates, or “offenses per 100,000 population.”

Cheng and Hoekstra (2013) rhetorically open their study with a series of simple placebo tests to check whether the reforms were spuriously correlated with crime trends more generally. Since many crimes are correlated because of unobserved factors, this has some appeal, as it rules out the possibility that the laws were simply being adopted in places where crime rates were already rising. For their falsifications they chose motor vehicle thefts and larcenies, neither of which, they reasoned, should be credibly connected to lowering the cost of using lethal force in public. There are many regression coefficients in Table 9.5 because applied microeconomists like to report results under increasingly restrictive models. In this case, each column is a new regression with additional controls, such as additional fixed-effects specifications, time-varying controls, a pre-treatment lead to check for differences in outcomes before adoption, and state-specific trends. As you can see, many of these coefficients are very small, and because they are small, even large standard errors yield ranges of estimates that are still not very large.

Table 9.5: Falsification tests: the effect of castle-doctrine laws on larceny and motor vehicle theft (OLS, weighted by state population).

*Panel A: Log(Larceny Rate)*

| | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| Castle-doctrine law | 0.00300 | $$-0.00600$$ | $$-0.00910$$ | $$-0.0858$$ | $$-0.00401$$ | $$-0.00284$$ |
| | (0.0161) | (0.0147) | (0.0139) | (0.0139) | (0.0128) | (0.0180) |
| 0 to 2 years before adoption of castle-doctrine law | | | | 0.00112 | | |
| | | | | (0.0105) | | |
| Observations | 550 | 550 | 550 | 550 | 550 | 550 |

*Panel B: Log(Motor Vehicle Theft Rate)*

| | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| Castle-doctrine law | 0.0517 | $$-0.0389$$ | $$-0.0252$$ | $$-0.0294$$ | $$-0.0165$$ | $$-0.00708$$ |
| | (0.0563) | (0.448) | (0.0396) | (0.0469) | (0.0354) | (0.0372) |
| 0 to 2 years before adoption of castle-doctrine law | | | | $$-0.00896$$ | | |
| | | | | (0.0216) | | |
| Observations | 550 | 550 | 550 | 550 | 550 | 550 |
| State and year fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Region-by-year fixed effects | | Yes | Yes | Yes | Yes | Yes |
| Time-varying controls | | | Yes | Yes | Yes | Yes |
| Controls for larceny or motor theft | | | | | Yes | |
| State-specific linear time trends | | | | | | Yes |

Each column in each panel represents a separate regression. The unit of observation is state-year. Robust standard errors, clustered at the state level, are in parentheses. Time-varying controls include policing and incarceration rates, welfare and public-assistance spending, median income, poverty rate, unemployment rate, and demographics. $$^{*}$$ Significant at the 10 percent level. $$^{**}$$ Significant at the 5 percent level. $$^{***}$$ Significant at the 1 percent level.

Next they look at what they consider to be crimes that might be deterred if the policy created a credible threat of lethal retaliation in public: burglary, robbery, and aggravated assault. Insofar as castle doctrine has a deterrence effect, we would expect a negative effect of the law on these offenses. But all of the coefficients shown in Table 9.6 are actually positive, and very few are statistically significant even then. So the authors conclude they cannot detect any deterrence—which does not mean it didn’t happen, only that they cannot detect it in these data.

Table 9.6: The deterrence effects of castle-doctrine laws: burglary, robbery, and aggravated assault (OLS, weighted by state population).

*Panel A: Log(Burglary Rate)*

| | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| Castle-doctrine law | 0.0780$$^{***}$$ | 0.0290 | 0.0223 | 0.0181 | 0.0327$$^{*}$$ | 0.0237 |
| | (0.0255) | (0.0236) | (0.0223) | (0.0265) | (0.0165) | (0.0207) |
| 0 to 2 years before adoption of castle-doctrine law | | | | $$-0.009606$$ | | |
| | | | | (0.0133) | | |

*Panel B: Log(Robbery Rate)*

| | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| Castle-doctrine law | 0.0408 | 0.0344 | 0.0262 | 0.0197 | 0.0376$$^{**}$$ | 0.0515$$^{*}$$ |
| | (0.0254) | (0.0224) | (0.0229) | (0.0257) | (0.0181) | (0.0274) |
| 0 to 2 years before adoption of castle-doctrine law | | | | $$-0.0138$$ | | |
| | | | | (0.0153) | | |

*Panel C: Log(Aggravated Assault Rate)*

| | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| Castle-doctrine law | 0.0434 | 0.0397 | 0.0372 | 0.0330 | 0.0424 | 0.0414 |
| | (0.0387) | (0.0407) | (0.0319) | (0.0367) | (0.0291) | (0.0285) |
| 0 to 2 years before adoption of castle-doctrine law | | | | $$-0.00897$$ | | |
| | | | | (0.0216) | | |
| Observations | 550 | 550 | 550 | 550 | 550 | 550 |
| State and year fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Region-by-year fixed effects | | Yes | Yes | Yes | Yes | Yes |
| Time-varying controls | | | Yes | Yes | Yes | Yes |
| Controls for larceny or motor theft | | | | | Yes | |
| State-specific linear time trends | | | | | | Yes |

Each column in each panel represents a separate regression. The unit of observation is state-year. Robust standard errors, clustered at the state level, are in parentheses. Time-varying controls include policing and incarceration rates, welfare and public-assistance spending, median income, poverty rate, unemployment rate, and demographics. $$^{*}$$ Significant at the 10 percent level. $$^{**}$$ Significant at the 5 percent level. $$^{***}$$ Significant at the 1 percent level.

Now they move to their main results, which is interesting because it’s much more common for authors to lead with their main results. But the rhetoric of this paper is somewhat original in that respect. By this point, the reader has seen a lot of null effects from the laws and may be wondering, “What’s going on? This law isn’t spurious and isn’t causing deterrence. Why am I reading this paper?”

The first thing the authors did was show a series of figures plotting the raw homicide data for treatment and control states. This is always a challenge when working with differential timing, though. For instance, approximately twenty states adopted a castle-doctrine law from 2005 to 2010, but not at the same time. So how are you going to show this visually? What, for instance, is the pre-treatment period for the control group when there is differential timing? If one state adopts in 2005 but another in 2006, then what precisely is the pre- and post-treatment period for the control group? That’s a bit of a challenge, and yet if you stick with our guiding principle that causal inference studies desperately need data visualization of the main effects, your job is to solve it with creativity and honesty so as to make beautiful figures. Cheng and Hoekstra (2013) could’ve presented regression coefficients on leads and lags, as that is very commonly done, but knowing these authors firsthand, their preference is to give the reader pictures of the raw data and be as transparent as possible. They therefore showed multiple figures in which each figure compared a “treatment group” to all the “never-treated” units.

Figure 9.21 shows the Florida case. Notice that before the passage of the law, offenses are fairly flat for treatment and control. Obviously, as I’ve emphasized, this is not a direct test of the parallel-trends assumption: parallel trends in the pre-treatment period are neither necessary nor sufficient. The identifying assumption, recall, is variance-weighted common trends, which is based entirely on parallel counterfactual trends, not pre-treatment trends. But researchers use parallel pre-treatment trends as a hunch that the counterfactual trends would have been parallel. In one sense, parallel pre-treatment trends rule out some obvious spurious factors we should worry about, such as the law’s adoption coinciding with an existing change in homicide trends. But that’s clearly not happening here—homicides weren’t diverging from controls pre-treatment. They were following a similar trajectory before Florida passed its law, and only afterwards did the trends diverge. Notice that after 2005, which is when the law occurs, there’s a sizable jump in homicides. There are additional figures like this, and they all have the same setup: they show a treatment group over time compared to the same “never-treated” group.
Table 9.7: The effect of castle-doctrine laws on homicide. *Panel A: Log(Homicide Rate), OLS weighted by state population.*

| | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| Castle-doctrine law | 0.0801$$^{**}$$ | 0.0946$$^{***}$$ | 0.0937$$^{***}$$ | 0.0955$$^{**}$$ | 0.0985$$^{***}$$ | 0.100$$^{**}$$ |
| | (0.0342) | (0.0279) | (0.0290) | (0.0367) | (0.0299) | (0.0388) |
| 0 to 2 years before adoption of castle-doctrine law | | | | 0.00398 | | |
| | | | | (0.0222) | | |
| Observations | 550 | 550 | 550 | 550 | 550 | 550 |
| State and year fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Region-by-year fixed effects | | Yes | Yes | Yes | Yes | Yes |
| Time-varying controls | | | Yes | Yes | Yes | Yes |
| Controls for larceny or motor theft | | | | | Yes | |
| State-specific linear time trends | | | | | | Yes |

Each column represents a separate regression. The unit of observation is state-year. Robust standard errors, clustered at the state level, are in parentheses. Time-varying controls include policing and incarceration rates, welfare and public-assistance spending, median income, poverty rate, unemployment rate, and demographics. $$^{**}$$ Significant at the 5 percent level. $$^{***}$$ Significant at the 1 percent level.

Insofar as the cost of using lethal force has fallen, we expect to see more of it, which implies a positive coefficient on the estimated $$\delta$$ term—assuming the heterogeneity bias we discussed earlier doesn’t cause the twoway fixed effects coefficient to flip signs. It should be different from zero both statistically and in a meaningful magnitude. The authors present four separate types of specifications—three using OLS, one using a negative binomial model—but I will only report the weighted OLS regressions for the sake of space. There’s a lot of information in Table 9.7, so let’s be sure not to get lost. First, all coefficients are positive and similar in magnitude—between 8% and 10% increases in homicides. Second, three of the four panels are almost entirely significant. It appears that the bulk of their evidence suggests the castle-doctrine statutes caused an increase in homicides of around 8%.

Not satisfied, the authors implemented a kind of randomization-inference-based test. Specifically, they moved the eleven-year panel back in time over 1960–2009 and estimated forty placebo “effects” of passing castle doctrine one to forty years earlier. When they did this, they found that the average effect from this exercise was essentially zero. Those results are summarized in the following table of randomization inference averages:

| Method | Average estimate | Estimates larger than actual estimate |
|---|---|---|
| Weighted OLS | $$-0.003$$ | 0/40 |
| Unweighted OLS | 0.001 | 1/40 |
| Negative binomial | 0.001 | 0/40 |

It appears there is something statistically unusual about the actual treatment profile compared to the placebo profiles, because the actual profile yields effect sizes larger than all but one of the placebo regressions run.
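The logic of that placebo exercise is easy to sketch in code. Below is a minimal R illustration of the general idea—my own sketch, not the authors' code—assuming a hypothetical long panel `crime_long` with columns `sid`, `year`, `l_homicide`, and each state's true `adopt_year`: for each shift $$k$$, slide the eleven-year window back $$k$$ years, pretend adoption happened $$k$$ years earlier, and re-estimate the twoway fixed-effects model.

```r
library(lfe)  # for felm, used throughout this chapter

placebo_effects <- sapply(1:40, function(k) {
  # slide the 2000-2010 window back k years
  df <- subset(crime_long, year >= 2000 - k & year <= 2010 - k)
  # pretend each adopter passed its law k years earlier than it really did
  df$post <- as.numeric(!is.na(df$adopt_year) & df$year >= df$adopt_year - k)
  fit <- felm(l_homicide ~ post | sid + year | 0 | sid, data = df)
  coef(fit)["post"]
})

mean(placebo_effects)                    # should be near zero
mean(placebo_effects >= actual_estimate) # share of placebos beating the real effect
```

Here `actual_estimate` stands in for the estimate from the true treatment dates, and the windowing convention is my guess at the authors' procedure rather than a reproduction of it.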
Cheng and Hoekstra (2013) found no evidence that castle-doctrine laws deter violent offenses, but they did find that the laws increased homicides. An 8% net increase in homicide rates translates to around six hundred additional homicides per year across the twenty-one adopting states. Thinking back to the killing of Trayvon Martin by George Zimmerman, one is left to wonder whether Trayvon might still be alive had Florida not passed Stand Your Ground. This kind of counterfactual reasoning can drive you crazy, because it is unanswerable—we simply don’t know, cannot know, and never will know the answer to counterfactual questions. The fundamental problem of causal inference states that we would need to know what would have happened that fateful night without Stand Your Ground, and compare it with what happened with Stand Your Ground, to know what can and cannot be placed at the feet of that law. What we do know is that under certain assumptions related to the DD design, homicides were on net around 8%–10% higher than they would’ve been when compared against explicit counterfactuals. And while that doesn’t answer every question, it suggests that a nontrivial number of deaths can be blamed on laws similar to Stand Your Ground.

### 9.6.7 Replicating Cheng and Hoekstra (2013), sort of

Now that we’ve discussed Cheng and Hoekstra (2013), I want to replicate it, or at least do some work with their data set to illustrate certain things we’ve discussed, like event studies and the Bacon decomposition. This analysis will be slightly different from what they did, though, because their policy variable was on the interval $$[0,1]$$ rather than being a pure dummy. That’s because they carefully defined their policy variable as the fraction of the year in which the law was in effect. So if a state passed the law in June, they would assign a value of roughly 0.5 in the first year and 1 thereafter. While there’s nothing wrong with that approach, I am going to use a dummy, because it makes the event studies a bit easier to visualize and because the Bacon decomposition only works with dummy policy variables. First, I will replicate their main homicide results from Panel A, column 6, of Table 9.7.

Stata Code castle_1.do

```stata
use https://github.com/scunning1975/mixtape/raw/master/castle.dta, clear
set scheme cleanplots
* ssc install bacondecomp

* define global macros
global crime1 jhcitizen_c jhpolice_c murder homicide robbery assault burglary larceny motor robbery_gun_r
global demo blackm_15_24 whitem_15_24 blackm_25_44 whitem_25_44  // demographics
global lintrend trend_1-trend_51   // state linear trends
global region r20001-r20104        // region-quarter fixed effects
global exocrime l_larceny l_motor  // exogenous crime rates
global spending l_exp_subsidy l_exp_pubwelfare
global xvar l_police unemployrt poverty l_income l_prisoner l_lagprisoner $demo $spending

label variable post "Year of treatment"

xi: xtreg l_homicide i.year $region $xvar $lintrend post [aweight=popwt], fe vce(cluster sid)
```

R Code castle_1.R

```r
library(bacondecomp)
library(tidyverse)
library(haven)
library(lfe)

read_data <- function(df) {
  full_path <- paste("https://raw.github.com/scunning1975/mixtape/master/", df, sep = "")
  df <- read_dta(full_path)
  return(df)
}

castle <- read_data("castle.dta")

#--- global variables
crime1 <- c("jhcitizen_c", "jhpolice_c", "murder", "homicide", "robbery",
            "assault", "burglary", "larceny", "motor", "robbery_gun_r")
demo <- c("blackm_15_24", "whitem_15_24", "blackm_25_44", "whitem_25_44")

# variables dropped to prevent colinearity
dropped_vars <- c("r20004", "r20014", "r20024", "r20034", "r20044",
                  "r20054", "r20064", "r20074", "r20084", "r20094",
                  "r20101", "r20102", "r20103", "r20104",
                  "trend_9", "trend_46", "trend_49", "trend_50", "trend_51")

lintrend <- castle %>%
  select(starts_with("trend")) %>%
  colnames %>%
  subset(., !. %in% dropped_vars)  # remove due to colinearity

region <- castle %>%
  select(starts_with("r20")) %>%
  colnames %>%
  subset(., !. %in% dropped_vars)  # remove due to colinearity

exocrime <- c("l_larceny", "l_motor")
spending <- c("l_exp_subsidy", "l_exp_pubwelfare")

xvar <- c(
  "blackm_15_24", "whitem_15_24", "blackm_25_44", "whitem_25_44",
  "l_exp_subsidy", "l_exp_pubwelfare",
  "l_police", "unemployrt", "poverty",
  "l_income", "l_prisoner", "l_lagprisoner"
)

law <- c("cdl")

dd_formula <- as.formula(
  paste("l_homicide ~ ",
        paste(
          paste(xvar, collapse = " + "),
          paste(region, collapse = " + "),
          paste(lintrend, collapse = " + "),
          paste("post", collapse = " + "), sep = " + "),
        "| year + sid | 0 | sid"
  )
)

# Fixed-effects regression using post as the treatment variable
dd_reg <- felm(dd_formula, weights = castle$popwt, data = castle)
summary(dd_reg)
```

Python Code castle_1.py

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# read data
import ssl
ssl._create_default_https_context = ssl._create_unverified_context

def read_data(file):
    return pd.read_stata("https://raw.github.com/scunning1975/mixtape/master/" + file)

castle = read_data('castle.dta')

crime1 = ("jhcitizen_c", "jhpolice_c", "murder", "homicide", "robbery",
          "assault", "burglary", "larceny", "motor", "robbery_gun_r")
demo = ("blackm_15_24", "whitem_15_24", "blackm_25_44", "whitem_25_44")

# variables dropped to prevent colinearity
dropped_vars = ("r20004", "r20014", "r20024", "r20034", "r20044",
                "r20054", "r20064", "r20074", "r20084", "r20094",
                "r20101", "r20102", "r20103", "r20104",
                "trend_9", "trend_46", "trend_49", "trend_50", "trend_51")

cols = pd.Series(castle.columns)
region = set(cols[cols.str.contains('^r20')]) - set(dropped_vars)
lintrend = set(cols[cols.str.contains('^trend')]) - set(dropped_vars)

exocrime = ("l_larceny", "l_motor")
spending = ("l_exp_subsidy", "l_exp_pubwelfare")

xvar = (
    "blackm_15_24", "whitem_15_24", "blackm_25_44", "whitem_25_44",
    "l_exp_subsidy", "l_exp_pubwelfare",
    "l_police", "unemployrt", "poverty",
    "l_income", "l_prisoner", "l_lagprisoner"
)

law = ("cdl",)

dd_formula = "l_homicide ~ {} + {} + {} + post + C(year) + C(sid)".format(
    "+".join(xvar), "+".join(region), "+".join(lintrend))

# Fixed-effects regression using post as the treatment variable
dd_reg = smf.wls(dd_formula, data=castle,
                 weights=castle['popwt']).fit(cov_type='cluster',
                                              cov_kwds={'groups': castle['sid']})
dd_reg.summary()
```

Here we see the main result: castle-doctrine expansions led to an approximately 10% increase in homicides. And if we use the post dummy—which is essentially equal to 0 unless the state had a fully covered castle-doctrine expansion—the effect is more like 7.6%. But now I’d like to go beyond their study and implement an event study. First, we need to define pre-treatment leads and post-treatment lags. To do this, we use a “time_til” variable, which is the number of years until or after the state received the treatment. Using this variable, we then create the leads (the years prior to treatment) and the lags (the years post-treatment).
Stata Code castle_2.do

```stata
* Event study regression with the year of treatment (lag0) as the omitted category
xi: xtreg l_homicide i.year $region lead9 lead8 lead7 lead6 lead5 lead4 lead3 lead2 lead1 lag1-lag5 [aweight=popwt], fe vce(cluster sid)
```

R Code castle_2.R

```r
castle <- castle %>%
  mutate(
    time_til = year - treatment_date,
    lead1 = case_when(time_til == -1 ~ 1, TRUE ~ 0),
    lead2 = case_when(time_til == -2 ~ 1, TRUE ~ 0),
    lead3 = case_when(time_til == -3 ~ 1, TRUE ~ 0),
    lead4 = case_when(time_til == -4 ~ 1, TRUE ~ 0),
    lead5 = case_when(time_til == -5 ~ 1, TRUE ~ 0),
    lead6 = case_when(time_til == -6 ~ 1, TRUE ~ 0),
    lead7 = case_when(time_til == -7 ~ 1, TRUE ~ 0),
    lead8 = case_when(time_til == -8 ~ 1, TRUE ~ 0),
    lead9 = case_when(time_til == -9 ~ 1, TRUE ~ 0),
    lag0 = case_when(time_til == 0 ~ 1, TRUE ~ 0),
    lag1 = case_when(time_til == 1 ~ 1, TRUE ~ 0),
    lag2 = case_when(time_til == 2 ~ 1, TRUE ~ 0),
    lag3 = case_when(time_til == 3 ~ 1, TRUE ~ 0),
    lag4 = case_when(time_til == 4 ~ 1, TRUE ~ 0),
    lag5 = case_when(time_til == 5 ~ 1, TRUE ~ 0)
  )

event_study_formula <- as.formula(
  paste("l_homicide ~ ",
        paste(
          paste(region, collapse = " + "),
          paste(paste("lead", 1:9, sep = ""), collapse = " + "),
          paste(paste("lag", 1:5, sep = ""), collapse = " + "), sep = " + "),
        "| year + state | 0 | sid"
  )
)

event_study_reg <- felm(event_study_formula, weights = castle$popwt, data = castle)
summary(event_study_reg)
```

Python Code castle_2.py

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# read data
import ssl
ssl._create_default_https_context = ssl._create_unverified_context

def read_data(file):
    return pd.read_stata("https://raw.github.com/scunning1975/mixtape/master/" + file)

castle = read_data('castle.dta')

castle['time_til'] = castle['year'] - castle['treatment_date']
castle['lead1'] = castle['time_til'] == -1
castle['lead2'] = castle['time_til'] == -2
castle['lead3'] = castle['time_til'] == -3
castle['lead4'] = castle['time_til'] == -4
castle['lead5'] = castle['time_til'] == -5
castle['lead6'] = castle['time_til'] == -6
castle['lead7'] = castle['time_til'] == -7
castle['lead8'] = castle['time_til'] == -8
castle['lead9'] = castle['time_til'] == -9
castle['lag0'] = castle['time_til'] == 0
castle['lag1'] = castle['time_til'] == 1
castle['lag2'] = castle['time_til'] == 2
castle['lag3'] = castle['time_til'] == 3
castle['lag4'] = castle['time_til'] == 4
castle['lag5'] = castle['time_til'] == 5

formula = (
    "l_homicide ~ r20001 + r20002 + r20003 + r20011 + r20012 + r20013 + "
    "r20021 + r20022 + r20023 + r20031 + r20032 + r20033 + "
    "r20041 + r20042 + r20043 + r20051 + r20052 + r20053 + "
    "r20061 + r20062 + r20063 + r20071 + r20072 + r20073 + "
    "r20081 + r20082 + r20083 + r20091 + r20092 + r20093 + "
    "lead1 + lead2 + lead3 + lead4 + lead5 + lead6 + lead7 + lead8 + lead9 + "
    "lag1 + lag2 + lag3 + lag4 + lag5 + C(year) + C(state)"
)

event_study_reg = smf.wls(formula, data=castle,
                          weights=castle['popwt']).fit(cov_type='cluster',
                                                       cov_kwds={'groups': castle['sid']})
event_study_reg.summary()
```

Our omitted category is the year of treatment, so all coefficients are interpreted relative to that year. You can see from the coefficients on the leads that they are not statistically different from zero prior to treatment, except for leads 8 and 9, which may be because there are only three states with eight years prior to treatment and one state with nine years prior to treatment. But in the years closer to treatment, leads 1 to 6 are near zero and statistically insignificant, although they do admittedly have large confidence intervals.
The lags, on the other hand, are all positive and not too dissimilar from one another, except for lag 5, which is around 17%. Now, it is customary to plot these event studies, so let’s do that. I am going to show you an easy way and a longer way to do this. The longer way ultimately gives you more control over what exactly the event study looks like, but for a fast and dirty method, the easier way will suffice. For the easier way, you will need to install a program in Stata called coefplot, written by Ben Jann, author of estout.154

Stata Code castle_3.do

```stata
* Plot the coefficients using coefplot
* ssc install coefplot
coefplot, keep(lead9 lead8 lead7 lead6 lead5 lead4 lead3 lead2 lead1 lag1 lag2 lag3 lag4 lag5) ///
    xlabel(, angle(vertical)) yline(0) xline(9.5) vertical msymbol(D) mfcolor(white) ///
    ciopts(lwidth(*3) lcolor(*.6)) mlabel format(%9.3f) mlabposition(12) mlabgap(*2) ///
    title(Log Murder Rate)
```

R Code castle_3.R

```r
# order of the coefficients for the plot
plot_order <- c("lead9", "lead8", "lead7", "lead6", "lead5",
                "lead4", "lead3", "lead2", "lead1",
                "lag1", "lag2", "lag3", "lag4", "lag5")

# grab the clustered standard errors and coefficient estimates
# from the regression, label them accordingly, and
# add a zeroth lag for plotting purposes
leadslags_plot <- tibble(
  sd = c(event_study_reg$cse[plot_order], 0),
  mean = c(coef(event_study_reg)[plot_order], 0),
  label = c(-9, -8, -7, -6, -5, -4, -3, -2, -1, 1, 2, 3, 4, 5, 0)
)

# This version has a point-range at each estimate;
# it comes down to stylistic preference at the end of the day!
leadslags_plot %>%
  ggplot(aes(x = label, y = mean,
             ymin = mean - 1.96*sd,
             ymax = mean + 1.96*sd)) +
  geom_hline(yintercept = 0.035169444, color = "red") +
  geom_pointrange() +
  theme_minimal() +
  xlab("Years before and after castle doctrine expansion") +
  ylab("log(Homicide Rate)") +
  geom_hline(yintercept = 0, linetype = "dashed") +
  geom_vline(xintercept = 0, linetype = "dashed")
```

Python Code castle_3.py

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import plotnine as p

import ssl
ssl._create_default_https_context = ssl._create_unverified_context

# continues from castle_2.py: castle and the lead dummies are assumed in memory
castle['time_til'] = castle['year'] - castle['treatment_date']
castle['lag0'] = castle['time_til'] == 0
castle['lag1'] = castle['time_til'] == 1
castle['lag2'] = castle['time_til'] == 2
castle['lag3'] = castle['time_til'] == 3
castle['lag4'] = castle['time_til'] == 4
castle['lag5'] = castle['time_til'] == 5

formula = (
    "l_homicide ~ r20001 + r20002 + r20003 + r20011 + r20012 + r20013 + "
    "r20021 + r20022 + r20023 + r20031 + r20032 + r20033 + "
    "r20041 + r20042 + r20043 + r20051 + r20052 + r20053 + "
    "r20061 + r20062 + r20063 + r20071 + r20072 + r20073 + "
    "r20081 + r20082 + r20083 + r20091 + r20092 + r20093 + "
    "lead1 + lead2 + lead3 + lead4 + lead5 + lead6 + lead7 + lead8 + lead9 + "
    "lag1 + lag2 + lag3 + lag4 + lag5 + C(year) + C(state)"
)

event_study_reg = smf.wls(formula, data=castle,
                          weights=castle['popwt']).fit(cov_type='cluster',
                                                       cov_kwds={'groups': castle['sid']})

# collect the lead and lag coefficients and standard errors,
# inserting a zero for the omitted year-of-treatment category
leads = ['lead9[T.True]', 'lead8[T.True]', 'lead7[T.True]', 'lead6[T.True]',
         'lead5[T.True]', 'lead4[T.True]', 'lead3[T.True]', 'lead2[T.True]',
         'lead1[T.True]']
lags = ['lag1[T.True]', 'lag2[T.True]', 'lag3[T.True]', 'lag4[T.True]', 'lag5[T.True]']

leadslags_plot = pd.DataFrame({
    'sd': np.concatenate([event_study_reg.bse[leads], np.array([0]),
                          event_study_reg.bse[lags]]),
    'mean': np.concatenate([event_study_reg.params[leads], np.array([0]),
                            event_study_reg.params[lags]]),
    'label': np.arange(-9, 6)})

leadslags_plot['lb'] = leadslags_plot['mean'] - 1.96 * leadslags_plot['sd']
leadslags_plot['ub'] = leadslags_plot['mean'] + 1.96 * leadslags_plot['sd']

# This version has a point-range at each estimate;
# it comes down to stylistic preference at the end of the day!
p.ggplot(leadslags_plot, p.aes(x = 'label', y = 'mean', ymin = 'lb', ymax = 'ub')) +\
    p.geom_hline(yintercept = 0.035169444, color = "red") +\
    p.geom_pointrange() +\
    p.theme_minimal() +\
    p.xlab("Years before and after castle doctrine expansion") +\
    p.ylab("log(Homicide Rate)") +\
    p.geom_hline(yintercept = 0, linetype = "dashed") +\
    p.geom_vline(xintercept = 0, linetype = "dashed")
```
Let’s look now at what this command created. As you can see in Figure 9.22, eight to nine years prior to treatment, treatment states have significantly lower levels of homicides; but as there are so few states that even have these values (one with $$-9$$ and three with $$-8$$), we may want to disregard these negative effects, if for no other reason than that there are so few units in those dummies, and we know from earlier that that can lead to very high overrejection rates. Instead, notice that for the six years prior to treatment, there is virtually no difference between the treatment states and the control states. After the year of treatment, that changes: log murders begin rising, which is consistent with our post dummy, which imposed zeros on all pre-treatment leads and required the average post-treatment effect to be a constant.

I promised to show you how to make this graph in a way that gives you more flexibility, but be warned: this is a bit more cumbersome.

Stata Code castle_4.do
```stata
xi: xtreg l_homicide i.year $region $xvar $lintrend post [aweight=popwt], fe vce(cluster sid)
local DDL = _b[post]
local DD : display %03.2f _b[post]
local DDSE : display %03.2f _se[post]
local DD1 = -0.10

xi: xtreg l_homicide i.year $region lead9 lead8 lead7 lead6 lead5 lead4 lead3 lead2 lead1 lag1-lag5 [aweight=popwt], fe vce(cluster sid)
outreg2 using "./eventstudy_levels.xls", replace keep(lead9 lead8 lead7 lead6 lead5 lead4 lead3 lead2 lead1 lag1-lag5) noparen noaster addstat(DD, `DD', DDSE, `DDSE')

* Pull in the event-study coefficients
xmluse "./eventstudy_levels.xls", clear cells(A3:B32) first
replace VARIABLES = subinstr(VARIABLES,"lag","",.)
quietly destring _all, replace ignore(",")
replace VARIABLES = -9 in 2
replace VARIABLES = -8 in 4
replace VARIABLES = -7 in 6
replace VARIABLES = -6 in 8
replace VARIABLES = -5 in 10
replace VARIABLES = -4 in 12
replace VARIABLES = -3 in 14
replace VARIABLES = -2 in 16
replace VARIABLES = -1 in 18
replace VARIABLES = 1 in 20
replace VARIABLES = 2 in 22
replace VARIABLES = 3 in 24
replace VARIABLES = 4 in 26
replace VARIABLES = 5 in 28
drop in 1
compress
quietly destring _all, replace ignore(",")
compress
ren VARIABLES exp
gen b = exp<.
replace exp = -9 in 2
replace exp = -8 in 4
replace exp = -7 in 6
replace exp = -6 in 8
replace exp = -5 in 10
replace exp = -4 in 12
replace exp = -3 in 14
replace exp = -2 in 16
replace exp = -1 in 18
replace exp = 1 in 20
replace exp = 2 in 22
replace exp = 3 in 24
replace exp = 4 in 26
replace exp = 5 in 28

* Expand the dataset by one more observation so as to include the comparison year
local obs = _N+1
set obs `obs'
for var _all: replace X = 0 in `obs'
replace b = 1 in `obs'
replace exp = 0 in `obs'
keep exp l_homicide b
set obs 30
foreach x of varlist exp l_homicide b {
    replace `x' = 0 in 30
}
reshape wide l_homicide, i(exp) j(b)

* Create the confidence intervals
cap drop *lb* *ub*
gen lb = l_homicide1 - 1.96*l_homicide0
gen ub = l_homicide1 + 1.96*l_homicide0

* Create the picture
set scheme s2color
#delimit ;
twoway (scatter l_homicide1 ub lb exp,
    lpattern(solid dash dash dot dot solid solid)
    lcolor(gray gray gray red blue)
    lwidth(thick medium medium medium medium thick thick)
    msymbol(i i i i i i i i i i i i i i i)
    msize(medlarge medlarge)
    mcolor(gray black gray gray red blue)
    c(l l l l l l l l l l l l l l l)
    cmissing(n n n n n n n n n n n n n n n n)
    xline(0, lcolor(black) lpattern(solid))
    yline(0, lcolor(black))
    xlabel(-9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5, labsize(medium))
    ylabel(, nogrid labsize(medium))
    xsize(7.5) ysize(5.5)
    legend(off)
    xtitle("Years before and after castle doctrine expansion", size(medium))
    ytitle("Log Murders", size(medium))
    graphregion(fcolor(white) color(white) icolor(white) margin(zero))
    yline(`DDL', lcolor(red) lwidth(thick))
    text(`DD1' -0.10 "DD Coefficient = `DD' (s.e. = `DDSE')")
    );
#delimit cr;
```

R Code castle_4.R

```r
# This version includes a ribbon that traces the confidence intervals
leadslags_plot %>%
  ggplot(aes(x = label, y = mean,
             ymin = mean - 1.96*sd,
             ymax = mean + 1.96*sd)) +
  # this creates a red horizontal line at the DD coefficient
  geom_hline(yintercept = 0.035169444, color = "red") +
  geom_line() +
  geom_point() +
  geom_ribbon(alpha = 0.2) +
  theme_minimal() +
  # Important to have informative axes labels!
  xlab("Years before and after castle doctrine expansion") +
  ylab("log(Homicide Rate)") +
  geom_hline(yintercept = 0) +
  geom_vline(xintercept = 0)
```

Python Code castle_4.py

```python
# Missing python script
```

You can see the figure this creates in Figure 9.23. The difference is that this twoway command connects the event-study coefficients with lines, whereas coefplot displayed them as coefficients hanging in the air. Neither is right or wrong; I merely wanted you to see the differences for your own sake and to give you code that you might experiment with and adapt to your own needs.

But one thing about this graph is that the leads are imbalanced. There is only one state, for instance, in the ninth lead, and only three in the eighth. So I’d like you to make two modifications. First, replace the sixth lead so that it now equals 1 for leads 6 through 9; in other words, we will force these late adopters to have the same coefficient as those with six years until treatment (a sketch of this binning appears just below). When you do that, you should get Figure 9.24. Next, let’s balance the event study by dropping the states that only show up in the seventh, eighth, and ninth leads.155 When you do this, you should get Figure 9.25.
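Here is a minimal sketch in R of that first modification, assuming the `time_til` variable and the lead and lag dummies from castle_2.R are already in the data. It redefines `lead6` to absorb leads 6 through 9 and re-runs the event-study regression without the separate lead7–lead9 dummies:

```r
# bin leads 6-9 into a single 'lead6' dummy
castle <- castle %>%
  mutate(lead6 = as.numeric(!is.na(time_til) & time_til <= -6 & time_til >= -9))

binned_formula <- as.formula(
  paste("l_homicide ~ ",
        paste(
          paste(region, collapse = " + "),
          paste(paste("lead", 1:6, sep = ""), collapse = " + "),
          paste(paste("lag", 1:5, sep = ""), collapse = " + "), sep = " + "),
        "| year + state | 0 | sid"
  )
)

binned_reg <- felm(binned_formula, weights = castle$popwt, data = castle)
summary(binned_reg)
```

The second modification is just a sample restriction: drop any state whose earliest `time_til` value is less than $$-6$$ before re-estimating.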
If nothing else, exploring these different specifications and cuts of the data can help you understand just how confident you should be that, prior to treatment, treatment and control states genuinely were pretty similar. And if they weren’t similar, it behooves the researcher at minimum to provide some insight to others as to why the treatment and control groups were dissimilar in levels. After all, if they were different in levels, then it’s entirely plausible they would be different in their counterfactual trends too, because why else were they different in the first place?

### 9.6.8 Bacon decomposition

Recall that we run into trouble using the twoway fixed-effects model in a DD framework insofar as there are heterogeneous treatment effects over time. But the problem here only occurs with those $$2\times 2$$s that compare late-treated units to early-treated units. If there are few such cases, then the issue is much less problematic, depending on the magnitudes of the weights and the sizes of the DD coefficients themselves. What we are now going to do is simply evaluate the frequency with which this issue occurs, using the Bacon decomposition. Recall that the Bacon decomposition breaks the twoway fixed-effects estimate of the DD parameter into weighted averages of the individual $$2\times 2$$s, across the four types of $$2\times 2$$s possible.

The Bacon decomposition requires a binary treatment variable, so we will re-estimate the effect of castle-doctrine statutes on logged homicide rates, coding a state as “treated” if it had a castle-doctrine statute in effect for any portion of the year. We will work with the special case of no covariates for simplicity, though note that the decomposition works with the inclusion of covariates as well. Stata users will need to download -ddtiming- from Thomas Goldring’s website.

First, let’s estimate the actual model itself, using a post dummy equal to one if the state was covered by a castle-doctrine statute that year:

| Dependent variable | Log(homicide rate) |
|---|---|
| Castle-doctrine law | 0.069 |
| | (0.034) |

Here we find a smaller effect than many of Cheng and Hoekstra’s estimates, in part because we do not include their region-by-year fixed effects, among other things. But this is just for illustrative purposes, so let’s move to the Bacon decomposition itself. We can decompose the parameter estimate into the three different types of $$2\times 2$$s, which I’ve reproduced in Table 9.8.

Table 9.8: Bacon decomposition example.

| DD comparison | Weight | Avg. DD estimate |
|---|---|---|
| Earlier T vs. later C | 0.077 | $$-0.029$$ |
| Later T vs. earlier C | 0.024 | 0.046 |
| T vs. never treated | 0.899 | 0.078 |

Taking these weights, let’s just double-check that they do indeed add up to the regression estimate we obtained with our twoway fixed-effects estimator:156

\begin{align} 0.077 \times (-0.029) + 0.024 \times 0.046 + 0.899 \times 0.078 \approx 0.069 \end{align}

That is our main estimate, and it confirms what we’ve been building toward: the DD parameter estimate from a twoway fixed-effects estimator is simply a weighted average over the different types of $$2\times 2$$s in any differential-timing design. Furthermore, we can see in the Bacon decomposition that most of the 0.069 parameter estimate comes from comparing the treatment states to a group of never-treated states. The average DD estimate for that group is 0.078, with a weight of 0.899. So even though there is a late-to-early $$2\times 2$$ in the mix, as there always will be with differential timing, it is small in terms of influence, and it ultimately pulls down the estimate. But let’s now visualize the weights plotted against the DD estimates, which is a useful exercise.
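The estimation itself is easy to sketch in R with the bacondecomp package loaded back in castle_1.R. The following is a minimal sketch of the decomposition and of a weight-versus-estimate scatterplot in the spirit of Figure 9.26; the argument names follow the package documentation as I understand it, so treat this as illustrative rather than as the book's own code block:

```r
library(bacondecomp)
library(ggplot2)

# decompose the TWFE DD estimate into its 2x2 components;
# 'post' is the binary treatment dummy and 'sid' the state id
df_bacon <- bacon(l_homicide ~ post,
                  data = castle,
                  id_var = "sid",
                  time_var = "year")

# the weighted average of the 2x2 estimates reproduces the TWFE coefficient
sum(df_bacon$estimate * df_bacon$weight)

# plot each 2x2's weight against its estimate, by comparison type
ggplot(df_bacon, aes(x = weight, y = estimate, shape = factor(type))) +
  geom_point(size = 2) +
  geom_hline(yintercept = sum(df_bacon$estimate * df_bacon$weight)) +
  labs(x = "Weight", y = "2x2 DD estimate", shape = "Comparison type")
```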
The horizontal line in Figure 9.26 shows the average DD estimate of 0.069 that we obtained from our fixed-effects regression. But what are these other graphics? Let’s review. Each icon in the graphic represents a single $$2\times 2$$ DD. The horizontal axis shows the weight itself, whereas the vertical axis shows the magnitude of that particular $$2\times 2$$. Icons farther to the right will therefore be more influential in the final average DD than those closer to zero. There are three kinds of icons here: an early-to-late group comparison (represented with a light $$\times$$), a late-to-early comparison (dark $$\times$$), and a treatment group compared to the never-treated (dark triangle). You can see that the dark triangles are all above zero, meaning that each of these $$2\times 2$$s (which correspond to a particular set of states receiving the treatment in the same year) is positive. They are spread out somewhat—two groups sit on the horizontal line, but the rest are higher. What appears to be the case, though, is that the group with the largest weight is really pulling the parameter estimate down toward the 0.069 that we find in the regression.

### 9.6.9 The future of DD

The Bacon decomposition marks an important phase in our understanding of the DD design when implemented with the twoway fixed-effects linear model. Prior to this decomposition, we had only a metaphorical understanding of the conditions necessary for identifying causal effects using differential timing with a twoway fixed-effects estimator. We thought that since the $$2\times 2$$ required parallel trends, something like parallel trends “sort of” must be what’s going on with differential timing too. And we weren’t too far off—there is a version of parallel trends in the identifying assumptions of DD using twoway fixed effects with differential timing. But what Goodman-Bacon (2019) also showed is that the weights themselves drive the numerical estimates, and while some of this was intuitive (e.g., group shares being influential), some was not (e.g., variance in treatment being influential).

The Bacon decomposition also highlighted some of the unique challenges we face with differential timing. Perhaps no problem is better highlighted by the diagnostics of the Bacon decomposition than the problematic “late to early” $$2\times 2$$. Given any heterogeneity bias, the late-to-early $$2\times 2$$ introduces biases even when variance-weighted common trends holds!

So, where to now? From 2018 to 2020, there has been an explosion of work on the DD design. Much of it is unpublished, and no real consensus has yet emerged among applied people as to how to handle it. Here I would like to outline what I believe could serve as a map as you attempt to navigate the future of DD. I have attempted to divide this new work into three categories: weighting, selective choice of “good” $$2\times 2$$s, and matrix completion.

What we know now is that there are two fundamental problems with the DD design. First, there is the issue of weighting itself. The twoway fixed-effects estimator weights the individual $$2\times 2$$s in ways that do not make a ton of theoretical sense. For instance, why do we think that groups at the middle of the panel should be weighted more than those at the end? There’s no theoretical reason we should believe that. But as Goodman-Bacon (2019) revealed, that’s precisely what twoway fixed effects does.
And this is weird, because you can change your results simply by adding or subtracting years to the panel—not just because this changes the $$2\times 2$$s, but also because it changes the variance in treatment itself! So that’s weird.157 But this is not really the fatal problem, you might say, with twoway fixed-effects estimates of a DD design. The bigger issue is what we saw in the Bacon decomposition: you will inevitably use past treated units as controls for future treated units, or what I called the “late to early” $$2\times 2$$. This happens both in the event study and in designs modeling the average treatment effect with a dummy variable. If it takes more than one period for the treatment to be fully incorporated, and there is substantial weight given to the late-to-early $$2\times 2$$s, then the existence of heterogeneous treatment effects skews the parameter away from the ATT—maybe even flipping its sign!158

Whereas the weird weighting associated with twoway fixed effects is an issue, it is at least something you can check, because the Bacon decomposition allows you to separate the $$2\times 2$$ average DD values from their weights. Thus, if your results are changing as you add years because your underlying $$2\times 2$$s are changing, you can simply investigate it in the Bacon decomposition. The weights and the $$2\times 2$$s, in other words, can be directly calculated, which can be a source of insight into why the twoway fixed-effects estimator finds what it finds. But the second issue is a different beast altogether. And one way to think of the emerging literature is that many authors are attempting to solve the problem that some of these $$2\times 2$$s (e.g., the late-to-early $$2\times 2$$) are problematic. Insofar as they are problematic, can we improve over our static twoway fixed-effects model? Let’s take a few of these issues up with examples from the growing literature.

One solution to the weird weighting problem of twoway fixed effects has been provided by Callaway and Sant’Anna (2019).159 Callaway and Sant’Anna (2019) approach the DD framework very differently from Goodman-Bacon (2019). They use an approach that allows them to estimate what they call the group-time average treatment effect, which is just the ATT for a given group at any point in time. Assuming parallel trends conditional on time-invariant covariates and overlap in a propensity score, which I’ll discuss below, you can calculate group ATTs by time (relative time, as in an event study, or absolute time). One unique part of their approach is that it is non-parametric, as opposed to regression-based. Under their identifying assumptions, their nonparametric estimator for a group ATT at a point in time is

\begin{align} ATT(g,t)=E\left[\left(\dfrac{G_g}{E[G_g]}- \dfrac{\dfrac{p_g(X)C}{1-p_g(X)}}{E\left[\dfrac{p_g(X)C}{1-p_g(X)}\right]}\right)(Y_t-Y_{g-1})\right] \end{align}

where the weights, $$p$$, are propensity scores, $$G_g$$ is a binary variable equal to 1 if an individual is first treated in period $$g$$, and $$C$$ is a binary variable equal to 1 for individuals in the control group. Notice that there is no time index on $$C$$; these units are the never-treated group. If you’re still with me, you should find the weights straightforward. Take observations from the control group as well as group $$g$$, and omit the other groups.
Then weight up those observations from the control group that have characteristics similar to those frequently found in group $$g$$, and weight down observations from the control group that have characteristics rarely found in group $$g$$. This kind of reweighting procedure guarantees that the covariates of group $$g$$ and the control group are balanced. You can see principles from earlier chapters making their way into this DD estimation—namely, balancing on covariates to create exchangeable units on observables.

But because we are calculating group-specific ATTs by time, you end up with a lot of treatment-effect parameters. The authors address this by showing how one can take all of these treatment effects and collapse them into more interpretable parameters, such as an overall ATT. All of this is done without running a regression, and it therefore avoids some of the unique issues created in doing so. A simple regression-based analogue is to estimate your event-study model and take the mean over all lags using a linear combination of the point estimates. Using this method, we in fact find considerably larger effects, nearly twice the size of what we get from the simpler static twoway fixed-effects model. This is perhaps an improvement because, in the static model, large weights can attach to the long-run effects through group shares. So if you want a summary measure, it may be better to estimate the event study and then average the lag coefficients after the fact.
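For practice, Callaway and Sant'Anna's estimator is implemented in their R package did. The sketch below is my own illustration on the castle data, not code from this book's replication files; it assumes the data's `treatment_date` column can be recoded so that never-treated states carry a 0, as the package expects:

```r
library(did)

# never-treated units must have group = 0 in the did package
castle$effyear <- ifelse(is.na(castle$treatment_date), 0, castle$treatment_date)

# group-time ATTs: one ATT(g, t) per adoption cohort g and calendar year t
atts <- att_gt(yname = "l_homicide",
               tname = "year",
               idname = "sid",
               gname = "effyear",
               data = castle)

# collapse the group-time ATTs into a single overall ATT ...
aggte(atts, type = "simple")

# ... or into an event-study profile in relative time
aggte(atts, type = "dynamic")
```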
Another great example of a paper wrestling with the biases introduced by heterogeneous treatment effects is Sun and Abraham (2020). This paper is primarily motivated by problems created in event studies, but you can see in it some of the issues raised by Goodman-Bacon (2019). In an event study with differential timing, as we discussed earlier, leads and lags are often used to measure dynamics in the treatment itself. But these can produce causally uninterpretable results, because they assign non-convex weights to cohort-specific treatment effects. Similar to Callaway and Sant’Anna (2019), Sun and Abraham propose estimating group-specific dynamic effects and, from those, calculating aggregate estimates.

The way I organize these papers in my mind is around the ideas of heterogeneity in time, the use of twoway fixed effects, and differential timing. The theoretical insight from all of these papers is that the coefficients on the static twoway fixed-effects leads and lags will be unintelligible if there is heterogeneity in treatment effects over time. In this sense, we are back in the world that Goodman-Bacon (2019) revealed, in which heterogeneous treatment-effect biases create real challenges for the DD design using twoway fixed effects.160 Sun and Abraham’s alternative is to estimate a “saturated” model so that the heterogeneity problem never occurs in the first place. The proposed estimation technique uses an interacted specification that is saturated in relative-time indicators as well as cohort indicators. The treatment-effect estimator associated with this design is called the interaction-weighted estimator, and under it the DD parameter is the difference between the average change in outcomes for a given cohort in periods prior to treatment and the average change for those units that had not yet been treated in that time interval. Additionally, this method uses the never-treated units as controls, and it thereby avoids the hairy problems noted in Goodman-Bacon (2019) when computing late-to-early $$2\times 2$$s.161

Another paper that attempts to circumvent the weirdness of the regression-based method when there are numerous late-to-early $$2\times 2$$s is Cengiz et al. (2019). This is bound to be a classic study in labor economics for its exhaustive search for detectable repercussions of the minimum wage on low-paying jobs. The authors ultimately find little evidence to support any concern, but how do they come to this conclusion? Cengiz et al. (2019) take a careful approach of creating separate samples. The authors want to know the impact of minimum-wage changes on low-wage jobs across 138 state-level minimum-wage changes from 1979 to 2016. In an appendix, the authors note the problems with aggregating individual DD estimates into a single parameter, and so they tackle the problem incrementally by creating 138 separate data sets, each associated with one minimum-wage event. Each sample has both treatment and control groups, but not all units are used as controls. Rather, only units that were not treated within the sample window are allowed to be controls. Insofar as a unit is not treated during the sample window associated with a treatment unit, it can by this criterion be used as a control. These 138 estimates are then stacked to calculate average treatment effects. This is an alternative to the twoway fixed-effects DD estimator because it uses a more stringent criterion for whether a unit can be considered a control, which in turn circumvents the heterogeneity problems that Goodman-Bacon (2019) notes: Cengiz et al. (2019) essentially create 138 DD setups in which controls are always “never-treated” for the duration of the period under consideration.

The last methodology I will discuss, which has emerged in the last couple of years, is a radical departure from the regression-based methodology altogether. Rather than use a twoway fixed-effects estimator to estimate treatment effects with differential timing, Athey et al. (2018) propose a machine-learning-based methodology called “matrix completion” for panel data. The estimator is exotic and bears some resemblance to matching imputation and synthetic control. Given the growing popularity of placing machine learning at the service of causal inference, I suspect that once Stata code for matrix completion is introduced, we will see this procedure used more broadly.

Matrix completion for panel data is a machine-learning-based approach to causal inference when one is working explicitly with panel data and differential timing. Its application to causal inference has intuitive appeal, given that one of the ways Rubin has framed causality is as a missing-data problem. If we are missing the matrix of counterfactuals, we might explore whether this method from computer science can help us recover it. Imagine we could create two matrices of potential outcomes: a matrix of $$Y^0$$ potential outcomes for all panel units over time, and a matrix of $$Y^1$$. Once treatment occurs, a unit switches from $$Y^0$$ to $$Y^1$$ under the switching equation, and the missing-data problem appears. Missingness is simply another way of describing the fundamental problem of causal inference, for there will never be a complete pair of matrices enabling the calculation of interesting treatment parameters: the switching equation assigns only one of them to reality.

Say we are interested in this treatment-effect parameter:

\begin{align} \widehat{\delta_{ATT}} = \dfrac{1}{N_T} \sum \big(Y_{it}^1 - Z_{it}^0\big) \end{align}

where $$Y^1$$ contains the observed outcomes for treated panel units in the post-treatment period, $$Z^0$$ contains the estimated missing elements of the $$Y^0$$ matrix for the post-treatment period, and $$N_T$$ is the number of treated unit-periods. Matrix completion uses the observed elements of the realized outcome matrix to predict the missing elements of the $$Y^0$$ matrix (missing because those units are in the post-treatment period and have therefore switched from $$Y^0$$ to $$Y^1$$). Analytically, this imputation is done via something called regularization-based prediction. The objective is to optimally predict the missing elements by minimizing a convex function of the difference between the observed elements of $$Y^0$$ and the unknown complete matrix $$Z^0$$, using nuclear-norm regularization. Let $$\Omega$$ denote the row and column indices $$(i,t)$$ of the observed entries of the outcome matrix; then the objective function can be written as

\begin{align} \widehat{Z^0}=\arg\min_{Z^0} \sum_{(i,t) \in \Omega} \dfrac{\big(Y^0_{it} - Z^0_{it}\big)^2}{|\Omega|}+\Lambda \,\lVert Z^0 \rVert \end{align}

where $$\lVert Z^0 \rVert$$ is the nuclear norm (the sum of the singular values of $$Z^0$$). The regularization parameter $$\Lambda$$ is chosen using tenfold cross-validation. Athey et al. (2018) show that this procedure outperforms other methods in terms of root mean squared prediction error. Unfortunately, estimation via matrix completion is not at present available in Stata. R implementations do exist, such as in the gsynth package, but they would have to be adapted for Stata users, and until that happens I suspect adoption will lag.
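For R users, here is a minimal sketch of what a matrix-completion estimate might look like on the castle data using the gsynth package's matrix-completion option. This is my own illustration, written under the assumption that gsynth's `estimator = "mc"` interface behaves as its documentation describes; it is not code from the book:

```r
library(gsynth)

# matrix-completion estimator on the state-year homicide panel;
# 'post' is the binary castle-doctrine treatment dummy
mc_out <- gsynth(l_homicide ~ post,
                 data = castle,
                 index = c("sid", "year"),  # unit and time identifiers
                 estimator = "mc",          # nuclear-norm matrix completion
                 force = "two-way",         # unit and time fixed effects
                 CV = TRUE,                 # cross-validate the regularization parameter
                 se = TRUE, nboots = 500)

mc_out$est.avg  # average treatment effect on the treated
plot(mc_out)    # treated average vs. imputed counterfactual
```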
## 9.7 Conclusion

America’s institutionalized state federalism provides a constantly evolving laboratory for applied researchers seeking to evaluate the causal effects of laws and other interventions. For this reason, difference-in-differences has probably become one of the most popular forms of identification among American researchers, if not the most common. A Google search of the phrase “difference-in-differences” brought up 45,000 hits. It is arguably the most common methodology you will use—more than IV or matching or even RDD, despite RDD’s greater perceived credibility. There is simply a never-ending flow of quasi-experiments created by our decentralized data-generating process in the United States, made even more advantageous by the many federal agencies responsible for data collection, which ensures improved data quality and consistency.

But what we have learned in this chapter is that while there is a current set of identifying assumptions and practices associated with the DD design, differential timing introduces some thorny challenges that have long been misunderstood. Much of the future of DD appears to be mounting solutions to problems we are coming to understand better, such as the odd weighting of regression itself and the problematic $$2\times 2$$ DDs that bias the aggregate ATT when treatment effects are heterogeneous over time. Nevertheless, DD—and specifically, regression-based DD—is not going away. It is the single most popular design in the applied researcher’s toolkit and likely will be for many years to come. It therefore behooves the researcher to study this literature carefully, so as to better protect against the various forms of bias.
2021-10-19 04:35:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 28, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7295631170272827, "perplexity": 1969.8133317260574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585242.44/warc/CC-MAIN-20211019043325-20211019073325-00197.warc.gz"}
https://www.statistics-lab.com/category/%E7%94%9F%E7%89%A9%E7%BB%9F%E8%AE%A1%E4%BB%A3%E5%86%99/
## 统计代写|生物统计代写biostatistics代考|MPH701 statistics-lab™ 为您的留学生涯保驾护航 在代写生物统计biostatistics方面已经树立了自己的口碑, 保证靠谱, 高质且原创的统计Statistics代写服务。我们的专家在代写生物统计biostatistics代写方面经验极为丰富,各种生物统计biostatistics相关的作业也就用不着说。 • Statistical Inference 统计推断 • Statistical Computing 统计计算 • (Generalized) Linear Models 广义线性模型 • Statistical Machine Learning 统计机器学习 • Longitudinal Data Analysis 纵向数据分析 • Foundations of Data Science 数据科学基础 ## 统计代写|生物统计代写biostatistics代考|Extension to the Regression Case We want to extend the methodology of Sect. $3.2$ to the regression setting where the location parameter varies across observations as a linear function of a set of $p$, say, explanatory variables, which are assumed to include the constant term, as it is commonly the case. If $x_{i}$ is the vector of covariates pertaining to the $i$ th subject, observation $y_{i}$ is now assumed to be drawn from ST $\left(\xi_{i}, \omega, \lambda, \nu\right)$ where $$\xi_{i}=x_{i}^{\top} \beta, \quad i=1, \ldots, n,$$ for some $p$-dimensional vector $\beta$ of unknown parameters; hence now the parameter vector is $\theta=\left(\beta^{\top}, \omega, \lambda, v\right)^{\top}$. The assumption of independently drawn observations is retained. The direct extension of the median as an estimate of location, which was used in Sect. 3.2, is an estimate of $\beta$ obtained by median regression, which corresponds to adoption of the least absolute deviations fitting criterion instead of the more familiar least squares. This can also be viewed as a special case of quantile regression, when the quantile level is set at $1 / 2$. A classical treatment of quantile regression is Koenker (2005) and corresponding numerical work can be carried out using the $R$ package quantreg, see Koenker (2018), among other tools. Use of median regression delivers an estimate $\tilde{\tilde{\beta}}^{m}$ of $\beta$ and a vector of residual values, $r_{i}=y_{i}-x_{i}^{\top} \tilde{\beta}^{m}$ for $i=1, \ldots, n$. Ignoring $\beta$ estimation errors, these residuals are values sampled from $\mathrm{ST}\left(-m_{0}, \omega^{2}, \lambda, v\right)$, where $m_{0}$ is a suitable value, examined shortly, which makes the distribution to have 0 median, since this is the target of the median regression criterion. We can then use the same procedure of Sect. 3.2, with the $y_{i}$ ‘s replaced the $r_{i}$ ‘s, to estimate $\omega, \lambda, v$, given that the value of $m_{0}$ is irrelevant at this stage. The final step is a correction to the vector $\tilde{\beta}^{m}$ to adjust for the fact that $y_{i}-x_{i}^{\top} \beta$ should have median $m_{0}$, that is, the median of ST $(0, \omega, \lambda, v)$, not median 0 . This amounts to increase all residuals by a constant value $m_{0}$, and this step is accoomplishéd by sêtting a vectoor $\tilde{\beta}$ with all components equal tō $\tilde{\beta}^{m}$ except that the intercept term, $\beta_{0}$ say, is estimated by $$\tilde{\beta}{0}=\tilde{\beta}{0}^{m}-\tilde{\omega} q_{2}^{\mathrm{ST}}$$ similarly to $(10)$ ## 统计代写|生物统计代写biostatistics代考|Extension to the Multivariate Case Consider now the case of $n$ independent observations from a multivariate $Y$ variable with density (6), hence $Y \sim \mathrm{ST}{d}(\xi, \Omega, \alpha, v)$. This case can be combined with the regression setting of Sect. 3.3, so that the $d$-dimensional location parameter varies for each observation according to $$\xi{i}^{\top}=x_{i}^{\top} \beta, \quad i=1, \ldots, n,$$ where now $\beta=\left(\beta_{\cdot 1}, \ldots, \beta_{\cdot d}\right)$ is a $p \times d$ matrix of parameters. 
Since we have assumed that the explanatory variables include a constant term, the regression case subsumes the one of identical distribution, when $p=1$. Hence we deal with the regression case directly, where the $i$ th observation is sampled from $Y_{i} \sim$ $\mathrm{ST}{d}\left(\xi{i}, \Omega, \alpha, v\right)$ and $\xi_{i}$ is given by (12), for $i=1, \ldots, n$. Arrange the observed values in a $n \times d$ matrix $y=\left(y_{i j}\right)$. Application of the procedure presented in Sects. $3.2$ and $3.3$ separately to each column of $y$ delivers estimates of $d$ univariate models. Specifically, from the $j$ th column of $y$, we obtain estimates $\tilde{\theta}{j}$ and corresponding ‘normalized’ residuals $\tilde{z}{i j}$ : $$\tilde{\theta}{j}=\left(\tilde{\beta}{\cdot j}^{\top}, \tilde{\omega}{j}, \tilde{\lambda}{j}, \tilde{v}{j}\right)^{\top}, \quad \tilde{z}{i j}=\tilde{\omega}{j}^{-1}\left(y{i j}-x_{i}^{\top} \tilde{\beta}_{\cdot j}\right)$$ where it must be recalled that the ‘normalization’ operation uses location and scale parameters, but these do not coincide with the mean and the standard deviation of the underlying random variable. Since the meaning of expression (12) is to define a set of univariate regression modes with a common design matrix, the vectors $\tilde{\beta}{-1}, \ldots, \tilde{\beta}{\cdot d}$ can simply be arranged in a $p \times d$ matrix $\tilde{\beta}$ which represents an estimate of $\beta$. The set of univariate estimates in (13) provide $d$ estimates for $v$, while only one such a value enters the specification of the multivariate ST distribution. We have adopted the median of $\tilde{v}{1}, \ldots, \tilde{v}{d}$ as the single required estimate, denoted $\tilde{v}$. The scale quantities $\tilde{\omega}{1}, \ldots, \tilde{\omega}{d}$ estimate the square roots of the diagonal elements of $\Omega$, but off-diagonal elements require a separate estimation step. What is really required to estimate is the scale-free matrix $\bar{\Omega}$. This is the problem examined next. If $\omega$ is the diagonal matrix formed by the squares roots of $\Omega_{11}, \ldots, \Omega_{\text {cld }}$, all variables $\omega^{-1}\left(Y_{i}-\xi_{i}\right)$ have distribution $\mathrm{ST}{d}(0, \bar{\Omega}, \alpha, v)$, for $i=1, \ldots, n$. Denote by $Z=\left(Z{1}, \ldots, Z_{d}\right)^{\top}$ the generic member of this set of variables. We are concerned with the distribution of the products $Z_{j} Z_{k}$, but for notational simplicity we focus on the specific product $W=Z_{1} Z_{2}$, since all other products are of similar nature. We must then examine the distribution of $W=Z_{1} Z_{2}$ when $\left(Z_{1}, Z_{2}\right)$ is a bivariate ST variable. This looks at first to be a daunting task, but a major simplification is provided by consideration of the perturbation invariance property of symmetrymodulated distributions, of which the ST is an instance. For a precise exposition of this property, see for instance Proposition $1.4$ of Azzalini and Capitanio (2014), but in the present case it says that, since $W$ is an even function of $\left(Z_{1}, Z_{2}\right)$, its distribution does not depend on $\alpha$, and it coincides with the distribution of the case $\alpha=0$, that is, the case of a usual bivariate Student’s $t$ distribution, with dependence parameter $\bar{\Omega}_{12}$. ## 统计代写|生物统计代写biostatistics代考|Simulation Work to Compare Initialization Procedures Several simulations runs have been performed to examine the performance of the proposed methodology. 
## Simulation Work to Compare Initialization Procedures

Several simulation runs have been performed to examine the performance of the proposed methodology. The computing environment was R version 3.6.0. The reference point for these evaluations is the methodology currently in use, as provided by the publicly available version of the R package sn at the time of writing, namely version 1.5-4; see Azzalini (2019). This will be denoted 'the current method' in the following. Since what ultimately matters is the final MLE outcome, not the initialization procedure per se, we compare the new and the current method with respect to that final outcome. However, since the numerical optimization method used after initialization is the same, any variations in the results originate from the different initialization procedures.

We stress again that in a vast number of cases the working of the current method is satisfactory, and we are aiming at improvements when dealing with 'awkward samples'. These commonly arise with ST distributions having low degrees of freedom, about $\nu=1$ or even less, but exceptions exist, such as the second sample in Fig. 2. The primary aspect of interest is improvement in the quality of data fitting. This is typically expressed as an increase of the maximal achieved log-likelihood, in its penalized form. Another desirable effect is improvement in computing time.

The basic set-up for such numerical experiments is represented by simple random samples, obtained as independent and identically distributed values drawn from a given $\mathrm{ST}(\xi, \omega, \lambda, \nu)$. In all cases we set $\xi=0$ and $\omega=1$. For the other ingredients, we have selected the following values:

- $\lambda$: 0, 2, 8,
- $\nu$: 1, 3, 8,
- $n$: 50, 100, 250, 500,

and, for each combination of these values, $N=2000$ samples have been drawn. The smallest examined sample size, $n=50$, must be regarded as a sort of 'sensible lower bound' for realistic fitting of flexible distributions such as the ST. In this respect, recall the cautionary note of Azzalini and Capitanio (2014, p. 63) about the fitting of an SN distribution with small sample sizes. Since the ST involves an additional parameter, notably one having a strong effect on tail behaviour, that caution holds a fortiori here. For each of the $3 \times 3 \times 4 \times 2000=72{,}000$ samples so generated, estimation of the parameters $(\xi, \omega, \lambda, \nu)$ has been carried out using the following methods.
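The set-up just described can be sketched in R as follows (our own minimal reading of the protocol; names and seed are ours, and the fitting step is only indicated):

```r
library(sn)

# Grid of simulation settings: ST(0, 1, lambda, nu) samples of size n,
# with N.rep replicates per combination.
lambda.set <- c(0, 2, 8)
nu.set     <- c(1, 3, 8)
n.set      <- c(50, 100, 250, 500)
N.rep      <- 2000

set.seed(123)
for (lambda in lambda.set) for (nu in nu.set) for (n in n.set)
  for (r in seq_len(N.rep)) {
    y <- rst(n, xi = 0, omega = 1, alpha = lambda, nu = nu)
    # ... fit y with each initialization method and record the final
    # penalized log-likelihood and the computing time (omitted here)
  }
```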
## Numerical Aspects and Some Illustrations

Since, on the computational side, we shall base our work on the R package sn, described by Azzalini (2019), it is appropriate to describe some key aspects of this package. There exists a comprehensive function for model fitting, called selm, but the actual numerical work in the case of an ST model is performed by the functions st.mple and mst.mple, in the univariate and the multivariate case, respectively. For numerical efficiency, we shall be using these functions directly, rather than via selm. As their names suggest, st.mple and mst.mple perform MPLE, but they can be used for classical MLE as well, simply by omitting the penalty function. The rest of the description refers to st.mple, but mst.mple follows a similar scheme.
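A minimal sketch of calling st.mple directly, as just described; we assume here that the penalty is requested by name through the penalty argument, with "Qpenalty" being the function supplied by the package for the Azzalini and Arellano-Valle (2013) penalty.

```r
library(sn)

set.seed(2)
y <- rst(100, xi = 0, omega = 1, alpha = 2, nu = 3)

fit.mle  <- st.mple(y = y)                        # classical MLE (no penalty)
fit.mple <- st.mple(y = y, penalty = "Qpenalty")  # penalized MLE (MPLE)
fit.mple$dp   # fitted 'direct parameters' (xi, omega, alpha, nu)
```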
In the univariate case, denote by $\theta=(\xi, \omega, \alpha, \nu)^{\top}$ the parameters to be estimated, or possibly $\theta=\left(\beta^{\top}, \omega, \alpha, \nu\right)^{\top}$ when a linear regression model is introduced for the location parameter, in which case $\beta$ is a vector of $p$ regression coefficients. Denote by $\log L(\theta)$ the log-likelihood function at point $\theta$. If no starting values are supplied, the first operation of st.mple is to fit a linear model to the available explanatory variables; this reduces to the constant covariate value 1 if $p=1$. For the residuals from this linear fit, sample cumulants of order up to four are computed, hence including the sample variance. An inversion from these values to $\theta$ may or may not be possible, depending on whether the third and fourth sample cumulants fall in the feasible region for the ST family. If the inversion is successful, initial values of the parameters are so obtained; if not, the final two components of $\theta$ are set at $(\alpha, \nu)=(0,10)$, retaining the other components from the linear fit. Starting from this point, MLE or MPLE is searched for using a general numerical optimization procedure. The default procedure for performing this step is the R function nlminb, supplied with the score functions besides the log-likelihood function. We shall refer, comprehensively, to this currently standard procedure as 'method M0'. In all our numerical work, method M0 uses st.mple, and the involved function nlminb, with all tuning parameters kept at their default values. The only activated option is the one switching between MPLE and MLE, and even this only for the work of the present section. Later on, we shall always use MPLE, with the penalty function Qpenalty, which implements the method proposed in Azzalini and Arellano-Valle (2013).

We start our numerical work with some illustrations, essentially in graphical form, of the log-likelihood generated by some simulated datasets. The aim is to provide a direct perception, although inevitably limited, of the possible behaviour of the log-likelihood and the ensuing problems which it poses for MLE search and other inferential procedures. Given this aim, we focus on cases which are unusual in some way or another, rather than on 'plain cases'.

The type of graphical display which we adopt is based on the profile log-likelihood function of $(\alpha, \nu)$, denoted $\log L_{p}(\alpha, \nu)$. This is obtained, for any given $(\alpha, \nu)$, by maximizing $\log L(\theta)$ with respect to the remaining parameters. To simplify readability, we transform $\log L_{p}(\alpha, \nu)$ to the likelihood ratio test statistic, also called the 'deviance function':
$$D(\alpha, \nu)=2\left\{\log L_{p}(\hat{\alpha}, \hat{\nu})-\log L_{p}(\alpha, \nu)\right\},$$
where $\log L_{p}(\hat{\alpha}, \hat{\nu})$ is the overall maximum value of the log-likelihood, equivalent to $\log L(\hat{\theta})$. The concept of deviance applies equally to the penalized log-likelihood. A computational sketch of this deviance is given after the description of Fig. 2 below.

The plots in Fig. 2 display, in the form of contour level plots, the behaviour of $D(\alpha, \nu)$ for two artificially generated samples, with $\nu$ expressed on the logarithmic scale for more convenient readability. Specifically, the top plots refer to a sample of size $n=50$ drawn from the $\operatorname{ST}(0,1,1,2)$; the left plot refers to the regular log-likelihood, while the right plot refers to the penalized log-likelihood. The plots include marks for points of special interest, as follows:

- $\Delta$: the true parameter point;
- o: the point having maximal (penalized) log-likelihood on a $51 \times 51$ grid of points spanning the plotted area;
- •: the MLE or MPLE point selected by method M0;
- •: the preliminary estimate to be introduced in Sect. 3.2, later denoted M1;
- $\times$: the MLE or MPLE point selected by method M2, presented later in the text.
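For a fixed pair $(\alpha, \nu)$, the profile log-likelihood can be approximated by maximizing over $(\xi, \omega)$ numerically. The sketch below is our own construction, assuming that st.mple returns the maximized log-likelihood in its logL component, as in recent versions of sn; it evaluates $D(\alpha, \nu)$ at one chosen point.

```r
library(sn)

# Profile log-likelihood over (xi, omega) for fixed (alpha, nu), by direct
# numerical maximization; omega is parameterized on the log scale.
prof.loglik <- function(alpha, nu, y) {
  nll <- function(par)
    -sum(dst(y, xi = par[1], omega = exp(par[2]),
             alpha = alpha, nu = nu, log = TRUE))
  -optim(c(median(y), log(mad(y))), nll)$value
}

set.seed(1)
y   <- rst(50, xi = 0, omega = 1, alpha = 1, nu = 2)
fit <- st.mple(y = y)                      # overall maximum, log L(theta-hat)
D   <- function(a, v) 2 * (fit$logL - prof.loglik(a, v, y))
D(1, 2)                                    # deviance at the true (alpha, nu)
```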
## Preliminary Remarks and the Basic Scheme

We have seen in Sect. 2 that the ST log-likelihood function can be problematic; it is then advisable to select carefully the starting point for the MLE search. Besides countering the risk of landing on a local maximum, a connected aspect of interest is the reduction of the overall computing time. Here are some preliminary considerations about the stated target. Since these initial estimates will be refined by a subsequent step of log-likelihood maximization, there is no point in aiming at a very sophisticated method. In addition, we want to keep the involved computing overhead as light as possible. Therefore, we want a method which is simple and quick to compute; at the same time, it should be reasonably reliable, hopefully avoiding nonsensical outcomes.

Another consideration is that we cannot work with the method of moments, or some variant of it, as this would impose the condition $\nu>4$, bearing in mind the constraints recalled in Sect. 1.2. Since some of the most interesting applications of ST-based models deal with very heavy tails, hence with low degrees of freedom, the condition $\nu>4$ would be unacceptable in many important applications. The implication is that we have to work with quantiles and derived quantities.

To ease exposition, we begin by presenting the logic in the basic case of independent observations from a common univariate distribution $\mathrm{ST}\left(\xi, \omega^{2}, \lambda, \nu\right)$. The first step is to select suitable quantile-based measures of location, scale, asymmetry and tail-weight. The following list presents a set of reasonable choices; these measures can equally be referred to a probability distribution or to a sample, depending on the interpretation of the terms quantile, quartile and the like. A computational sketch follows the list.

- Location: The median is the obvious choice here; denote it by $q_{2}$, since it coincides with the second quartile.
- Scale: A commonly used measure of scale is the semi-interquartile difference, also called the quartile deviation, that is,
$$d_{q}=\frac{1}{2}\left(q_{3}-q_{1}\right),$$
where $q_{j}$ denotes the $j$th quartile; see for instance Kotz et al. (2006, vol. 10, p. 6743).
- Asymmetry: A classical non-parametric measure of asymmetry is the so-called Bowley's measure,
$$G=\frac{\left(q_{3}-q_{2}\right)-\left(q_{2}-q_{1}\right)}{q_{3}-q_{1}}=\frac{q_{3}-2 q_{2}+q_{1}}{2 d_{q}};$$
see Kotz et al. (2006, vol. 12, pp. 7771-3). Since the same quantity, up to an inessential difference, had previously been used by Galton, some authors attribute its introduction to him. We shall refer to $G$ as the Galton-Bowley measure.
- Kurtosis: A relatively more recent proposal is the Moors measure of kurtosis, presented in Moors (1988),
$$M=\frac{\left(e_{7}-e_{5}\right)+\left(e_{3}-e_{1}\right)}{e_{6}-e_{2}},$$
where $e_{j}$ denotes the $j$th octile, for $j=1, \ldots, 7$. Clearly, $e_{2 j}=q_{j}$ for $j=1,2,3$.
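The four measures translate directly into R (the function name is ours); since quantile() gives the sample version, the same code serves for both samples and simulated populations:

```r
library(sn)

# Q = (q2, dq, G, M) computed from a sample y.
qmeasures <- function(y) {
  q <- unname(quantile(y, (1:3) / 4))   # quartiles q1, q2, q3
  e <- unname(quantile(y, (1:7) / 8))   # octiles  e1, ..., e7
  dq <- (q[3] - q[1]) / 2                                # quartile deviation
  G  <- (q[3] - 2 * q[2] + q[1]) / (2 * dq)              # Galton-Bowley measure
  M  <- ((e[7] - e[5]) + (e[3] - e[1])) / (e[6] - e[2])  # Moors measure
  c(q2 = q[2], dq = dq, G = G, M = M)
}

qmeasures(rst(500, xi = 0, omega = 1, alpha = 2, nu = 3))
```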
## Inversion of Quantile-Based Measures to ST Parameters

For the inversion of the parameter set $Q=\left(q_{2}, d_{q}, G, M\right)$ to $\theta=(\xi, \omega, \lambda, \nu)$, the first stage considers only the components $(G, M)$, which are to be mapped to $(\lambda, \nu)$, exploiting the invariance of $G$ and $M$ with respect to location and scale. Hence, at this stage, we can work assuming that $\xi=0$ and $\omega=1$.

Start by computing, for any given pair $(\lambda, \nu)$, the set of octiles $e_{1}, \ldots, e_{7}$ of $\mathrm{ST}(0,1, \lambda, \nu)$, and from here the corresponding $(G, M)$ values. Operationally, we have computed the ST quantiles using the routine qst of package sn. Only non-negative values of $\lambda$ need to be considered, because a reversal of the $\lambda$ sign simply reverses the sign of $G$, while $M$ is unaffected, thanks to the mirroring property of the ST quantiles when $\lambda$ is changed to $-\lambda$.

Initially, our numerical exploration of the inversion process examined the contour level plots of $G$ and $M$ as functions of $\lambda$ and $\nu$, as this appeared to be the more natural approach. Unfortunately, these plots turned out not to be useful, because of the lack of a sufficiently regular pattern of the contour curves; they are therefore not displayed here. A more useful display is the one adopted in Fig. 3, where the coordinate axes are now $G$ and $M$. The shaded area, which is the same in both panels, represents the set of feasible $(G, M)$ points for the ST family. In the first plot, each of the black lines indicates the locus of points with constant value of $\delta$, defined by (4), when $\nu$ spans the positive half-line; the selected $\delta$ values are printed at the top of the shaded area, where feasible without cluttering the labels. The use of $\delta$ instead of $\lambda$ simply yields a better spread of the contour lines across parameter values, but it is conceptually irrelevant. The second plot of Fig. 3 displays the same admissible region with a different type of loci superimposed, namely those corresponding to specified values of $\nu$ when $\delta$ spans the $[0,1]$ interval; the selected $\nu$ values are printed on the left side of the shaded area.

Details of the numerical calculations are as follows. The Galton-Bowley and the Moors measures have been evaluated over a $13 \times 25$ grid of points identified by the selected values
$$\begin{aligned} \delta^{*} &= (0,\ 0.1,\ 0.2,\ 0.3,\ 0.4,\ 0.5,\ 0.6,\ 0.7,\ 0.8,\ 0.9,\ 0.95,\ 0.99,\ 1), \\ \nu^{*} &= (0.30,\ 0.32,\ 0.35,\ 0.40,\ 0.45,\ 0.50,\ 0.60,\ 0.70,\ 0.80,\ 0.90,\ 1,\ 1.5,\ 2, \\ &\qquad 3,\ 4,\ 5,\ 7,\ 10,\ 15,\ 20,\ 30,\ 40,\ 50,\ 100,\ \infty). \end{aligned}$$
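The tabulation step can be sketched as follows (a reduced grid, for speed; we assume the usual relation $\delta=\lambda / \sqrt{1+\lambda^{2}}$ for the $\delta$ of (4), so that $\lambda=\delta / \sqrt{1-\delta^{2}}$):

```r
library(sn)

# Evaluate (G, M) over a (delta, nu) grid from the theoretical ST octiles.
delta.grid <- c(0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99)
nu.grid    <- c(0.3, 0.5, 1, 2, 5, 10, 100)   # a shortened version of nu*
grid <- expand.grid(delta = delta.grid, nu = nu.grid)

gm <- t(apply(grid, 1, function(p) {
  lambda <- p[1] / sqrt(1 - p[1]^2)               # invert delta to lambda
  e <- qst((1:7) / 8, alpha = lambda, nu = p[2])  # octiles of ST(0,1,lambda,nu)
  c(G = (e[6] - 2 * e[4] + e[2]) / (e[6] - e[2]), # since e_{2j} = q_j
    M = ((e[7] - e[5]) + (e[3] - e[1])) / (e[6] - e[2]))
}))
head(cbind(grid, gm))
```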
## Flexible Distributions: The Skew-t Case

In the context of distribution theory, a central theme is the study of flexible parametric families of probability distributions, that is, families allowing substantial variation of their behaviour when the parameters span their admissible range. For brevity, we shall refer to this domain with the phrase 'flexible distributions'. The archetypal construction of this logic is represented by the Pearson system of curves for univariate continuous variables. In this formulation, the density function is regulated by four parameters, allowing wide variation of the measures of skewness and kurtosis, hence providing much more flexibility than in the basic case represented by the normal distribution, where only location and scale can be adjusted.

Since Pearson's time, flexible distributions have remained a persistent theme of interest in the literature, with particularly intense activity in recent years. A prominent feature of newer developments is the increased consideration for multivariate distributions, reflecting the current availability in applied work of larger datasets, both in sample size and in dimensionality. In the multivariate setting, the various formulations often feature four blocks of parameters to regulate location, scale, skewness and kurtosis.

While providing powerful tools for data fitting, flexible distributions also pose some challenges when we enter the concrete estimation stage. We shall be working with maximum likelihood estimation (MLE) or variants of it, but qualitatively similar issues exist for other criteria. Explicit expressions of the estimates are out of the question; some numerical optimization procedure is always involved, and this process is not so trivial because of the larger number of parameters involved, as compared with fitting simpler parametric models, such as a Gamma or a Beta distribution. Furthermore, in some circumstances, the very flexibility of these parametric families can lead to difficulties: if the data pattern does not point steadily towards a certain point of the parameter space, there can be two or more such points which constitute comparably valid candidates in terms of log-likelihood or some other estimation criterion. Clearly, these problems are more challenging with small sample size, later denoted $n$, since the log-likelihood function (possibly tuned by a prior distribution) is relatively flatter; but numerical experience has shown that, in certain cases, they can persist even for fairly large $n$.

## The Skew-t Distribution: Basic Facts

Before entering our actual development, we recall some basic facts about the ST parametric family of continuous distributions. In its simplest description, it is obtained as a perturbation of the classical Student's $t$ distribution. For a more specific description, start from the univariate setting, where the components of the family are identified by four parameters. Of these four parameters, the one denoted $\xi$ in the following regulates the location of the distribution; scale is regulated by the positive parameter $\omega$; shape (representing departure from symmetry) is regulated by $\lambda$; tail-weight is regulated by $\nu$ (with $\nu>0$), denoted 'degrees of freedom' as for a classical $t$ distribution. It is convenient to introduce the distribution in the 'standard case', that is, with location $\xi=0$ and scale $\omega=1$.
In this case, the density function is
$$t(z ; \lambda, \nu)=2\, t(z ; \nu)\, T\left(\lambda z \sqrt{\frac{\nu+1}{\nu+z^{2}}} ; \nu+1\right), \quad z \in \mathbb{R},$$
where
$$t(z ; \nu)=\frac{\Gamma\left(\frac{1}{2}(\nu+1)\right)}{\sqrt{\pi \nu}\, \Gamma\left(\frac{1}{2} \nu\right)}\left(1+\frac{z^{2}}{\nu}\right)^{-(\nu+1) / 2}, \quad z \in \mathbb{R},$$
is the density function of the classical Student's $t$ on $\nu$ degrees of freedom and $T(\cdot ; \nu)$ denotes its distribution function; note, however, that in (1) this is evaluated with $\nu+1$ degrees of freedom. Also, note that the symbol $t$ is used for both densities in (1) and (2), which are distinguished by the presence of either one or two parameters.

If $Z$ is a random variable with density function (1), the location and scale transform $Y=\xi+\omega Z$ has density function
$$t_{Y}(x ; \theta)=\omega^{-1} t(z ; \lambda, \nu), \quad z=\omega^{-1}(x-\xi),$$
where $\theta=(\xi, \omega, \lambda, \nu)$. In this case, we write $Y \sim \operatorname{ST}\left(\xi, \omega^{2}, \lambda, \nu\right)$, where $\omega$ is squared for similarity with the usual notation for normal distributions. When $\lambda=0$, we recover the scale-and-location family generated by the $t$ distribution (2). When $\nu \rightarrow \infty$, we obtain the skew-normal (SN) distribution with parameters $(\xi, \omega, \lambda)$, which is described, for instance, by Azzalini and Capitanio (2014, Chap. 2). When $\lambda=0$ and $\nu \rightarrow \infty$, (3) converges to the $\mathrm{N}\left(\xi, \omega^{2}\right)$ distribution. Some instances of density (1) are displayed in the left panel of Fig. 1. If $\lambda$ were replaced by $-\lambda$, the densities would be reflected about the vertical axis, since $-Y \sim \operatorname{ST}\left(-\xi, \omega^{2},-\lambda, \nu\right)$.
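Density (1) translates directly into R, and can be sanity-checked against dst from package sn (the check itself is our addition, not part of the source):

```r
library(sn)

# Direct transcription of density (1) for the standard case ST(0, 1, lambda, nu).
dst.manual <- function(z, lambda, nu)
  2 * dt(z, df = nu) *
    pt(lambda * z * sqrt((nu + 1) / (nu + z^2)), df = nu + 1)

z <- seq(-4, 4, by = 0.25)
max(abs(dst.manual(z, lambda = 1, nu = 2) - dst(z, alpha = 1, nu = 2)))
# should be of the order of machine precision
```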
## Basic General Aspects

The high flexibility of the ST distribution makes it particularly appealing in a wide range of data fitting problems, more so than its companion, the SN distribution. Reliable techniques for implementing the connected MLE or other estimation methods are therefore crucial.

From the inference viewpoint, another advantage of the ST over the related SN distribution is the absence of a stationary point at $\lambda=0$ (or $\alpha=0$ in the multivariate case), and of the implied singularity of the information matrix. This stationary point of the SN is systematic: it occurs for all samples, no matter what $n$ is. This peculiar aspect has been emphasized more than necessary in the literature, considering that it pertains to a single, although important, value of the parameter. In any case, no such problem exists under the ST assumption. The lack of a stationary point at the origin was first observed empirically and welcomed as 'a pleasant surprise' by Azzalini and Capitanio (2003), but no theoretical explanation was given. Additional numerical evidence in this direction has been provided by Azzalini and Genton (2008). The theoretical explanation of why the SN and the ST likelihood functions behave differently was finally established by Hallin and Ley (2012).

Another peculiar aspect of the SN likelihood function is the possibility that the maximum of the likelihood function occurs at $\lambda=\pm \infty$, or at $|\alpha| \rightarrow \infty$ in the multivariate case. Note that this happens without divergence of the likelihood function, but only with divergence of the parameter achieving the maximum. In this respect the SN and the ST model are similar: both of them can lead to this pattern.

Differently from the stationary point at the origin, the phenomenon of divergent estimates is transient: it occurs mostly with small $n$, and the probability of its occurrence decreases very rapidly as $n$ increases. However, when it occurs for the $n$ available data, we must handle it. There are different views among statisticians on whether such divergent values should be retained as valid estimates or rejected as unacceptable. We embrace the latter view, for the reasons put forward by Azzalini and Arellano-Valle (2013), and adopt the maximum penalized likelihood estimate (MPLE) proposed there to prevent the problem. While the motivation for MPLE is primarily for small to moderate $n$, we use it throughout for consistency.

There is an additional peculiar feature of the ST log-likelihood function, which we mention only for completeness rather than for its real relevance. In cases when $\nu$ is allowed to span the whole positive half-line, poles of the likelihood function must exist near $\nu=0$, similarly to the case of a Student's $t$ with unspecified degrees of freedom. This problem has been explored numerically by Azzalini and Capitanio (2003, pp. 384-385), and the indication was that these poles must exist at very small values of $\nu$, such as $\hat{\nu}=0.06$ in one specific instance. This phenomenon is qualitatively similar to the problem of poles of the likelihood function for a finite mixture of continuous distributions. Even in the simple case of univariate normal components, there always exist $n$ poles on the boundary of the parameter space if the standard deviations of the components are unrestricted; see for instance Day (1969, Section 7). The problem is conceptually interesting, in both settings, but in practice it is easily dealt with in various ways. In the ST setting, the simplest solution is to impose a constraint $\nu>\nu_{0}>0$, where $\nu_{0}$ is some very small value, such as $\nu_{0}=0.1$ or $0.2$. Even if fitted to data, a $t$ or ST density with $\nu<0.1$ would be an object hard to use in practice.
## The Sample Size for Simple and Systematic Random Samples

In a simple random sample or a systematic random sample, the sample size required to produce a prespecified bound on the error of estimation for estimating the mean is based on the number of units in the population, $N$, and the approximate variance of the population, $\sigma^{2}$. Given the values of $N$ and $\sigma^{2}$, the sample size required for estimating a mean $\mu$ with bound on the error of estimation $B$ with a simple or systematic random sample is
$$n=\frac{N \sigma^{2}}{(N-1) D+\sigma^{2}},$$
where $D=\frac{B^{2}}{4}$. Note that this formula will not generally return a whole number for the sample size $n$; when it does not, the sample size should be taken to be the next largest whole number.

Example 3.11 Suppose a simple random sample is going to be taken from a population of $N=5000$ units with a variance of $\sigma^{2}=50$. If the bound on the error of estimation of the mean is supposed to be $B=1.5$, then the sample size required for the simple random sample selected from this population is
$$n=\frac{5000(50)}{4999\left(\frac{1.5^{2}}{4}\right)+50}=87.35 .$$
Since $87.35$ units cannot be sampled, the sample size that should be used is $n=88$. Also, $n=88$ would be the sample size required for a systematic random sample from this population when the desired bound on the error of estimation for estimating the mean is $B=1.5$. In this case, the systematic random sample would be a 1 in 56 systematic random sample, since $\frac{5000}{88} \approx 56$.
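The computation of Example 3.11 in R (the function name is ours):

```r
# Sample size for a simple or systematic random sample with bound B on the
# error of estimating the mean; rounds up to the next whole number.
n.srs <- function(N, sigma2, B) {
  D <- B^2 / 4
  ceiling(N * sigma2 / ((N - 1) * D + sigma2))
}

n.srs(N = 5000, sigma2 = 50, B = 1.5)   # 88 (the raw value is 87.35)
```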
In many research projects, the values of $N$ or $\sigma^{2}$ are often unknown. When either $N$ or $\sigma^{2}$ is unknown, the formula for determining the sample size to produce a bound on the error of estimation for a simple random sample can still be used, as long as approximate values of $N$ and $\sigma^{2}$ are available. In this case, the resulting sample size will produce a bound on the error of estimation that is close to $B$, provided the approximate values of $N$ and $\sigma^{2}$ are reasonably accurate.

The proportion of the units in the population that are sampled is $n / N$, which is called the sampling proportion. When a rough guess of the size of the population cannot reasonably be made, but it is clear that the sampling proportion will be less than $5 \%$, an alternative formula for determining the sample size is needed. In this case, the sample size required for a simple random sample or a systematic random sample having bound on the error of estimation $B$ for estimating the mean is approximately
$$n=\frac{4 \sigma^{2}}{B^{2}}.$$
## The Sample Size for a Stratified Random Sample

Recall that a stratified random sample is simply a collection of simple random samples selected from the subpopulations in the target population. In a stratified random sample, there are two sample size considerations, namely the overall sample size $n$ and the allocation of the $n$ units over the strata. When there are $k$ strata, the strata sample sizes will be denoted by $n_{1}, n_{2}, n_{3}, \ldots, n_{k}$, where the number to be sampled in stratum 1 is $n_{1}$, the number to be sampled in stratum 2 is $n_{2}$, and so on.

There are several different ways of determining the overall sample size and its allocation in a stratified random sample. In particular, proportional allocation and optimal allocation are two commonly used allocation plans. Throughout the discussion of these two allocation plans, it will be assumed that the target population has $k$ strata and $N$ units, and that $N_{j}$ is the number of units in the $j$th stratum.

The sample size used in a stratified random sample and the most efficient allocation of the sample will depend on several factors, including the variability within each of the strata, the proportion of the target population in each of the strata, and the costs associated with sampling the units from the strata. Let $\sigma_{i}$ be the standard deviation of the $i$th stratum, $W_{i}=N_{i} / N$ the proportion of the target population in the $i$th stratum, $C_{0}$ the initial cost of sampling, $C_{i}$ the cost of obtaining an observation from the $i$th stratum, and $C$ the total cost of sampling. Then, the cost of sampling with a stratified random sample is
$$C=C_{0}+C_{1} n_{1}+C_{2} n_{2}+\cdots+C_{k} n_{k}.$$

The process of determining the sample size for a stratified random sample requires that the allocation of the sample be determined first. The allocation of the sample size $n$ over the $k$ strata is based on the sampling proportions, denoted by $w_{1}, w_{2}, \ldots, w_{k}$. Once the sampling proportions and the overall sample size $n$ have been determined, the $i$th stratum sample size is $n_{i}=n \times w_{i}$.

The simplest allocation plan for a stratified random sample is proportional allocation, which takes the sampling proportions to be proportional to the strata sizes. Thus, in proportional allocation, the sampling proportion for the $i$th stratum is equal to the proportion of the population in the $i$th stratum. That is, the sampling proportion for the $i$th stratum is
$$w_{i}=\frac{N_{i}}{N}.$$
The overall sample size for a stratified random sample based on proportional allocation that will have bound on the error of estimation for estimating the mean equal to $B$ is
$$n=\frac{N_{1} \sigma_{1}^{2}+N_{2} \sigma_{2}^{2}+\cdots+N_{k} \sigma_{k}^{2}}{N\left[\frac{B^{2}}{4}\right]+\frac{1}{N}\left(N_{1} \sigma_{1}^{2}+N_{2} \sigma_{2}^{2}+\cdots+N_{k} \sigma_{k}^{2}\right)}.$$
The sample size for the simple random sample that will be selected from the $i$th stratum according to proportional allocation is
$$n \times w_{i}=n \times \frac{N_{i}}{N}.$$
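A short sketch of proportional allocation (the function name and the two-strata numbers are our own illustration):

```r
# Overall sample size under proportional allocation, with bound B for the
# mean, and its allocation n_i = n * N_i / N over the strata.
n.strat.prop <- function(N.strata, sigma2.strata, B) {
  N <- sum(N.strata)
  A <- sum(N.strata * sigma2.strata)    # sum of N_i * sigma_i^2
  n <- A / (N * B^2 / 4 + A / N)
  # rounding each stratum up may make the allocated total exceed n slightly
  list(n = ceiling(n), allocation = ceiling(n * N.strata / N))
}

n.strat.prop(N.strata = c(2000, 3000), sigma2.strata = c(40, 60), B = 1.5)
```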
## Bar and Pie Charts

In the case of qualitative or discrete data, the graphical statistics most often used to summarize the data in the observed sample are the bar chart and the pie chart, since the important parameters of the distribution of a qualitative variable are population proportions. Thus, for a qualitative variable, the sample proportions are the values that will be displayed in a bar chart or a pie chart.

In Chapter 2, the distribution of a qualitative variable was often presented in a bar chart in which the height of a bar represented the proportion or the percentage of the population having each quality the variable takes on. With an observed sample, bar charts can be used to represent the sample proportions or percentages for each of the qualities the variable takes on, and can be used to make statistical inferences about the population distribution of the variable. There are many types of bar charts, including simple bar charts, stacked bar charts, and comparative side-by-side bar charts. An example of a simple bar chart for the weight classification for babies, which takes on the values normal and low, in the Birth Weight data set is shown in Figure 4.1.

Note that a bar chart represents the category percentages or proportions with bars of height equal to the percentage or proportion of sample observations falling in a particular category. The widths of the bars should be equal and chosen so that an appealing chart is produced. Bar charts may be drawn with either horizontal or vertical bars, and the bars in a bar chart may or may not be separated by a gap. An example of a bar chart with horizontal bars is given in Figure 4.2 for the weight classification of babies in the Birth Weight data set. In creating a bar chart it is important that

1. the proportions or percentages in each bar can be easily determined, to make the bar chart easier to read and interpret;
2. the total percentage represented in the bar chart is 100, since a distribution contains $100\%$ of the population units;
3. the qualities associated with an ordinal variable are listed in the proper relative order (with a nominal variable the order of the categories is not important);
4. the axes of the bar chart are clearly labeled, so that it is clear whether the bars represent a percentage or a proportion;
5. the bar chart has a caption or a title that clearly describes its nature.

## OBTAINING REPRESENTATIVE DATA

The purpose of sampling is to get a sufficient amount of data that is representative of the target population, so that statistical inferences can be made about the distribution and the parameters of the target population. Because a sample is only a subset of the units in the target population, it is generally impossible to guarantee that the sample data are representative of the target population; however, with a well-designed sampling plan, it will be unlikely to select a sample that is not representative of the target population. To ensure the likelihood that the sample data will be representative of the target population, the following components of the sampling process must be considered:

- Target Population: The target population must be well defined and accessible, and the researcher should have a good understanding of the structure of the population. In particular, the researcher should be able to identify the units of the population, the approximate number of units in the population, subpopulations, the approximate shape of the distributions of the variables being studied, and the relevant parameters that need to be estimated.
- Sampling Units: The sampling units are the units of the population that will be sampled. A sampling unit may or may not be a unit of the population. In fact, in some sampling plans, the sampling unit is a collection of population units. The sampling unit is also the smallest unit in the target population that can be selected.
- Sampling Element: A sampling element is an object on which measurements will be made. A sampling element may or may not be a sampling unit. When the sampling unit consists of several population units, it is called a cluster of units. If each population unit in a cluster will be measured, then the sampling elements are the population units within the sampled clusters. In this case, the sampling element is a subunit of the sampling unit.
- Sampling Frame: The sampling frame is the list of sampling units that are available for sampling. The sampling frame should be nearly equal to the target population. When the sampling frame is significantly different from the target population, it becomes less likely that a sample representative of the target population will be obtained, even with a well-designed sampling plan. Sampling frames that fail to include all of the units of the target population are said to undercover the target population and may lead to biased samples.
- Sample Size: The sample size is the number of sampling units that will be selected. The sample size will be denoted by $n$ and must be sufficiently large to ensure the reliability of the statistical analysis. The variability in the target population plays a key role in determining the sample size necessary for the desired level of reliability associated with a statistical analysis.

## Probability Samples

The statistical theory that provides the foundation for the estimation or testing of research hypotheses about the parameters of a population is based on the sampling structure known as probability sampling. A probability sample is a sample selected in a random fashion according to some probability model. In particular, a probability sample is a sample chosen so that each of the possible samples is known in advance and the probability of drawing each sampling unit is known. Random samples are samples that arise through a sampling plan based on probability sampling.

Probability sampling allows flexibility in the sampling plan and can be designed specifically for the target population being studied. That is, a probability sampling plan allows a sample to be designed so that it will be unlikely to produce a sample that is not representative of the target population. Furthermore, probability samples allow confidence statements and hypothesis tests to be made from the observed sample with a high degree of reliability.

Samples of convenience are samples that are not based on probability sampling and are also referred to as nonprobability samples. The statistical theory that justifies the use of confidence statements and tests of hypotheses does not apply to nonprobability samples; therefore, confidence statements and tests of research hypotheses based on nonprobability samples are erroneous applications of statistics and should not be trusted. In a random sample, the chance that a particular unit of the population will be selected is known prior to sampling, and the units available for sampling are selected at random according to these probabilities. The procedure for drawing a random sample is outlined below.

## Simple Random Sampling

The first sampling plan that will be discussed is the simple random sample. A simple random sample of size $n$ is a sample consisting of $n$ sampling units selected in a fashion such that every possible sample of $n$ units has the same chance of being selected. In a simple random sample, every possible sample has the same chance of being selected, and moreover, each sampling unit has the same chance of being drawn in a sample. Simple random sampling is a reasonable sampling plan for sampling homogeneous or heterogeneous populations that do not have distinct subpopulations of interest to the researcher.
Example 3.3 Simple random sampling might be a reasonable sampling plan in the following scenarios:

a. A pharmaceutical company is checking the quality control issues of the tablet form of a new drug. Here, the company might take a random sample of tablets from a large pool of available drug tablets it has recently manufactured.
b. The Federal Food and Drug Administration (FDA) may take a simple random sample of a particular food product to check the validity of the information on the nutrition label.
c. A state might wish to take a simple random sample of medical doctors to review whether or not the state's continuing education requirements are being satisfied.
d. A federal or state environmental agency may wish to take a simple random sample of homes in a mining town to investigate the general health of the town's inhabitants and contamination problems in the homes resulting from the mining operation.

The number of possible simple random samples of size $n$ selected from a sampling frame listing $N$ sampling units is
$$\binom{N}{n}=\frac{N !}{n !(N-n) !}.$$
The probability of drawing any one of the possible simple random samples of $n$ units selected from a sampling frame of $N$ units is
$$\frac{1}{\binom{N}{n}}=\frac{n !(N-n) !}{N !}.$$
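These two formulas, and the act of drawing one such sample, are one-liners in R (the values of $N$ and $n$ are our own illustration):

```r
N <- 20; n <- 5
choose(N, n)       # number of possible samples, N! / (n! (N - n)!) = 15504
1 / choose(N, n)   # probability of any one particular sample
sample(1:N, n)     # one simple random sample of n unit labels
```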
## The Binomial Probability Model

The binomial probability model can be used for modeling the number of times a particular event occurs in a sequence of repeated trials. In particular, a binomial random variable is a discrete variable that is used to model chance experiments involving repeated dichotomous trials; that is, the binomial model is used to model repeated trials where the outcome of each trial is one of two possible outcomes. The conditions under which the binomial probability model can be used are given below. A random variable satisfying these conditions is called a binomial random variable. Note that a binomial random variable $X$ simply counts the number of successes that occurred in $n$ trials.

The probability distribution for a binomial random variable $X$ is given by the mathematical expression
$$p(x)=\frac{n !}{x !(n-x) !} p^{x}(1-p)^{n-x} \quad \text { for } x=0,1, \ldots, n,$$
where $p(x)$ is the probability that $X$ is equal to the value $x$. In this formula,

- $\frac{n !}{x !(n-x) !}$ is the number of ways for there to be $x$ successes in $n$ trials,
- $n !=n(n-1)(n-2) \cdots 3 \cdot 2 \cdot 1$ and $0 !=1$ by definition,
- $p$ is the probability of a success on any of the $n$ trials,
- $p^{x}$ is the probability of having $x$ successes in $n$ trials,
- $1-p$ is the probability of a failure on any of the $n$ trials,
- $(1-p)^{n-x}$ is the probability of getting $n-x$ failures in $n$ trials.

Examples of the binomial distribution are given in Figure 2.24. Note that a binomial distribution will have a longer tail to the right when $p<0.5$, a longer tail to the left when $p>0.5$, and is symmetric when $p=0.5$. Because the computations for the probabilities associated with a binomial random variable are tedious, it is best to use a statistical computing package such as MINITAB for computing binomial probabilities.
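In R, the same probabilities come from dbinom; a quick check of the formula above (our own, with hypothetical n and p):

```r
# Binomial probabilities: the explicit formula versus R's dbinom().
n <- 10; p <- 0.3; x <- 0:n
p.manual <- factorial(n) / (factorial(x) * factorial(n - x)) *
  p^x * (1 - p)^(n - x)
max(abs(p.manual - dbinom(x, size = n, prob = p)))  # numerically negligible
```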
## The Normal Probability Model

The choice of a probability model for continuous variables is generally based on historical data rather than a particular set of conditions. Just as there are many discrete probability models, there are also many different probability models that can be used to model the distribution of a continuous variable. The most commonly used continuous probability model in statistics is the normal probability model.

The normal probability model is often used to model distributions that are expected to be unimodal and symmetric, and it forms the foundation for many of the classical statistical methods used in biostatistics. Moreover, the distribution of many natural phenomena can be modeled very well with the normal distribution. For example, the weights, heights, and IQs of adults are often modeled with normal distributions.

The standard normal, which will be denoted by $Z$, is a normal distribution having mean 0 and standard deviation 1. The standard normal is used as the reference distribution from which the probabilities and percentiles associated with any normal distribution will be determined. The cumulative probabilities for a standard normal are given in Tables A.1 and A.2; because $99.95\%$ of the standard normal distribution lies between the values $-3.49$ and $3.49$, the standard normal values are only tabulated for $z$ values between $-3.49$ and $3.49$. Thus, when the value of a standard normal, say $z$, is between $-3.49$ and $3.49$, the tabled value for $z$ represents the cumulative probability of $z$, which is $P(Z \leq z)$, and will be denoted by $\Phi(z)$. For values of $z$ below $-3.50$, $\Phi(z)$ will be taken to be 0, and for values of $z$ above $3.50$, $\Phi(z)$ will be taken to be 1.

Tables A.1 and A.2 can be used to compute all of the probabilities associated with a standard normal. The values of $z$ are referenced in Tables A.1 and A.2 by writing $z=a.bc$ as $z=a.b+0.0c$. To locate a value of $z$ in Tables A.1 and A.2, first look up the value $a.b$ in the left-most column of the table and then locate $0.0c$ in the first row of the table. The value cross-referenced by $a.b$ and $0.0c$ in Tables A.1 and A.2 is $\Phi(z)=P(Z \leq z)$. The rules for computing the probabilities for a standard normal are given below.

## Z Scores

The result of converting a non-standard normal value, a raw value, to a $Z$-value is a $Z$ score. A $Z$ score is a measure of the relative position a value has within its distribution. In particular, a $Z$ score simply measures how many standard deviations a point lies above or below the mean. When a $Z$ score is negative, the raw value lies below the mean of its distribution; when a $Z$ score is positive, the raw value lies above the mean. $Z$ scores are unitless measures of relative standing and provide a meaningful measure of relative standing only for mound-shaped distributions. Furthermore, $Z$ scores can be used to compare the relative standing of individuals in two mound-shaped distributions.

Example 2.41 The weights of men and women both follow mound-shaped distributions with different means and standard deviations. In fact, the weight of a male adult in the United States is approximately normal with mean $\mu=180$ and standard deviation $\sigma=30$, and the weight of a female adult in the United States is approximately normal with mean $\mu=145$ and standard deviation $\sigma=15$. Given a male weighing $215\ \mathrm{lb}$ and a female weighing $170\ \mathrm{lb}$, which individual weighs more relative to their respective population? The answer can be found by computing the $Z$ scores associated with each of these weights to measure their relative standing. In this case,
$$z_{\text {male }}=\frac{215-180}{30}=1.17$$
and
$$z_{\text {female }}=\frac{170-145}{15}=1.67 .$$
Since the female's weight is $1.67$ standard deviations from the mean weight of a female and the male's weight is $1.17$ standard deviations from the mean weight of a male, relative to their respective populations a female weighing $170\ \mathrm{lb}$ is heavier than a male weighing $215\ \mathrm{lb}$.
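The arithmetic of Example 2.41, as a two-line R check:

```r
z <- function(x, mu, sigma) (x - mu) / sigma
z(215, 180, 30)   # 1.17, the male's Z score
z(170, 145, 15)   # 1.67, the female's Z score: larger, hence relatively heavier
```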
## The Coefficient of Variation

The standard deviations of two populations resulting from measuring the same variable can be compared to determine which of the two populations is more variable. That is, when one standard deviation is substantially larger than the other (i.e., more than two times as large), then clearly the population with the larger standard deviation is much more variable than the other. It is also important to be able to determine whether a single population is highly variable or not.

A parameter that measures the relative variability in a population is the coefficient of variation. The coefficient of variation will be denoted by CV and is defined to be
$$\mathrm{CV}=\frac{\sigma}{|\mu|}.$$
The coefficient of variation is also sometimes represented as a percentage, in which case
$$\mathrm{CV}=\frac{\sigma}{|\mu|} \times 100 \%.$$

The coefficient of variation compares the size of the standard deviation with the size of the mean. When the coefficient of variation is small, the variability in the population is relatively small compared to the size of the mean of the population. On the other hand, when the coefficient of variation is large, the population varies greatly relative to the size of the mean. The standard for what constitutes a large coefficient of variation differs from one discipline to another; in some disciplines a coefficient of variation of less than $15\%$ is considered reasonable, while in others larger or smaller cutoffs are used.

Because the standard deviation and the mean have the same units of measurement, the coefficient of variation is a unitless parameter. That is, the coefficient is unaffected by changes in the units of measurement. For example, if a variable $X$ is measured in inches and the coefficient of variation is $\mathrm{CV}=2$, then the coefficient of variation will also be 2 when the units of measurement are converted to centimeters.
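The unit-invariance just noted is immediate to verify (the numbers are hypothetical, chosen so that CV = 2):

```r
cv <- function(mu, sigma) sigma / abs(mu)
cv(mu = 70, sigma = 140)                 # CV = 2 with X in inches
cv(mu = 70 * 2.54, sigma = 140 * 2.54)   # still 2 after converting to cm
```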
The coefficient of variation can also be used to compare the relative variability in two different and unrelated populations; the standard deviation can only be used to compare the variability in two different populations based on similar variables.

## Parameters for Bivariate Populations

In most biomedical research studies, there are many variables that will be recorded on each individual in the study. A multivariate distribution can be formed by jointly tabulating, charting, or graphing the values of the variables over the $N$ units in the population. For example, the bivariate distribution of two variables, say $X$ and $Y$, is the collection of the ordered pairs
$$\left(X_{1}, Y_{1}\right),\left(X_{2}, Y_{2}\right),\left(X_{3}, Y_{3}\right), \ldots,\left(X_{N}, Y_{N}\right).$$
These $N$ ordered pairs form the units of the bivariate distribution of $X$ and $Y$, and their joint distribution can be displayed in a two-way chart, table, or graph. When the two variables are qualitative, the joint proportions in the bivariate distribution are often denoted by $p_{ab}$, where
$$p_{ab}=\text{proportion of pairs in the population where } X=a \text{ and } Y=b.$$
The joint proportions in the bivariate distribution are then displayed in a two-way table or two-way bar chart. For example, according to the American Red Cross, the joint distribution of blood type and Rh factor is given in Table 2.7 and presented as a bar chart in Figure 2.21.

## Basic Probability Rules

Determining the probabilities associated with complex real-life events often requires a great deal of information and an extensive scientific understanding of the structure of the chance experiment being studied. In fact, even when the sample space and event are easily identified, the determination of the probability of an event can be an extremely difficult task. For example, in studying the side effects of a drug, the possible side effects can generally be anticipated and the sample space will be known. However, because humans react differently to drugs, the probabilities of the occurrence of the side effects are generally unknown. The probabilities of the side effects are often estimated in clinical trials. The following basic probability rules are often useful in determining the probability of an event.

1. When the outcomes of a random experiment are equally likely to occur, the probability of an event $A$ is the number of outcomes in $A$ divided by the number of simple events in $\mathcal{S}$. That is,
$$P(A)=\frac{\text { number of simple events in } A}{\text { number of simple events in } \mathcal{S}}=\frac{N(A)}{N(\mathcal{S})}.$$
2. For every event $A$, the probability of $A$ is the sum of the probabilities of the outcomes comprising $A$. That is, when an event $A$ is comprised of the outcomes $O_{1}, O_{2}, \ldots, O_{k}$, the probability of the event $A$ is
$$P(A)=P\left(O_{1}\right)+P\left(O_{2}\right)+\cdots+P\left(O_{k}\right).$$
3. For any two events $A$ and $B$, the probability that either event $A$ or event $B$ occurs is
$$P(A \text { or } B)=P(A)+P(B)-P(A \text { and } B).$$
4. The probability that the event $A$ does not occur is 1 minus the probability that the event $A$ does occur. That is,
$$P(A \text { does not occur })=1-P(A).$$
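Rules 1, 3, and 4 can be illustrated with a small equally-likely example (our own): one roll of a fair die, with $A$ = 'even' and $B$ = 'greater than 3'.

```r
S <- 1:6                       # sample space of one die roll
A <- c(2, 4, 6)                # event "even"
B <- c(4, 5, 6)                # event "greater than 3"
P <- function(E) length(E) / length(S)   # rule 1: equally likely outcomes

P(union(A, B))                           # direct count: 4/6
P(A) + P(B) - P(intersect(A, B))         # rule 3 gives the same value
1 - P(A)                                 # rule 4: P(A does not occur)
```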
## Describing a Population with Parameters

## Proportions and Percentiles

Populations are often summarized by listing the important percentages or proportions associated with the population. The proportion of units in a population having a particular characteristic is a parameter of the population, and a population proportion will be denoted by $p$. The population proportion having a particular characteristic, say characteristic $A$, is defined to be
$$p=\frac{\text{number of units in the population having characteristic } A}{N}$$
Note that the percentage of the population having characteristic $A$ is $p \times 100\%$. Population proportions and percentages are often associated with the categories of a qualitative variable or with the values in the population falling in a specific range of values. For example, the distribution of a qualitative variable is usually displayed in a bar chart with the height of a bar representing either the proportion or the percentage of the population having that particular value.

Example 2.12 The distribution of blood type according to the American Red Cross is given in Table $2.4$ in terms of proportions.

An important proportion in many biomedical studies is the proportion of individuals having a particular disease, which is called the prevalence of the disease. The prevalence of a disease is defined to be

Prevalence $=$ the proportion of individuals in a well-defined population having the disease of interest

For example, according to the Centers for Disease Control and Prevention (CDC), the prevalence of smoking among adults in the United States in January through June 2005 was $20.9\%$. Proportions also play important roles in the study of survival and cure rates, the occurrence of side effects of new drugs, the absolute and relative risks associated with a disease, and the efficacy of new treatments and drugs.
## Parameters Measuring Centrality

Two kinds of parameters summarize how the values of a quantitative variable are distributed: parameters that measure the typical or central values in the population, and parameters that measure the spread of the values within the population. Parameters describing the central values and the spread of a population are often used for summarizing the distribution of the values in a population; however, it is important to note that most populations cannot be described very well with only the parameters that measure centrality and spread.

Measures of centrality, location, or the typical value are parameters that lie in the "center" or "middle" region of a distribution. Because the center or middle of a distribution is not easily determined, given the wide range of shapes a distribution can take, there are several different parameters that can be used to describe the center of a population. The three most commonly used parameters for describing the center of a population are the mean, median, and mode. For a quantitative variable $X$:

• The mean of a population is the average of all of the units in the population, and will be denoted by $\mu$. The mean of a variable $X$ measured on a population consisting of $N$ units is
$$\mu=\frac{\text{sum of the values of } X}{N}=\frac{\sum X}{N}$$
• The median of a population is the 50th percentile of the population, and will be denoted by $\tilde{\mu}$. The median of a population is found by first listing all of the values of the variable $X$, including repeated $X$ values, in ascending order. When the number of units in the population (i.e., $N$) is an odd number, the median is the middle observation in the list of ordered values of $X$; when $N$ is an even number, the median is the average of the two observations in the middle of the ordered list of $X$ values.
• The mode of a population is the most frequent value in the population, and will be denoted by $M$. In a graph of the probability density function, the mode is the value of $X$ under the peak of the graph, and a population can have more than one mode, as shown in Figure 2.8.

The mean, median, and mode are three different parameters that can be used to measure the center of a population or to describe the typical values in a population. These three parameters will have nearly the same value when the distribution is symmetric or mound shaped. For long-tailed distributions, the mean, median, and mode will be different, and the difference in their values will depend on the length of the distribution's longer tail. Figures $2.12$ and $2.13$ illustrate the relationships between the values of the mean, median, and mode for long-tail right and long-tail left distributions.
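A quick numerical sketch of the three centrality parameters, using Python's standard library (the small population below is invented for illustration):

```python
import statistics

X = [2, 3, 3, 4, 5, 5, 5, 7, 9, 12]   # illustrative population with a long right tail

mu     = statistics.mean(X)     # population mean
median = statistics.median(X)   # 50th percentile
mode   = statistics.mode(X)     # most frequent value

print(mu, median, mode)         # 5.5 5.0 5
```

Here mean > median > mode, which is the pattern described above for long-tail right distributions.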
## Measures of Dispersion

While the mean, median, and mode of a population describe the typical values in the population, these parameters do not describe how the population is spread over its range of values. For example, Figure $2.16$ shows two populations that have the same mean, median, and mode but different spreads. Even though the mean, median, and mode of these two populations are the same, clearly population I is much more spread out than population II. The density of population II is greater at the mean, which means that population II is more concentrated at this point than population I.

When describing the typical values in a population, the more variation there is in the population, the harder it is to measure the typical value; and just as there are several ways of measuring the center of a population, there are also several ways to measure the variation in a population. The three most commonly used parameters for measuring the spread of a population are the variance, standard deviation, and interquartile range. For a quantitative variable $X$ (a numerical sketch follows these definitions):

• The variance of a population is defined to be the average of the squared deviations from the mean and will be denoted by $\sigma^{2}$ or $\operatorname{Var}(X)$. The variance of a variable $X$ measured on a population consisting of $N$ units is
$$\sigma^{2}=\frac{\text{sum of all (deviations from } \mu)^{2}}{N}=\frac{\sum(X-\mu)^{2}}{N}$$
• The standard deviation of a population is defined to be the square root of the variance and will be denoted by $\sigma$ or $\operatorname{SD}(X)$:
$$\operatorname{SD}(X)=\sigma=\sqrt{\sigma^{2}}=\sqrt{\operatorname{Var}(X)}$$
• The interquartile range of a population is the distance between the 25th and 75th percentiles and will be denoted by IQR:
$$\mathrm{IQR}=75\text{th percentile}-25\text{th percentile}=X_{75}-X_{25}$$
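Continuing with the same invented population, the three dispersion parameters can be computed directly from their definitions (note that `statistics.quantiles` implements one of several common percentile conventions, so the IQR may differ slightly from a hand computation):

```python
import statistics

X  = [2, 3, 3, 4, 5, 5, 5, 7, 9, 12]
N  = len(X)
mu = statistics.mean(X)

variance = sum((x - mu) ** 2 for x in X) / N   # average squared deviation from mu (divisor N)
sigma    = variance ** 0.5                     # standard deviation = sqrt(variance)
q1, q2, q3 = statistics.quantiles(X, n=4)      # 25th, 50th, 75th percentiles
iqr      = q3 - q1                             # 75th percentile minus 25th percentile

print(variance, sigma, iqr)                    # 8.45, ~2.91, and the convention-dependent IQR
```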
## Qualitative Variables

Qualitative variables take on nonnumeric values and are usually used to represent a distinct quality of a population unit. When the possible values of a qualitative variable have no intrinsic ordering, the variable is called a nominal variable; when there is a natural ordering of the possible values of the variable, the variable is called an ordinal variable.

An example of a nominal variable is Blood Type, where the standard values for blood type are $\mathrm{A}, \mathrm{B}, \mathrm{AB}$, and $\mathrm{O}$. Clearly, there is no intrinsic ordering of these blood types, and hence, Blood Type is a nominal variable. An example of an ordinal variable is the variable Pain, where a subject is asked to describe their pain verbally as

• No pain,
• Mild pain,
• Discomforting pain,
• Distressing pain,
• Intense pain,
• Excruciating pain.

In this case, since the verbal descriptions describe increasing levels of pain, there is a clear ordering of the possible values of the variable Pain, and therefore, Pain is an ordinal qualitative variable.

Example 2.2 In the Framingham Heart Study of coronary heart disease, the following two nominal qualitative variables were recorded:
$$\text{Smokes}=\begin{cases}\text{Yes}\\ \text{No}\end{cases} \qquad\text{and}\qquad \text{Diabetes}=\begin{cases}\text{Yes}\\ \text{No}\end{cases}$$

Example $2.3$ An example of an ordinal variable is the variable Baldness when measured on the Norwood-Hamilton scale for male-pattern baldness. The variable Baldness is measured according to the seven categories listed below:

• I Full head of hair without any hair loss.
• II Minor recession at the front of the hairline.
• III Further loss at the front of the hairline, which is considered "cosmetically significant."
• IV Progressively more loss along the front hairline and at the crown.
• V Hair loss extends toward the vertex.
• VI Frontal and vertex balding areas merge into one and increase in size.
• VII All hair is lost along the front hairline and crown.

Clearly, the values of the variable Baldness indicate an increasing degree of hair loss, and thus, Baldness as measured on the Norwood-Hamilton scale is an ordinal variable. This variable is also measured on the Offspring Cohort in the Framingham Heart Study.
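The nominal/ordinal distinction maps directly onto unordered and ordered categorical types in data-analysis software. A sketch using the pandas library (assumed available; the recorded values are made up):

```python
import pandas as pd

# Nominal: no intrinsic ordering of the categories.
blood_type = pd.Categorical(["A", "O", "AB", "B", "O"],
                            categories=["A", "B", "AB", "O"], ordered=False)

# Ordinal: the categories carry a natural order, so comparisons are meaningful.
pain = pd.Categorical(["Mild pain", "Intense pain", "No pain"],
                      categories=["No pain", "Mild pain", "Discomforting pain",
                                  "Distressing pain", "Intense pain", "Excruciating pain"],
                      ordered=True)

print(pain < "Distressing pain")   # [True, False, True]
```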
## Quantitative Variables

A quantitative variable is a variable that takes only numeric values. The values of a quantitative variable are said to be measured on an interval scale when the difference between two values is meaningful; the values of a quantitative variable are said to be measured on a ratio scale when the ratio of two values is meaningful. The key difference between the two is that on a ratio scale there is a "natural zero" representing absence of the attribute being measured, while there is no natural zero for variables measured on only an interval scale.

Some scales of measurement have a natural zero and some do not. When a measurement scale has a natural zero, the ratio of two measurements is a meaningful measure of how many times larger one value is than the other. For example, the variable Fat, representing the grams of fat in a food product, is measured on a ratio scale because the value Fat $=0$ indicates that the unit contained absolutely no fat. When a scale of measurement does not have a natural zero, only the difference between two measurements is a meaningful comparison of the values of the two measurements. For example, the variable Body Temperature is measured on a scale that has no natural zero, since Body Temperature $=0$ does not indicate that the body has no temperature.

Since interval scales are ordered, the difference between two values measures how much larger one value is than another. A ratio scale is also an interval scale but has the additional property that the ratio of two values is meaningful. Thus, for a variable measured on an interval scale, the difference of two values is the meaningful way to compare values, and for a variable measured on a ratio scale, both the difference and the ratio of two values are meaningful ways to compare values of the variable. For example, body temperature in degrees Fahrenheit is measured on an interval scale, so it is meaningful to say that a body temperature of $98.6$ and a body temperature of $102.3$ differ by $3.7$ degrees; however, it would not be meaningful to say that a temperature of $102.3$ is $1.04$ times as much as a temperature of $98.6$. On the other hand, the variable weight in pounds is measured on a ratio scale, and therefore it is proper to say that a weight of $210\ \mathrm{lb}$ is $1.4$ times a weight of $150\ \mathrm{lb}$; it is also meaningful to say that a weight of $210\ \mathrm{lb}$ is $60\ \mathrm{lb}$ more than a weight of $150\ \mathrm{lb}$.

## Multivariate Data

In most research problems, there will be many variables that need to be measured. When the collection of variables measured on each unit consists of two or more variables, the data set is called a multivariate data set, and a multivariate data set consisting of only two variables is called a bivariate data set. In a multivariate data set, there is usually one variable that is of primary interest to a research question and that is believed to be explained by some of the other variables measured in the study. The variable of primary interest is called a response variable, and the variables believed to cause changes in the response are called explanatory variables or predictor variables. The explanatory variables are often referred to as the input variables, and the response variable is often referred to as the output variable. Furthermore, in a statistical model, the response variable is the variable that is being modeled; the explanatory variables are the input variables in the model that are believed to cause or explain differences in the response variable.

For example, in studying the survival of melanoma patients, the response variable might be Survival Time, which is expected to be influenced by the explanatory variables Age, Gender, Clark's Stage, and Tumor Size. In this case, a model relating Survival Time to the explanatory variables Age, Gender, Clark's Stage, and Tumor Size might be investigated in the research study.

A multivariate data set often consists of a mixture of qualitative and quantitative variables. For example, in a biomedical study, several variables that are commonly measured are a subject's age, race, gender, height, and weight. When data have been collected, the multivariate data set is generally stored in a spreadsheet, with the columns containing the data on each variable and the rows containing the observations on each subject in the study.
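A minimal sketch of the spreadsheet layout just described, using pandas (assumed available; the subjects, codings, and values are invented, following the melanoma example above):

```python
import pandas as pd

# One row per subject; explanatory variables in columns, response variable last.
study = pd.DataFrame({
    "Age":          [61, 54, 70],
    "Gender":       ["F", "M", "F"],
    "ClarksStage":  [2, 3, 4],          # hypothetical stage codes
    "TumorSize":    [1.2, 3.4, 2.1],    # hypothetical sizes
    "SurvivalTime": [48, 17, 25],       # response variable (hypothetical, in months)
})
print(study)
```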
In studying the response variable, it is often the case that there are subpopulations, determined by a particular set of values of the explanatory variables, that will be important in answering the research questions. In this case, it is critical that a variable be included in the data set that identifies which subpopulation each unit belongs to. For example, in the National Health and Nutrition Examination Survey (NHANES) study, the distribution of the weight of female children was studied. The response variable in this study was weight, and some of the explanatory variables measured in this study were height, age, and gender. The result of this part of the NHANES study was a distribution of the weights of females over a certain range of ages. The resulting distributions were summarized in the chart given in Figure $2.2$, which shows the weight ranges for females at several different ages.

## The Phases of a Clinical Trial

Clinical research is often conducted in a series of steps, called phases. Because a new drug, medicine, or treatment must be safe, effective, and manufactured at a consistent quality, a series of rigorous clinical trials is usually required before the drug, medicine, or treatment can be made available to the general public. In the United States, the FDA regulates and oversees the testing and approval of new drugs as well as dietary supplements, cosmetics, medical devices, blood products, and the content of health claims on food labels.
The approval of a new drug by the FDA requires extensive testing and evaluation of the drug through a series of four clinical trials, which are referred to as phase I, II, III, and IV trials. Each of the four phases is designed with a different purpose and to provide the necessary information to help biomedical researchers answer several different questions about a new drug, treatment, or biomedical procedure. After a clinical trial is completed, the researchers use biostatistical methods to analyze the data collected during the trial and make decisions and draw conclusions about the meaning of their findings and whether further studies are needed. After each phase in the study of a new drug or treatment, the research team must decide whether to proceed to the next phase or stop the investigation of the drug/treatment. Formal approval of a new drug or biomedical procedure generally cannot be made until a phase III trial is completed and there is strong evidence that the drug/treatment is safe and effective.

The purpose of a phase I clinical trial is to investigate the safety, efficacy, and side effects of a new drug or treatment. Phase I trials usually involve a small number of subjects and take place at a single location or only a few different locations. In a drug trial, the goal of a phase I trial is often to investigate the metabolic and pharmacologic actions of the drug, the efficacy of the drug, and the side effects associated with different dosages of the drug. Phase I drug trials are also referred to as dose-finding trials.

## POPULATIONS AND VARIABLES

In a properly designed biomedical research study, a well-defined target population and a particular set of research questions dictate the variables that should be measured on the units being studied in the research project. In most research problems, there are many variables that must be measured on each unit in the population. The outcome variables that are of primary interest are called the response variables, and the variables that are believed to explain the response variables are called the explanatory variables or predictor variables. For example, in a clinical trial designed to study the efficacy of a specialized treatment designed to reduce the size of a malignant tumor, the following explanatory variables might be recorded for each patient in the study: age, gender, race, weight, height, blood type, blood pressure, and oxygen uptake. The response variable in this study might be the change in the size of the tumor.

Variables come in a variety of different types; however, each variable can be classified as being either quantitative or qualitative in nature. A variable that takes on only numeric values is a quantitative variable, and a variable that takes on non-numeric values is called a qualitative variable or a categorical variable. Note that a variable is quantitative or qualitative based on the possible values the variable can take on.

Example $2.1$ In a study of obesity in the population of children aged 10 or less in the United States, some possible quantitative variables that might be measured include age, height, weight, heart rate, body mass index, and percent body fat; some qualitative variables that might be measured on this population include gender, eye color, race, and blood type.
A likely choice for the response variable in this study would be the qualitative variable Obese, defined by
$$\text{Obese}=\begin{cases}\text{Yes} & \text{for a body mass index} > 30\\ \text{No} & \text{for a body mass index} \leq 30\end{cases}$$
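The response variable of Example 2.1 is just a threshold on body mass index; as a one-line sketch in Python (the BMI values are made up):

```python
def obese(bmi: float) -> str:
    """Example 2.1's response variable: 'Yes' when BMI > 30, 'No' otherwise."""
    return "Yes" if bmi > 30 else "No"

print(obese(27.4), obese(31.2))   # No Yes
```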
2022-10-04 06:24:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.74098801612854, "perplexity": 559.9349483296938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00026.warc.gz"}
https://proofwiki.org/wiki/Integer_Reciprocal_Space_with_Zero_is_not_Extremally_Disconnected
# Integer Reciprocal Space with Zero is not Extremally Disconnected

## Theorem

Let $A \subseteq \R$ be the set of all points on $\R$ defined as:

$A := \set 0 \cup \set {\dfrac 1 n : n \in \Z_{>0} }$

Let $\struct {A, \tau_d}$ be the integer reciprocal space with zero under the usual (Euclidean) topology.

Then $A$ is not extremally disconnected.

## Proof

$\struct {A, \tau_d}$ is a metric space.

We have: Extremally Disconnected Metric Space is Discrete.

We also have: Topological Space is Discrete iff All Points are Isolated.

From Zero is Limit Point of Integer Reciprocal Space and the definition of a limit point: $0$ is not an isolated point of $A$.

Hence the integer reciprocal space with zero is not the discrete space, and the result follows.

$\blacksquare$
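The limit-point step invoked above can be spelled out in one line (an elaboration, not part of the cited page):

$$\forall \varepsilon > 0: \exists n \in \Z_{>0}: 0 < \frac 1 n < \varepsilon$$

so every open neighborhood of $0$ contains a point of $A \setminus \set 0$, and hence $0$ is a limit point of $A$ and not an isolated point.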
2023-03-25 01:56:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9670005440711975, "perplexity": 567.1951213232911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00441.warc.gz"}
https://forum.allaboutcircuits.com/threads/z-transform.28589/
# z-transform

#### jut Joined Aug 25, 2007 224

I'm trying to take the (two-sided, i.e., defined for all $n$) z-transform of $x(n)=\sin(Bn)$. What I tried to do was split $x(n)$ into $x(n)=\sin(Bn)u(n)+\sin(Bn)u(-n-1)$. But a problem arises now, because the z-transform of the first term equals the negative of the z-transform of the second, which makes the z-transform equal to 0. Waaah? Could this be?

#### vvkannan Joined Aug 9, 2008 138

I don't think the z-transform exists. The right-sided sequence converges only when the magnitude of $z$ is greater than 1, and the left-sided sequence converges only when the magnitude of $z$ is less than 1, hence the whole series is not summable.
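For reference, the standard table results for the two one-sided pieces (writing $\mathcal{Z}$ for the z-transform) make the cancellation, and the point about regions of convergence, explicit:

$$\mathcal{Z}\{\sin(Bn)\,u(n)\}=\frac{z\sin B}{z^{2}-2z\cos B+1},\quad |z|>1$$
$$\mathcal{Z}\{\sin(Bn)\,u(-n-1)\}=-\frac{z\sin B}{z^{2}-2z\cos B+1},\quad |z|<1$$

The two algebraic expressions do cancel, but they hold on disjoint regions ($|z|>1$ versus $|z|<1$), so there is no annulus on which the bilateral sum converges: the two-sided transform of $\sin(Bn)$ does not exist, rather than being $0$.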
2019-10-23 16:19:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9144962430000305, "perplexity": 1627.5739893425223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987834649.58/warc/CC-MAIN-20191023150047-20191023173547-00291.warc.gz"}
https://mathoverflow.net/questions/358162/freyd-mitchell-for-k-linear-categories
# Freyd-Mitchell for $k$-linear categories

I don't know much about the proof of the Freyd–Mitchell embedding theorem and I could not find an answer to my question looking naïvely online, but at the same time I feel like this is the kind of question to which someone who knows some of the details of the proof might be able to answer immediately, so it's probably worth trying. Here it is:

Can the Freyd–Mitchell embedding theorem be made stronger for $k$-linear abelian categories (where $k$ is a field), saying that not only, if $\mathcal{A}$ is a small abelian $k$-linear category, there exists a ring $R$ and a full, faithful, exact functor $F: \mathcal{A} \to R\text{-}\mathrm{Mod}$, but that, moreover, $R$ can be assumed to be a $k$-algebra and $F$ to be $k$-linear?

More in general (also for non-$k$-linear categories): can one say anything about $R$? Is there even a unique "minimal" $R$ (up to Morita equivalence)?

Well, if $\mathcal{A}$ is a small $k$-linear abelian category, then the embedding is given by the following: First we put $\mathcal{A}$ inside $\mathcal{L}(\mathcal{A},\operatorname{Ab})$, the category of left exact additive functors from $\mathcal{A}$ to the category of abelian groups $\operatorname{Ab}$, by considering the contravariant Yoneda embedding $\mathcal{Y} : \mathcal{A} \longrightarrow \mathcal{L}(\mathcal{A},\operatorname{Ab})$ which sends $A$ to $\operatorname{Hom}_{\mathcal{A}}(A,{-})$. Since $\mathcal{A}$ is $k$-linear, we may show that $\mathcal{L}(\mathcal{A},\operatorname{Ab})$ is also $k$-linear and that $\mathcal{Y}$ is a $k$-linear functor. ($\mathcal{Y}$ is also exact.)

Now, $\mathcal{L}(\mathcal{A},\operatorname{Ab})$ is a complete abelian $k$-linear category possessing an injective cogenerator. Then we apply the duality functor $D$ in $\mathcal{L}(\mathcal{A},\operatorname{Ab})$ and we obtain a covariant (exact) $k$-linear embedding $D \mathcal{Y} :\mathcal{A} \longrightarrow \mathcal{L}(\mathcal{A},\operatorname{Ab})^{op}$.

Finally, we know that $\mathcal{L}(\mathcal{A},\operatorname{Ab})^{op}$ is a cocomplete abelian category possessing a projective generator $P$, and we take a certain coproduct of copies of $P$, obtaining an object $Q$. Then we take the ring $R = \operatorname{End}(Q)$, which is a $k$-algebra, and we consider the exact embedding $T : \mathcal{L}(\mathcal{A},\operatorname{Ab})^{op} \longrightarrow {\operatorname{Mod}}R$ defined by $T(X) = \operatorname{Hom}(Q,X)$, which is also $k$-linear.

Therefore, the embedding of $\mathcal{A}$ into ${\operatorname{Mod}}R$ is given by $TD \mathcal{Y} : \mathcal{A} \longrightarrow {\operatorname{Mod}}R$ and it is a $k$-linear functor.

Remarks: I took Mitchell's book "Theory of Categories" (MSN) as a reference for this answer.

• I agree with the conclusion that $R$ can be taken to be a $k$-algebra, with the embedding $k$-linear. But this is not how the Freyd-Mitchell embedding is constructed. Firstly, your construction embeds $\mathcal{A}$ into $\mathcal{L}(\mathcal{A}^{\text{op}},\operatorname{Ab})$, not $\mathcal{L}(\mathcal{A},\operatorname{Ab})$. And also, it is not true in general that $\mathcal{L}(\mathcal{A},\operatorname{Ab})$ has a projective generator. But it does have an injective cogenerator, which is what is used to construct the Freyd-Mitchell embedding. – Jeremy Rickard Apr 22 '20 at 9:27
• @JeremyRickard thank you! I edited the answer and I think that now it is right. – user144185 Apr 22 '20 at 11:29
2021-01-22 14:12:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 48, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9428807497024536, "perplexity": 124.07479275741069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529331.99/warc/CC-MAIN-20210122113332-20210122143332-00242.warc.gz"}
http://nrich.maths.org/269/solution
# Degree Ceremony

##### Stage: 5 Challenge Level:

In this diagram the right angled triangle has hypotenuse of length 1 unit, so the lengths of the sides are $\sin(45-x)^\circ$ and $\sin(45+x)^\circ$. By Pythagoras' Theorem:
$$\sin^2(45-x)^\circ + \sin^2(45+x)^\circ = 1.$$
These pairings of values that add up to 1 are useful in evaluating the expression:
$$A = \sin^2 1^\circ + \sin^2 2^\circ + ... + \sin^2 359^\circ + \sin^2 360^\circ.$$

Ella Ryan from Madras College, St Andrew's based her solution on the symmetries of the graph of $y=\sin^2 x$. Consider the graph of $y=\sin^2 x^\circ$ between $x=1$ and $x=89$ inclusive and pairs of points having $y$ values which, when added together, always equal one. This result is equivalent to Pythagoras' Theorem as explained above. For example,
$$\sin^2 50^\circ + \sin^2 40^\circ = 1.$$
The pairs of points can be labelled $(45-x)^\circ$ and $(45+x)^\circ$ or alternatively $x^\circ$ and $(90-x)^\circ$.

Essentially the same method was used both by Ella and also by Hou Yang Yang, Millfield School, Somerset, U.K. Firstly we use the following symmetry properties of the sine function:
$$\sin^2 1^\circ = \sin^2 179^\circ = \sin^2 181^\circ = \sin^2 359^\circ,$$
$$\sin^2 2^\circ = \sin^2 178^\circ = \sin^2 182^\circ = \sin^2 358^\circ, ...$$
$$\sin^2 89^\circ = \sin^2 91^\circ = \sin^2 269^\circ = \sin^2 271^\circ$$
and also $\sin^2 90^\circ = \sin^2 270^\circ = 1$ and $\sin 180^\circ = \sin 360^\circ = 0.$

Pairing equal values in $A$ gives:
\begin{eqnarray} A &=& 2(\sin^2 1^\circ + \sin^2 2^\circ + ... + \sin^2 179^\circ)\\ &=& 4(\sin^2 1^\circ + \sin^2 2^\circ + ... + \sin^2 89^\circ) + 2\sin^2 90^\circ . \end{eqnarray}

Then taking pairs that add up to 1 we get:
\begin{eqnarray} A &=& 4[(\sin^2 1^\circ + \sin^2 89^\circ) + (\sin^2 2^\circ + \sin^2 88^\circ) + ... \\ && + (\sin^2 44^\circ + \sin^2 46^\circ) + \sin^2 45^\circ] + 2\\ &=& 4[44 + 0.5] + 2 \\ &=& 180. \end{eqnarray}
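The final value is easy to spot-check numerically; a one-liner in Python (standard library only):

```python
import math

# Sum sin^2(k degrees) for k = 1, ..., 360.
total = sum(math.sin(math.radians(k)) ** 2 for k in range(1, 361))
print(total)   # ~180.0 up to floating-point rounding, matching A = 180
```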
2014-10-26 08:38:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9731068015098572, "perplexity": 1462.328792919162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119661285.56/warc/CC-MAIN-20141024030101-00238-ip-10-16-133-185.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1369305/what-is-the-difference-between-these-two-limits-one-with-lim-limits-x-to0
# What is the difference between these two limits, one with $\lim\limits_{x\to0^{+}}$, the other with $\lim\limits_{x\to 0}$? I don't need an exact answer, I just need to know how these two limits would affect the answer and if there is a huge difference on how they are worked out, if they have a different step-by-step solution. 1. $\large \lim\limits_{x\to0^{+}}\dfrac{x}{\tan(7x)}$ 2. $\large \lim\limits_{x\to0}\dfrac{x}{\tan(7x)}$ • do you know the value of $\tan(0)$ ? – reuns Jul 21 '15 at 23:10 • @reuns Yes, it is 0, I just need to know if I can still use the Limit Laws on #1 as I would on #2. – Sam Perales Jul 21 '15 at 23:13 • do you know the value of $\tan'(0)$ ? do you know a theorem about $\lim_{x\to 0} \frac{f'(x)}{x}$ ? – reuns Jul 21 '15 at 23:19 The function $$x \longmapsto f(x)=\frac{x}{\tan (7x)}$$ is even, thus in this case $$\lim_{\large x \to 0^-}\frac{x}{\tan (7x)} =\lim_{\large x \to 0^+}\frac{x}{\tan (7x)}=\lim_{\large x \to 0}\frac{x}{\tan (7x)}=\frac17\lim_{\large x \to 0}\frac{7x}{\tan (7x)}=\frac17$$ where we have used the standard result $$\lim_{\large x \to 0}\frac{\tan x}x=1.$$ • @SamPerales Because I know that $\lim_{\large u \to 0}\frac{\tan u}{u}=1$, and as I have $u=7x$, I need a factor $7$ to obtain $u$... Hoping it is clear now for you. Thanks. – Olivier Oloa Jul 21 '15 at 23:20 • Oh, I see. The main goal was to get the tan x to the top. – Sam Perales Jul 21 '15 at 23:22
2019-06-27 04:43:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8764659762382507, "perplexity": 203.6537584603465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000613.45/warc/CC-MAIN-20190627035307-20190627061307-00408.warc.gz"}
http://ebooks.asmedigitalcollection.asme.org/content.aspx?bookid=232&sectionid=39221308
Chapter 11 Impact, Fatigue and Wear

Excerpt

The three principal damage mechanisms resulting from flow-induced vibrations are impact (which may result in fatigue wear), fatigue and wear. Because turbulence-induced vibration is random, the zero-to-peak vibration amplitudes can occasionally exceed several times the computed rms response. Thus, the fact that the nearest component is three times the rms vibration amplitude away does not guarantee that impacting between the two components will not occur. The number of impacts between two vibrating components over a given time period can be estimated based on the theory of probability, assuming that the vibration amplitudes follow a Gaussian distribution.

Likewise, since the zero-to-peak amplitude of vibration of a structure excited by flow turbulence can, over a long enough period of time, exceed arbitrarily large values, there is no endurance limit in random vibration. Given a long enough time, any structure excited by any random force will theoretically fail by fatigue. The cumulative fatigue usage can again be calculated based on probabilistic theory. Since in cumulative fatigue analysis the usage factor is computed based on the absolute value of the zero-to-peak vibration amplitudes, which follow the Rayleigh distribution function if the ± vibration amplitudes follow the Gaussian distribution, the Rayleigh probability distribution function must be used in computing the cumulative fatigue usage factor of a component excited by turbulent flow. From the ASME fatigue curves (which are based on the zero-to-peak vibration amplitudes), corresponding fatigue curves based on rms vibration amplitudes have been derived for several types of materials. These are given in Figures 11.6 to 11.10.

Compared with fatigue usage calculations, wear analysis due to flow-induced vibration is orders of magnitude more complex. This is because the wear mechanisms are not only dependent on the dynamics of the structures, but also on the material and the ambient conditions. Generally, there are three major types of wear mechanisms: impact wear, sliding wear and fretting wear. Impact wear is that caused by moderate to fairly large vibration amplitudes, with resulting high impact forces that can cause surface fatigue and rapid failure of the structure. Blevins (1984) proposed the simple equation
$$s_{\mathrm{rms}}=c\left(\frac{E^{4}M_{e}f_{n}^{2}\,y_{\max}^{2}}{D^{3}}\right)^{1/5}$$
to estimate the rms surface stress of a heat exchanger tube impacting its support, with the contact stress parameter $c$ obtained from tests (these are given in Figure 11.13). Blevins postulated that if the computed stress is below the endurance limit, then impact wear is not a concern. However, if it exceeds the endurance limit, then rapid wear of the material can be expected.

• Summary
• Nomenclature
• 11.1 Introduction
• 11.2 Impacts due to Turbulence-Induced Vibration
• Example 11.1
• 11.3 Cumulative Fatigue Usage due to Turbulence-Induced Vibration
• Crandall's Method
• Cumulative Fatigue Usage by Numerical Integration
• Fatigue Curves based on RMS Stress
• 11.4 Wear due to Flow-Induced Vibration
• Impact Wear
• Example 11.2
• Sliding Wear
• Example 11.3
• Fretting Wear
• Fretting Wear Coefficient
• 11.5 Fretting Wear and the Dynamics of a Loosely Supported Tube
• Connors' Approximate Method for Fretting Wear
• Example 11.4
• The Energy Method for Fretting Wear Estimate
• Example 11.5
• References

Topics: Fatigue, Wear
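The opening point, that a Gaussian response spends a small but nonzero fraction of time beyond any multiple of its rms value, can be sketched with the standard normal tail formula $P(|x|>k\sigma)=\operatorname{erfc}(k/\sqrt{2})$. This is a simplified illustration only, not the chapter's impact-counting method; the function name is invented:

```python
import math

def gaussian_exceedance(k: float) -> float:
    """Fraction of time |x| exceeds k times the rms for a zero-mean Gaussian response."""
    return math.erfc(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(k, gaussian_exceedance(k))   # ~0.317, ~0.0455, ~0.0027
```

Even at three times the rms amplitude, the response is exceeded roughly 0.3% of the time, which is why a three-rms clearance does not rule out impacting.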
2018-12-15 23:26:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6325465440750122, "perplexity": 3788.9738494444155}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827137.61/warc/CC-MAIN-20181215222234-20181216004234-00256.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/for-reaction-2a-b-a2b-rate-k-a-b-2-with-k-20-10-6-mol-2-l2-s-1-calculate-initial-rate-reaction-when-a-01-mol-l-1-b-02-mol-l-1-calculate-rate-reaction-after-a-reduced-006-mol-l-1-rate-chemical-reaction_9104
Department of Pre-University Education, Karnataka (PUC Karnataka Science Class 12)

# For the reaction: 2A + B → A₂B, the rate = k[A][B]² with k = 2.0 × 10⁻⁶ mol⁻² L² s⁻¹. Calculate the initial rate of the reaction when [A] = 0.1 mol L⁻¹, [B] = 0.2 mol L⁻¹. Calculate the rate of reaction after [A] is reduced to 0.06 mol L⁻¹. - Chemistry

For the reaction: 2A + B → A₂B, the rate = k[A][B]² with k = 2.0 × 10⁻⁶ mol⁻² L² s⁻¹. Calculate the initial rate of the reaction when [A] = 0.1 mol L⁻¹, [B] = 0.2 mol L⁻¹. Calculate the rate of reaction after [A] is reduced to 0.06 mol L⁻¹.

#### Solution

The initial rate of the reaction is

Rate = k[A][B]² = (2.0 × 10⁻⁶ mol⁻² L² s⁻¹)(0.1 mol L⁻¹)(0.2 mol L⁻¹)² = 8.0 × 10⁻⁹ mol L⁻¹ s⁻¹

When [A] is reduced from 0.1 mol L⁻¹ to 0.06 mol L⁻¹, the concentration of A reacted = (0.1 − 0.06) mol L⁻¹ = 0.04 mol L⁻¹

Therefore, the concentration of B reacted = (1/2) × 0.04 mol L⁻¹ = 0.02 mol L⁻¹

Then, the concentration of B available, [B] = (0.2 − 0.02) mol L⁻¹ = 0.18 mol L⁻¹

After [A] is reduced to 0.06 mol L⁻¹, the rate of the reaction is given by

Rate = k[A][B]² = (2.0 × 10⁻⁶ mol⁻² L² s⁻¹)(0.06 mol L⁻¹)(0.18 mol L⁻¹)² = 3.89 × 10⁻⁹ mol L⁻¹ s⁻¹

#### APPEARS IN

NCERT Class 12 Chemistry Textbook, Chapter 4 Chemical Kinetics, Q 2 | Page 117
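The arithmetic is a direct substitution into the rate law; a quick sketch in Python (the function name is invented):

```python
k = 2.0e-6   # rate constant, mol^-2 L^2 s^-1

def rate(A: float, B: float) -> float:
    """rate = k[A][B]^2 for the reaction 2A + B -> A2B."""
    return k * A * B ** 2

print(rate(0.10, 0.20))   # 8.0e-09   mol L^-1 s^-1 (initial rate)
print(rate(0.06, 0.18))   # ~3.89e-09 mol L^-1 s^-1 (after [A] drops to 0.06 mol L^-1)
```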
2021-03-07 13:20:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43474650382995605, "perplexity": 4840.583890177258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376467.86/warc/CC-MAIN-20210307105633-20210307135633-00200.warc.gz"}
https://socratic.org/questions/what-is-the-rate-of-change-of-the-width-in-ft-sec-when-the-height-is-10-feet-if-
# A rectangle has both a changing height and a changing width, but the height and width change so that the area of the rectangle is always 60 square feet. What is the rate of change of the width (in ft/sec) when the height is 10 feet, if the height is decreasing at that moment at the rate of 1 ft/sec?

Mar 16, 2015

The rate of change of the width with time is $\frac{\mathrm{dW}}{\mathrm{dt}} = 0.6\ \text{ft/s}$.

$\frac{\mathrm{dW}}{\mathrm{dt}} = \frac{\mathrm{dW}}{\mathrm{dh}} \times \frac{\mathrm{dh}}{\mathrm{dt}}$

$\frac{\mathrm{dh}}{\mathrm{dt}} = - 1\ \text{ft/s}$

So $\frac{\mathrm{dW}}{\mathrm{dt}} = \frac{\mathrm{dW}}{\mathrm{dh}} \times (- 1) = - \frac{\mathrm{dW}}{\mathrm{dh}}$

$W \times h = 60$

$W = \frac{60}{h}$

$\frac{\mathrm{dW}}{\mathrm{dh}} = - \frac{60}{h^{2}}$

So $\frac{\mathrm{dW}}{\mathrm{dt}} = - \left(- \frac{60}{h^{2}}\right) = \frac{60}{h^{2}}$

So when $h = 10$: $\frac{\mathrm{dW}}{\mathrm{dt}} = \frac{60}{10^{2}} = 0.6\ \text{ft/s}$
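The chain-rule computation can be verified symbolically; a sketch with SymPy (assumed available), substituting the known derivative before the height:

```python
import sympy as sp

t = sp.symbols('t')
h = sp.Function('h')(t)          # height as a function of time
W = 60 / h                       # width from the area constraint W*h = 60

dWdt = sp.diff(W, t)             # equals -60/h**2 * dh/dt by the chain rule
value = dWdt.subs(sp.Derivative(h, t), -1).subs(h, 10)
print(value)                     # 3/5, i.e. 0.6 ft/s
```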
2022-09-28 20:10:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8866586685180664, "perplexity": 326.7052008578253}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00500.warc.gz"}
https://zulfahmed.wordpress.com/2014/04/24/unified-field-theories-a-century-after-nordstroms-five-dimensional-paper/
A great deal of effort went into this stream of research but we do not yet have a unified field theory. I suggest that part of the problem of a unified field theory is the lack of restrictions of various kinds that strongly tie the effort to nature. In all my work on an S4 physics I have emphasized two restrictions that I consider empirical and fundamental: that the objective universe has four macroscopic spatial dimensions (this is not the consensus view), by an analysis of the observed crystal rotational symmetries of orders 5, 8, 10 and 12; and the compactness of the universe, which follows from Gaussian upper bounds on heat kernels for complete noncompact riemannian manifolds. These imply that we should restrict the domain of unified field theories only to four-dimensional compact manifolds (five-dimensional spacetime), and in fact that we have concrete evidence that we should seek unified field theories only for a scaled 4-sphere of radius $1/h$.

These restrictions would presumably make our task easier. Note that the structure of space has gone through extreme attempts at conceptualization. Weyl's geometry, for example, generalized Riemann's metric geometry. My personal view is that these are curiosities from the point of view of actual description of nature. Weyl's attempts at unified field theory led to gauge invariance and non-abelian gauge theories, so his ideas obviously have found application in the physical description of nature, but I think for the geometry of actual spacetime, Riemann's geometry is appropriate. In fact, the important geometric feature of our actual universe, assuming it compact and four dimensional, is that it be spin in the sense that there be no topological obstructions to lifting the $SO(4)$ frame bundle to its double cover $Spin(4)=SU(2)\times SU(2)$.

Now if we consider the unified field theory problem on a scaled 4-sphere, a natural choice is to model electromagnetism by a 1-form on the full space $S^4(1/h)$, and the gravitational metric to be induced from the embedding of a three-dimensional hypersurface $M$. I don't yet know the right Lagrangian such that the solution is the choice of the physical hypersurface, with the Ricci curvature equation for the hypersurface exactly identical to the gravitational field equations of Einstein but where the second fundamental form terms are determined by the electromagnetic 1-form. In this picture the issue is not quite a 'unified field' where the gravitational potential and electromagnetic potential are tied into a single metric or field (which was one of the guiding principles of the unification efforts of the early 20th century), but rather to begin with a 4-sphere and then reduce gravity to electromagnetism via the choice of a hypersurface. Gravitation arises in this picture not simply as a choice of a metric on a fixed topological space but rather as the choice of an embedding and an induced metric for a hypersurface constrained to remain in a 4-sphere, the evidence for which we assume is obtained independently of this unification effort.
2018-03-21 03:16:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.641609787940979, "perplexity": 366.5465778223361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647567.36/warc/CC-MAIN-20180321023951-20180321043951-00727.warc.gz"}
http://math.stackexchange.com/questions/390640/closed-form-for-int-01-log-log-left-frac1x-sqrt-frac1x2-1-right
# Closed form for $\int_0^1\log\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\mathrm dx$

Please help me to find a closed form for the following integral:
$$\int_0^1\log\left(\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\right)\,{\mathrm d}x.$$
I was told it could be calculated in a closed form.

• Do you have any idea of what could be done? Did the person that told you there was a closed form hint you in any way? Don't you know what the result should be? – Pedro Tamaroff May 13 '13 at 18:45
• It may or may not help to realize $\log\left(\frac{1}{x} + \sqrt{\frac{1}{x^2} - 1}\right)$ as $\operatorname{arsech} x$. This isn't a hint, just an observation. – Stahl May 13 '13 at 18:48
• @Stahl I was thinking about that too. – Pedro Tamaroff May 13 '13 at 18:48
• @PeterTamaroff Unfortunately, no hints were given, except that the closed form is quite simple, although it might be not elementary. – Laila Podlesny May 13 '13 at 18:49
• Here are some equivalent forms: Since $\text{sech}^{-1}(x)=\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^{2}}-1}\right),$ we are trying to evaluate $$\int_{0}^{1}\log\left(\text{sech}^{-1}(x)\right)dx.$$ Let $u=\text{sech}^{-1}x$ so that $x=\text{sech}(u)=\frac{1}{\cosh(u)}.$ Then $dx=d(\frac{1}{\cosh(u)})$, and we are looking at $$-\int_{0}^{\infty}\log u\, d\left(\frac{1}{\cosh(u)}\right)$$ which equals $$\int_{0}^{\infty}\log u\frac{\sinh(u)}{\cosh(u)^{2}}du.$$ Using integration by parts, this becomes $$\lim_{a\rightarrow0}\left(\log a+\int_{a}^{\infty}\frac{\text{sech}(u)}{u}du\right).$$ – Eric Naslund May 13 '13 at 19:14

After the change of variables $x=\frac{1}{\cosh u}$ the integral becomes
$$\int_0^{\infty}\ln u \frac{\sinh u}{\cosh^2 u}du,$$
as was noticed above by Eric. We would like to integrate by parts to kill the logarithm, but we get two divergent pieces. To go around this, let us consider another integral,
$$I(s)=\int_0^{\infty}u^s \frac{\sinh u}{\cosh^2 u}du,\tag{1}$$
with $s>0$. The integral we actually want to compute is equal to $I'(0)$, which will later be obtained in the limit. Indeed, integrating once by parts one finds that
\begin{align} I(s)&=s\int_0^{\infty}\frac{u^{s-1}du}{\cosh u}=s\cdot 2^{1-2 s}\Gamma(s)\left[\zeta\left(s,\frac14\right)-\zeta\left(s,\frac34\right)\right]=\\ &=2^{1-2 s}\Gamma(s+1)\left[\zeta\left(s,\frac14\right)-\zeta\left(s,\frac34\right)\right], \end{align}
where $\zeta(s,a)=\sum_{n=0}^{\infty}(n+a)^{-s}$ denotes the Hurwitz zeta function (here we have used its integral representation (5) from here). Therefore,
$$\int_0^1\log\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\mathrm dx=-\gamma-2\ln 2-2\ln\frac{\Gamma(\frac34)}{\Gamma(\frac14)}.$$
To get the last expression, it suffices to use
\begin{align} &\frac{\partial}{\partial s}\left[2^{1-2 s}\Gamma(s+1)\right]_{s=0}=-2\gamma-4\ln 2,\\ &\zeta\left(0,\frac14\right)-\zeta\left(0,\frac34\right)=\frac12, \\ &\frac{\partial}{\partial s}\left[\zeta\left(s,\frac14\right)-\zeta\left(s,\frac34\right)\right]_{s=0}=-\ln\frac{\Gamma(\frac34)}{\Gamma(\frac14)}. \end{align}
[See formulas (10) and (16) on the same page.]
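The closed form is straightforward to confirm numerically; a sketch with mpmath (assumed available), whose tanh-sinh quadrature copes with the logarithmic endpoint singularity at $x=1$:

```python
from mpmath import mp, mpf, log, sqrt, quad, gamma, euler

mp.dps = 30

# Integrand log(log(1/x + sqrt(1/x^2 - 1))) on (0, 1).
f = lambda x: log(log(1/x + sqrt(1/x**2 - 1)))
numeric = quad(f, [0, 1])

closed = -euler - 2*log(2) - 2*log(gamma(mpf(3)/4) / gamma(mpf(1)/4))
print(numeric)   # ~0.206
print(closed)    # agrees with the quadrature
```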
https://www.shaalaa.com/textbook-solutions/c/selina-solutions-concise-mathematics-class-10-icse-chapter-21-trigonometrical-identities_363
# Selina solutions for Concise Maths Class 10 ICSE chapter 21 - Trigonometrical Identities [Latest edition]

## Chapter 21: Trigonometrical Identities

Exercise 21 (A) | Exercise 21 (B) | Exercise 21 (C) | Exercise 21 (D) | Exercise 21 (E)

### Selina solutions for Concise Maths Class 10 ICSE Chapter 21 Trigonometrical Identities Exercise 21 (A) [Pages 324 - 325]

Exercise 21 (A) | Q 1 | Page 324: Prove (secA - 1)/(secA + 1) = (1 - cosA)/(1 + cosA)
Exercise 21 (A) | Q 2 | Page 324: Prove (1 + sinA)/(1 - sinA) = (cosecA + 1)/(cosecA - 1)
Exercise 21 (A) | Q 3 | Page 324: Prove 1/(tanA + cotA) = cosA sinA
Exercise 21 (A) | Q 4 | Page 324: Prove tanA - cotA = (1 - 2cos^2A)/(sinA cosA)
Exercise 21 (A) | Q 5 | Page 324: Prove sin^4A - cos^4A = 2sin^2A - 1
Exercise 21 (A) | Q 6 | Page 324: Prove (1 - tanA)^2 + (1 + tanA)^2 = 2sec^2A
Exercise 21 (A) | Q 7 | Page 324: Prove cosec^4A - cosec^2A = cot^4A + cot^2A
Exercise 21 (A) | Q 8 | Page 324: Prove secA (1 - sinA)(secA + tanA) = 1
Exercise 21 (A) | Q 9 | Page 324: Prove cosecA (1 + cosA)(cosecA - cotA) = 1
Exercise 21 (A) | Q 10 | Page 324: Prove sec^2A + cosec^2A = sec^2A cosec^2A
Exercise 21 (A) | Q 11 | Page 324: Prove ((1 + tan^2A) cotA)/(cosec^2A) = tanA
Exercise 21 (A) | Q 12 | Page 324: Prove tan^2A - sin^2A = tan^2A sin^2A
Exercise 21 (A) | Q 13 | Page 324: Prove cot^2A - cos^2A = cos^2A cot^2A
Exercise 21 (A) | Q 14 | Page 324: Prove (cosecA + sinA)(cosecA - sinA) = cot^2A + cos^2A
Exercise 21 (A) | Q 15 | Page 324: Prove (secA - cosA)(secA + cosA) = sin^2A + tan^2A
Exercise 21 (A) | Q 16 | Page 324: Prove (cosA + sinA)^2 + (cosA - sinA)^2 = 2
Exercise 21 (A) | Q 17 | Page 324: Prove (cosecA - sinA)(secA - cosA)(tanA + cotA) = 1
Exercise 21 (A) | Q 18 | Page 324: Prove 1/(secA + tanA) = secA - tanA
Exercise 21 (A) | Q 19 | Page 324: Prove cosecA + cotA = 1/(cosecA - cotA)
Exercise 21 (A) | Q 20 | Page 324: Prove (secA - tanA)/(secA + tanA) = 1 - 2secA tanA + 2tan^2A
Exercise 21 (A) | Q 21 | Page 324: Prove (sinA + cosecA)^2 + (cosA + secA)^2 = 7 + tan^2A + cot^2A
Exercise 21 (A) | Q 22 | Page 324: Prove sec^2A cosec^2A = tan^2A + cot^2A + 2
Exercise 21 (A) | Q 23 | Page 324: Prove 1/(1 + cosA) + 1/(1 - cosA) = 2cosec^2A
Exercise 21 (A) | Q 24 | Page 324: Prove 1/(1 - sinA) + 1/(1 + sinA) = 2sec^2A
Exercise 21 (A) | Q 25 | Page 324: Prove cosecA/(cosecA - 1) + cosecA/(cosecA + 1) = 2sec^2A
Exercise 21 (A) | Q 26 | Page 324: Prove secA/(secA + 1) + secA/(secA - 1) = 2cosec^2A
Exercise 21 (A) | Q 27 | Page 324: Prove (1 + cosA)/(1 - cosA) = tan^2A/(secA - 1)^2
Exercise 21 (A) | Q 28 | Page 324: Prove cot^2A/(cosecA + 1)^2 = (1 - sinA)/(1 + sinA)
Exercise 21 (A) | Q 29 | Page 324: Prove (1 + sinA)/cosA + cosA/(1 + sinA) = 2secA
Exercise 21 (A) | Q 30 | Page 325: Prove (1 - sinA)/(1 + sinA) = (secA - tanA)^2
Exercise 21 (A) | Q 31 | Page 325: Prove (cotA - cosecA)^2 = (1 - cosA)/(1 + cosA)
Exercise 21 (A) | Q 32 | Page 325: Prove (cosecA - 1)/(cosecA + 1) = (cosA/(1 + sinA))^2
Exercise 21 (A) | Q 33 | Page 325: Prove tan^2A - tan^2B = (sin^2A - sin^2B)/(cos^2A cos^2B)
Exercise 21 (A) | Q 34 | Page 325: Prove (sinA - 2sin^3A)/(2cos^3A - cosA) = tanA
Exercise 21 (A) | Q 35 | Page 325: Prove sinA/(1 + cosA) = cosecA - cotA
Exercise 21 (A) | Q 36 | Page 325: Prove cosA/(1 - sinA) = secA + tanA
Exercise 21 (A) | Q 37 | Page 325: Prove (sinA tanA)/(1 - cosA) = 1 + secA
Exercise 21 (A) | Q 38 | Page 325: Prove (1 + cotA - cosecA)(1 + tanA + secA) = 2
Exercise 21 (A) | Q 39 | Page 325: Prove sqrt((1 + sinA)/(1 - sinA)) = secA + tanA
Exercise 21 (A) | Q 40 | Page 325: Prove sqrt((1 - cosA)/(1 + cosA)) = cosecA - cotA
Exercise 21 (A) | Q 41 | Page 325: Prove sqrt((1 - cosA)/(1 + cosA)) = sinA/(1 + cosA)
Exercise 21 (A) | Q 42 | Page 325: Prove sqrt((1 - sinA)/(1 + sinA)) = cosA/(1 + sinA)
Exercise 21 (A) | Q 43 | Page 325: Prove 1 - cos^2A/(1 + sinA) = sinA
Exercise 21 (A) | Q 44 | Page 325: Prove 1/(sinA + cosA) + 1/(sinA - cosA) = (2sinA)/(1 - 2cos^2A)
Exercise 21 (A) | Q 45 | Page 325: Prove (sinA + cosA)/(sinA - cosA) + (sinA - cosA)/(sinA + cosA) = 2/(2sin^2A - 1)
Exercise 21 (A) | Q 46 | Page 325: Prove (cotA + cosecA - 1)/(cotA - cosecA + 1) = (1 + cosA)/sinA
Exercise 21 (A) | Q 47 | Page 325: Prove (sinθ tanθ)/(1 - cosθ) = 1 + secθ
Exercise 21 (A) | Q 48 | Page 325: Prove (cosθ cotθ)/(1 + sinθ) = cosecθ - 1

### Selina solutions for Concise Maths Class 10 ICSE Chapter 21 Trigonometrical Identities Exercise 21 (B) [Page 327]

Exercise 21 (B) | Q 1.1 | Page 327: Prove cosA/(1 - tanA) + sinA/(1 - cotA) = sinA + cosA
Exercise 21 (B) | Q 1.2 | Page 327: Prove (cos^3A + sin^3A)/(cosA + sinA) + (cos^3A - sin^3A)/(cosA - sinA) = 2
Exercise 21 (B) | Q 1.3 | Page 327: Prove tanA/(1 - cotA) + cotA/(1 - tanA) = secA cosecA + 1
Exercise 21 (B) | Q 1.4 | Page 327: Prove (tanA + 1/cosA)^2 + (tanA - 1/cosA)^2 = 2((1 + sin^2A)/(1 - sin^2A))
Exercise 21 (B) | Q 1.5 | Page 327: Prove 2sin^2A + cos^4A = 1 + sin^4A
Exercise 21 (B) | Q 1.6 | Page 327: Prove (sinA - sinB)/(cosA + cosB) + (cosA - cosB)/(sinA + sinB) = 0
Exercise 21 (B) | Q 1.7 | Page 327: Prove (cosecA - sinA)(secA - cosA) = 1/(tanA + cotA)
Exercise 21 (B) | Q 1.8 | Page 327: Prove (1 + tanA tanB)^2 + (tanA - tanB)^2 = sec^2A sec^2B
Exercise 21 (B) | Q 1.9 | Page 327: Prove 1/(cosA + sinA - 1) + 1/(cosA + sinA + 1) = cosecA + secA
Exercise 21 (B) | Q 2 | Page 327: If x cosA + y sinA = m and x sinA - y cosA = n, then prove that x^2 + y^2 = m^2 + n^2
Exercise 21 (B) | Q 3 | Page 327: If m = a secA + b tanA and n = a tanA + b secA, then prove that m^2 - n^2 = a^2 - b^2
Exercise 21 (B) | Q 4 | Page 327: If x = r sinA cosB, y = r sinA sinB and z = r cosA, then prove that x^2 + y^2 + z^2 = r^2
Exercise 21 (B) | Q 5 | Page 327: If sinA + cosA = m and secA + cosecA = n, show that n(m^2 - 1) = 2m
Exercise 21 (B) | Q 6 | Page 327: If x = r cosA cosB, y = r cosA sinB and z = r sinA, show that x^2 + y^2 + z^2 = r^2
Exercise 21 (B) | Q 7 | Page 327: If cosA/cosB = m and cosA/sinB = n, show that (m^2 + n^2) cos^2B = n^2
### Selina solutions for Concise Maths Class 10 ICSE Chapter 21 Trigonometrical Identities Exercise 21 (C) [Pages 328 - 329]

Exercise 21 (C) | Q 1.1 | Page 328: Show that tan10° tan15° tan75° tan80° = 1
Exercise 21 (C) | Q 1.2 | Page 328: Show that sin42° sec48° + cos42° cosec48° = 2
Exercise 21 (C) | Q 1.3 | Page 328: Show that sin26°/sec64° + cos26°/cosec64° = 1
Exercise 21 (C) | Q 2.1 | Page 328: Express the following in terms of angles between 0° and 45°: sin59° + tan63°
Exercise 21 (C) | Q 2.2 | Page 328: Express the following in terms of angles between 0° and 45°: cosec68° + cot72°
Exercise 21 (C) | Q 2.3 | Page 328: Express the following in terms of angles between 0° and 45°: cos74° + sec67°
Exercise 21 (C) | Q 3.1 | Page 328: Show that sinA/sin(90° - A) + cosA/cos(90° - A) = secA cosecA
Exercise 21 (C) | Q 3.2 | Page 328: Show that sinA cosA - (sinA cos(90° - A) cosA)/sec(90° - A) - (cosA sin(90° - A) sinA)/cosec(90° - A) = 0
Exercise 21 (C) | Q 4.1 | Page 328: For triangle ABC, show that sin((A + B)/2) = cos(C/2)
Exercise 21 (C) | Q 4.2 | Page 328: For triangle ABC, show that tan((B + C)/2) = cot(A/2)
Exercise 21 (C) | Q 5.1 | Page 328: Evaluate 3 sin72°/cos18° - sec32°/cosec58°
Exercise 21 (C) | Q 5.2 | Page 328: Evaluate 3cos80° cosec10° + 2cos59° cosec31°
Exercise 21 (C) | Q 5.3 | Page 328: Evaluate sin80°/cos10° + sin59° sec31°
Exercise 21 (C) | Q 5.4 | Page 328: Prove that tan(55° + A) = cot(35° - A)
Exercise 21 (C) | Q 5.5 | Page 328: Evaluate cosec(65° + A) - sec(25° - A)
Exercise 21 (C) | Q 5.6 | Page 328: Evaluate 2 tan57°/cot33° - cot70°/tan20° - √2 cos45°
Exercise 21 (C) | Q 5.7 | Page 328: Evaluate cot^2 41°/tan^2 49° - 2 sin^2 75°/cos^2 15°
Exercise 21 (C) | Q 5.8 | Page 328: Evaluate cos70°/sin20° + cos59°/sin31° - 8 sin^2 30°
Exercise 21 (C) | Q 5.9 | Page 328: Evaluate 14 sin30° + 6 cos60° - 5 tan45°
Exercise 21 (C) | Q 6 | Page 329: A triangle ABC is right-angled at B; find the value of (secA cosecA - tanA cotC)/sinB
Exercise 21 (C) | Q 7.1 | Page 329: Find the value of x, if sin x = sin60° cos30° - cos60° sin30°
Exercise 21 (C) | Q 7.2 | Page 329: Find the value of x, if sin x = sin60° cos30° + cos60° sin30°
Exercise 21 (C) | Q 7.3 | Page 329: Find the value of x, if cos x = cos60° cos30° - sin60° sin30°
Exercise 21 (C) | Q 7.4 | Page 329: Find the value of x, if tan x = (tan60° - tan30°)/(1 + tan60° tan30°)
Exercise 21 (C) | Q 7.5 | Page 329: Find the value of x, if sin2x = 2sin45° cos45°
Exercise 21 (C) | Q 7.6 | Page 329: Find the value of x, if sin3x = 2sin30° cos30°
Exercise 21 (C) | Q 7.7 | Page 329: Find the value of x, if cos(2x - 6)° = cos^2 30° - cos^2 60°
Exercise 21 (C) | Q 8.1 | Page 329: Find the value of angle A, where 0° ≤ A ≤ 90°: sin(90° - 3A)·cosec42° = 1
Exercise 21 (C) | Q 8.2 | Page 329: Find the value of angle A, where 0° ≤ A ≤ 90°: cos(90° - A)·sec77° = 1
Exercise 21 (C) | Q 9.1 | Page 329: Prove that (cos(90° - θ) cosθ)/cotθ = 1 - cos^2θ
Exercise 21 (C) | Q 9.2 | Page 329: Prove that (sinθ sin(90° - θ))/cot(90° - θ) = 1 - sin^2θ
Exercise 21 (C) | Q 10 | Page 329: Evaluate (sin35° cos55° + cos35° sin55°)/(cosec^2 10° - tan^2 80°)
Exercise 21 (C) | Q 11 | Page 329: Evaluate sin^2 34° + sin^2 56° + 2tan18° tan72° - cot^2 30°
Exercise 21 (C) | Q 12 | Page 329: Without using trigonometrical tables, evaluate cosec^2 57° - tan^2 33° + cos44° cosec46° - √2 cos45° - tan^2 60°

### Selina solutions for Concise Maths Class 10 ICSE Chapter 21 Trigonometrical Identities Exercise 21 (D) [Page 331]

Exercise 21 (D) | Q 1.1 | Page 331: Use tables to find the sine of 21°
Exercise 21 (D) | Q 1.2 | Page 331: Use tables to find the sine of 34° 42'
Exercise 21 (D) | Q 1.3 | Page 331: Use tables to find the sine of 47° 32'
Exercise 21 (D) | Q 1.4 | Page 331: Use tables to find the sine of 62° 57'
Exercise 21 (D) | Q 1.5 | Page 331: Use tables to find the sine of 10° 20' + 20° 45'
Exercise 21 (D) | Q 2.1 | Page 331: Use tables to find the cosine of 2° 4'
Exercise 21 (D) | Q 2.2 | Page 331: Use tables to find the cosine of 8° 12'
Exercise 21 (D) | Q 2.3 | Page 331: Use tables to find the cosine of 26° 32'
Exercise 21 (D) | Q 2.4 | Page 331: Use tables to find the cosine of 65° 41'
Exercise 21 (D) | Q 2.5 | Page 331: Use tables to find the cosine of 9° 23' + 15° 54'
Exercise 21 (D) | Q 3.1 | Page 331: Use trigonometrical tables to find the tangent of 37°
Exercise 21 (D) | Q 3.2 | Page 331: Use trigonometrical tables to find the tangent of 42° 18'
Exercise 21 (D) | Q 3.3 | Page 331: Use trigonometrical tables to find the tangent of 17° 27'
Exercise 21 (D) | Q 4.1 | Page 331: Use tables to find the acute angle θ, if the value of sin θ is 0.4848
Exercise 21 (D) | Q 4.2 | Page 331: Use tables to find the acute angle θ, if the value of sin θ is 0.3827
Exercise 21 (D) | Q 4.3 | Page 331: Use tables to find the acute angle θ, if the value of sin θ is 0.6525
Exercise 21 (D) | Q 5.1 | Page 331: Use tables to find the acute angle θ, if the value of cos θ is 0.9848
Exercise 21 (D) | Q 5.2 | Page 331: Use tables to find the acute angle θ, if the value of cos θ is 0.9574
Exercise 21 (D) | Q 5.3 | Page 331: Use tables to find the acute angle θ, if the value of cos θ is 0.6885
Exercise 21 (D) | Q 6.1 | Page 331: Use tables to find the acute angle θ, if the value of tan θ is 0.2419
Exercise 21 (D) | Q 6.2 | Page 331: Use tables to find the acute angle θ, if the value of tan θ is 0.4741
Exercise 21 (D) | Q 6.3 | Page 331: Use tables to find the acute angle θ, if the value of tan θ is 0.7391

### Selina solutions for Concise Maths Class 10 ICSE Chapter 21 Trigonometrical Identities Exercise 21 (E) [Pages 332 - 333]

Exercise 21 (E) | Q 1.01 | Page 332: Prove the following identity: 1/(cosA + sinA) + 1/(cosA - sinA) = (2cosA)/(2cos^2A - 1)
Exercise 21 (E) | Q 1.02 | Page 332: Prove the following identity: cosecA - cotA = sinA/(1 + cosA)
Exercise 21 (E) | Q 1.03 | Page 332: Prove the following identity: 1 - sin^2A/(1 + cosA) = cosA
Exercise 21 (E) | Q 1.04 | Page 332: Prove the following identity: (1 - cosA)/sinA + sinA/(1 - cosA) = 2cosecA
Exercise 21 (E) | Q 1.05 | Page 332: Prove the following identity: cotA/(1 - tanA) + tanA/(1 - cotA) = 1 + tanA + cotA
Exercise 21 (E) | Q 1.06 | Page 332: Prove the following identity: cosA/(1 + sinA) + tanA = secA
Exercise 21 (E) | Q 1.07 | Page 332: Prove the following identity: sinA/(1 - cosA) - cotA = cosecA
Exercise 21 (E) | Q 1.08 | Page 332: Prove the following identity: (sinA - cosA + 1)/(sinA + cosA - 1) = cosA/(1 - sinA)
Exercise 21 (E) | Q 1.09 | Page 332: Prove the following identity: sqrt((1 + sinA)/(1 - sinA)) = cosA/(1 - sinA)
Exercise 21 (E) | Q 1.10 | Page 332: Prove the following identity: sqrt((1 - cosA)/(1 + cosA)) = sinA/(1 + cosA)
Exercise 21 (E) | Q 1.11 | Page 332: Prove the following identity: (1 + (secA - tanA)^2)/(cosecA (secA - tanA)) = 2tanA
Exercise 21 (E) | Q 1.12 | Page 332: Prove the following identity: ((cosecA - cotA)^2 + 1)/(secA (cosecA - cotA)) = 2cotA
Exercise 21 (E) | Q 1.13 | Page 332: Prove the following identity: cot^2A ((secA - 1)/(1 + sinA)) + sec^2A ((sinA - 1)/(1 + secA)) = 0
Exercise 21 (E) | Q 1.14 | Page 332: Prove the following identity: (1 - 2sin^2A)^2/(cos^4A - sin^4A) = 2cos^2A - 1
Exercise 21 (E) | Q 1.15 | Page 332: Prove the following identity: sec^4A (1 - sin^4A) - 2tan^2A = 1
Exercise 21 (E) | Q 1.16 | Page 332: Prove the following identity: cosec^4A (1 - cos^4A) - 2cot^2A = 1
Exercise 21 (E) | Q 1.17 | Page 332: Prove the following identity: (1 + tanA + secA)(1 + cotA - cosecA) = 2
Exercise 21 (E) | Q 2 | Page 332: If sinA + cosA = p and secA + cosecA = q, then prove that q(p^2 - 1) = 2p
Exercise 21 (E) | Q 3 | Page 332: If x = a cosθ and y = b cotθ, show that a^2/x^2 - b^2/y^2 = 1
Exercise 21 (E) | Q 4 | Page 332: If secA + tanA = p, show that sinA = (p^2 - 1)/(p^2 + 1)
Exercise 21 (E) | Q 5 | Page 332: If tanA = n tanB and sinA = m sinB, prove that cos^2A = (m^2 - 1)/(n^2 - 1)
Exercise 21 (E) | Q 6.1 | Page 332: If 2 sinA - 1 = 0, show that sin3A = 3 sinA - 4 sin^3A
Exercise 21 (E) | Q 6.2 | Page 332: If 4 cos^2A - 3 = 0, show that cos3A = 4 cos^3A - 3 cosA
Exercise 21 (E) | Q 7.1 | Page 332: Evaluate 2(tan35°/cot55°) + (cot55°/tan35°) - 3(sec40°/cosec50°)
Exercise 21 (E) | Q 7.2 | Page 332: Evaluate sec26° sin64° + cosec33°/sec57°
Exercise 21 (E) | Q 7.3 | Page 332: Evaluate (5sin66°)/cos24° - (2cot85°)/tan5°
Exercise 21 (E) | Q 7.4 | Page 332: Evaluate cos40° cosec50° + sin50° sec40°
Exercise 21 (E) | Q 7.5 | Page 332: Evaluate sin27° sin63° - cos63° cos27°
Exercise 21 (E) | Q 7.6 | Page 332: Evaluate (3sin72°)/cos18° - sec32°/cosec58°
Exercise 21 (E) | Q 7.7 | Page 332: Evaluate 3 cos80° cosec10° + 2 cos59° cosec31°
Exercise 21 (E) | Q 7.8 | Page 332: Evaluate cos75°/sin15° + sin12°/cos78° - cos18°/sin72°
Exercise 21 (E) | Q 8.1 | Page 332: Prove that tan(55° + x) = cot(35° - x)
Exercise 21 (E) | Q 8.2 | Page 332: Prove that sec(70° - θ) = cosec(20° + θ)
Exercise 21 (E) | Q 8.3 | Page 332: Prove that sin(28° + A) = cos(62° - A)
Exercise 21 (E) | Q 8.4 | Page 332: Prove that 1/(1 + cos(90° - A)) + 1/(1 - cos(90° - A)) = 2cosec^2(90° - A)
Exercise 21 (E) | Q 8.5 | Page 332: Prove that 1/(1 + sin(90° - A)) + 1/(1 - sin(90° - A)) = 2sec^2(90° - A)
Exercise 21 (E) | Q 9.1 | Page 332: If A and B are complementary angles, prove that cotB + cosB = secA cosB (1 + sinB)
Exercise 21 (E) | Q 9.2 | Page 332: If A and B are complementary angles, prove that cotA cotB - sinA cosB - cosA sinB = 0
Exercise 21 (E) | Q 9.3 | Page 332: If A and B are complementary angles, prove that cosec^2A + cosec^2B = cosec^2A cosec^2B
Exercise 21 (E) | Q 9.4 | Page 332: If A and B are complementary angles, prove that (sinA + sinB)/(sinA - sinB) + (cosB - cosA)/(cosB + cosA) = 2/(2sin^2A - 1)
Exercise 21 (E) | Q 10.01 | Page 333: Prove that 1/(sinA - cosA) - 1/(sinA + cosA) = (2cosA)/(2sin^2A - 1)
Exercise 21 (E) | Q 10.02 | Page 333: Prove that cot^2A/(cosecA - 1) - 1 = cosecA
Exercise 21 (E) | Q 10.03 | Page 333: Prove that cosA/(1 + sinA) = secA - tanA
Exercise 21 (E) | Q 10.04 | Page 333: Prove that cosA (1 + cotA) + sinA (1 + tanA) = secA + cosecA
Exercise 21 (E) | Q 10.05 | Page 333: Prove that (sinA - cosA)(1 + tanA + cotA) = secA/cosec^2A - cosecA/sec^2A
Exercise 21 (E) | Q 10.06 | Page 333: Prove that sqrt(sec^2A + cosec^2A) = tanA + cotA
Exercise 21 (E) | Q 10.07 | Page 333: Prove that (sinA + cosA)(secA + cosecA) = 2 + secA cosecA
Exercise 21 (E) | Q 10.08 | Page 333: Prove that (tanA + cotA)(cosecA - sinA)(secA - cosA) = 1
Exercise 21 (E) | Q 10.09 | Page 333: Prove that cot^2A - cot^2B = (cos^2A - cos^2B)/(sin^2A sin^2B) = cosec^2A - cosec^2B
Exercise 21 (E) | Q 10.10 | Page 333: Prove that (cotA - 1)/(2 - sec^2A) = cotA/(1 + tanA)
Exercise 21 (E) | Q 11.1 | Page 333: If 4 cos^2A - 3 = 0 and 0° ≤ A ≤ 90°, then prove that sin3A = 3 sinA - 4 sin^3A
Exercise 21 (E) | Q 11.2 | Page 333: If 4 cos^2A - 3 = 0 and 0° ≤ A ≤ 90°, then prove that cos3A = 4 cos^3A - 3 cosA
Exercise 21 (E) | Q 12.1 | Page 333: Find A, if 0° ≤ A ≤ 90° and 2cos^2A - 1 = 0
Exercise 21 (E) | Q 12.2 | Page 333: Find A, if 0° ≤ A ≤ 90° and sin3A - 1 = 0
Exercise 21 (E) | Q 12.3 | Page 333: Find A, if 0° ≤ A ≤ 90° and 4sin^2A - 3 = 0
Exercise 21 (E) | Q 12.4 | Page 333: Find A, if 0° ≤ A ≤ 90° and cos^2A - cosA = 0
Exercise 21 (E) | Q 12.5 | Page 333: Find A, if 0° ≤ A ≤ 90° and 2cos^2A + cosA - 1 = 0
Exercise 21 (E) | Q 13.1 | Page 333: If 0° < A < 90°, find A, if cosA/(1 - sinA) + cosA/(1 + sinA) = 4
Exercise 21 (E) | Q 13.2 | Page 333: If 0° < A < 90°, find A, if sinA/(secA - 1) + sinA/(secA + 1) = 2
Exercise 21 (E) | Q 14 | Page 333: Prove that (cosecA - sinA)(secA - cosA) sec^2A = tanA
Exercise 21 (E) | Q 15 | Page 333: Prove the identity (sinθ + cosθ)(tanθ + cotθ) = secθ + cosecθ
Exercise 21 (E) | Q 16 | Page 333: Evaluate without using trigonometric tables: sin^2 28° + sin^2 62° + tan^2 38° - cot^2 52° + (1/4) sec^2 30°
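The identities above can be spot-checked numerically; for instance, a quick R check of Exercise 21 (E) Q 10.06 (illustrative only, angles in radians):

```r
# sqrt(sec^2 A + cosec^2 A) = tan A + cot A, for a few acute angles
A <- c(0.3, 0.7, 1.1, 1.4)
lhs <- sqrt(1 / cos(A)^2 + 1 / sin(A)^2)
rhs <- tan(A) + 1 / tan(A)
all.equal(lhs, rhs)  # TRUE up to floating-point error
```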
## Selina solutions for Concise Maths Class 10 ICSE chapter 21 - Trigonometrical Identities

Selina solutions for Concise Maths Class 10 ICSE chapter 21 (Trigonometrical Identities) include all questions with solutions and detailed explanations. These clear students' doubts and improve application skills while preparing for board exams. The detailed, step-by-step solutions help students understand the concepts better and clear any confusion. Concepts covered in this chapter are Trigonometric Ratios of Complementary Angles, Trigonometric Identities, Heights and Distances (solving 2-D problems involving angles of elevation and depression using trigonometric tables), and Trigonometry. The solutions are arranged chapter-wise and page-wise, and can serve as a self-study guide for exam preparation.
http://en.wikipedia.org/wiki/Thurstonian_model
# Thurstonian model

A Thurstonian model is a latent variable model for describing the mapping of some continuous scale onto discrete, possibly ordered categories of response. In the model, each of these categories of response corresponds to a latent variable whose value is drawn from a normal distribution, independently of the other response variables and with constant variance. Thurstonian models have been used as an alternative to generalized linear models in the analysis of sensory discrimination tasks.[1] They have also been used to model long-term memory in ranking tasks of ordered alternatives, such as the order of the amendments to the US Constitution.[2] Their main advantage over other models of ranking tasks is that they account for non-independence of alternatives.[3]

## Definition

Consider a set of m options to be ranked by n independent judges. Such a ranking can be represented by the ordering vector rn = (rn1, rn2, ..., rnm). Rankings are assumed to be derived from real-valued latent variables zij, representing the evaluation of option j by judge i. Rankings ri are derived deterministically from zi such that zi(ri1) < zi(ri2) < ... < zi(rim). The zij are assumed to be derived from an underlying ground truth value μj for each option. In the most general case, they are multivariate-normally distributed:

$z_{ij} = \mu_j + \epsilon_{ij}$

where εij is multivariate-normally distributed around 0 with covariance matrix Σ. In a simpler case, there is a single standard deviation parameter σi for each judge:

$z_{ij}\ \sim\ \mathcal{N}(\mu_j,\, \sigma_i^2).$

## Inference

The Gibbs-sampler based approach to estimating the model parameters (the mean vector β and the covariance matrix Σ) is due to Yao and Bockenholt (1999).[3]

• Step 1: Given β, Σ, and ri, sample zi. The zij must be sampled from a truncated multivariate normal distribution to preserve their rank ordering. Hajivassiliou's truncated multivariate normal Gibbs sampler can be used to sample efficiently.[4][5]
• Step 2: Given Σ and zi, sample β. β is sampled from a normal distribution, $\beta\ \sim\ \mathcal{N}(\beta^*, \Sigma^*)$, where β* and Σ* are the current estimates for the means and covariance matrices.
• Step 3: Given β and zi, sample Σ. Σ−1 is sampled from a Wishart posterior, combining a Wishart prior with the data likelihood from the samples εi = zi − β.
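To make the generative model above concrete, here is a short simulation sketch in R (the parameter values are arbitrary illustrations, not taken from the cited papers):

```r
# Simulate Thurstonian ranking data: z_ij ~ N(mu_j, sigma^2), one sigma for all judges
set.seed(42)
m <- 5    # options
n <- 100  # judges
mu <- c(0, 0.5, 1, 1.5, 2)  # assumed ground-truth option values
sigma <- 1

z <- matrix(rnorm(n * m, mean = rep(mu, each = n), sd = sigma), nrow = n)
rankings <- t(apply(z, 1, order))  # row i lists options from lowest to highest z
head(rankings, 3)                  # z_i(r_i1) < z_i(r_i2) < ... < z_i(r_im)
```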
## History

Thurstonian models were introduced by Louis Leon Thurstone to describe the law of comparative judgment.[6] Prior to 1999, Thurstonian models were rarely used for modeling tasks involving more than 4 options because of the high-dimensional integration required to estimate the parameters of the model. In 1999, Yao and Bockenholt introduced their Gibbs-sampler based approach to estimating model parameters.[3]

## Applications to sensory discrimination

Thurstonian models have been applied to a range of sensory discrimination tasks, including auditory, taste, and olfactory discrimination, to estimate sensory distance between stimuli that range along some sensory continuum.[7][8][9]

The Thurstonian approach motivated Frijters (1979)'s explanation of Gridgeman's paradox, also known as the paradox of discriminatory nondiscriminators:[1][8][10][11] people perform better in a three-alternative forced choice task when told in advance which dimension of the stimulus to attend to. (For example, people are better at identifying which of three drinks is different from the other two when told in advance that the difference will be in degree of sweetness.) This result is accounted for by differing cognitive strategies: when the relevant dimension is known in advance, people can estimate values along that particular dimension. When the relevant dimension is not known in advance, they must rely on a more general, multi-dimensional measure of sensory distance.

## References

1. ^ a b Lundahl, David (1997). "Thurstonian Models — an Answer to Gridgeman's Paradox?". CAMO Software Statistical Methods.
2. ^ Lee, Michael; Steyvers, Mark; de Young, Mindy; Miller, Brent (2011). "A Model-Based Approach to Measuring Expertise in Ranking Tasks". CogSci 2011 Proceedings (PDF). ISBN 978-0-9768318-7-7.
3. ^ a b c Yao, G.; Bockenholt, U. (1999). "Bayesian estimation of Thurstonian ranking models based on the Gibbs sampler". British Journal of Mathematical and Statistical Psychology 52: 79–92. doi:10.1348/000711099158973.
4. ^ Hajivassiliou, V.A. (1993). "Simulation estimation methods for limited dependent variable models". In Maddala, G.S.; Rao, C.R.; Vinod, H.D. Econometrics. Handbook of Statistics 11. Amsterdam: Elsevier. ISBN 0444895779.
5. ^ Hajivassiliou, V.A.; McFadden, D.; Ruud, P. (1996). "Simulation of multivariate normal rectangle probabilities and their derivatives. Theoretical and computational results". Journal of Econometrics 72: 85–134. doi:10.1016/0304-4076(94)01716-6.
6. ^ Thurstone, Louis Leon (1927). "A Law of Comparative Judgment". Psychological Review 34 (4): 273–286. doi:10.1037/h0070288. Reprinted: Thurstone, L. L. (1994). "A law of comparative judgment". Psychological Review 101 (2): 266–270. doi:10.1037/0033-295X.101.2.266.
7. ^ Durlach, N.I.; Braida, L.D. (1969). "Intensity Perception. I. Preliminary Theory of Intensity Resolution". Journal of the Acoustical Society of America 46 (2): 372–383. doi:10.1121/1.1911699.
8. ^ a b Dessirier, Jean-Marc; O'Mahony, Michael (1998). "Comparison of d′ values for the 2-AFC (paired comparison) and 3-AFC discrimination methods: Thurstonian models, sequential sensitivity analysis and power". Food Quality and Preference 10 (1): 51–58. doi:10.1016/S0950-3293(98)00037-8.
9. ^ Frijters, J.E.R. (1980). "Three-stimulus procedures in olfactory psychophysics: an experimental comparison of Thurstone-Ura and three-alternative forced choice models of signal detection theory". Perception & Psychophysics 28 (5): 390–397. doi:10.3758/BF03204882.
10. ^ Gridgeman, N.T. (1970). "A Reexamination of the Two-Stage Triangle Test for the Perception of Sensory Differences". Journal of Food Science 35 (1).
11. ^ Frijters, J.E.R. (1979). "The paradox of discriminatory nondiscriminators resolved". Chemical Senses & Flavor 4 (4): 355–358. doi:10.1093/chemse/4.4.355.
http://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-5-test-page-408/15
## Algebra: A Combined Approach (4th Edition)

a. For each term of the polynomial $4xy^2 + 7xyz + x^3y - 2$:

$4xy^2$: numerical coefficient 4; degree of term $1+2=3$
$7xyz$: numerical coefficient 7; degree of term $1+1+1=3$
$x^3y$: numerical coefficient 1; degree of term $3+1=4$
$-2$: numerical coefficient $-2$; degree of term $0$

b. The degree of the polynomial is 4, because the highest degree among its terms is 4.
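The bookkeeping (degree of a term = sum of its variable exponents) can be reproduced mechanically; a small illustrative R check:

```r
# Each row holds the exponents of (x, y, z) in one term of 4xy^2 + 7xyz + x^3y - 2
exponents <- rbind("4xy^2" = c(1, 2, 0),
                   "7xyz"  = c(1, 1, 1),
                   "x^3y"  = c(3, 1, 0),
                   "-2"    = c(0, 0, 0))
rowSums(exponents)       # degrees of the terms: 3 3 4 0
max(rowSums(exponents))  # degree of the polynomial: 4
```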
https://deepai.org/publication/asymptotic-performance-analysis-of-generalized-user-selection-for-interference-limited-multiuser-secondary
# Asymptotic Performance Analysis of Generalized User Selection for Interference-Limited Multiuser Secondary Networks

We analyze the asymptotic performance of a generalized multiuser diversity scheme for an interference-limited secondary multiuser network of underlay cognitive radio systems. Assuming a large number of secondary users, and that the noise at each secondary user's receiver is negligible compared to the interference from the primary transmitter, the secondary transmitter transmits information to the k-th best secondary user, namely, the one with the k-th highest signal-to-interference ratio (SIR). We use extreme value theory to show that the k-th highest SIR converges uniformly in distribution to an inverse gamma random variable for a fixed k and a large number of secondary users. We use this result to derive asymptotic expressions for the average throughput, effective throughput, average bit error rate and outage probability of the k-th best secondary user under continuous power adaptation at the secondary transmitter, which ensures satisfaction of the instantaneous interference constraint at the primary receiver. Numerical simulations show that our derived asymptotic expressions are accurate for different values of the system parameters.

## I Introduction

Cognitive radio (CR) is an important technology for maximizing radio spectrum utilization efficiency [1]-[3]. In CR systems, the secondary network is allowed to share the spectrum allocated to the primary network, provided that the interference caused by the secondary transmitter (ST) does not deteriorate the performance of the primary network. Consequently, the challenge is to maintain the interference at the primary receiver (PR) below a pre-determined threshold level. This can be achieved by adapting the ST transmit power so that the interference constraint at the PR is satisfied [4].
When the interference from the primary transmitter (PT) is much larger than the noise at the secondary receiver, the performance of the secondary network is limited by the interference from the primary transmitter, and the quantity of interest is the signal-to-interference ratio (SIR). Such a CR system can be described as an interference-limited underlay CR system [14]. For the interference-limited underlay CR systems considered in [14], the authors analyze the average bit error rate (BER) and outage probability for receive antenna selection schemes under discrete power adaptation at the ST.

### I-A Motivation

The previous works mentioned above focused only on conventional multiuser diversity in underlay CR systems, where the SU with the best link quality is selected. Furthermore, no prior work has considered the performance of conventional multiuser diversity for interference-limited underlay CR systems with continuous power adaptation at the ST. Accordingly, we focus in this paper on a generalized multiuser diversity scheme that features selection of the $k$-th best SU for an interference-limited secondary multiuser network under continuous power adaptation. The $k$-th best SU selection is of practical interest in underlay CR systems since the best SU may not be selected under given traffic conditions. This might happen when the best user is unavailable or occupied by other service requirements [15], in handoff situations [16], or due to scheduling delay [17]. Clearly, the $k$-th best SU selection scheme includes the best SU selection at $k=1$ as a special case.

In general, it is difficult to analyze the exact performance of the $k$-th best SU selection scheme for an arbitrary number of secondary users. Hence, we use extreme value theory (EVT) [18] to analyze the asymptotic performance (in the limit of a large number of secondary users) of such a selection scheme. As we will show later, EVT provides tractable and accurate asymptotic expressions for the average throughput, effective throughput, average bit error rate and outage probability. The derived asymptotic expressions are accurate for practical CR systems with a not so large (realistic) number of secondary users. We validate the accuracy of the asymptotic expressions through numerical simulation. It should be noted that the derived expressions are obtained by assuming perfect channel state information (CSI) of the secondary transmitter to primary receiver channel. The impact of imperfect CSI on the mathematical analysis is also investigated. As we will discuss later, the derived mathematical expressions under perfect CSI can be used to deduce the system performance under the imperfect CSI case.

### I-B Related work and new contributions

In our previous work [19], we used EVT to derive simple closed-form asymptotic expressions for the average throughput, effective throughput and average BER for the link with the $k$-th highest signal-to-noise ratio (SNR) in traditional wireless communication systems with no spectrum sharing. We showed that the $k$-th highest SNR converges uniformly in distribution to a Log-Gamma random variable (if $X$ is a Gamma random variable, then $\log(X)$ is said to be a Log-Gamma random variable, whose support is the real line [20]). As a special case, if $k=1$, the Log-Gamma reduces to the Gumbel random variable. The average throughput, effective throughput and average BER were derived for various channel models that are widely used to characterize fading in wireless communication systems, such as Weibull, Gamma, and Gamma-Gamma.
The rest of this paper is organized as follows. In Section II we discuss the system model. In Section III we analyze the asymptotic average throughput, effective throughput, average BER and outage probability of the $k$-th best SU. Section IV includes numerical results and Section V concludes.

## II System Model

As shown in Fig. 1, we consider an underlay secondary network consisting of one ST equipped with a single antenna, and $N$ secondary users each equipped with a single antenna. The secondary network is sharing the spectrum of a primary network with one PT and one PR. The PT and PR are equipped with a single antenna each. Let $g_i$ denote the channel gain from the PT to the $i$-th secondary user's receiver (SU-Rx), where $i=1,2,\dots,N$. Let $h_0$ and $h_i$ denote the channel gains from the ST to the PR and to the $i$-th SU-Rx, respectively. We assume that the primary network is far away from the secondary network, and therefore $g_i$ and $h_0$ are assumed to be independent Rayleigh distributed random variables. This implies that the channel power gains $|g_i|^2$ and $|h_0|^2$ have probability density functions (PDFs) $\lambda e^{-\lambda x}u(x)$ and $\eta e^{-\eta x}u(x)$, respectively, where $u(x)$ is the unit step function and the parameters $\lambda$ and $\eta$ are the fading parameters. The channel power gains in the secondary network, $|h_i|^2$, for $i=1,2,\dots,N$, are assumed to be independent and identically distributed (i.i.d.) Gamma random variables (we assume that $h_i$ is a Nakagami-m random variable; hence, $|h_i|^2$ is a Gamma random variable. This is a generalized fading model that includes many practical scenarios. First, Nakagami-m fading includes Rayleigh fading as a special case when $m=1$, in which case the distribution of $|h_i|^2$ becomes consistent with the distributions of $|g_i|^2$ and $|h_0|^2$. Second, in situations where a line-of-sight path exists between the secondary transmitter and the secondary receivers, the natural choice is the Rician distribution to model the line-of-sight effect, and it is well known that the Rician distribution can be accurately approximated by the Nakagami-m distribution. Motivated by these reasons, we adopt Nakagami-m fading to model the fading in the secondary network.) with PDF

$$f(x)=\frac{x^{m-1}}{\beta^m\Gamma(m)}e^{-x/\beta}u(x), \tag{1}$$

where the parameters $m$ and $\beta$ are positive reals and $\Gamma(\cdot)$ is the Gamma function. Similar to [4], [7], [9], [14], [22] and [23], it is assumed that the ST has perfect CSI regarding the secondary transmitter to primary receiver channel, $h_0$. The ST can be informed about $h_0$ through a mediate band manager between the PR and the ST [24] or by considering proper signaling [25]. However, the impact of imperfect CSI on the performance of the $k$-th best SU will be discussed later in this paper. With a perfect knowledge of $h_0$, we consider a continuous power adaptation policy at the ST to control its interference to the PR, such that the instantaneous transmit power of the ST is

$$P=\min\left(P_S,\ \frac{I_T}{|h_0|^2}\right), \tag{2}$$

where $P_S$ is the maximum instantaneous power available at the ST and $I_T$ is the maximum tolerable interference level at the PR. Assuming the noise at the $i$-th SU-Rx is negligible compared to the interference from the PT, the ST will select the $k$-th best SU; namely, the SU with the $k$-th highest SIR; i.e.,

$$i^*=\underset{i}{\arg\ k\text{-th}\max}\ \{PZ_i\}_{i=1}^{N}, \tag{3}$$

where $Z_i=|h_i|^2/(P_M|g_i|^2)$, $P_M$ is the transmit power of the PT and $P_M|g_i|^2$ is the PT interference power at the $i$-th SU-Rx. Let $PZ_{(N-k+1)}$ denote the instantaneous SIR at the $k$-th best SU-Rx, where $Z_{(N-k+1)}$ is the $k$-th largest among $\{Z_i\}_{i=1}^N$. According to [18], the PDF of $Z_{(N-k+1)}$ can be expressed in terms of the PDF, $f(z)$, and cumulative distribution function (CDF), $F(z)$, of $Z_i$ as

$$f_{Z_{(N-k+1)}}(z)=k\binom{N}{k}f(z)F(z)^{N-k}\left(1-F(z)\right)^{k-1}, \tag{4}$$

where the CDF and PDF of $Z_i$ are given by [14]

$$F(z)=\left(\frac{P_M z}{\lambda\beta+P_M z}\right)^{m}u(z), \tag{5}$$

$$f(z)=\frac{m\lambda\beta\,(P_M)^m z^{m-1}}{(\lambda\beta+P_M z)^{m+1}}u(z), \tag{6}$$

respectively.
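As an illustration of the selection rule in (3), the following R sketch draws one channel realization and picks the $k$-th best SU (all parameter values are arbitrary assumptions, not taken from the paper):

```r
# Z_i = |h_i|^2 / (P_M |g_i|^2); the ST serves the SU with the k-th largest P*Z_i
set.seed(1)
N <- 10; k <- 2
m <- 2; beta <- 1   # Gamma parameters of |h_i|^2
lambda <- 1         # Exp parameter of |g_i|^2
PM <- 1             # PT transmit power

h2 <- rgamma(N, shape = m, scale = beta)
g2 <- rexp(N, rate = lambda)
Z <- h2 / (PM * g2)
i_star <- order(Z, decreasing = TRUE)[k]  # index of the k-th best SU
```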
Let $R_{(N-k+1)}$ denote the instantaneous throughput of the $k$-th best SU, where $R_{(N-k+1)}=B\log_2\left(1+PZ_{(N-k+1)}\right)$ and $B$ is the system bandwidth. Then, the average throughput of the $k$-th best SU, normalized by the bandwidth, can be evaluated as

$$E\left[R_{(N-k+1)}\right]=E\left[\log_2\left(1+PZ_{(N-k+1)}\right)\right] \tag{7}$$

in bit/s/Hz. The expectation in (7) is taken over the joint distribution of the random variables $P$ and $Z_{(N-k+1)}$. Assuming a block fading channel, the effective throughput that can be supported by a wireless system under a statistical QoS constraint described by the delay QoS exponent $\theta$ is given by [26]

$$\alpha(\theta)=-\frac{1}{\theta T}\log\left(E\left[e^{-\theta TR}\right]\right),\ \theta>0, \tag{8}$$

where $R$ is a random variable which represents the instantaneous throughput during a single block and $T$ is the block length. $\theta\to0$ implies that there is no delay constraint, and the effective throughput is then the ergodic (average) throughput of the corresponding wireless channel. Hence, the effective throughput of the $k$-th best SU, normalized by the bandwidth, can be expressed as

$$\alpha(\theta,k,N)=-\frac{1}{\theta TB}\log\left(E\left[\left(1+PZ_{(N-k+1)}\right)^{-A}\right]\right) \tag{9}$$

in bit/s/Hz, where $A=\theta TB/\ln(2)$ and the expectation is taken over the joint distribution of $P$ and $Z_{(N-k+1)}$. If we consider a general class of modulation schemes whose conditional BER, $P_e$, is given by

$$P_e=c\,e^{-vY}, \tag{10}$$

where $c$ and $v$ are positive constants and $Y$ is a random variable which represents the instantaneous received SIR, the average BER of the $k$-th best SU can be expressed as

$$\bar{P}_e(k,N)=c\,E\left[e^{-vPZ_{(N-k+1)}}\right], \tag{11}$$

where the expectation is taken over the joint distribution of $P$ and $Z_{(N-k+1)}$. Due to the complicated nature of the distribution of the instantaneous SIR at the $k$-th best SU-Rx, it is difficult to obtain exact expressions for these quantities. Therefore, in this paper we consider another approach, based on EVT, to analyze the performance of the $k$-th best SU in terms of average throughput, effective throughput, outage probability and average BER.

## III Asymptotic Performance Analysis

In this section, we derive the limiting distribution of $Z_{(N-k+1)}$ in Proposition 1 below. Then we use this result to analyze the average and effective throughputs, average BER and outage probability of the $k$-th best SU.

### III-A The Limiting Distribution of $Z_{(N-k+1)}$

Proposition 1: Let $Z_{(N-k+1)}$ denote the $k$-th largest order statistic of $N$ i.i.d. random variables with common CDF $F(z)$, as expressed in (5). Then, for a fixed $k$ and $N\to\infty$, $Z_{(N-k+1)}/b$ converges in distribution to a random variable $Z$ with CDF $G^{(k)}(z)$, which can be characterized by an inverse gamma distribution as

$$G^{(k)}(z)=\frac{\Gamma\left(k,\frac{1}{z}\right)}{(k-1)!}u(z), \tag{12}$$

where $b=Nm\lambda\beta/P_M$ and $\Gamma(\cdot,\cdot)$ is the upper incomplete gamma function [27]. Furthermore, the PDF of $Z$, $f^{(k)}(z)$, can be obtained as

$$f^{(k)}(z)=\frac{e^{-1/z}}{z^{k+1}(k-1)!}u(z). \tag{13}$$

Proof: We first investigate the limiting distribution of $Z_{(N)}$, which denotes the first largest order statistic of the $N$ i.i.d. random variables. From Proposition 2 of [14], $Z_{(N)}/b$ converges in distribution to a unit Fréchet distribution, i.e.,

$$G(z)=e^{-z^{-1}}u(z), \tag{14}$$

with the normalizing constant $b=Nm\lambda\beta/P_M$. Applying Proposition 1 of [19] with $G(z)$ as in (14), it follows that for a fixed $k$ and $N\to\infty$, the sequence $Z_{(N-k+1)}/b$ converges in distribution to a random variable with CDF $G^{(k)}(z)$, which can be expressed in terms of $G(z)$ as

$$G^{(k)}(z)=G(z)\sum_{j=0}^{k-1}\frac{\left[-\log(G(z))\right]^j}{j!}=e^{-z^{-1}}\sum_{j=0}^{k-1}\frac{(z^{-1})^j}{j!}u(z). \tag{15}$$

Using the fact that $e^{-x}\sum_{j=0}^{k-1}x^j/j!=\Gamma(k,x)/(k-1)!$ for an integer $k$, $G^{(k)}(z)$ can finally be expressed as in (12). By differentiating (12) we obtain (13). It should be noted here that Proposition 1 of [19] can be applied for different CDF functions. In this paper we focus on the case when $G(z)$ is a Fréchet CDF, for which the limit has an inverse gamma distribution as shown in (12). This is different from what was obtained in [19], where Proposition 1 of [19] was applied for the case when $G(z)$ is a Gumbel CDF, and the limit thus has a Log-Gamma distribution.
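Proposition 1 lends itself to a quick Monte Carlo check (an illustrative R sketch with assumed parameter values, using the normalization $b=Nm\lambda\beta/P_M$ stated above):

```r
# Empirical CDF of Z_(N-k+1)/b vs. the inverse-gamma limit Gamma(k, 1/z)/(k-1)!
set.seed(2)
N <- 200; k <- 3
m <- 2; beta <- 1; lambda <- 1; PM <- 1
b <- N * m * lambda * beta / PM

kth_max <- replicate(5000, {
  Z <- rgamma(N, shape = m, scale = beta) / (PM * rexp(N, rate = lambda))
  sort(Z, decreasing = TRUE)[k]
})

z <- c(0.5, 1, 2, 5)
empirical <- sapply(z, function(v) mean(kth_max / b <= v))
limiting  <- pgamma(1 / z, shape = k, lower.tail = FALSE)  # regularized upper gamma
round(rbind(empirical, limiting), 3)  # the two rows should be close
```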
### III-B The Distribution of the ST Transmit Power

We consider a continuous power adaptation scheme in which the transmit power of the ST can be adapted with a power limit of $P_S$; therefore, the instantaneous transmit power of the ST is $P=\min\left(P_S,\ I_T/|h_0|^2\right)$. Furthermore, we also consider a continuous power adaptation scheme in which the transmit power of the ST can be adapted without any power limit, i.e., $P_S\to\infty$ [22], [23]. In such a case, the ST transmit power can be written as $P=I_T/|h_0|^2$. We focus next on the PDF of the instantaneous transmit power of the ST, $f_P(t)$; we then use this PDF and Proposition 2 to evaluate the average and effective throughputs of the $k$-th best SU.

Considering $P=\min(X,P_S)$, where $X=I_T/|h_0|^2$ is a continuous random variable and $P_S$ is a constant, the CDF of the random variable $P$, $F_P(t)$, can be given as

$$F_P(t)=F_X(t)+u(t-P_S)-u(t-P_S)F_X(t), \tag{16}$$

where $F_X(t)$ is the CDF of the random variable $X$ and $u(\cdot)$ is the unit step function. It then follows that the PDF of $P$, $f_P(t)$, can be expressed as

$$f_P(t)=f_X(t)\left[1-u(t-P_S)\right]+\delta(t-P_S)\left[1-F_X(P_S)\right], \tag{17}$$

where $f_X(t)$ is the PDF of the random variable $X$ and $\delta(\cdot)$ is the Dirac delta function, the derivative of $u(\cdot)$. Using the PDF of $|h_0|^2$ and a variable transformation, it follows that $F_X(t)=e^{-\eta I_T/t}$ and $f_X(t)=\frac{\eta I_T}{t^2}e^{-\eta I_T/t}$. Finally, we can write

$$f_P(t)=\frac{\eta I_T}{t^2}e^{-\eta I_T/t}\left[1-u(t-P_S)\right]+\delta(t-P_S)\left[1-e^{-\eta I_T/P_S}\right]. \tag{18}$$

### III-C Average and Effective Throughputs

Proposition 2: The average and effective throughputs of the $k$-th best SU for continuous power adaptation with limited ST power are respectively given by

$$E\left[R_{(N-k+1)}\right]\approx\frac{\ln(bP_S)-E_1\left(\frac{\eta I_T}{P_S}\right)-\psi(k)}{\ln(2)}, \tag{19}$$

$$\alpha(\theta,k,N)\approx-\frac{1}{A}\log_2\left(\frac{\Gamma(k+A)\,\Gamma\left(A+1,\frac{\eta I_T}{P_S}\right)}{(b\eta I_T)^A\,(k-1)!}+\frac{\Gamma(k+A)\left(1-e^{-\eta I_T/P_S}\right)}{(bP_S)^A\,(k-1)!}\right), \tag{20}$$

for fixed $k$ and $N\to\infty$, where $E_1(\cdot)$ is the exponential integral function [28, Eq. (5.1.4)], $\Gamma(\cdot,\cdot)$ is the upper incomplete gamma function [27, Eq. (8.350.2)] and $\psi(\cdot)$ is the digamma function [27, Eq. (8.360.1)].

Proof: Average Throughput: From Proposition 1, the CDF of $Z_{(N-k+1)}/b$ approaches the CDF of $Z$ for a fixed $k$ and $N\to\infty$, where the CDF of $Z$ is as expressed in (12). Equivalently, the PDF of $Z_{(N-k+1)}/b$ can be approximated by the PDF of $Z$ as expressed in (13). Then, for a fixed $k$, $N\to\infty$, and conditioning on the ST transmit power $P$, the conditional average throughput can be approximated as

$$E\left[R_{(N-k+1)}\,\big|\,P\right]\approx\frac{1}{\ln(2)}E\left[\ln(1+bPZ)\,\big|\,P\right]=\frac{1}{\ln(2)}\int_0^\infty \ln(1+bPz)\,\frac{e^{-z^{-1}}}{z^{k+1}(k-1)!}\,dz. \tag{21}$$

Noting that $b$ is an increasing function of $N$, we have $bPz\gg1$ in (21) for large $N$. Using this and the variable transformation $u=1/(bPz)$, the conditional average throughput can be further approximated as

$$E\left[R_{(N-k+1)}\,\big|\,P\right]\approx\int_0^\infty\frac{-\ln(u)\,(bP)^k e^{-bPu}u^{k-1}}{\ln(2)\,(k-1)!}\,du=\frac{\ln(bP)-\psi(k)}{\ln(2)}, \tag{22}$$

where the above integral is evaluated with the help of [27, Eq. (4.352.1)]. Averaging over the PDF of $P$ in (18) yields

$$\int_0^\infty\ln(t)f_P(t)\,dt=\int_0^{P_S}\ln(t)\,\frac{\eta I_T}{t^2}e^{-\eta I_T/t}\,dt+\ln(P_S)\left[1-e^{-\eta I_T/P_S}\right]. \tag{23}$$

Using the variable transformation $w=\eta I_T/t$ with the help of [27, Eq. (4.331.2)], and after some basic algebraic manipulation, we have

$$\int_0^{P_S}\ln(t)\,\frac{\eta I_T}{t^2}e^{-\eta I_T/t}\,dt=e^{-\eta I_T/P_S}\ln(P_S)-E_1\left(\frac{\eta I_T}{P_S}\right). \tag{24}$$

Combining (22), (23) and (24), it follows that $E[R_{(N-k+1)}]$ is as expressed in (19).

Effective Throughput: Conditioning on the ST transmit power $P$ in (9) and exploiting Lemma 2 of the Appendix, we infer that the conditional expectation can be approximated as

$$E\left[\left(1+PZ_{(N-k+1)}\right)^{-A}\Big|\,P\right]\approx E\left[(1+bPZ)^{-A}\,\big|\,P\right]=\int_0^\infty(1+bPz)^{-A}\,\frac{e^{-z^{-1}}}{z^{k+1}(k-1)!}\,dz, \tag{25}$$

for fixed $k$ and $N\to\infty$. Making use, as above, of $bPz\gg1$ for large $N$ in (25), and the variable transformation $u=1/(bPz)$, it can be further approximated as

$$E\left[\left(1+PZ_{(N-k+1)}\right)^{-A}\Big|\,P\right]\approx\int_0^\infty\frac{(bP)^k u^{A+k-1}e^{-bPu}}{(k-1)!}\,du=\frac{(bP)^{-A}\,\Gamma(A+k)}{(k-1)!}, \tag{26}$$

where the above integral is evaluated with the help of [27, Eq. (3.381.4)]. Averaging (26) over the PDF of $P$ in (18) yields

$$E\left[\left(1+PZ_{(N-k+1)}\right)^{-A}\right]\approx\int_0^\infty\frac{(bt)^{-A}\Gamma(A+k)}{(k-1)!}f_P(t)\,dt=\int_0^{P_S}\frac{\Gamma(k+A)\,\eta I_T\,e^{-\eta I_T/t}}{b^A t^{A+2}\,(k-1)!}\,dt+\frac{\Gamma(k+A)\left(1-e^{-\eta I_T/P_S}\right)}{(bP_S)^A\,(k-1)!}. \tag{27}$$
Using the variable transformation $w=\eta I_T/t$ and the definition of the upper incomplete gamma function, $\Gamma(s,x)=\int_x^\infty w^{s-1}e^{-w}\,dw$, we have

$$\int_0^{P_S}\frac{\Gamma(k+A)\,\eta I_T\,e^{-\eta I_T/t}}{b^A t^{A+2}\,(k-1)!}\,dt=\frac{\Gamma(k+A)\,\Gamma\left(A+1,\frac{\eta I_T}{P_S}\right)}{(b\eta I_T)^A\,(k-1)!}. \tag{28}$$

Combining (27), (28) and (9), it follows that $\alpha(\theta,k,N)$ is as expressed in (20).

While we focused in Proposition 2 on the average and effective throughputs under limited ST power adaptation, it should be noted that simpler expressions can be obtained in the unlimited ST power case, i.e., $P_S\to\infty$. These expressions are useful when $P_S$ is large [23], [22], and they serve as upper bounds on the average and effective throughputs under the limited ST power case. Using the result of Proposition 2, we derive the average and effective throughputs of the $k$-th best SU with unlimited ST power in the following corollary.

Corollary 1: The average and effective throughputs of the $k$-th best SU for continuous power adaptation with unlimited ST power are respectively given by

$$E\left[R_{(N-k+1)}\right]\approx\frac{\ln(b\eta I_T)-\psi(k)+E_0}{\ln(2)}, \tag{29}$$

$$\alpha(\theta,k,N)\approx\frac{\ln(b\eta I_T)}{\ln(2)}-\frac{1}{A}\log_2\left(\frac{\Gamma(A+k)\,\Gamma(A+1)}{(k-1)!}\right), \tag{30}$$

for fixed $k$ and $N\to\infty$, where $E_0$ is the Euler constant.

Proof: Average Throughput: Using the Puiseux series for the exponential integral function, we have

$$E_1(x)=-E_0-\ln(x)-\sum_{n=1}^{\infty}\frac{(-x)^n}{n\,n!},\quad x>0. \tag{31}$$

Invoking (19) with the help of (31), one can show that $\ln(bP_S)-E_1(\eta I_T/P_S)\to\ln(b\eta I_T)+E_0$ as $P_S\to\infty$. Therefore, as $P_S\to\infty$, the average throughput is as expressed in (29).

Effective Throughput: Invoking (20) and letting $P_S\to\infty$, one can show that the effective throughput is as expressed in (30).

### III-D Average BER

We now derive the average BER for the limited and unlimited continuous ST power in the following proposition.

Proposition 3: The average BER of the $k$-th best SU for the continuous limited power adaptation scheme can be approximated as

$$\bar{P}_e(k,N)\approx\int_0^{P_S}\frac{2c\,(vbt)^{k/2}K_k\left(2\sqrt{vbt}\right)\eta I_T}{(k-1)!\,t^2}e^{-\eta I_T/t}\,dt+\frac{2c\,(vbP_S)^{k/2}K_k\left(2\sqrt{vbP_S}\right)}{(k-1)!}\left(1-e^{-\eta I_T/P_S}\right), \tag{32}$$

for fixed $k$ and $N\to\infty$, where $K_k(\cdot)$ is the modified Bessel function of the second kind and order $k$ [28, Eq. (8.407.1)]. Furthermore, for the unlimited ST transmit power, the average BER of the $k$-th best SU can be approximated as

$$\bar{P}_e(k,N)\approx\frac{c\,(\eta I_T vb)^{k/2-1}\,G^{3,0}_{0,3}\!\left(\eta I_T vb\ \Big|\ -1+k/2,\ 2-k/2,\ 1-k/2\right)}{(k-1)!}, \tag{33}$$

for fixed $k$ and $N\to\infty$, where $G^{m,n}_{p,q}(\cdot)$ is the Meijer G-function [29].

Proof: Conditioning on the ST transmit power $P$ in (11) and exploiting Lemma 1 of the Appendix, we infer that the conditional expectation can be approximated as

$$E\left[e^{-vPZ_{(N-k+1)}}\,\Big|\,P\right]\approx E\left[e^{-vbPZ}\,\big|\,P\right]=\int_0^\infty e^{-vbPz}\,\frac{e^{-z^{-1}}}{z^{k+1}(k-1)!}\,dz=\frac{2(vbP)^{k/2}K_k\left(2\sqrt{vbP}\right)}{(k-1)!}, \tag{34}$$

for a fixed $k$ and $N\to\infty$, where the above integral is evaluated with the help of [30, Eq. (2.11)]. It is hard to find a closed-form expression for the average BER in the limited ST transmit power case. Therefore, averaging (34) over $f_P(t)$ in (18) yields the average BER for limited ST power as in (32). For the unlimited ST transmit power, by letting $P_S\to\infty$ in (32), we have

$$\bar{P}_e(k,N)\approx\int_0^\infty\frac{2c\,(vbt)^{k/2}K_k\left(2\sqrt{vbt}\right)\eta I_T}{(k-1)!\,t^2}e^{-\eta I_T/t}\,dt. \tag{35}$$

The above integral can be expressed in terms of the Meijer G-function as in (33). It should be noted that the Meijer G-function can be easily and efficiently computed using most standard software packages, such as MAPLE and MATHEMATICA.

### III-E Outage Probability

We now derive the outage probability for limited and unlimited continuous ST power in the following proposition.

Proposition 4: The outage probability of the $k$-th best SU for the continuous limited power adaptation scheme can be approximated as

$$P_{out}(x_0)\approx\int_0^{P_S}\frac{\Gamma\left(k,\frac{bt}{x_0}\right)\eta I_T}{(k-1)!\,t^2}e^{-\eta I_T/t}\,dt+\frac{\Gamma\left(k,\frac{bP_S}{x_0}\right)}{(k-1)!}\left(1-e^{-\eta I_T/P_S}\right), \tag{36}$$

for fixed $k$ and $N\to\infty$. Furthermore, for the unlimited ST transmit power, the outage probability of the $k$-th best SU can be approximated as

$$P_{out}(x_0)\approx\frac{2\left(\frac{\eta I_T b}{x_0}\right)^{k/2}K_k\left(2\sqrt{\frac{\eta I_T b}{x_0}}\right)}{(k-1)!}, \tag{37}$$

for fixed $k$ and $N\to\infty$.
Proof: The outage probability of the $k$-th best SU, $P_{out}(x_0)=\Pr\{PZ_{(N-k+1)}\le x_0\}$, can be expressed as

$$P_{out}(x_0)=\int_0^\infty \Pr\left\{tZ_{(N-k+1)}\le x_0\right\}f_P(t)\,dt. \tag{38}$$

From Proposition 1, the CDF of $Z_{(N-k+1)}/b$ approaches the CDF of $Z$ for fixed $k$ and $N\to\infty$, where the CDF of $Z$ is as expressed in (12). Then, we have

$$P_{out}(x_0)=\int_0^\infty \Pr\left\{\frac{Z_{(N-k+1)}}{b}\le\frac{x_0}{bt}\right\}f_P(t)\,dt\approx\int_0^\infty \Pr\left\{Z\le\frac{x_0}{bt}\right\}f_P(t)\,dt=\int_0^\infty\frac{\Gamma\left(k,\frac{bt}{x_0}\right)}{(k-1)!}f_P(t)\,dt. \tag{39}$$

Making use of (18) in (39), the outage probability for the limited ST power is as in (36). For the unlimited ST transmit power, by letting $P_S\to\infty$ in (36), we have

$$P_{out}(x_0)\approx\int_0^\infty\frac{\Gamma\left(k,\frac{bt}{x_0}\right)\eta I_T}{(k-1)!\,t^2}e^{-\eta I_T/t}\,dt=\frac{2\left(\frac{\eta I_T b}{x_0}\right)^{k/2}K_k\left(2\sqrt{\frac{\eta I_T b}{x_0}}\right)}{(k-1)!}, \tag{40}$$

where the integral above is evaluated with the help of [27, Eq. (6.453)] after the variable transformation $w=\eta I_T/t$.

### III-F Effect of Imperfect CSI

In practical environments, the ST has only partial channel knowledge of the ST to PR channel, $h_0$. In this case, the CSI on $h_0$ provided to the ST is outdated due to the time-varying nature of the wireless link [31]. The outdated CSI can be described using the correlation model [31]

$$h_0=\rho\hat{h}_0+\sqrt{1-\rho^2}\,\tilde{h}_0, \tag{41}$$

where $\hat{h}_0$ is the outdated channel information available at the ST and $\tilde{h}_0$ is a complex Gaussian random variable with zero mean and unit variance, uncorrelated with $\hat{h}_0$. The correlation coefficient $\rho$ ($0\le\rho\le1$) is a constant, which is used to evaluate the impact of channel estimation error and feedback delay on the CSI [31]. It is assumed that the ST knows the outdated channel information $\hat{h}_0$ and the correlation coefficient $\rho$ as well. In view of $|h_0|^2$ being an exponentially distributed random variable with parameter $\eta$, the estimated channel power gain $|\hat{h}_0|^2$ is also an exponentially distributed random variable.

As we discussed in Section II, when the ST has perfect CSI of $h_0$, it can access the spectrum if the peak interference power constraint can be satisfied. However, it is hard to satisfy the instantaneous interference constraint at the PR if only outdated CSI is available at the ST [31]. Therefore, a more flexible constraint based on a pre-selected interference outage probability is adopted [31], [32]. Considering the imperfect CSI effect, the transmit power of the ST in (2) can be rewritten as [31]

$$P=\min\left(P_S,\ \frac{r_I I_T}{|\hat{h}_0|^2}\right), \tag{42}$$

where $r_I$ denotes the power margin factor, which can be expressed as [31]

$$r_I=\frac{(-1+2\rho^2)+1-\rho^2-(1-2\Gamma_0)\sqrt{(1-\rho^2)\left(1-(1-2\Gamma_0)^2\rho^2\right)}}{2\Gamma_0(1-\Gamma_0)}, \tag{43}$$

where $\Gamma_0$ denotes the predetermined interference outage probability. As a special case, a power margin factor of $r_I=1$ (i.e., $\rho=1$) indicates perfect CSI of $h_0$, and the ST transmit power in (42) then reduces to (2).

For further practical considerations, we address the imperfect CSI of the channels in the secondary network, $h_i$, for $i=1,2,\dots,N$. The outdated CSI can be described as

$$h_i=\delta\hat{h}_i+\sqrt{1-\delta^2}\,\tilde{h}_i,\quad i=1,2,\dots,N, \tag{44}$$

where $\hat{h}_i$ is the outdated channel information of the $i$-th secondary link available at the ST and $\tilde{h}_i$ is a complex Gaussian random variable with zero mean and unit variance, uncorrelated with $\hat{h}_i$. The correlation coefficient $\delta$ ($0\le\delta\le1$) is a constant that describes the impact of the outdated CSI. In view of $|h_i|^2$ being a Gamma distributed random variable with parameters $m$ and $\beta$, the estimated channel power gain $|\hat{h}_i|^2$ is also a Gamma distributed random variable.

It should be noted that the expressions derived for the average throughput, effective throughput, average BER and outage probability in the previous subsections under perfect CSI hold for the imperfect CSI case after replacing $I_T$ with $r_I I_T$ and $\eta$ with the parameter of $|\hat{h}_0|^2$, due to the imperfect CSI on $h_0$, and after replacing the Gamma parameters of $|h_i|^2$ with those of $|\hat{h}_i|^2$, due to the imperfect CSI on $h_i$.
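Before turning to the numerical results, the closed form in (19) can also be sanity-checked quickly (an illustrative R sketch with assumed parameter values; the Monte Carlo side uses the limiting inverse-gamma variable directly, so agreement is only up to the $\ln(1+x)\approx\ln(x)$ step in the proof):

```r
# Monte Carlo vs. the closed-form asymptotic average throughput in (19)
set.seed(3)
k <- 2; eta <- 1; IT <- 1; PS <- 10; b <- 100

P <- pmin(PS, IT / rexp(1e6, rate = eta))  # ST power, as in (2)
Z <- 1 / rgamma(1e6, shape = k, rate = 1)  # limiting inverse-gamma variable
mc <- mean(log2(1 + b * P * Z))

E1 <- function(x) integrate(function(t) exp(-t) / t, x, Inf)$value
closed <- (log(b * PS) - E1(eta * IT / PS) - digamma(k)) / log(2)
c(mc = mc, closed = closed)  # should agree to within a few hundredths of a bit
```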
## IV Numerical Results

### IV-A Perfect CSI

In this subsection, we numerically illustrate and verify the asymptotic expressions obtained in Section III under the perfect CSI condition, which refers to $r_I=1$ with the ST transmit power as in (2), as described at the end of Section III-F.

In Fig. 2, we plot the average throughput of the $k$-th best SU versus the number of secondary users, $N$, for unlimited ST power ($P_S\to\infty$) and limited ST power, for different values of $k$. We validate the obtained asymptotic expressions for the average throughput using Monte Carlo simulations. We observe that the accuracy increases as $N$ increases. Furthermore, we observe that the asymptotic results are accurate for not so large (realistic) values of $N$: a moderate number of users is already sufficient to confirm the accuracy of the asymptotic results against the simulations. This suggests that EVT is a powerful approach that approximates the performance for realistic as well as large values of $N$.

In Fig. 3, we plot the average throughput of the best SU versus the ST power, $P_S$, in dB. We observe that, compared to the simulations, the accuracy of the asymptotic average throughput increases as $N$ increases from 6 to 30. We also observe that the accuracy of the asymptotic average throughput increases as $P_S$ increases. Furthermore, for larger values of $P_S$ the asymptotic average throughput approaches the one with unlimited ST power, $P_S\to\infty$.

In Fig. 4, we plot the average throughput of the best SU versus the interference level, $I_T$, in dB, for unlimited ST power ($P_S\to\infty$) and limited ST power. Some interesting observations can be made from this figure. First, we observe that, compared to the simulations, the accuracy of the asymptotic average throughputs increases as $N$ increases from 20 to 200. Second, as $N$ or $I_T$ increases, the accuracy of the asymptotic average throughputs also increases. Last, for the limited ST power, the average throughput saturates and does not improve as $I_T$ grows. This is due to the fact that for higher values of $I_T$, the ST will select $P=P_S$ with a higher probability.

In Fig. 5, we plot the effective throughput of the $k$-th best SU versus the number of secondary users, $N$, with unlimited ST power ($P_S\to\infty$) and limited ST power, for different values of $k$. We observe that the accuracy of the asymptotic effective throughput increases as $N$ increases. However, for larger $k$, the asymptotic effective throughput is less accurate for small to moderate values of $N$. This is because the asymptotic analysis is more accurate for large $N$ relative to a fixed $k$. Consequently, if the value of $k$ is close enough to $N$, it is expected that the asymptotic expression will be less accurate.

In Fig. 6, we plot the effective throughput of the best SU versus the delay exponent, $\theta$, for different values of $P_S$. We observe that the effective throughput significantly decreases for smaller values of $P_S$. On the other hand, for reasonably large values of $P_S$, the effective throughput does not significantly improve compared to the unlimited ST power case, $P_S\to\infty$. This is due to the fact that for higher values of $P_S$ the effective throughput is dominated by the interference level $I_T$.

In Fig. 7, the outage probability of the $k$-th best SU is plotted versus the interference level, $I_T$, in dB, with unlimited ST power ($P_S\to\infty$) and limited ST power, for different values of $k$. The saturation of the outage probability in the limited ST power case is due to the fact that for higher values of $I_T$, the ST will select $P=P_S$ most of the time.

In Fig. 8, we plot the asymptotic average BER as a function of the number of secondary users, $N$, for the unlimited ST power with
2022-01-25 05:38:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8437212109565735, "perplexity": 544.6747015897549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304760.30/warc/CC-MAIN-20220125035839-20220125065839-00385.warc.gz"}
https://easystats.github.io/modelbased/articles/describe_nonlinear.html
This vignette will present how to model and describe non-linear relationships using estimate. Warning: we will go full Bayesian. If you're not familiar with the Bayesian framework, we recommend starting with this gentle introduction.

Most relationships found in nature are non-linear, consisting of quadratic curves or more complex shapes. In spite of that, scientists tend to model data through linear links. The reasons for that include technical and interpretational complexity. However, advances in software make modelling non-linear relationships very straightforward (insert link to future blogpost). Nevertheless, the added cost in terms of interpretation, reporting and communication often remains a barrier, as the human brain more easily understands linear relationships (e.g., as this variable increases, that variable increases). The estimate package aims at easing this step by summarizing non-linear curves in terms of linear segments.

# estimate_smooth

Let's start by creating a very simple dataset:

data <- data.frame(x = -50:50)  # Generate dataframe with one variable x
data$y <- data$x^2  # Add a variable y
data$y <- data$y + rnorm(nrow(data), mean = 0, sd = 100)  # Add some gaussian noise

library(ggplot2)  # For plotting
library(see)  # For nice themes

ggplot(data, aes(x = x, y = y)) +
  geom_point() +
  see::theme_modern()

Looking nice! Now let's model this non-linear relationship using a polynomial term:

model <- glm(y ~ poly(x, 2), data = data)

Let's continue by visualising the fitted model:

library(modelbased)

estim <- estimate_relation(model, length = 50)

ggplot(estim, aes(x = x, y = Predicted)) +
  geom_line(color = "purple") +
  geom_point(data = data, aes(x = x, y = y)) +  # Add original data points
  see::theme_modern()

Although a visual representation is usually recommended, how can we verbally describe this relationship?

describe_nonlinear(estim, x = "x", y = "Predicted")

>  Start |   End | Length |   Change |  Slope |       R2
> ------------------------------------------------------
> -50.00 | -1.02 |   0.48 | -2490.97 | -50.86 | 4.90e-07
>  -1.02 | 50.00 |   0.50 |  2492.80 |  48.86 | 4.90e-07

describe_nonlinear decomposes the curve into linear segments, returning for each one its size (the proportion of the curve it covers) and its trend (positive or negative). We can now say that the relationship can be summarised as one negative link and one positive link, with a change point located roughly around 0.

# Real application: Effect of time on memory

We will download and use a dataset where participants had to answer questions about the movie Avengers: Age of Ultron (combined into a memory score) a few days after watching it at the theater (the delay variable). Let's visualize how the Delay, in days, influences the Memory score, by plotting the data points and a loess fit on the raw data.

library(ggplot2)
library(dplyr)
library(see)

# Load the data and filter out outliers
df <- dplyr::filter(df, Delay <= 14, Memory >= 20)

# Plot the density of the points and a loess smooth line
ggplot(df, aes(x = Delay, y = Memory)) +
  stat_density_2d(geom = "raster", aes(fill = ..density..), contour = FALSE) +
  geom_jitter(width = 0.2, height = 0.2) +
  scale_fill_viridis_c() +
  geom_smooth(formula = "y ~ x", method = "loess", color = "red", se = FALSE) +
  theme_modern(legend.position = "none")

Unsurprisingly, the forgetting curve appears to be non-linear, as supported by the literature suggesting a 2nd-order polynomial curve (Averell and Heathcote 2011).
# Modelling non-linear curves

We can fit a linear mixed regression to model this relationship, adding a few other variables that could influence the curve, such as familiarity with the characters of the movie, the language of the movie, and the immersion (2D/3D).

library(lme4)

model <- lmer(
  Memory ~ poly(Delay, 2) * Characters_Familiarity + (1 | Movie_Language) + (1 | Immersion),
  data = df
)

We can visualize the link between the Delay and the Memory score using estimate_relation.

library(modelbased)

estim <- estimate_relation(model, target = "Delay", ci = c(0.50, 0.69, 0.89, 0.97))

ggplot(estim, aes(x = Delay, y = Predicted)) +
  geom_jitter(data = df, aes(y = Memory), width = 0.2, height = 0.2) +
  geom_ribbon(aes(ymin = CI_low_0.97, ymax = CI_high_0.97), alpha = 0.2, fill = "blue") +
  geom_ribbon(aes(ymin = CI_low_0.89, ymax = CI_high_0.89), alpha = 0.2, fill = "blue") +
  geom_ribbon(aes(ymin = CI_low_0.69, ymax = CI_high_0.69), alpha = 0.2, fill = "blue") +
  geom_ribbon(aes(ymin = CI_low_0.5, ymax = CI_high_0.5), alpha = 0.2, fill = "blue") +
  geom_line(color = "blue") +
  theme_modern(legend.position = "none") +
  ylab("Memory")

It seems that the memory score starts by decreasing, up to a point where it stabilizes (and even increases, which might be related to other factors, such as discussions about the movie, watching YouTube reviews, and such). But where is the change point?

# Describing smooth

estimate_smooth(estim, x = "Delay")

> Start |   End | Length | Change | Slope |   R2
> ----------------------------------------------
>  0.00 |  7.78 |   0.53 | -15.68 | -2.02 | 0.50
>  7.78 | 14.00 |   0.37 |   4.69 |  0.75 | 0.50

# References

Averell, Lee, and Andrew Heathcote. 2011. "The Form of the Forgetting Curve and the Fate of Memories." Journal of Mathematical Psychology 55 (1): 25-35. https://doi.org/10.1016/j.jmp.2010.08.009.
2021-09-27 18:35:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3790762424468994, "perplexity": 4812.925170389103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058467.95/warc/CC-MAIN-20210927181724-20210927211724-00575.warc.gz"}
https://ora.ox.ac.uk/objects/uuid:eef0ecf6-b0ee-4d24-9d7c-b9742f37ea87
Journal article

### Enzyme kinetics at high enzyme concentration.

Abstract: We re-visit previous analyses of the classical Michaelis-Menten substrate-enzyme reaction and, with the aid of the reverse quasi-steady-state assumption, we challenge the approximation d[C]/dt approximately 0 for the basic enzyme reaction at high enzyme concentration. For the first time, an approximate solution for the concentrations of the reactants uniformly valid in time is reported. Numerical simulations are presented to verify this solution. We show that an analytical approximation can b...

Publication status: Published
Publisher copy: 10.1006/bulm.1999.0163

### Authors

Institution: University of Oxford
Department: Oxford, MPLS, Mathematical Inst
Role: Author

Journal: Bulletin of Mathematical Biology
Volume: 62
Issue: 3
Pages: 483-499
Publication date: 2000-05-05
EISSN: 1522-9602
ISSN: 0092-8240
URN: uuid:eef0ecf6-b0ee-4d24-9d7c-b9742f37ea87
Source identifiers: 25479
Local pid: pubs:25479
Language: English
2021-07-27 10:05:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8780380487442017, "perplexity": 5697.302458726718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00126.warc.gz"}
https://www.physicsforums.com/threads/about-bare-and-physical-mass-juan-r.88867/
# About bare and physical mass, Juan R

1. Sep 13, 2005

### EL

Since my discussion with Juan R in the thread "photon's mass is zero?" under Special and General Relativity went off topic, I will try to continue it here:

As I wrote before (#88), books often start by assuming that m in the Lagrangian is the ordinary mass (i.e. the one you can find in tables), just to later find out that this leads to infinities when calculating higher order processes. This problem can then be solved by noticing that if we instead substitute m in the Lagrangian with the bare mass m0, the amplitudes turn out to be finite when we express them in terms of the physical mass. (Of course we also have to do a charge renormalization, but let's just stick to the mass for simplicity.) Hence the correct Lagrangian density should include the bare mass (as well as the bare charge), and not the physical one (since that leads to infinities). However, all results will of course be expressed in terms of the physical mass (i.e. the one we find in tables).

Please could any mentor or advisor verify or crack down on what I am saying, so we can get an end to this...

2. Sep 13, 2005

### Physics Monkey

EL, I haven't been following the discussion in the other thread, but I think I can answer your question. The Lagrangian is written in terms of the bare mass, bare field, etc. (i.e. the nice pretty-looking Lagrangians in your book always refer to the bare quantities). Let us focus on the behavior of the bare mass. The physical mass of a particle is usually defined in terms of the pole of the propagator, and at tree level the pole is located at the bare mass. Therefore, at tree level the bare mass is the physical mass. However, if the propagator is evaluated beyond tree level, then one finds that the pole shifts. In other words, the pole of the propagator is not given by the simple parameter m^2 that appears in the Lagrangian. Mass renormalization is then the procedure whereby we correct the pole structure of the propagator so that the propagator maintains its pole at the physical mass.

The really interesting thing to me is that renormalization is not inherently associated with removing infinities. Even in a theory where all momentum integrals converged, mass renormalization would still be necessary because you would still have to correct the pole.

If I may, let me recommend Weinberg's magnificent text on quantum field theory to the interested people out there. Everything is wonderfully clear. In particular, an excellent discussion of this very subject (including a reference to the importance of renormalization apart from infinities) can be found around p. 438 in volume 1.

3. Sep 14, 2005

### EL

Juan R, have a look at this: I will try Weinberg as soon as I get some time over.

4. Sep 14, 2005

### EL

So in that case the relation between bare and physical mass won't include any "infinities"?

5. Sep 14, 2005

### Juan R.

Thanks for continuing this interesting discussion!

My point is as follows. The mass of an electron is m, its rest mass, which appears in handbooks or tables of universal constants. The Lagrangian contains that m. I have already cited three textbooks to you. In Weinberg, the Lagrangian of QED appears in volume 1, equation 8.6.1. It contains m. Then there are problems with infinities in the computation of interactions, and that mass may be changed via the renormalization procedure. However, the "physical" mass obtained after this procedure and measured experimentally is not the mass of the "bare" electron.
It is really the mass of the bare electron plus the cloud of virtual particles surrounding it. The same goes for the physical charge. I also cited a book on quantum physics where this problem of the changing of the mass (the mass before renormalization being different from the mass after it) is claimed to be one of the main flaws of current theoretical physics. The infinities are artificial.

6. Sep 14, 2005

### EL

Yes, but that is the physical mass.

Yes, but that is the bare mass.

However, as Physics Monkey pointed out, at tree level they are the same, and that's why it's possible to calculate first order processes without encountering infinities. But for the theory to be consistent at higher order corrections you need to start from a Lagrangian where m is the bare mass. Hence the correct Lagrangian is written in terms of the bare mass (and charge). You can cite as many books as you want; the problem is that it seems you don't understand them.

7. Sep 14, 2005

### Physics Monkey

Juan R,

Unfortunately, Weinberg is being a bit careless here. He does not specify whether this is the physical or bare mass. As I said, it is true that the parameters of the Lagrangian correspond to the physical particle parameters at tree level. Notice that he does not calculate anything beyond tree level in Ch. 8, so his statement is harmless at that point: physical and bare mass are the same. However, please note eq. 11.1.1 in Weinberg, where he explicitly indicates that the mass, etc. in the Lagrangian you are talking about is the bare mass, etc. He proceeds to break the Lagrangian into a term which looks like the free Lagrangian (but now written with physical parameters), a renormalized interaction term, and the renormalization counterterms.

EL,

You are right about the corrections being finite if the momentum integrals are all finite. All this means is that in any interacting field theory, the bare charge and mass get 'dressed' by the interaction.

8. Sep 14, 2005

### EL

Great. Could you give an example of a theory where this happens? I would guess this could occur when using quantum field theory in solid state physics?

9. Sep 15, 2005

### vanesch

Staff Emeritus
Just a guess: any finite field theory would be good enough, no? Like phi^4 in less than 4 spacetime dimensions?

10. Sep 15, 2005

### Juan R.

No, you are not addressing my point about the change of mass! I have already cited several books on the topic saying the same thing I said, including the "Física cuántica", where the re-definition of mass and charge is claimed to be one of the flaws of QFT that nobody has solved.

I begin again from Weinberg. Equation (8.6.2) for the Lagrangian is (I ignore the field "FF" contribution for convenience)

$$\mathcal{L} = - \overline{\Psi} (\gamma^{\mu}[\partial_{\mu} + ieA_{\mu}] + m) \Psi$$

$$m$$ is the rest mass that appears in special relativity, in Maxwell electromagnetism, and in the Dirac equation. Now, this does not work well for the computation of higher orders in scattering, and then one may change it ad hoc, a posteriori. Then (11.1.1) is (again I omit the "FF" terms)

$$\mathcal{L} = - \overline{\Psi_{B}} (\gamma_{\mu}[\partial^{\mu} + ieA^{\mu}] + m_{B}) \Psi_{B}$$

where the new variables are related to the previous ones via (11.1.2), (11.1.3), etc. For example, the mass is (11.1.5)

$$m \equiv m_{B} +\delta m$$

But I maintain my initial point: the bare mass $$m_{B}$$ is not identical to the initial mass $$m$$ that appears in the Lagrangian.
Note also that $$m$$ does not disappear from the Lagrangian after the use of the bare constants, even if equation (11.1.1) suggests so at first look. In (11.1.7) the free Lagrangian is defined in terms of $$m$$, instead of $$m_{B}$$.

Also, e.g., equation 12.1.1 of Michel le Bellac, Quantum and Statistical Field Theory, Oxford University Press, 1991, which I cited, begins with $$m$$ and changes afterwards to $$m_{B}$$. The book by J. Sanchez Guillen and M. A. Braun, Física cuántica, Alianza Editorial S.A., 1993, does the same, and adds a statement to this effect on its page 362 [my translation].

In fact, what Weinberg is doing in chapter 11, which Physics Monkey cited, is adding ad hoc counterterms, via $$\mathcal{L}_{2}$$ and the new renormalized constants Z, etc., to the initial Lagrangian (8.6.2). That was my point in the other thread: one may begin with the physical rest mass and change it ad hoc afterwards if one wants to compute higher orders; if one wants to compute lower orders, one may use the Lagrangian (8.6.2). In fact, if one wants to obtain the Dirac equation one may use (8.6.2) with $$m$$, and I do not see why I would be wrong.

Any comments, please!

Last edited: Sep 15, 2005

11. Sep 15, 2005

### EL

Yes. But why not start from the same Lagrangian in both lowest order and higher order calculations? Since both the Lagrangian with the physical mass and the Lagrangian with the bare mass give the same result for tree diagrams, which one would you say is the "correct" one? The one which is correct to all orders, or the one which only works in a special case?

12. Sep 15, 2005

### EL

That's fine by me.

13. Sep 15, 2005

### Physics Monkey

EL,

Most field theories in condensed matter still have divergent integrals, but the cutoff is a very physical thing. The lattice spacing is the natural limit to the high energy behavior of the system. As vanesch said, you usually need to look at field theories in low space-time dimensions to see examples of finite theories.

Juan R,

You are correct that the bare mass and the physical mass are not the same. What is also true is that the Lagrangian

$$\mathcal{L} = - \overline{\Psi} (\gamma^{\mu}[\partial_{\mu} + ieA_{\mu}] + m) \Psi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$

does not describe particles of mass m. One finds that in the tree level approximation the theory does contain a particle of mass m. However, as the interaction energy is included in higher order loop calculations, the mass of the particle predicted by the theory is altered. Now we want the mass of the particle predicted by the theory to be the mass we observe, so we renormalize. We adjust the parameters in the Lagrangian in such a way that the pole of the propagator is always located at the physical mass, no matter what order of perturbation theory we are doing.

The confusion stems in part from the fact that the electron we observe is not really described by a free Lagrangian but rather is described by the interacting Lagrangian "summed to all orders". The electron we see is already dressed by electromagnetic interactions.

14. Sep 15, 2005

### Juan R.

I'm sorry, but I do not understand you.
If you want to compute self-reaction effects and radiative corrections correctly, you may work with

$$\mathcal{L} = - \overline{\Psi_{B}} (\gamma_{\mu}[\partial^{\mu} + ieA^{\mu}] + m_{B}) \Psi_{B}$$

but if you want to compute the energy spectrum of the H atom to the Dirac level of precision (without self-reaction), the Dirac-like equation for the electron field $$\Psi(x)$$ is

$$(\gamma_{\mu}[\partial^{\mu} + ieA^{\mu}] + m) \Psi(x) = 0$$ (1)

not

$$(\gamma_{\mu}[\partial^{\mu} + ieA^{\mu}] + m_{B}) \Psi(x) = 0$$ (2)

or similar, unless you assume a priori that $$\delta m = 0$$ and $$Z_{2} = 1$$, in which case the counterterms from (11.1.1) cancel. But then you are assuming that $$m_{B}$$ and $$m$$ are the same, and by using (2) you are really using (1), which is the correct one.

Renormalization must be done order by order. There is no general prescription. Moreover, the correction to the mass cannot be computed, only obtained via experiments.

I always thought that the correct mass was $$m$$ and that the introduction of the other mass was a "mathematical trick" for accounting for effects like the interaction of the electron with itself and the polarization of the vacuum. I think that is the reason that all textbooks I know begin with the Lagrangian defined in terms of the rest mass of the electron, $$m$$, and change the mass and charge used only in precision computations.

Last edited: Sep 15, 2005

15. Sep 15, 2005

### Juan R.

Yes, I agree. This was precisely my point in the other thread. The mass and charge of the bare electron are altered by the polarization of the vacuum. E.g., the induced virtual cloud around the electron modifies its initial mass. When the electron moves it may also drag the cloud surrounding it, and its inertial properties vary. However, standard QFT states that the only physical electron is the observed electron, which is the "dressed electron".

I claim that

dressed electron = bare electron + virtual cloud

In fact, standard QFT claims that $$\delta m$$ has no physical meaning, since the virtual cloud forms part of what QFT calls the "observed electron". I think that the above decomposition between the electron and the polarization of the vacuum would give full physical meaning to $$\delta m$$.

Last edited: Sep 15, 2005

16. Sep 15, 2005

### vanesch

Staff Emeritus
I agree that there is a temptation to do so, especially in theories which have small coupling constants (such as QED) and where successive orders seem to have physical meaning (so that the lines in a Feynman diagram seem to have some physical meaning). However, I think it is fundamentally misleading. After all, what do we have? We have a theory that is supposed to crank out quantum amplitudes for different measurements (usually scattering experiments) as a function of a few parameters, here mB and eB. It turns out that certain behaviours of those quantum amplitudes are very similar to those of a free field theory, or have in other ways behaviours which make us think of classical particle theories. So we identify certain approximate properties of these quantum amplitudes (in certain limiting conditions) as defining something we call "the physical mass" mP or "the physical charge" eP of the particle. This comes down to setting up an experiment (satisfying the said limiting conditions) to extract these quantities from the experimental results (which are predicted by the quantum amplitudes). Let us for the moment assume that our theory is finite.
This means that the theory gives us a function f1(mB,eB) which gives us mP, and f2(mB,eB) which gives us eP:

mP = f1(mB,eB)
eP = f2(mB,eB)

We could then use our experimental knowledge of mP and eP to fix the parameters mB and eB. However, in the perturbative approach, we introduce an extra perturbation parameter lambda in our theory, and do a series development with respect to lambda. So we've now introduced a new function f1(mB,eB,lambda) such that f1(mB,eB) = f1(mB,eB,lambda=1), and we write the second term out as a series in lambda:

f1(mB,eB,lambda) = f1_0(mB,eB) + lambda f1_1(mB,eB) + lambda^2 f1_2(mB,eB) + ...

It now turns out that f1_0(mB,eB) = mB. So we have that to zeroth order, mP is equal to mB. In the same way, we can invert the relation and use mP as an input. mB will now be a function of mP and eP:

mB = g1(mP,eP) (the inverse of f1 and f2)
eB = g2(mP,eP)

In a similar series development, we now have:

mB = g1_0(mP,eP) + lambda g1_1(mP,eP) + ...

and it turns out that g1_0 = mP. All other quantum amplitudes, for other experiments, are of course just a function of eB and mB:

A(mB,eB) = A( g1(mP,eP), g2(mP,eP) ) = A{mP,eP}

It is a priori not clear to me why the mathematical trick of writing f1 as a series in lambda should give physical meaning to the different orders in lambda.

In the case of infinite (but renormalizable) theories, the function f1(mB,eB) is ill-defined, so we introduce an extra parameter C (cutoff). With C, the theory becomes finite, and we are again in the same situation as above, only now with f1(mB,eB,C). This means that we can have a g1(mP,eP,C)... It also means that OTHER quantum amplitudes, A(mB,eB), now become a function of C too: A(mB,eB,C), and the trick of renormalization is that A( g1(mP,eP,C), g2(mP,eP,C), C) is, in the limit of large C, asymptotically not dependent on C anymore. If that's true for all A, we say that the theory is renormalizable, because it means that in the end, A is only a function of mP and eP.

The point I'm trying to make is that it is somehow a coincidence that the 0-th order terms make mP and mB coincide. It is not really a coincidence, because these parameters were chosen so that they corresponded to the corresponding parameters in free field theories, which themselves were of course designed to describe free particles with mass mB, for instance; so we shouldn't be surprised, in fact, that this comes out of it again when we look at zeroth order. But there's nothing in fact wrong or magical about the fact that the approximate property we understand as "physical mass" is a complicated function of the parameters of our theory.

17. Sep 15, 2005

### EL

Ok, now I'm lost again. Question to you who know this subject well: Who is right, me or Juan R? Or are we partly right, both of us? Or maybe neither of us?

18. Sep 15, 2005

### vanesch

Staff Emeritus
My opinion is that you are right, but Juan is of course also right if you stick to lowest-order interactions, because there both are the same. On top of that, there's an extra confusion if you ask different people, because of the technique of counterterms in the Lagrangian. In that case, you keep the physical mass and charge in the Lagrangian, but you add correction terms to the Lagrangian. So people doing that would say that you have the physical mass in the Lagrangian, not the bare one.

19. Sep 15, 2005

### EL

Nice to hear that! (And of course Juan is right to lowest order, I never doubted that.)
Yes, sure, it's possible to express the Lagrangian in terms of the physical mass if we add correction terms instead, but I don't think that is the case we have been discussing, right Juan R?

20. Sep 16, 2005

### Juan R.

Are you claiming that, strictly speaking, there are no particles?
2017-09-20 00:31:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7472327351570129, "perplexity": 755.6341592892048}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686077.22/warc/CC-MAIN-20170919235817-20170920015817-00163.warc.gz"}
http://mymathforum.com/physics/21510-magnetic-force.html
My Math Forum - Magnetic Force

Physics Physics Forum

October 6th, 2011, 07:34 AM #1
Member
Joined: Oct 2011
Posts: 81
Thanks: 0

Magnetic Force

Please explain the concept involved and how to solve this problem:

1. A straight, horizontal wire of mass 10 g and length 1.0 m carries a current of 2.0 A. What minimum magnetic field B should be applied in the region so that the magnetic force on the wire may balance its weight?

October 7th, 2011, 12:40 PM #2
Senior Member
Joined: Jul 2011
Posts: 118
Thanks: 0

Re: Magnetic Force

$F_{grav}=F_{magn}$
$mg=BIl$
$B=\frac{mg}{Il}$

October 10th, 2011, 06:31 AM #3
Member
Joined: Oct 2011
Posts: 81
Thanks: 0

Re: Magnetic Force

Got it... Thanks, Octahedron.
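To make the answer concrete, here is a worked evaluation (the numeric value of g is my assumption; the thread does not state it). The required field is minimal when B is horizontal and perpendicular to the wire, so that the force BIl points straight up against gravity; taking g = 9.8 m/s^2:

$B=\frac{mg}{Il}=\frac{(0.010\,kg)(9.8\,m/s^2)}{(2.0\,A)(1.0\,m)}\approx 4.9\times 10^{-2}\,T$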
2019-09-24 09:41:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3706604540348053, "perplexity": 6143.006017696649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572896.15/warc/CC-MAIN-20190924083200-20190924105200-00375.warc.gz"}
https://www.physics.uoguelph.ca/problem-13-72-double-source-interference
# Problem 13-72 Double source interference An interference pattern using microwaves of wavelength $3.0\; cm$ is set up in a physics laboratory. (Microwaves are part of the electromagnetic spectrum; they travel at a speed of $3.0 \times 10^8\; m/s$ in air.) Two sources of in-phase waves are placed $18 \;cm$ apart and a receiver is located $4.8\; m$ away from the midpoint between the sources. (a) What is the frequency of the microwaves? Express your answer in megahertz $(MHz)$ and gigahertz $(GHz)$. (b) As the receiver is moved across the pattern parallel to an imaginary line joining the sources, what is the distance between adjacent maxima, between adjacent minima, and between a maximum and an adjacent minimum? [Ans. (a) $10000\; MHz; 10\; GHz$  (b) $0.80\; m; 0.80\; m; 0.40\; m$ ] The correct relationship to solve part (a) is: (A)  $v= f\lambda$ (B)  $x = ML/d$ (C)  $x = (N - 1)L\lambda/d$
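As a worked sketch of both parts (the fringe-spacing formula $\Delta x = \lambda L/d$ is the standard small-angle double-source result, assumed here rather than quoted from the page):

$f=\frac{v}{\lambda}=\frac{3.0\times 10^{8}\,m/s}{0.030\,m}=1.0\times 10^{10}\,Hz=10\,GHz=10\,000\,MHz$

$\Delta x=\frac{\lambda L}{d}=\frac{(0.030\,m)(4.8\,m)}{0.18\,m}=0.80\,m$

Adjacent maxima (and likewise adjacent minima) are therefore $0.80\,m$ apart, while a maximum and its neighbouring minimum are half that, $0.40\,m$, matching the quoted answers; relationship (A) is the one needed for part (a).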
2022-05-19 12:45:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4534532129764557, "perplexity": 1136.5995855063395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662527626.15/warc/CC-MAIN-20220519105247-20220519135247-00037.warc.gz"}
https://cstheory.stackexchange.com/questions/40602/tighter-probability-bounds
# Tighter Probability Bounds Let $\mathcal{F}$ be a class of binary functions on a probability space $\Omega$. For $f \in \mathcal{F}$, let $P(f) =\mathbb{E}(f(Z))$ and $P_n(f) = \frac{1}{n} \sum_{i=1}^n f(Z_i)$ where $Z_i$'s are i.i.d. samples from $\Omega$. It is known that Theorem 1. If $\mathcal{F}$ has finite VC-dimension $d$, $$\mathbb{P}\left(\sup_{f \in \mathcal{F}} |P_n(f) - P(f)| \leq \sqrt{\frac{8}{n} \left(\log(\frac{4}{\delta}) + d\log(\frac{ne}{d})\right)}\right) \geq 1 - \delta$$ The following bound based on the Rademacher complexity of $\mathcal{F}$, denoted $\mathcal{R}_n(\mathcal{F})$, is also known. Theorem 2. $$\mathbb{P}\left(\sup_{f \in \mathcal{F}} |P_n(f) - P(f)| \leq 2\mathcal{R}_n(\mathcal{F}) + \sqrt{\frac{1}{2n} \log (\frac{2}{\delta})}\right) \geq 1- \delta$$ Are there other bounds on $\sup_{f\in \mathcal{F}}|P_n(f) - P(f)|$ tighter than the ones above? This is almost a duplicate (see my comment above) but I'll give a quick answer before it (likely) gets closed. The first inequality in the OP has a superfluous $\log(n/d)$, which may be removed via chaining, as discussed here: Tight VC bound for agnostic learning
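To get a feel for the scale of Theorem 1's bound, here is a numerical instantiation of my own choosing (not from the question): with $n=10^5$, $d=10$ and $\delta=0.05$,

$$\sqrt{\frac{8}{n}\left(\log\frac{4}{\delta}+d\log\frac{ne}{d}\right)}=\sqrt{8\times 10^{-5}\,\big(4.38+10\times 10.21\big)}\approx 0.09,$$

so even at this sample size the uniform-deviation guarantee is only about $0.09$, and the $d\log(ne/d)$ term dominates; removing the superfluous $\log(n/d)$ via chaining, as the answer notes, tightens exactly that term.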
2020-10-25 11:25:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9632598757743835, "perplexity": 237.60726530779533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888931.67/warc/CC-MAIN-20201025100059-20201025130059-00142.warc.gz"}
http://forums.codeblocks.org/index.php?topic=15164.0;all
### Author Topic: Configuring Boost with Code::Blocks  (Read 58851 times)

#### aaronds

• Single posting newcomer
• Posts: 5

##### Configuring Boost with Code::Blocks
« on: August 24, 2011, 04:00:35 pm »

Hi, I've never configured an external library with a C/C++ IDE before, so this is all new to me. I'm looking to configure the Boost library with Code::Blocks (Windows, MinGW), but I just can't get it working. I have built and installed the Boost library; I just need to configure it with my project. I have of course consulted the documentation, but it appears to be somewhat out of date, as it uses Boost 1.42 (I'm using 1.47). The documentation talks about include and lib folders, neither of which are in my installation.

So far, I have set up a global variable in the IDE called "boost" that links to the base directory of the boost installation as the base field (under built-in fields) and links to the boost/ subdirectory as the include field. Within the build options for my project, under search directories, I have set the compiler directory to the boost subdirectory within the boost installation folder, while I have also set the linker directory to the stage folder.

However, I am aware that even if all that I have done so far is correct, I still need to add additional linker settings, but I'm lost as to what to do here. From what I can make out in the documentation, I need to link specific libraries (I'm trying to use the asio library within Boost), but I can't find anything relevant within my boost installation.

If anyone could tell me whether what I've done so far is correct or not, and perhaps direct me on where I should go from here, I would be very appreciative. Cheers

#### Alpha

• Developer
• Lives here!
• Posts: 1513

##### Re: Configuring Boost with Code::Blocks
« Reply #1 on: August 24, 2011, 07:00:35 pm »

It sounds like you have done most everything correctly so far, but just in case, I will list out several steps. Create the global variable boost with extract_dir as the base.  (This is C:\Libraries\boost_1_47_0 on my computer.)  The other fields do not need anything (except possibly lib; if you used a custom directory while building boost, put the path here). Next (assuming you are starting a project, not adding to an existing one), create a new project (a console app should be fine; see this page if you need step-by-step instructions on that).

iii. In the Search directories tab, Linker sub-tab field, I entered $(#boost.lib)

#### Alpha

• Developer
• Lives here!
• Posts: 1513

##### Re: Configuring Boost with Code::Blocks
« Reply #11 on: October 19, 2011, 01:08:46 am »

d) c:\program files\codeblocks\mingw\bin\..\lib\gcc\mingw32\4.5.2\include\c++\bits\stl_algo.h:4185|2|instantiated from '_Funct std::for_each(_IIter, _IIter, _Funct) [with _IIter = std::istream_iterator<int>, _Funct = boost::lambda::lambda_functor<boost::lambda::lambda_functor_base<boost::lambda::bitwise_action<boost::lambda::leftshift_action>, boost::tuples::tuple<boost::lambda::lambda_functor<boost::lambda::lambda_functor_base<boost::lambda::bitwise_action<boost::lambda::leftshift_action>, boost::tuples::tuple<std::basic_ostream<char>&, boost::lambda::lambda_functor<boost::lambda::lambda_functor_base<boo|

2) Used software versions:
a. Code::Blocks 10.05
b. MinGW 4.6
c. Boost 1.47.0

It looks like you may have two conflicting MinGW installations. (I just tested the code with 4.6; it compiled with zero warnings or errors.)

Note: error messages are more readable when enclosed in code tags.
#### ptolomey

• Multiple posting newcomer
• Posts: 12

##### Re: Configuring Boost with Code::Blocks
« Reply #12 on: October 19, 2011, 11:45:29 am »

Alpha, thanks for the advice. I have installed the Qt SDK (4.7); it comes with MinGW 4.5 built inside the package. I will uninstall it and try to compile the above-mentioned code one more time.

Code: [Select]
#include <boost/lambda/lambda.hpp>
#include <iostream>
#include <iterator>
#include <algorithm>

int main()
{
    using namespace boost::lambda;
    typedef std::istream_iterator<int> in;
    std::for_each(in(std::cin), in(), std::cout << (_1 * 3) << " ");
}

In any case, thanks for the advice. I will report back whether I succeed or not.

#### ptolomey

• Multiple posting newcomer
• Posts: 12

##### Re: Configuring Boost with Code::Blocks
« Reply #13 on: February 11, 2012, 02:42:17 pm »

Finally I succeeded. The only mistake was in the definition of the Code::Blocks Search Directories of the Compiler. It has to be: $(#boost.include) and NOT $(#boost) as written in the Wiki http://wiki.codeblocks.org/index.php?title=BoostWindowsQuickRef

#### Master

• Multiple posting newcomer
• Posts: 53

##### Re: Configuring Boost with Code::Blocks
« Reply #14 on: April 04, 2012, 06:25:05 pm »

Hello all, I have the same problem getting Boost to work with CB. I can compile this source code:

Code: [Select]
#include <boost/lambda/lambda.hpp>
#include <iostream>
#include <iterator>
#include <algorithm>

int main()
{
    using namespace boost::lambda;
    typedef std::istream_iterator<int> in;
    std::for_each(in(std::cin), in(), std::cout << (_1 * 3) << " ");
}

I used these commands to compile Boost. Boost is extracted to F:\ and the main folder address is:

Code: [Select]
F:\boost_1_49_0

Here are the commands:

Code: [Select]
F:\> cd Boost_1_44_0
F:\Boost_1_44_0> bootstrap.bat
F:\Boost_1_44_0> bjam toolset=gcc --build-type=complete stage

-------------------

The above didn't do any good, so I wrote this and it actually compiled just fine:

Code: [Select]
F:\Boost_1_44_0> bjam variant=debug,release link=static address-model=32

and then

Code: [Select]
F:\boost_1_49_0> bjam toolset=gcc variant=debug,release link=static threading=multi address-model=32 --build-type=complete stage

Then, when I tried to compile a thread example:

Code: [Select]
#include <boost/thread.hpp>
#include <iostream>

void wait(int seconds)
{
    boost::this_thread::sleep(boost::posix_time::seconds(seconds));
}

boost::mutex mutex;

void thread()
{
    for (int i = 0; i < 5; ++i)
    {
        wait(1);
        mutex.lock();
        std::cout << "Thread " << boost::this_thread::get_id() << ": " << i << std::endl;
        mutex.unlock();
    }
}

int main()
{
    boost::thread t1(thread);
    boost::thread t2(thread);
    t1.join();
    t2.join();
}

it failed miserably with these errors:

Code: [Select]
obj\Debug\main.o||In function `Z6threadv':|
D:\Documents and Settings\Master\My Documents\Projects\Boost Example\main.cpp|18|undefined reference to `_imp___ZN5boost11this_thread6get_idEv'|
obj\Debug\main.o||In function `main':|
D:\Documents and Settings\Master\My Documents\Projects\Boost Example\main.cpp|27|undefined reference to `_imp___ZN5boost6thread4joinEv'|
D:\Documents and Settings\Master\My Documents\Projects\Boost Example\main.cpp|28|undefined reference to `_imp___ZN5boost6thread4joinEv'|
D:\Documents and Settings\Master\My Documents\Projects\Boost Example\main.cpp|28|undefined reference to `_imp___ZN5boost6threadD1Ev'|
D:\Documents and Settings\Master\My Documents\Projects\Boost Example\main.cpp|28|undefined reference to `_imp___ZN5boost6threadD1Ev'|
D:\Documents and Settings\Master\My Documents\Projects\Boost Example\main.cpp|28|undefined reference to `_imp___ZN5boost6threadD1Ev'|
D:\Documents and Settings\Master\My Documents\Projects\Boost Example\main.cpp|28|undefined reference to `_imp___ZN5boost6threadD1Ev'|
F:\boost_1_49_0\boost\thread\win32\thread_data.hpp|161|undefined reference to `_imp___ZN5boost11this_thread18interruptible_waitEPvNS_6detail7timeoutE'|
obj\Debug\main.o||In function `thread<void (*)()>':|
F:\boost_1_49_0\boost\thread\detail\thread.hpp|205|undefined reference to `_imp___ZN5boost6thread12start_threadEv'|
||=== Build finished: 9 errors, 0 warnings (0 minutes, 49 seconds) ===|

I configured CB like this: right-clicked on the active project > Build options > Debug > Search directories > Compiler: added these:

Code: [Select]
$(#boost.include)
F:\boost_1_49_0
F:\boost_1_49_0\boost
F:\boost_1_49_0\stage\lib

and under Linker I added:

Code: [Select]
$(#boost.lib)
F:\boost_1_49_0\stage\lib
F:\boost_1_49_0\libs

Then I went to the Linker section and selected all of the files in stage/libs (screenshots omitted). And for the variable part, I have this (screenshot omitted).

--------------------------

What is it that I am missing?

a man's dream is an index to his greatness...

#### killerbot

• Lives here!
• Posts: 5242

##### Re: Configuring Boost with Code::Blocks
« Reply #15 on: April 04, 2012, 09:01:27 pm »

#### Master

• Multiple posting newcomer
• Posts: 53

##### Re: Configuring Boost with Code::Blocks
« Reply #16 on: April 04, 2012, 10:04:44 pm »

Thank you. Aren't all of the libs created already? Because I'm sure I set up the linker to look for the libraries in the correct folder. Would you please tell me how I can build just everything? I assume this library is not built, right?

a man's dream is an index to his greatness...

#### oBFusCATed

• Developer
• Lives here!
• Posts: 12737

##### Re: Configuring Boost with Code::Blocks
« Reply #17 on: April 04, 2012, 10:12:46 pm »

Master: Keep in mind that Boost has its own support; here we support C::B, and your question is quite off topic!
(most of the time I ignore long posts)
[strangers don't send me private messages, I'll ignore them; post a topic in the forum, but first read the rules!]

#### MortenMacFly

• Lives here!
• Posts: 9595

##### Re: Configuring Boost with Code::Blocks
« Reply #18 on: April 04, 2012, 10:14:44 pm »

what is it that i am missing ?
You missed inspecting / providing the full build log (see my sig).
Compiler logging: Settings->Compiler & Debugger->tab "Other"->Compiler logging="Full command line"
C::B Manual: http://www.codeblocks.org/docs/main_codeblocks_en.html
C::B FAQ: http://wiki.codeblocks.org/index.php?title=FAQ

#### Master

• Multiple posting newcomer
• Posts: 53

##### Re: Configuring Boost with Code::Blocks
« Reply #19 on: April 04, 2012, 10:48:11 pm »

Master: Keep in mind that boost has its own support, here we support C::B and your question is quite off topic!
Yes, you are right, but since this was concerned with CB in terms of configuration, and again because I'm using an svn version, I thought there might be something wrong with the CB svn, or something has changed that I am unaware of. So basically, in my opinion, the best place to find out about this would be here. And googling got me here, by the way; I thought I could use some help from the people working daily with CB.

Quote
You miss to inspect / provide the full build log (see my sig).

Here is the full build log:

Code: [Select]
http://upload.ustmb.ir/uploads/13335802852.zip

Thank you in advance

a man's dream is an index to his greatness...

#### killerbot

• Lives here!
• Posts: 5242

##### Re: Configuring Boost with Code::Blocks
« Reply #20 on: April 04, 2012, 10:52:07 pm »

Thank you. Aren't all of the libs created already? Because I'm sure I set up the linker to look for the libraries in the correct folder. Would you please tell me how I can build just everything? I assume this library is not built, right?
Telling where the libraries are is not enough; you need to tell the linker which ones to link.

#### Master

• Multiple posting newcomer
• Posts: 53

##### Re: Configuring Boost with Code::Blocks
« Reply #21 on: April 04, 2012, 11:06:19 pm »

Telling where the libraries are is not enough; you need to tell the linker which ones to link.
How am I supposed to do that? Have I not already done that? In the Linker section (in both Debug and Release) I selected all of the built libs and .a files!

a man's dream is an index to his greatness...

#### Master

• Multiple posting newcomer
• Posts: 53

##### Re: Configuring Boost with Code::Blocks
« Reply #22 on: April 05, 2012, 05:26:43 am »

OK, I believe I compiled all the necessary libs (they are now more than 4 gigabytes!). Here are the screenshots of my settings; it's all I have done to CB at the moment (screenshots omitted). What is wrong with it?

a man's dream is an index to his greatness...

#### MortenMacFly

• Lives here!
• Posts: 9595

##### Re: Configuring Boost with Code::Blocks
« Reply #23 on: April 05, 2012, 06:30:57 am »

what is wrong with it ?
You still did not post the full build log, so I cannot tell. From the screenshots I see that you link against all Boost libs; this will not work either. Boost compiles in many flavours. You need to pick one flavour of your choice and then link only against those lib(s) you are actually using, not just all of them. Probably you should start reading the Boost manual - it seems you don't really know what you are doing. We don't provide Boost support here; this is a Code::Blocks forum. Boost has its own forum. You need to know what lib you need to use, in what flavour. That's a decision nobody can make for you, as it depends on what you have in mind for your project.
Compiler logging: Settings->Compiler & Debugger->tab "Other"->Compiler logging="Full command line"
C::B Manual: http://www.codeblocks.org/docs/main_codeblocks_en.html
C::B FAQ: http://wiki.codeblocks.org/index.php?title=FAQ

#### Master

• Multiple posting newcomer
• Posts: 53

##### Re: Configuring Boost with Code::Blocks
« Reply #24 on: April 05, 2012, 06:35:59 am »

You still did not post the full build log. So I cannot tell. [...]
Thank you, and sorry for that; I have already posted the full build log. Here it is again: and OK, I'll try to fix that too.

a man's dream is an index to his greatness...

#### oBFusCATed

• Developer
• Lives here!
• Posts: 12737

##### Re: Configuring Boost with Code::Blocks
« Reply #25 on: April 05, 2012, 08:21:14 am »

Also, don't use full paths to the libraries, just their names, in Build options -> Linker settings -> Link libraries. If I remember correctly, using full paths is known to fail.
(most of the time I ignore long posts)
[strangers don't send me private messages, I'll ignore them; post a topic in the forum, but first read the rules!]

#### Master

• Multiple posting newcomer
• Posts: 53

##### Re: Configuring Boost with Code::Blocks
« Reply #26 on: April 05, 2012, 11:03:14 pm »

Also don't use full paths to the libraries, just their names. [...]
Thank you, dear oBFusCATed. I tested your suggestion, but it turned out that it was wrong; actually, there is nothing wrong with the full path. Here is the answer though: http://stackoverflow.com/questions/10022706/having-issue-in-configuring-boost-with-codeblocks/10035917#10035917

I rebuilt Boost with

Code: [Select]
bjam toolset=gcc --build-type=complete stage variant=debug,release threading=multi link=static

and then added the following line to my source code:

Code: [Select]
#define BOOST_THREAD_USE_LIB

and there it goes, compiling and linking just fine. I didn't change any of my previous configurations; I just rebuilt and added that line. Done. Thanks again, everyone.

a man's dream is an index to his greatness...

#### GigaGerard

• Single posting newcomer
• Posts: 4

##### Re: Configuring Boost with Code::Blocks
« Reply #27 on: December 22, 2015, 10:26:36 pm »

How to get Boost working in Code::Blocks with GCC and get the test file of Alpha in post 2 running? I did it! The advice that helped me most was: change the word msvc to gcc in the Boost file project-config.jam and run b2 to compile Boost (again). Not only set the two paths in CodeBlocks > Settings > Global Variables > Built-in fields:
base > C:\MY_DIR\boost_1_56_0\
lib > C:\MY_DIR\boost_1_56_0\stage\lib
But also set them in CodeBlocks > Settings > Compiler > Search directories
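For readers skimming for the fix: a minimal sketch of the combination this thread converges on (static Boost.Thread under MinGW). The library name below is illustrative only; Boost's tagged naming convention encodes toolset, threading model and version, so match it to whatever actually sits in your stage\lib folder.

Code: [Select]
// Project -> Build options -> Linker settings -> Link libraries:
//   boost_thread-mgw46-mt-1_49   <- illustrative tagged name; check stage\lib
// In the source, when linking Boost.Thread as a static library on Windows:
#define BOOST_THREAD_USE_LIB   // stops the headers declaring DLL imports (_imp__ symbols)
#include <boost/thread.hpp>

Defining BOOST_THREAD_USE_LIB is exactly why the undefined `_imp___ZN5boost...' references above disappear: without it, the headers assume a DLL build and emit import-style symbol references.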
2020-08-07 18:42:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6567580103874207, "perplexity": 4834.021135262683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737206.16/warc/CC-MAIN-20200807172851-20200807202851-00223.warc.gz"}
https://socratic.org/questions/how-do-you-solve-for-y-in-3x-5y-7
# How do you solve for y in 3x+5y=7?

Jan 23, 2016

$y = \frac{1}{5} \left(7 - 3 x\right)$

#### Explanation:

To find y, move the other terms of the equation to the right-hand side. Thus:

5y = 7 - 3x (now divide both sides by 5)

$\Rightarrow \cancel{5} \frac{y}{\cancel{5}} = \frac{7 - 3 x}{5} = \frac{1}{5} \left(7 - 3 x\right)$

Jan 23, 2016

$3 x + 5 y = 7$

Subtract $\textcolor{blue}{3 x}$ from both sides.

$\implies 3 x - \textcolor{blue}{3 x} + 5 y = 7 - \textcolor{blue}{3 x}$

$\implies 5 y = 7 - 3 x$

Divide both sides by $\textcolor{red}{5}$

$\implies \frac{5 y}{\textcolor{red}{5}} = \frac{7 - 3 x}{\textcolor{red}{5}}$

$\implies y = \frac{7}{5} - \frac{3 x}{5}$
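As a quick check (an example of my own, not part of the original answers), substitute a value for x and confirm the original equation holds:

$x = 4:\quad y = \frac{1}{5}(7 - 3 \cdot 4) = \frac{1}{5}(-5) = -1,\qquad 3 \cdot 4 + 5 \cdot (-1) = 12 - 5 = 7$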
2021-12-08 21:55:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7513226270675659, "perplexity": 1032.1783635237928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363598.57/warc/CC-MAIN-20211208205849-20211208235849-00180.warc.gz"}
http://mathhelpforum.com/calculus/81742-some-derivative-help.html
# Math Help - Some Derivative Help

1. ## Some Derivative Help

I just need a line-by-line explanation of how to find the derivatives of 6(2x-9)^5 and x^-4+(x^3-4)^(-2/5). Thanks

2. Originally Posted by TneedsHelp
I just need a line-by-line explanation of how to find the derivatives of 6(2x-9)^5 and x^-4+(x^3-4)^(-2/5). Thanks

Do you know how to use the chain rule?

$\frac d{dx}\left[6(2x-9)^5\right]$

$=30(2x-9)^4\frac d{dx}\left[2x-9\right]$

3. 6(2x-9)^5

Alright, you have the chain rule here, so: the exponent comes down, the exponent on the parentheses is reduced by 1, and you multiply by the derivative of the inside of the parentheses:

5(6(2x-9)^4)(2)

Everything else remains unchanged.

x^-4 follows the same principle as above; think of it as (x)^(-4).

(x^3-4)^(-2/5): the exponent comes down, the exponent is reduced by 1, and you multiply by the derivative of the inside of the parentheses:

(-2/5)(x^3-4)^(-7/5)[(3x^2)]
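Putting the two replies together (the final simplification is mine), the finished derivatives are

$\frac{d}{dx}\left[6(2x-9)^5\right] = 5\cdot 6(2x-9)^4\cdot 2 = 60(2x-9)^4$

$\frac{d}{dx}\left[x^{-4}+(x^3-4)^{-2/5}\right] = -4x^{-5} - \frac{6}{5}x^{2}(x^3-4)^{-7/5}$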
2014-11-27 14:33:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850637078285217, "perplexity": 2579.7571178722983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008720.43/warc/CC-MAIN-20141125155648-00090-ip-10-235-23-156.ec2.internal.warc.gz"}
https://muthu.co/computing-the-discrete-frechet-distance-using-dynamic-programming/
Computing the discrete Fréchet distance using dynamic programming

Definition

The Fréchet distance is usually explained using the analogy of a man walking his dog. A man is walking a dog on a leash: the man can move on one curve, the dog on the other; both may vary their speed, but backtracking is not allowed. What is the length of the shortest leash that is sufficient for traversing both curves?

Intuition

We can formally define it as: given two sequences of points in $$\mathbb{R}^d$$, p = {p1, p2, p3, ..., pn} and q = {q1, q2, q3, ..., qm}, the Fréchet distance, represented by $$f(p,q)$$, is the maximum of the minimum distances between points pi and qi. To understand it more intuitively, let us look at a simple example. In the graph below, I have two polylines with positions:

$$P = \{(2,1), (3,1), (4,2), (5,1)\}$$
$$Q = \{(2,0), (3,0), (4,0)\}$$

the starting and ending points being (p1,q1) and (p4,q3) respectively. We find all the pairs of walks between P and Q, given as

$$p1 \to (p1,q1), (p1,q2), (p1,q3)$$
$$p2 \to (p2,q1), (p2,q2), (p2,q3)$$
$$p3 \to (p3,q1), (p3,q2), (p3,q3)$$
$$p4 \to (p4,q1), (p4,q2), (p4,q3)$$

Then, we find the minimum distances for each point:

$$\min\{p1\} = \min\{dist(p1,q1), dist(p1,q2), dist(p1,q3)\} = \min\{1, 1.4, 2.2\} = 1$$
$$\min\{p2\} = \min\{dist(p2,q1), dist(p2,q2), dist(p2,q3)\} = \min\{1.4, 1, 1.4\} = 1$$
$$\min\{p3\} = \min\{dist(p3,q1), dist(p3,q2), dist(p3,q3)\} = \min\{2.8, 2.2, 2\} = 2$$
$$\min\{p4\} = \min\{dist(p4,q1), dist(p4,q2), dist(p4,q3)\} = \min\{3.2, 2.2, 1.4\} = 1.4$$

Now, we can find the Fréchet distance by taking the maximum of all the minimums we calculated in the last step:

$$f(p,q) = \max\{\min\{p1\}, \min\{p2\}, \min\{p3\}, \min\{p4\}\} = \max\{1, 1, 2, 1.4\} = 2$$

Implementation

The implementation described by Thomas Eiter and Heikki Mannila [1] uses dynamic programming to reduce the time complexity of finding the Fréchet distance by considering only three possible moves at each step:

• P and Q both move one step
• P stays where it is, Q moves one step
• P moves one step, Q stays where it is

This reduces the total number of pairs considered for each point in P. The implementation is as below:

import numpy as np
import seaborn as sns
from scipy.spatial.distance import euclidean

P = [[2, 1], [3, 1], [4, 2], [5, 1]]
Q = [[2, 0], [3, 0], [4, 0]]

p_length = len(P)
q_length = len(Q)
distance_matrix = np.ones((p_length, q_length)) * -1

# fill the first value with the distance between
# the first two points in P and Q
distance_matrix[0, 0] = euclidean(P[0], Q[0])

# load the first column and first row with distances (memoize)
for i in range(1, p_length):
    distance_matrix[i, 0] = max(distance_matrix[i-1, 0], euclidean(P[i], Q[0]))

for j in range(1, q_length):
    distance_matrix[0, j] = max(distance_matrix[0, j-1], euclidean(P[0], Q[j]))

for i in range(1, p_length):
    for j in range(1, q_length):
        distance_matrix[i, j] = max(
            min(distance_matrix[i-1, j], distance_matrix[i, j-1], distance_matrix[i-1, j-1]),
            euclidean(P[i], Q[j])
        )

# the discrete Fréchet distance is the bottom-right cell of the DP table
distance_matrix[p_length-1, q_length-1]

sns.heatmap(distance_matrix, annot=True)

The preview of our sample as a heatmap is as below:

References:

[1] Thomas Eiter and Heikki Mannila. Computing discrete Fréchet distance. Technical report, 1994. http://www.kr.tuwien.ac.at/staff/eiter/et-archive/cdtr9464.pdf

[2] Jupyter Notebook with Fréchet distance implementation.
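As a quick sanity check (an addition of mine, meant to be run after the script above): the bottom-right cell of the DP table should reproduce the value obtained by hand in the intuition section.

print(distance_matrix[p_length - 1, q_length - 1])  # 2.0, matching the hand computation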
2021-08-01 19:05:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.544262707233429, "perplexity": 3292.129819935207}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154219.62/warc/CC-MAIN-20210801190212-20210801220212-00564.warc.gz"}
https://stats.stackexchange.com/questions/408616/segmented-regression-of-a-seasonal-time-series-in-r?noredirect=1
# Segmented Regression of a Seasonal Time-series in R I have a time series of diurnal temperature range (DTR), 1961–2013, from a single weather station. Visually, the first part of the series seems to have a downward trend, so I used the package segmented to verify this, and with 4 break points specified, the downward trend is confirmed. A downward trend would be significant for my research, but I want to avoid confirmation bias. Could the trend be an artefact of the strong seasonality of the series? Using changepoints to search for step changes of the mean value confirms the segmented regression findings, but after attempting to deseasonalize the series by differencing it with lag = 371 days (the maximum ACF value), the trend is completely different. What I want to ask is: Is it correct to apply segmented regression (and/or changepoint detection) to the raw time series, or does it need to be pre-processed somehow first? • Changepoint detection needs to be done in concert with pulse detection, time trend detection, seasonal pulse detection, local time trends AND of course SARIMA detection. – IrishStat May 16 at 22:26 • I am positively not trying to predict the future of this time series, and I am concerned with the seasonal structure only insofar as it impacts what I am actually after: the local time trends, and in second place the global time trend. I'd like to minimize the workload and focus on my targets. – Fabio Capezzuoli May 17 at 3:22 • Whether you are trying to predict or to characterize, the same advice holds. One needs to segment signal and noise of the time series in order to clearly see the intrinsic patterns. – IrishStat May 17 at 7:34 • Does changepoints account for serial correlation and seasonality? If not, you cannot trust its results--the approach may be right but the software could be wrong. I cannot find documentation of any changepoints function in the segmented package. – whuber May 17 at 19:08 • It does not ... – IrishStat May 17 at 20:08
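As a sketch of the advice above (separate the seasonal signal before hunting for breaks): the thread is about R, but the same logic is quick to prototype in Python with statsmodels' STL. The synthetic series below, the planted level shift, and the brute-force single-break search are illustrative assumptions, not the poster's data or any particular R package's method:

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic stand-in for the DTR series: annual cycle + one level shift + noise.
np.random.seed(0)
n = 4000
t = np.arange(n)
rng = pd.date_range("1961-01-01", periods=n, freq="D")
dtr = pd.Series(10 + 3*np.sin(2*np.pi*t/365.25)
                + np.where(t < 2000, 0.0, -2.0)
                + np.random.normal(0, 0.5, n), index=rng)

# Remove the seasonal component first; look for the break in the trend.
trend = STL(dtr, period=365).fit().trend

# Brute-force single changepoint: the split minimizing two-segment sum of squares.
def sse(y):
    return ((y - y.mean())**2).sum()

vals = trend.to_numpy()
best = min(range(100, n - 100), key=lambda k: sse(vals[:k]) + sse(vals[k:]))
print(rng[best])  # lands near the planted shift at day 2000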
2019-06-16 03:36:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6021912693977356, "perplexity": 1362.0226750354523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997533.62/warc/CC-MAIN-20190616022644-20190616044644-00016.warc.gz"}
https://plus.google.com/+KristianK%C3%B6hntopp/posts/GNPt5hqRUV5?pid=6098237333273801074&oid=111979905295470608180
Inspired by David Galloway's "Square Wave from the summation of odd-integer harmonic frequencies", I made a spin-off based on the same code:
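The embedded code and animation did not survive extraction. A minimal sketch of the idea the post references, summing odd-integer harmonics to approximate a square wave (my own illustration in Python, not Galloway's code):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4*np.pi, 2000)
# Fourier series of a square wave: (4/pi) * sum over odd k of sin(k*x)/k
wave = sum(np.sin(k*x)/k for k in range(1, 40, 2))
plt.plot(x, 4/np.pi * wave)
plt.title('Square wave from odd-integer harmonics')
plt.show()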
2017-06-27 16:28:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8357178568840027, "perplexity": 3126.2039809740254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321458.47/warc/CC-MAIN-20170627152510-20170627172510-00075.warc.gz"}
http://mathhelpforum.com/advanced-algebra/159670-linear-transformation.html
# Thread: Linear transformation 1. ## Linear transformation Can someone check my answer. Show that this is not a linear transformation: $T(\mathbf{x})=\begin{pmatrix}x_1^2\\x_2^2\end{pmatrix}$ Check the scalar multiplication condition: $T(\lambda \mathbf{x})=\begin{pmatrix}\lambda^2 x_1^2\\\lambda^2 x_2^2\end{pmatrix}$ $=\lambda^2\begin{pmatrix}x_1^2\\x_2^2\end{pmatrix}$ $\neq \lambda\begin{pmatrix}x_1^2\\x_2^2\end{pmatrix}$ in general (e.g. for $\lambda=2$ and $\mathbf{x}\neq\mathbf{0}$). 2. Just append $\lambda T(\mathbf{x})$ at the end there, and you'll have it.
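A quick numerical check of the same point (my own illustration, not from the thread): squaring scales quadratically, so T(2x) and 2T(x) disagree for any nonzero x.

import numpy as np

def T(x):
    return x**2  # componentwise square: (x1^2, x2^2)

x = np.array([1.0, 3.0])
print(T(2*x))  # [ 4. 36.] = 4*T(x)
print(2*T(x))  # [ 2. 18.]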
2017-09-23 17:21:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8017416000366211, "perplexity": 4841.010745089061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689752.21/warc/CC-MAIN-20170923160736-20170923180736-00478.warc.gz"}
http://mathhelpforum.com/pre-calculus/212846-factoring-rationalizing-print.html
Factoring and rationalizing • Feb 9th 2013, 09:09 PM skg94 Factoring and rationalizing (1/sqrt(x) - 1/2) / (x-4) I first simplified by multiplying by 2*sqrt(x), which gives (2 - sqrt(x)) / ((x-4)(2*sqrt(x))). Then I was going to rationalize the numerator by multiplying by 2 + sqrt(x), but that seemed too complicated. Where did I go wrong? • Feb 9th 2013, 09:24 PM earthboy Re: Factoring and rationalizing Quote: Originally Posted by skg94 [original post quoted] Why don't you type in LaTeX or give proper brackets? It helps a lot. Is your question: rationalize and factorize $\frac{1}{\sqrt{x}}-\frac{1}{2(x-4)}$? • Feb 11th 2013, 08:34 PM skg94 Re: Factoring and rationalizing I don't know how to, but I suppose it's the same; it's (1/sqrt(x) - 1/2) / (x-4) • Feb 11th 2013, 09:29 PM Soroban Re: Factoring and rationalizing Hello, skg94! Quote: $\text{Rationalize: }\:\dfrac{\frac{1}{\sqrt{x}} - \frac{1}{2}}{x-4}$ I first simplified by multiplying by $\tfrac{2\sqrt{x}}{2\sqrt{x}}$ and got $\frac{2-\sqrt{x}}{2\sqrt{x}\,(x-4)}$. Then I was going to rationalize the numerator by multiplying by $2+\sqrt{x}$, but that seemed too complicated. But did you try it? Where did I go wrong? Nowhere . . . your work is correct! You would have: $\frac{2-\sqrt{x}}{2\sqrt{x}\,(x-4)}\cdot\frac{2+\sqrt{x}}{2+\sqrt{x}} \;=\;\frac{4-x}{2\sqrt{x}\,(x-4)(2+\sqrt{x})} \;=\;\frac{-(x-4)}{2\sqrt{x}\,(x-4)(2+\sqrt{x})} \;=\;\frac{-\cancel{(x-4)}}{2\sqrt{x}\,\cancel{(x-4)}\,(2+\sqrt{x})} \;=\;\frac{-1}{2\sqrt{x}\,(2+\sqrt{x})}$
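As a check of Soroban's result (not part of the thread; assumes sympy is available), the original expression and the simplified form agree symbolically:

from sympy import symbols, sqrt, Rational

x = symbols('x', positive=True)
expr = (1/sqrt(x) - Rational(1, 2)) / (x - 4)
target = -1 / (2*sqrt(x)*(2 + sqrt(x)))
print(expr.equals(target))  # True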
2016-09-28 13:10:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9026541709899902, "perplexity": 2632.0376517167365}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661367.29/warc/CC-MAIN-20160924173741-00248-ip-10-143-35-109.ec2.internal.warc.gz"}
https://mathematica.stackexchange.com/questions/71808/plotrange-automatic-the-exact-function-used-to-calculate-outliers
# PlotRange->Automatic: the exact function used to calculate outliers I know that PlotRange->Automatic does the following: "the distribution of coordinate values is found, and any points sufficiently far out in the distribution are dropped. Such points are often produced as a result of singularities in functions being plotted." This is quoted verbatim from the PlotRange option page in the Mathematica documentation. My question is: does anyone know exactly what function Mathematica is using? I have this nifty plotting function I called ExpPlot[] that combines a lot of options and plot types. Long story short: in Mathematica 10.0.2, PlotRange is set using only the first dataset of a multi-dataset plot. I want a simple workaround that gives me the "Automatic" plot range option. • A "not simple" approach is to use AbsoluteOptions to discern the auto-range for each plot, then devise a scheme to merge the ranges together. (I feel this has been asked before..) – george2079 Jan 15 '15 at 22:09 • I am not really interested in actually using any kind of option or part of the ListPlot function. I am simply wondering if anyone knows what Mathematica is using for "sufficiently far" out in the distribution. I just want to write something that mirrors the Automatic setting and implement that. – Nick Jan 16 '15 at 15:54 • Interesting question, but I think as a practical matter, even if you knew exactly the criteria, in order to replicate Plot's result you would need to replicate Plot's recursive function mapping to use it. – george2079 Jan 16 '15 at 17:27 • I have access to the PlotRange option of list plot as well as the input data, which is always 2D data sets. So I should be able to replicate it assuming I know what the criterion is, yes? – Nick Jan 20 '15 at 0:38
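Since the exact criterion is undocumented (which is the point of the question), any replica is a guess. A common stand-in for "drop points sufficiently far out in the distribution" is an interquartile-range trim; a minimal sketch in Python, where the factor 1.5 is my assumption, not Mathematica's rule:

import numpy as np

def robust_range(y, factor=1.5):
    # Keep values within [Q1 - factor*IQR, Q3 + factor*IQR]; return their extent.
    q1, q3 = np.percentile(y, [25, 75])
    iqr = q3 - q1
    kept = y[(y >= q1 - factor*iqr) & (y <= q3 + factor*iqr)]
    return kept.min(), kept.max()

y = np.concatenate([np.sin(np.linspace(0, 6, 200)), [50.0]])  # one spike
print(robust_range(y))  # roughly (-1.0, 1.0): the spike is dropped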
2019-11-12 23:56:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20234626531600952, "perplexity": 733.4422543705838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665809.73/warc/CC-MAIN-20191112230002-20191113014002-00406.warc.gz"}
http://mathhelpforum.com/advanced-algebra/153272-3x3-matrix-i-can-find-characteristic-polynomial-but-not-eigenvalues.html
# Thread: 3X3 Matrix, I can find the Characteristic polynomial but not the eigenvalues 1. ## 3X3 Matrix, I can find the Characteristic polynomial but not the eigenvalues I need to learn to find eigenvalues and eigenvectors. I can find the characteristic polynomial (and if I have the eigenvalues I can find the vectors). However, I cannot get from the characteristic polynomial to the eigenvalues. For example, if I have the 3x3 matrix (1 0 4) (0 2 0) (3 1 -3) the characteristic polynomial is x^3 - 19x + 30 = 0 (x = lambda). I know that the real eigenvalues are {-5, 2, 3}, and then I can work out the eigenvectors. But how do you get from this step (x^3 - 19x + 30 = 0) to the eigenvalues? Thank you! 2. You can use Horner's scheme (are you familiar with it? sorry, I don't know whether they acquainted you with that at your level of education), or you can factor directly: $x^3-19x+30=0$ $x^3-19x+30=x^3-9x -10x +30 = x(x^2-9) -10(x-3)= x(x^2-3^2) -10(x-3) =$ $= x(x-3)(x+3)-10(x-3) = (x-3) [x(x+3) -10]$ $= (x-3)(x^2+3x-10) = (x-3)(x-2)(x+5) =0$ so you have $x-3=0 \Rightarrow x_1=3$ $x-2=0 \Rightarrow x_2=2$ $x+5=0 \Rightarrow x_3=-5$ 3. Another thing you can do is use the "rational roots theorem": if $a_nx^n+ a_{n-1}x^{n-1}+ \cdots+ a_1x+ a_0= 0$ is a polynomial equation with integer coefficients and $\frac{m}{n}$ is a rational root of the equation, then the denominator, n, must divide the leading coefficient, $a_n$, and the numerator, m, must divide the constant term, $a_0$. Here, your equation is $x^3- 19x+ 30= 0$, a polynomial equation with integer coefficients. The leading coefficient is 1 and the only integers that divide 1 are 1 and -1. Since the denominator of any rational root must be 1 or -1, the only rational roots must be integers. The constant term is 30 so any rational root must be a factor of 30. Since 30= 2(3)(5), the only possible rational roots are 1, -1, 2, -2, 3, -3, 5, -5, 6, -6, 10, -10, 15, -15, 30, and -30. Just put each of those into the equation to see whether or not they satisfy the equation and you will see that 2, 3, and -5 satisfy it. Note: this does not guarantee that there are any rational roots to a polynomial equation, but if there are, it will find them. We should point out that, in general, finding eigenvalues is not at all an easy task!
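If you just want the numbers checked (my own addition, not part of the thread), numpy will do both steps: the roots of the characteristic polynomial, and the eigenvalues computed directly from the matrix.

import numpy as np

A = np.array([[1, 0, 4],
              [0, 2, 0],
              [3, 1, -3]])

print(np.roots([1, 0, -19, 30]))  # roots of x^3 - 19x + 30: -5, 3, 2
print(np.linalg.eigvals(A))       # the same eigenvalues, straight from A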
2017-01-22 03:59:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8748005628585815, "perplexity": 177.74731079501524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00386-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.utab.com/music/Kg7kkbj8cIP
# Bahamas - Lost In the Light November 12, 2015 ## Lyrics I'm lost in the light I pray for the night To take me, to take me to After so many words Still nothing's heard Don't know what we should do So if someone can see me now, let them see you It was my greatest thrill But we just stood still You let me hold your hand 'til I had my fill Even countin' sheep Don't help me sleep I just toss and turn right there beside you So if someone could help me now, they'd help you too. They'd help you to See you through All the hard things we've all gotta do 'Cause this life is long And so you wouldn't be wrong Bein' free, leavin' me on my own And I held my own Still I rattled your bones I said some awful things and I take them back If we would try again Just remember when Before we were lovers, I swear we were friends So if someone could see me now let them see you Let them see you See you through All the hard things we've all gotta do 'Cause this life is long So you wouldn't be wrong Bein' free here with me on my own
2017-03-26 03:55:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410645127296448, "perplexity": 14854.512869305465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189092.35/warc/CC-MAIN-20170322212949-00341-ip-10-233-31-227.ec2.internal.warc.gz"}
https://wlord.org/ag-algebraic-geometry-what-is-the-idea-behind-the-proof-of-the-isogeny-theorem-and-theorem-iii-7-9-serre-in-silvermans-book-answer/
# ag.algebraic geometry – What is the idea behind the proof of the Isogeny theorem and Theorem III.7.9 (Serre) in Silverman's book? 1. Let $$E_1$$ and $$E_2$$ be elliptic curves over the field $$K$$ and let $$l\neq\text{char}(K)$$ be a prime number. Let $$T_l(E_i)$$ be the Tate module of $$E_i$$, $$i=1,2$$. Then the natural map $$\mathrm{Hom}_K(E_1,E_2)\otimes\mathbb{Z}_l\longrightarrow\mathrm{Hom}_K(T_l(E_1),T_l(E_2))$$ is an isomorphism if: i) $$K$$ is a finite field. ii) $$K$$ is a number field. 2. Let $$K$$ be a number field and $$E/K$$ be an elliptic curve without complex multiplication. Let $$\rho_l:G_{\bar{K}/K}\longrightarrow\mathrm{Aut}(T_l(E))$$ be the $$l$$-adic representation of $$G_{\bar{K}/K}$$ associated to $$E$$. Then: i) $$\rho_l(G_{\bar{K}/K})$$ is of finite index in $$\mathrm{Aut}(T_l(E))$$ for all primes $$l\neq\text{char}(K)$$. ii) $$\rho_l(G_{\bar{K}/K})=\mathrm{Aut}(T_l(E))$$ for all but finitely many primes $$l$$. I have recently started studying elliptic curves from Silverman's book (The Arithmetic of Elliptic Curves) and I am a proper beginner in the theory of elliptic curves, so I am looking for the idea of the proof of 1. I checked the cited paper of Tate in the book, which further directs me to an article written in German, so I couldn't read anything there. And for 2, I couldn't really understand the proof from the source cited in the book. So if anyone could explain to me the idea behind the proofs of these two theorems, or just how to visualize these two theorems geometrically or algebraically, I would really appreciate it. PS: I am not looking for a complete proof of either theorem.
2022-06-28 06:47:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6630523800849915, "perplexity": 213.77613611761777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00127.warc.gz"}
http://motls.blogspot.dk/
## Tuesday, January 24, 2017 ... ///// ### Quantum computing lady: feminized physics is a formula for failure The 2017 Australia Day address (full video in AU only; transcript globally) denounces the feminist dumbing down of physics education Ms Michelle Simmons is a physics professor in New South Wales, Australia focusing on quantum computation – which isn't a soft science, I assure you – and surrounding fields and boasting physics/chemistry degrees, 360 publications including 27 in PRL, and $h=40$, among other things. She's spent some time in a leading Cambridge, UK lab and is doing well in the land of the kangaroos, too. Her lab has a nontrivial chance to become the first group that actually constructs the quantum computer, whether it's based on quantum dots or a few more approaches she's involved with. You may find lots of her talks on YouTube. The Australian, a top daily, dared to publish the views of this British-born lady on the deterioration of the physics education in Australia five hours ago: ‘Feminised’ physics a formula for failure, says Michelle Simmons Also: Australia Day Address orator Michelle Simmons horrified at 'feminised' physics curriculum (SMH) Also: 'What a disaster': Leading scientist says high school physics is being 'feminised' - with difficult equations taken out of exams to make the subject more appealing to girls (Daily Mail) The text starts with a rather incredible comparison of some exam questions in 1998 on one side and 2001-2006 on the other side: In 1998, the students were given a diagram with wires and dimensions and were expected to compute magnetic fluxes and determine directions etc. In the newer type of exams, they were supposed to write essays about the "impact of electronics on the society" and speculate whether electronics will keep on getting cheaper and more powerful. One must worry how much cherry-picking was involved – or how representative the questions have been. ### Czech diplomacy frees Polish evangelist in Syria Two Czech pro-Kurdish warriors against ISIS were caught by the pro-ISIS Turkish government and most TRF readers believe that they're doomed. But it doesn't always have to be like that. Even seemingly tougher situations may be resolved. Hours ago, we've heard about such a great example. Leszek Marian Panek (54) has been a well-known character in Poland, especially in Wroclav. He believes that the return of Jesus Christ is imminent and will be accompanied by a nuclear war. His golden Nissan recommends Jesus Christ as the new king of Poland, among other things. God told him to sleep in that car and distribute tens of thousands of flyers. American and other readers surely know similar characters. ## Monday, January 23, 2017 ... ///// HiLASE, a $50 million center near Prague employing numerous Japanese, Indian, and Italian folks, among others, has launched the new 1,000-watt laser DiPOLE 100 (Google Images), a fully diode pumped solid state laser (DPSSL) designed and constructed at STFC’s Central Laser Facility (CLF) at Rutherford Appleton Laboratory in the U.K. and transferred to Czechia in two big trucks in late 2015. For half a year, the laser will only be used by local employees. Companies will be able to exploit the device from July 2017. The idea is that the laser should be used to manipulate surfaces, test components of the aircraft, and do other things that I am extremely far from being good at.
The center hoping to become an important hi-tech hub is located in Dolní Břežany [The Lower Birchvilles], Southern outskirts of Prague near the river: maps, Google Images. To make the geography more confusing, a similarly named village Panenské Břežany [The Virgin Birchvilles] with a memorial is located some 10 miles North of Prague. That village with 500 inhabitants has no big laser but has also punched above its weight because that's where the Imperial Protectors Konstantin von Neurath and Reinhard Heydrich lived in the early 1940s. After the latter, a violinist and a main author of the Holocaust nicknamed the Blonde Beast, was executed by the Czechoslovak government in exile (while commuting from the Virgin Birchvilles to the Prague Castle in his Mercedes 320 Convertible B) in 1942, the house was used by his wife Lina and her four kids (Klaus, Haider, Frauke, and Marine – OK, I admit the last two, girls, should have been Silke and Marte LOL) up to 1945 when the Heydrich family became a bit unpopular in the Czech lands and the house was taken and exploited by The Research Institute for Metals. Even though Lina also lost her son Klaus in a 1943 car accident, she – a romanticized Nazi up to her death in the 1980s – remembered the years in Tschechei as the most wunderbar years of her life. ### Obnoxious climate alarmist had to be ejected from an airplane Spinoff, off-topic: a new spinoff of The Big Bang Theory is being prepared. A twelve-year-old Sheldon Cooper will be educated by his evangelical mother and others in Texas. I guess that the lead actor will be earning less than Sheldon's, Leonard's, and Penny's $1 million per episode. ;-) On Saturday, Trump supporter Scott Koteskey and his fellow passengers released and combined this video footage: On a flight from Baltimore to Seattle, his female neighbor asked him whether he was for Trump or against Trump. Her name isn't known so the Internet only refers to her descriptively as the "wretched liberal hag". He answered that he had come to the East Coast to celebrate democracy, ma'am. She didn't like the answer so she promised to vomit on his lap and demanded that he be moved elsewhere. Her complaint was that folks like Koteskey enabled Trump to control the nuclear button. But you may see that the most important concern of hers was that he doesn't "believe" climate change. (The word "believe" was stressed and her hands indicated the quotation marks that I have added to the sentence, too.) "Do you believe in gravity?", Mr Koteskey was asked. Did you know that gravity is just a theory? ## Sunday, January 22, 2017 ... ///// ### Arts vs sciences, Rovelli vs Dawkins In two days, American readers will be provided with an English translation of Reality Is Not What It Seems: The Journey to Quantum Gravity by Carlo Rovelli. Rovelli is tightly connected to the Italian (and French, I believe) inkspillers' community which is the main reason – I believe – why the book became a bestseller in Italy in 2014 and has sold something like 1 million copies in the world so far. Just to be sure, his book Seven Brief Lessons on Physics was published after the Reality... in Italy but the English translation emerged before the Reality.... ## Saturday, January 21, 2017 ... ///// ### How many problems were fixed by the inauguration? First, off-topic. Google Maps have finally adopted the short name "Czechia" as the primary country name on their maps.
The frequency of usage of "Czechia" has tripled since early 2016 but it's still a factor of 50 below the "Czech Republic". I am not dreaming about the eradication of the term "The Czech Republic". I just want many people with common sense to understand that it's so much more meaningful to use a standardized official short name when it makes sense – e.g. on the maps where the room is often insufficient. But back to the main topic. Many of us watched most of the inauguration yesterday. Donald Trump did well and I didn't expect otherwise. He has all the basic skills to be a good actor – and the inauguration is a ceremony that needs a good actor. He enjoyed it, gave a good and somewhat touching inauguration speech, but we didn't learn too much from it. Also, I would agree that the speech basically said FU to the rest of the world which hopefully justifies the detached feeling that unAmerican Trumpites like me may have experienced. ;-) ## Friday, January 20, 2017 ... ///// ### QM is self-evidently free of causality paradoxes Someone sent me a 2012 preprint by Aharonov and 3 co-authors that claims that one may prove some acausal influence – future decisions affect past outcomes – with the help of the problematic "weak measurement" concept. This is such a self-evident piece of rubbish that I am amazed how any physics PhD may ever fail to see it. In the v5 arXiv version of the paper, the paradox is described as an experiment in bullets on page 12-of-15. In the morning, they measure some spins weakly, in the evening, they do so strongly, and some alleged agreement between the two types of measurements is said to prove that the "later randomly generated numbers" were already known in the morning. ## Thursday, January 19, 2017 ... ///// ### A monstrously symmetric cousin of our heterotic Universe Natalie M. Paquette, Daniel Persson, and Roberto Volpato (Stanford, Sweden, Italy) published a mathematically pretty preprint based on the utterly physical construction of the heterotic string. BPS Algebras, Genus Zero, and the Heterotic Monster Well, this paper elaborates upon their previous PPV1 paper which is exactly 1 year old now but I am sure that you will forgive me a 1-year delay in the reporting. It's just remarkable that something so mathematically exceptional – by its symmetries – may be considered "another solution" to the same spacetime equations that also admit our Universe as a solution. I still consider the $E_8\times E_8$ heterotic string to be the most well-motivated candidate description of Nature including quantum gravity. Dualities probably admit other descriptions as well – F-theory, M-theory, braneworlds – but the heterotic string may be the "closest one" or the "most weakly coupled" among all the descriptions. Heterotic string theory describes our Universe as a 10-dimensional spacetime occupied by weakly coupled strings whose 2-dimensional world sheet is a "hybrid" ("heterosis" is "hybrid vigor", the ability of offspring to surpass the average of both parents). The left-moving excitations on the world sheet are taken from the $D=26$ bosonic string theory while the right-moving ones are those from the $D=10$ fermionic string theory (with the $\mathcal{N}=1$ world sheet supersymmetry). Because the critical dimensions don't agree, the remaining $D_L-D_R=26-10=16$ left-moving dimensions have to be compactified on the torus deduced from an even self-dual lattice (or fermionized to 32 fermions whose boundary conditions must be modular invariant).
There are two even self-dual lattices in 16 dimensions and we obtain theories with spacetime gauge groups $SO(32)$ or $E_8\times E_8$. Both of them have rank $16$ and dimension $496$. ### Brno, Czechia joins plans to build Hyperloop Ten months ago, I mentioned that our Slovak brothers – with the unmatched support from the Slovak government – decided to seriously work on plans to build Hyperloop between Bratislava, the Slovak capital, and nearby cities like Budapest and Vienna. Brno [pronounce: burn-naw] is well-known for the Masaryk racing circuit/automotodrom, some industrial exhibitions, Brno's giant penis statues (it's actually Jobst of Moravia and Luxembourg on a female horse), as the golden ship filled with pretty girls (orig.), crooked spire on their city hall saying something about the justice over there, the Špilberk castle with a prison, functionalist villa Tugendhat, and as the place where Gregor Mendel discovered the laws of genetics, among other things. Today, Czech media and Wired (and other English-language outlets; "Slovakia's Hyperloop moves a step closer to not being a joke") told us that my homeland has finally joined this experimental movement. Brno (DE: Brünn), the modern capital of Moravia (an ex-margraviate formally outside the Czech/Bohemian kingdom) and Czechia's second largest city (400,000 people and twice as much in the broader area), signed a declaration with HTT vowing to work on Hyperloop. They would like to connect Brno with Prague – the Czech capital hasn't signed anything (and the Czech government finds Hyperloop too experimental) – but as far as the city halls' OK goes, you could at least connect Brno and Bratislava which are 70 miles away. That's not terribly helpful because it only takes some 80 minutes by car to go from one city to the other. ## Wednesday, January 18, 2017 ... ///// ### GISS: 1998-2016 comparison suggests a trend of 2 °C per century Thursday update: British HadCRUT4 have completed their 2016 data, too. The last column contains the annual averages. The difference from GISS is significant. 2016 was only 0.013 °C (GISS: 0.13 °C!) warmer than 2015. December 2016 was 0.432 °C (GISS: 0.30 °C) cooler than December 2015. And 2016 was 0.237 °C (GISS: 0.36 °C) warmer than 1998, indicating just 1.3 °C (GISS: 2 °C, satellites: 0.11 °C) of warming per century! While Czechia is enjoying the best skiing season – when it comes to the snow conditions – in years (Ore Mountains and the Bohemian Forest often provide skiers with up to 150 cm of snow) and I've exploited this fact as well, The New York Times told us about a press conference by NOAA and NASA today that finally announced the temperature data for 2016. GISS temperature anomalies, 1880-2016, in multiples of 0.01 °C On January 3rd, I mentioned that both satellite-based teams quantifying the global mean temperature (UAH AMSU, RSS AMSU) concluded that 2016 was 0.02 °C warmer than 1998. These were otherwise very similar "end of a strong El Niño" years separated by 18 years. According to these numbers and nothing else, one could estimate that the warming per century is some 0.11 °C, a negligible amount. The GISS data derived from surface measurements (weather stations for the land and some other gadgets in the ocean) ended up with a very different number than 0.02 °C for the difference between the temperatures in 2016 and 1998.
### Maybe tariffs are not worse than taxes And all sensible "protectionist fees" in the whole economy are basically tariffs While I sympathize with most plans of Donald Trump's – and his philosophy about many things – it's likely that the potential worsening of the international trade is something that I have the biggest trouble with. His protectionist measures may hurt those who export to the U.S. They may also lead to more or less symmetric responses so the exporters from the U.S. will be hurt, too, like all consumers. But is it so bad? Am I really scared or disturbed? Tariffs are worse than nothing, I thought – for those who trade internationally. But they're also an extra income of the government. If the total income of the government is kept constant, the tariffs may really replace some other sources of the government's income – which is mainly taxes. When I think about the protectionist matters in this way, in this context, tariffs look much less bad. Tariffs are just another form of taxation, one that is robbing a particular group of people – the foreign exporters, or the domestic importers who are in between, or the domestic consumers buying the foreign goods. (Which of these three participants in the international transaction really pays is a purely administrative detail that doesn't change anything about the essence and impact of these fees.) Is it better or worse when the money is collected from these groups of people – relatively to the taxation which collects the money from all the domestic folks and companies for their sins known as paid work? ## Monday, January 16, 2017 ... ///// ### By his Euroskepticism etc., Trump is helpful for most Europeans Two days ago, I wanted to discuss Black Lives Matter and DisruptJ20, a terrorist organization that plans to disrupt the inauguration on Friday (not to mention the traffic in D.C.), maybe ignite a new U.S. civil war, and that instructs its member terrorists how to deal with cops, courts, and prisons. But at the end, I think that these radical loons will stay irrelevant and the following topic is more important. Donald Trump has given an interview to Bild, "I don't know how long my trust in Putin will survive" (paywall), which was fortunately summarized in a tendentious (but that doesn't matter) article in WaPo. Like the PC WaPo inkspillers, the Eurosoviet apparatchiks are shocked and they talk about a looming trans-Atlantic split! But Donald Trump didn't say anything that the Europeans should be scared of. He just makes sense. Much of what he's saying just reproduces what wise Europeans like me have been saying for many years. ### Does an increased number and exposure of traders slow down convergence of prices to fair values? I don't think so, markets with lots of motivated traders are equally fast and more accurate Here's another thought about the currencies, especially the Czech crown. As I approximately predicted, the December 2016 reading for the year-on-year inflation rate was 2.0%, in precise agreement with the Czech National Bank inflation target, which leads to fundamental reasons to exit the intervention regime. Inflation rates are rising in Germany, the Eurozone, the U.S. – across the world where bankers were (unjustifiably) scared of deflation. The anomalous era of deflation and especially negative interest rates simply had to end.
It's ironic that what central banks couldn't do after purchases of trillions of dollars in bonds and other things for several years (the efforts to increase the inflation rate), the dead squirrel on Donald Trump's head was capable of achieving within a month and for free. (He has also cooled down the Earth and Nature had to pay for it.) In Czechia, the recent steep jumps in the inflation rate were also helped by the EET Big Brother monitoring of all cash receipts that has already been introduced to the restaurant+hotel industry and will spread to the rest of the businesses receiving cash (and payment cards) in three more waves. But most of the revived inflation is more global, has various reasons (including the non-weakening of oil in the recent year). But yes, I think that Trump's "fresh wind" is the most important single global reason for the growth of the inflation and inflation expectations across the Western world. He's already returned some common sense. It's common sense that you pay positive and nontrivial interest rates for loans. So it will probably be so under common-sense Trump. It's also common sense that a government capable of borrowing – and perhaps intimidating creditors – will probably do so which is why it may be reasonable to expect that despite his affinity to the fiscal responsibility, Trump will run big budget deficits and further increase the inflation rate in this way. During November 2013 when the floor "EUR/CZK shall be above 27" was introduced, the Czech National Bank reserves jumped from 35 to 41 billion euros (euros are relevant because that's where a majority of the reserves are denominated). By the end of 2016, they stood at 81 billion – more than doubled since late 2013 – because the central bank had to print (both electronic and physical) crowns and buy euros (and euro-denominated bonds and other things) in exchange. In October 2016, the jump was 5 billion euros. Both in November and December, the buying was close to 0.5 billion euros per month. But that post-Trump-victory slowdown dramatically changed in early 2017. In the first two weeks of the year, 10 billion euros were poured into the Czech currency. At the end of the month, the ČNB reserves may be up to 30 billion or so higher than in the previous month because they also needed to add some 10 billion because of some EU regulation and there are two more weeks. ## Saturday, January 14, 2017 ... ///// ### Princeton climate realist Happer meets Trump The media reported that Will Happer, a wise Princeton physicist and climate skeptic with whom I have exchanged a couple of nontrivial e-mails, has visited the Trump Tower in New York and met Donald Trump. Google News. I guess that Happer's background is sufficiently different from Trump's but I think it's vital for the soon-to-be U.S. president to keep some interaction with scholars like Happer. If you're not familiar with Happer, you should listen to this 31-minute 5-weeks-old interview. He's an important guy in a coalition of friends of CO2 (I've never memorized the exact name, maybe just the CO2 Coalition), has been famous in science for figuring out how to suppress the sodium-line-based twinkling in the telescopes by lasers, and was interested in the environmental and climatological issues since his service in the DOE under Bush Sr. See also this written interview via WUWT and Climate Depot's useful collection of hyperlinks about Happer. ## Friday, January 13, 2017 ...
///// ### Klaus on Shevarnadze's RT show Viewers with common sense can distinguish the nuances Czech ex-president Václav Klaus traveled to Moscow because Russian became another language in which his and his aide Weigl's recently penned book about the "Migration Period v2.0" (which has a removal van on the cover in Czech and some other languages which use the same word for migration and moving) was published, so he used the opportunity to give an interview for SophieCo, an RT show. Web page of the show and transcript, YouTube backup In Fall 2015, he already talked to RT's Oksana Boyko at the Worlds Apart show. I think that both young women do their job very well but both have shown some kind of unfamiliarity with the intellectual discourse that Klaus and similar people represent. Sophie Shevarnadze, the granddaughter of the well-known Soviet minister of foreign affairs Eduard Sh., is considered the hottest woman in Russia by many people. I think she's primarily very smart and her business-like short haircut emphasized that point and reduced the room for distractions. ;-) ### Volkswagen #1 carmaker again, Fiat-Chrysler and Renault harassed for emissions cheating In September 2015, the U.S. Environmental Protection Agency began its holy war against the Volkswagen Group which has used a "defeat device", a clever software-hardware gadget that reduces emissions (but also efficiency of the engine) during the emissions testing, but is turned off otherwise. This war has led to the resignation of the VW boss as well as a brutal collapse of the stocks. Look at the graphs of VOW3, the main publicly traded Volkswagen stock, to see what it looked like in September 2015. In the month, it collapsed from €170 to €90 or so, almost by one-half. For a year, Volkswagen also lost its yellow shirt for the #1 carmaker to Toyota. But things are different in early 2017. Volkswagen is the world's #1 automaker again and the current price of the stock is €148, much closer to the September 2015 maximum than the minimum. VW has already paid over $17 billion to U.S. car owners – which I find insanely high but it wasn't lethal. In comparison with that, the 2-day-old news that VW would pay $4.3 billion to the U.S. government looked like good news. ## Thursday, January 12, 2017 ... ///// ### Rex Tillerson, a lukewarmer, stands out like a sore thumb in the new era Donald Trump has said that global warming was a hoax invented by the Chinese in order to weaken America. And believe me, Trump isn't a great fan of China so this link between China and the man-made global warming movement wasn't meant to be a compliment for the latter. He has chosen numerous folks for his administration whose climate realist credentials seem indisputable: Scott Pruitt for the EPA, Cathy McMorris Rodgers for the Interior Department, and Rick Perry for the Department of Energy. Given the fact that Rex Tillerson has served as a CEO of ExxonMobil, you would think that it's similar with this guy. Except that it's not. All climate jihadists who have been fighting "climate change" and ExxonMobil should notice: If you have a relative ally in the Trump administration, it's the former CEO of ExxonMobil! ;-) What an irony. But the green morons don't understand it – instead, they are terribly alarmed by Tillerson. Don't get me wrong. He is not as superficial and insane as his predecessor – he should be an improvement relatively to John Kerry. However, his views are mixed.
### Czech president allowed to say many things in WaPo interview In recent months, The Washington Post has emerged as a flagship among the media outlets that don't hesitate to aggressively promote the misleading and sometimes utterly ludicrous memes associated with the outgoing politically correct U.S. administration. They published an interview with someone on the opposite side of these cultural wars, Czech president Miloš Zeman. And I would praise the interview for one important quality: Zeman gave his answers to some of the most important questions or questions most often associated with him. One may say that he was not being censored. However, there is a flip side. The journalist was basically trying to mock Zeman from the beginning to the end. At least in between the lines, almost every comment or question contains a suggestion that the reader shouldn't take Zeman seriously. Let me rephrase the interview in a language that is just a little bit exaggerated.
2017-01-24 23:14:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2592775821685791, "perplexity": 3468.51242169057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00432-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/a-solenoid-is-14-m-long-and-has-530-turns-per-meter-what-is-the-cross-sectional-area-of-th-q2377941
## A solenoid is 1.4 m long and has 530 turns per meter. What is the cross-sectional area of this solenoid if it stores 0.34 J of energy when it carries a current of 16 A?
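A worked solution (not from the page; it assumes the standard ideal long-solenoid inductance L = mu0*n^2*A*l and stored energy U = L*I^2/2):

import math

mu0 = 4*math.pi*1e-7   # vacuum permeability, T*m/A
n, length, I, U = 530.0, 1.4, 16.0, 0.34  # turns/m, m, A, J

# U = (1/2) * mu0 * n^2 * A * length * I^2  =>  solve for the area A
A = 2*U / (mu0 * n**2 * length * I**2)
print(f"A = {A:.2e} m^2")  # about 5.4e-03 m^2, i.e. roughly 54 cm^2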
2013-05-19 22:33:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8193723559379578, "perplexity": 911.0392380595512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698104521/warc/CC-MAIN-20130516095504-00065-ip-10-60-113-184.ec2.internal.warc.gz"}
http://aapo.freccezena.it/how-to-plot-complex-numbers-in-matlab.html
# How To Plot Complex Numbers In Matlab This example shows how to plot the imaginary part versus the real part of two complex vectors, z1 and z2. Numbers and Booleans Strings Portable Functions Complex Numbers Arrays. Multiple sets of parameters can be given to plot; each pair of arguments is taken as x and y data pairs. For more information, see Work with Complex Numbers on a GPU (Parallel Computing Toolbox). in the set of real numbers. Ask Question Asked 6 years, 5 months ago. Learn more about complex number, line plot MATLAB. We need to be careful here, we are plotting complex numbers so we will actually plot the magnitude of the complex coefficients. The language of MATLAB is taken from that of Linear Algebra. Complex numbers may easily be plotted in the complex plane. Simplex Noise Matlab. Students learn how to write clean, efficient, and well-documented programs, while gaining an understanding of the many practical functions of MATLAB®. Thus MATLAB can handle complex numbers. Firstly, if you say a phase Domain of a signal or of a number we understand by it that this one is a complex number (Phase exists only in complex numbers). In MATLAB ®, i and j represent the basic imaginary unit. After, you create a 3D mesh plot that will plot the real and imaginary axis in the first two dimensions and the magnitude of the complex number in the third dimension. a = rand + 1i*rand. It turns out that the routine bumped a to -0. Here are some tips for how to go about doing so. The built-in MATLAB function "cart2pol" converts cartesian coordinates (x,y) to polar coordinates (Theta,R). Either modify your code from the previous problem or write a new code to get a MATLAB program newton(f,df,niter), that takes a function f and its derivative df. A complex number 3 + 10 i may be input as 3 + 10i or 3 + 10*i in Matlab (make sure not to use i as a variable). This means that you can take powers and roots of any number. Matlab can define a set of numbers with a common increment using colons. It works quite fine, exceptionally when it Comes to calculate the square root of a complex number. If X is a vector, then fft (X) returns the Fourier transform of the vector. How can I plot the spectrum of a signal in MATLAB? Ask Question Asked 3 years, 4 months ago. Easy plotting and visualization Easy Integration with other Languages/OS’s – Interact with C/C++, COM Objects, DLLs – Build in Java support (and compiler) – Ability to make executable files – Multi-Platform Support (Windows, Mac, Linux) Extensive number of Toolboxes – Image, Statistics, Bioinformatics, etc Matlab. Polar Form of a Complex Number. Save the Excel file into your MATLAB folder. Complex numbers. For example,. The FFT function computes the complex DFT and the hence the results in a sequence of complex numbers of form. z = peaks(25); figure mesh(z) Surface Plot. Use matlab to output the Cartesian representation of these 6 complex numbers. I tried already 3 different implementations on how to calculate a complex square root in C, but None of this implementation Matches the matlab result. In this tutorial, I am decribing the classification of three dimentional [3D] MATLAB plot. k is called a time series. matfile_listvar — Lists variables of a Matlab V5 binary MAT-file. Plot a regular polygon of N sides on com puter screen. How to plot complex functions in Matlab? For example: Y[e^jx] = 1 / (1 - cosx + j4) I tried some code, but I think the right way is by plotting real and imaginary part separately. matlab/Octave Python R 2. 
Store all these complex numbers in a single array and use a for loop to make your plot. Find the absolute value of the elements of the vector. , on the Real and Imaginary axes). Second number: 1/(10*jw) This is some college assignment for Matlab. That is, solve completely. To find the roots of $$z^2+6z+25$$ you enter the coefficients of $$z$$ >>eqn = [1 6 25] eqn = 1 6 25 and ask for the roots: >>roots(eqn) ans = -3. Graphing in Matlab Multi-plot window Complex number Magnitude of complex numbers Plotting two graphs on one figure Legend plot Title subplot in Matlab Figure in Matlab Created by Eli Chmouni. I tried already 3 different implementations on how to calculate a complex square root in C, but None of this implementation Matches the matlab result. The tutorial here seemed good to me at first glance, though I can't claim to have read it through. plot(x,y), where x and y are arrays of the same length that specify the (x;y) pairs that form the line. 0000i >> A=[1 2+3*j 4+5*j; 6+7*j 8 9*j] A = 1. This article covers how to create matrices, vectors, and cell arrays with the programming software MATLAB. Consequently, you have to convert each of your complex numbers to a vector (with 0 as the first element) and then plot those vectors. The frequency points are chosen automatically based on the system poles and zeros. Plotting a complex number $$a+bi$$ is similar to plotting a real number, except that the horizontal axis represents the real part of the number, $$a$$, and the vertical axis represents the imaginary part of the number, $$bi$$. how to plot hyperbola in matlab? Here is a number of keywords that visitors typed in recently in order to visit our math help pages. It works quite fine, exceptionally when it Comes to calculate the square root of a complex number. Notice that the names of some basic operations are unexpected, e. In MATLAB ®, i and j represent the basic imaginary unit. MATLAB provides an int command for calculating integral of an expression. Plot aesthetics. Oberbroeckling, Spring 2018. The problem is, that I want lines along the rows and Matlab mixes the coordinates of the complex numbers up and the line appears in a zig-zag all over the place. Plotting inequalities can be a bit difficult because entire portions of the graph that you see must be included to make the plot correct. Therefore it surprises people sometimes when the output of fft is unexpectedly complex. Use the direct method supported by MATLAB and the specific complex functions abs, angle, imag, real, conj, complex, etc. How to represent waveform (sum of sinusoids) in Learn more about signal processing, wavelet, plot. csv file such that Matlab considers some of the actual numbers as the imaginary part of another number. In this interpretation we call the x. Plotting the complex numbers in Python. pyplot as plt import numpy as np. Enter transfer function in MATLAB. Plotting and graphics in MATLAB 12. Matlab allows you to create symbolic math expressions. 3 Form of Complex Number Real Axis Imaginary Axis ( , )x y z r x iy z 4. Create a numeric vector of real values. A menu should open up that will allow you to add x and y axis labels, change the range of the x-y axes; add a title to the plot, and so on. Second number: 1/(10*jw) This is some college assignment for Matlab. You can also determine the real and imaginary parts of complex numbers and compute other common values such as phase and angle. The numerical scope of MATLAB is very wide. You can use them to create complex numbers such as 2i+5. 
But complex numbers, just like vectors, can also be expressed in polar coordinate form, r ∠ θ. This combines the magnitude (modulus) and angle (phase) information into a single plot. The complex conjugate of a + bi is a − bi, and similarly the complex conjugate of a − bi is a + bi. For z = a + bi, a = Re(z) and b = Im(z).

At the prompt, plot(x, y) draws a line and plot(x, y, 'rx') draws red crosses; see help plot for the options. To plot the real part versus the imaginary part for multiple complex inputs, you must explicitly pass the real parts and the imaginary parts to plot. The expression 10*log10( abs( fftshift(fft(y)) ) / length(y) ) scales the spectrum on a logarithmic scale; note that it has to be written with log10, not log, to get a decibel-style axis. The three-dimensional line-plot command is plot3. The Symbolic Math Toolbox is happy to take erfc() of a complex number.

A common beginner confusion: "How do I write a program in MATLAB to plot x(t) = cos(t) + j sin(t)? I was told it should be a circle, but I'm seeing a sinusoidal signal." If x is complex and you call plot(t, x), MATLAB plots only the real part against t — a cosine; plot(x) alone plots real versus imaginary parts and does trace the circle. Relatedly, if your original waveform must be complex so that you can multiply two complex numbers together to get the amplified waveform, keep it complex throughout and take real or abs only at display time.

For test data, use the randi function (instead of rand) to generate random integers:

r = randi([10 50], 1, 5)
r = 1×5
    43    47    15    47    35

Complex numbers are converted between rectangular and polar form with abs and angle, as the sketch below shows.
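A small sketch of the rectangular-to-polar conversion (the value of z is arbitrary):

```matlab
% Sketch: magnitude/phase (polar form) of a complex number.
z = 3 - 4i;
r = abs(z);        % modulus, here 5
theta = angle(z);  % phase in radians, atan2(-4, 3)
fprintf('z = %g * exp(%gi)\n', r, theta);
compass(z);        % arrow from the origin in the complex plane
```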
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is a solution of the equation x² = −1. Complex numbers can be entered directly at the command line in rectangular form and stored in variables for further computation: type z = a + bi (or a + bj) at the prompt, or define z1 = 2+3i; z2 = 4-5i;. Use z1 = complex(2, 3) when you want to force a complex result (especially if you want to plot real numbers on the complex plane). The everyday functions are complex, real, imag, isreal, conj, abs, and angle; angle takes a complex number z = x + iy and uses the atan2 function to compute the angle between the positive x-axis and a ray from the origin to the point (x, y) in the xy-plane. To build on what Luis Mendo was talking about, I don't believe there is a utility in MATLAB that prints out a complex number in polar form — but one is easily written from abs and angle.

Some geometry for reading polar plots: because the radius is 2 (r = 2), you start at the pole and move out 2 units in the direction of the angle. The real part of the complex exponential is a cosine, and its imaginary part is the sine function, so a plot of the complex exponential is a rotating vector with a constant length A; when drawn as arrows, the location of the base of each arrow is the origin. As roots, complex numbers come in families too: there are five 5th roots of 32 in the set of complex numbers.

[Figure: three sinusoids, with the amplitude and time shift of each annotated on the plot.]

For a function of a complex variable, first create a mesh of values over −3 < x < 3 and −3 < y < 3 using meshgrid, then create complex numbers from these values using z = x + 1i*y; a surf plot of the phase (phase angle in radians) is one way to display the result. The MATLAB default for plotting a complex vector, e.g. plot(z, 'x'), is to plot the real parts on the horizontal axis and the imaginary parts on the vertical axis; the color, point marker, and line style can be changed by adding a third parameter (in single quotes). More broadly, MATLAB manipulates arrays directly — setting matrices and vectors, performing linear algebra operations such as finding eigenvalues and eigenvectors, and looking up values in arrays — and control tasks such as entering a transfer function, calculating its poles and zeros, and plotting a pole-zero diagram also live in the complex plane. For sampled signals, note that 0.002 seconds of data containing 5000 samples corresponds to a 2.5 MHz sampling rate.
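A sketch of the meshgrid recipe just described, applied to an illustrative function f(z) = z² + 1 (not one from the original text):

```matlab
% Sketch: visualize the phase of f(z) = z.^2 + 1 over a grid.
[x, y] = meshgrid(linspace(-3, 3, 200));
z = x + 1i*y;
f = z.^2 + 1;
surf(x, y, angle(f), 'EdgeColor', 'none');
view(2); colorbar;
xlabel('Re(z)'); ylabel('Im(z)'); title('Phase of z^2 + 1 (radians)');
```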
In this first MATLAB tutorial, I will try to show you the basics of the MATLAB user interface, data types, simple functions, and mathematical operations. MATLAB is an interactive system for doing numerical computations; its 3D plotting covers mesh, surface, ribbon, contour, and slice plots, and it can even read bathymetric data from a source file and create a KML file to display it. Valid forms for a complex number are the rectangular form a + bi and the exponential form r*exp(1i*theta); the size and data type of the output array of such operations is the same as the input. The distance between two complex values z1 and z2 is abs(z1 - z2) — which is what you need when, for example, measuring symbol spacing in a QPSK constellation plot.

To get started with plotting, create a script file and type the following code:

x = [0:5:100];
y = x;
plot(x, y)

When you run the file, MATLAB displays the plot; as one more example, plot the function y = x.^2 the same way. We can think of complex numbers as vectors, as in our earlier example, and some equations f(x) have complex roots that you can't see by plotting using only real values of x. (In Octave, the I and J forms of the imaginary unit are true constants and cannot be modified.) With abs and angle in hand, we can define an auxiliary function that helps print out the magnitude and phase of a complex number in polar form.
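One possible version of such an auxiliary function — the name zprint_polar and the degree formatting are my choices, not from the original:

```matlab
% Sketch of the auxiliary function mentioned above.
% Save as zprint_polar.m (or as a local function at the end of a script).
function zprint_polar(z)
    % Print a complex number as magnitude and phase in degrees.
    fprintf('%g * exp(%gi deg)\n', abs(z), rad2deg(angle(z)));
end
```

Calling zprint_polar(3 - 4i) prints 5 * exp(-53.1301i deg).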
If we wish to plot the point 3 + 2i, we note that the number is made up of the real number 3 and the imaginary number 2i. We can think of this complex number as either the point (a, b) in the standard Cartesian coordinate system or as the vector that starts at the origin and ends at the point (a, b); to find the magnitude and angle of z, use abs() and angle(). Working with complex numbers in MATLAB is easy — division of complex numbers included — and MATLAB enables you to add axis labels and a title to the result. By default, MATLAB accepts complex numbers only in rectangular form.

A typical college assignment reads: "Plot two complex numbers: the y-axis is the value of the number and the x-axis is the parameter w (Omega). First number: 5*jw. Second number: 1/(10*jw)." Both expressions are purely imaginary for real w, so the natural plot is their magnitude (and, if asked, phase) against w.
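A sketch of one way to do that assignment (the frequency range is an assumption):

```matlab
% Sketch: magnitudes of 5*jw and 1/(10*jw) versus w.
w = linspace(0.1, 10, 200);        % avoid w = 0 for the second expression
z1 = 5j .* w;
z2 = 1 ./ (10j .* w);
loglog(w, abs(z1), w, abs(z2));
grid on; xlabel('\omega'); ylabel('magnitude');
legend('|5j\omega|', '|1/(10j\omega)|');
```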
I am new to MATLAB, and my Signals and Systems teacher threw a tough lab at us that involves plotting the real and imaginary parts of a signal on the same figure using only the subplot function; the trick is to give each part its own axes and plot real(x) and imag(x) separately, as sketched below. (Once the figure is done, you can insert the plot into a PowerPoint slide just as you would insert any picture in PowerPoint 2010 or 2007.)

A discrete-time signal x is a bi-infinite sequence {x_k}, k = −∞, …, ∞; the variable k is an integer and is called the discrete time. The spectrogram function will bring up a plot of the short-time Fourier transform in a new figure window. To get a spectrum plot running from −fs/2 to fs/2 rather than 0 to fs, use the fftshift function. For fft, a complex input X must be a single or double array. Note that, if some of the eigenvalues of a matrix are complex, the plot command with the option '*' will plot the column vector of eigenvalues as points on the complex plane.

Some language details worth knowing: the MATLAB variable pi is predefined but changeable; the number 1 - sin(2)i has no meaning for MATLAB (the multiplication must be explicit: 1 - sin(2)*i); and if a number is real, its real part equals itself. The default MATLAB floating-point representation (double precision) spans magnitudes up to roughly 1.8×10^308 with about 16 significant decimal digits. In the Symbolic Math Toolbox, all applicable mathematical functions support arbitrary-precision evaluation for complex values of all parameters, and symbolic operations automatically treat complex variables with full generality. Complex step differentiation is a technique that employs complex arithmetic to obtain the numerical value of the first derivative of a real-valued analytic function of a real variable, avoiding the loss of precision inherent in traditional finite differences. The generalization to complex exponentials is important for later work in Fourier analysis, so we are laying a foundation for the future. An expression such as sqrt(sum(vec.^2)) — the Euclidean norm of the example vector vec (use abs(vec).^2 for complex data) — is built from the same pieces, and a function that computes rates of change at a given temperature can be handed straight to ode45.
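A minimal sketch for the lab (the test signal is invented for illustration):

```matlab
% Sketch: real and imaginary parts of a signal via subplot.
t = linspace(0, 1, 500);
x = exp(1i*2*pi*5*t);            % illustrative complex signal
subplot(2, 1, 1);
plot(t, real(x)); ylabel('real part'); grid on;
subplot(2, 1, 2);
plot(t, imag(x)); ylabel('imaginary part'); xlabel('t'); grid on;
```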
The x-coordinate is the real part of a complex number, so you call the x-axis the real axis and the y-axis the imaginary axis when graphing in the complex coordinate plane; the plane in which one plots these numbers is called the complex plane, or Argand plane. Label the real and imaginary axes and provide a legend. The plot function in MATLAB is used to create a graphical representation of some data: plot(Y) plots the columns of Y versus their index; if you specify two vectors as arguments, plot(x, y) produces a graph of y versus x; and a third string argument selects the color, line style, and marker symbol ('o', 'x', '.', etc.) to be plotted at each point. One can specify colors using a vector that gives the RGB triple, where in MATLAB each of the three values is a number from 0 to 1 (not 0 to 255, as is usual elsewhere). The subplot(m, n, p) function breaks the figure window into an m-by-n matrix of small axes and selects the p-th one for the current plot; the axes are counted along the top row of the figure window, then the second row, and so on.

MATLAB supports various numeric classes, including signed and unsigned integers and single-precision and double-precision floating-point numbers. To declare a complex variable, just type:

>> compnum = 1 + i
compnum = 1.0000 + 1.0000i

(Using complex(0, 0) is the only way to get a complex-typed variable whose real and imaginary parts are both 0.)

The beautiful Mandelbrot set is based on complex numbers, and so is a classic sorting exercise: given a list of complex numbers z, return a list zSorted such that the numbers that are farthest from the origin (0+0i) appear first — see the sketch after this paragraph.
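A sketch of the sorting exercise (sample values are made up):

```matlab
% Sketch: sort complex numbers, farthest from the origin first.
z = [1+1i, 3, -2-2i, 0.5i];
[~, idx] = sort(abs(z), 'descend');   % abs gives distance from 0+0i
zSorted = z(idx);
```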
Lab 1 should introduce students to MATLAB: m-files, the command window, the workspace, arrays, multiplication, powers, exp, sum, component-wise operations, defining complex numbers and complex arrays, plot, abs, phase, and a for loop with repeated addition for computing sums (after SYSC 4405, An Introduction to Matlab for DSP). The imaginary unit is represented in MATLAB by either of two letters, i or j; the keywords i and j both equal sqrt(−1), so watch out when using index variables and complex numbers in the same function. Creating a complex number is as simple as x = 1 + 2j, and the functions for manipulating complex numbers are real, imag, conj, abs, angle, cart2pol, and pol2cart. The conjugate of a complex number z = a + ib is noted with a bar (z̄) or sometimes with a star (z*); the apostrophe gives the conjugate transpose of a matrix:

>> B = A'   % conjugate transpose

If the input vector contains complex numbers, MATLAB plots the real part of each element (on the horizontal axis) versus the imaginary part (on the vertical axis). We have met a similar concept to "polar form" before, in polar coordinates, part of the analytical geometry section. A function of a complex variable, w = f(z), can be thought of in terms of its real components, and there are a number of ways to visualize the set of points (x, y, u, v) satisfying this equation; the mesh function, which creates a wireframe mesh, is one of them. It is often very easy to "see" a trend in data when plotted, and very difficult when just looking at the raw numbers, which is why testing and comparison is done using two test waveforms: (1) a sawtooth waveform, represented by a vector containing only real numbers, and (2) a complex sinusoidal waveform, a vector having both real and imaginary parts. Complex sine-wave analysis puts all of this to work: to illustrate the use of complex numbers in MATLAB, we repeat the previous sine-wave analysis of the simplest lowpass filter using complex sinusoids instead of real sinusoids.
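A sketch of that sine-wave analysis, assuming the "simplest lowpass filter" means y[n] = x[n] + x[n−1] (a common choice, but an assumption here):

```matlab
% Sketch: complex-sinusoid gain measurement of y[n] = x[n] + x[n-1].
f = 0.1;                          % cycles/sample, illustrative
n = 0:99;
x = exp(1i*2*pi*f*n);             % complex sinusoid input
y = x + [0, x(1:end-1)];          % the filter
g = y(50) / x(50);                % steady-state complex gain
fprintf('gain = %.4f, phase = %.4f rad\n', abs(g), angle(g));
% Theory: H(f) = 1 + exp(-1i*2*pi*f), so abs(g) matches abs(1 + exp(-2i*pi*f)).
```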
MATLAB knows when you are dealing with matrices and adjusts your calculations accordingly; most of the material also applies to Octave, and many of the examples are taken from Prof. J. Maciejowski's Getting Started with Matlab and Dr. Paul Smith's Tutorial Guide to Matlab. An imaginary number is defined through i, the solution of the equation i² = −1. Using either notation, a single complex number contains two separate pieces of information: a and b, or M and θ. If you pass multiple complex arguments to plot, such as plot(z1, z2), then MATLAB ignores the imaginary parts of the inputs and plots the real parts; for more information type help plot. When a polar-to-Cartesian helper takes its magnitude and angle in degrees, remember that MATLAB's own pol2cart expects radians. Conventional random numbers are all real-valued, not complex, so a random complex number is built from two real draws, as in a = rand + 1i*rand; sampling from a genuinely complex PDF takes more work. Here we can also graphically study how a curve C in the complex plane transforms under a function f(C).

Two FFT practicalities: by dividing by the number of samples, we get an amplitude that is not dependent on the acquisition length; and since the FFT of real data is conjugate-symmetric, you can create the input for your reconstruction IFFT from only the first half of the data for the real plot and for the imaginary plot (by conjugate-mirroring it to the other half). For identity matrices, eye with no argument returns the scalar 1, while eye(n) returns the n-by-n identity. Alternatively, some routines can be called as MATLAB functions. To write results to disk, go through the following 3 steps: open a file using fopen, write content using fprintf, and close it with fclose.
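A sketch of those three steps (the file name roots.txt is illustrative):

```matlab
% Sketch: write complex roots to a text file.
z = roots([1 6 25]);
fid = fopen('roots.txt', 'w');                        % 1) open
fprintf(fid, '%.4f %+.4fi\n', [real(z) imag(z)].');   % 2) write
fclose(fid);                                          % 3) close
```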
Numeric conversions print only the real component of complex numbers; for example, the '%s' conversion turns pi into 3.1416, and applied to a complex value it prints just the real part. Complex numbers are expressed in one of two forms: a + bj (rectangular) or Me^{jθ} (polar), where j is a symbol representing sqrt(−1), and MATLAB handles complex numbers and complex exponentials in either role. MATLAB computes the sine of π/3 to be (approximately) 0.8660. Note again that MATLAB doesn't require you to deal with matrices as a collection of numbers: abs applied to a complex matrix calculates the magnitude of every entry at once. The plot data isn't really doubled when you plot both halves of a spectrum, because the result of an FFT of strictly real inputs is conjugate symmetric. Thus sinusoidal motion is the projection of circular motion onto the real-part axis — indeed onto any straight line — while the imaginary part is the projection onto the imaginary-part axis.

The aim of this module is to further explore the two-dimensional and three-dimensional plotting tools that are available in MATLAB. What you should know by the end of this module: how to create plots with log scales on the x-axis, y-axis, or both; how to plot complex numbers; and how to create a contour plot of a function of two variables — the contour plot shows curves in the (x, y) plane where the function is constant. (The same tools let you plot experiments as ordered lines and a z coordinate as a surface over x and experiment number.) For frequency response, this is achieved with nyquist(sys, w). In this activity you will also learn about vector and matrix data types in MATLAB, how to enter them into the workspace, how to edit and index them, and various vector, matrix, and matrix-vector operations.

Example: find the 5th roots of 32 + 0i = 32.
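A sketch of the worked example — the five 5th roots of 32 are 2·e^{2πik/5}, k = 0…4:

```matlab
% Sketch: the five 5th roots of 32, plotted on the complex plane.
k = 0:4;
w = 2 * exp(2i*pi*k/5);
disp(w.^5)                          % each entry is 32 (up to rounding)
plot(real(w), imag(w), 'o'); axis equal; grid on;
```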
The angle must be converted to radians when entering numbers in complex exponential form:

>> x = 2*exp(j*45*pi/180)

MATLAB has all the standard scalar arithmetic operators for real and complex numbers; unless redefined by the user, i and j are special constants referring to the principal square root of −1, so to create the complex variables z1 = 7 + j and z2 = 2e^{jπ} simply enter z1 = 7 + j and z2 = 2*exp(j*pi). We usually use a single letter such as z to denote the complex number a + bi. The angle function returns the phase angle in the interval [−π, π] for each element of a complex array z. (Among others, see the Complex Numbers documentation for core MATLAB and for the Symbolic Math Toolbox.) MATLAB stores numbers as doubles by default, which fixes how many significant decimal digits each variable carries and thus how accurate the values are. Symbolically, to derive an expression for the indefinite integral of a function we write, for example, syms x followed by int(2*x), and MATLAB returns x^2.

Whenever a plot is drawn, a title and labels for the x-axis and y-axis are required; clicking on the figure opens a menu that allows you to add x and y axis labels, change the range of the x-y axes, add a title to the plot, and so on.

Plot the spectrum: suppose I want to plot the spectrum of the first 0.002-second "piece" of a full 10-second sample. Doing length(y) is the same as fs*T (where T is the length of the acquisition in time), so dividing by length(y) normalizes the amplitude. The amplitude spectrum is obtained from abs(fft(y)); for obtaining a double-sided plot, the ordered frequency axis (the result of fftshift) is computed based on the sampling frequency, and the amplitude spectrum is plotted against it.
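A sketch of the double-sided spectrum recipe (signal and sampling rate are invented):

```matlab
% Sketch: double-sided amplitude spectrum with an ordered frequency axis.
fs = 1000;                                    % illustrative sampling rate, Hz
t  = 0:1/fs:1-1/fs;
y  = cos(2*pi*50*t) + 0.5*sin(2*pi*120*t);
Y  = fftshift(fft(y)) / length(y);            % normalize by sample count
f  = (-length(y)/2 : length(y)/2 - 1) * fs / length(y);
plot(f, abs(Y));
xlabel('Frequency (Hz)'); ylabel('|Y(f)|'); grid on;
```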
Basic graphics commands include annotating plots with titles, labels, and legends. MATLAB will accept 2+3*i to mean the same thing as 2+3i, and it is necessary to use the multiplication symbol when the imaginary part is a variable, as in x + y*i. (Warning: do not use i as an ordinary variable in code that also builds complex numbers.) A p-by-1 matrix is also called a column vector, and a 1-by-q matrix is called a row vector. Working with phasors and complex polar notation in MATLAB (after Tony Richardson, University of Evansville) starts from the fact that, by default, MATLAB accepts complex numbers only in rectangular form. (The steps to plot complex numbers in Python 3 are analogous: import matplotlib and NumPy and plot real against imaginary parts.)

Euler's formula states that for any real number x:

$$e^{ix} = \cos x + i \sin x$$

The generalization to complex exponentials is important for later work in Fourier analysis. The same arithmetic produces fractals: the Julia set is the set of complex numbers z that do not diverge under the iteration z ← z² + c, where c is a constant complex number.
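A crude sketch of a Julia set plot; the constant c and the iteration/escape limits are arbitrary choices:

```matlab
% Sketch: points that do not diverge under z <- z^2 + c.
c = -0.8 + 0.156i;
[x, y] = meshgrid(linspace(-1.6, 1.6, 400));
z = x + 1i*y;
alive = true(size(z));
for k = 1:60
    z(alive) = z(alive).^2 + c;      % iterate only surviving points
    alive = alive & (abs(z) < 2);    % mark points that have not escaped
end
imagesc([-1.6 1.6], [-1.6 1.6], alive); axis equal tight;
colormap(gray); title('Points that did not diverge');
```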
Complex Numbers in Matlab and Octave. As I hope you understand, a complex number is really a two-dimensional animal, and MATLAB works with the rectangular representation natively: complex numbers x + iy can be dealt with anywhere real values can, and we can use i or j to denote the imaginary unit. Becoming familiar with this format is useful because it is a standard format, so using it facilitates communication between engineers. In the MIMO case, nyquist produces an array of Nyquist plots, each plot showing the response of one particular I/O channel; note that the complex numbers surrounding the Nyquist index are complex conjugates, and they represent positive and negative frequencies respectively. Likewise, the fft function puts the negative part of the spectrum on the right; when we want a plot in radians from −π to π, fftshift reorders it. (For one lab exercise: for the phases, use the last two digits of your telephone number for φ₁, in degrees, and take φ₂ = 30°.)

On data import: save the Excel file into your MATLAB folder — the pathway for the folder typically is C:\Users\[your account name]\Documents\MATLAB — and once this step is complete you should see your Excel file in the Current Folder section. Switching back to MATLAB, we can see the data that we imported; cells imported as text can be changed to import as numbers if you like. That leads in well to how we plot things in MATLAB.

One style note: a MATLAB user will often string together commands to get sequential variable names s1, s2, s3 …, only to then have to use another EVAL statement to work with those names. Very often, a cell array indexed with s{1}, s{2}, s{3} … works much better.
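A sketch of the cell-array alternative (the data generated is illustrative):

```matlab
% Sketch: prefer cell arrays over eval-built sequential names.
% Instead of eval(sprintf('s%d = ...', k)) in a loop:
s = cell(1, 3);
for k = 1:3
    s{k} = exp(1i*2*pi*rand(1, 4));   % illustrative complex data
end
% Later, every "s k" is reachable by index:
m = cellfun(@(v) mean(abs(v)), s);
```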
NaNs are MATLAB's way of representing values that are not real or complex numbers, which makes them useful for masking. First, use MATLAB's find command (or a logical test such as imag(y) ~= 0) to find all entries of a vector y that are not real; then replace each complex entry with a NaN, and plot will simply skip it. (I assume here that the function G returns a complex result for complex input.)

Try the following at the prompt:

zz = 3+4i
conj(zz)
abs(zz)
angle(zz)
real(zz)
imag(zz)
help zprint     % requires the DSP First Toolbox
exp( sqrt(-1)*pi )
exp( j*[ pi/4 -pi/4 ] )

Plotting is easy in MATLAB, for both real and complex data. For practice, you will be asked to calculate and plot the basins of attraction for a polynomial in the complex plane, using a Newton-iteration program newton(f, df, niter) that takes a function f and its derivative df. For reference, Scilab's help index lists corresponding functions: complex (returns the complex form corresponding to the given real part and imaginary part), conj (complex conjugate), and continue (keyword to pass control to the next iteration of a loop).

MATLAB (MATrix LABoratory) was designed for numerical linear algebra: if X is a matrix, then fft(X) treats the columns of X as vectors and returns the Fourier transform of each column. MATLAB makes displaying a square root symbol easy, but getting the symbol the right size, with the bar extended over the expression whose root is being taken, requires LaTeX. After you have run the program, click on 'File', go to 'Open', and select the file 'complex_numbers_demo…'; then publish the script to HTML with publish('….m', 'html') and create a new zip file labeled with your UNI and homework number (e.g. …). A typical course outline built on this material:

• Introduction to Complex Numbers (which frequency response theory is based on)
• Frequency response from transfer functions
• Frequency response from input/output signals
• PID controller design and tuning (theory)
• PID controller design and tuning using MATLAB
• Stability analysis of feedback systems using MATLAB

(Reference text: Stormy Attaway, Matlab: A Practical Introduction to Programming and Problem Solving, Butterworth-Heinemann, an imprint of Elsevier.)
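A sketch of the NaN-masking idea (sqrt is just a convenient function that goes complex):

```matlab
% Sketch: blank out complex entries of y before plotting.
x = linspace(-2, 2, 100);
y = sqrt(x);                 % complex for x < 0
y(imag(y) ~= 0) = NaN;       % plot() leaves gaps at NaN
plot(x, real(y));
```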
How to Model a Simple Spring-Mass-Damper Dynamic System in Matlab: In the field of Mechanical Engineering, it is routine to model a physical dynamic system as a set of differential equations that will later be simulated using a computer.
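A minimal sketch of that workflow, with invented parameter values; note the complex eigenvalues of the state matrix when the system is underdamped, which ties back to the complex-plane material above:

```matlab
% Sketch: spring-mass-damper m*x'' + c*x' + k*x = 0 via ode45.
m = 1; c = 0.5; k = 4;                  % illustrative parameters
A = [0 1; -k/m -c/m];                   % state matrix, states [x; x']
f = @(t, s) A*s;
[t, s] = ode45(f, [0 20], [1; 0]);      % release from x = 1 at rest
plot(t, s(:, 1)); xlabel('t'); ylabel('x(t)');
eig(A)                                  % complex pair when underdamped
```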
2020-07-14 03:42:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5779609084129333, "perplexity": 660.3944330768784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147917.99/warc/CC-MAIN-20200714020904-20200714050904-00377.warc.gz"}
https://www.physicsforums.com/threads/i-squared-act-bill.830579/
# News I-Squared Act Bill

1. Sep 2, 2015

2. Sep 2, 2015

### mheslep

The term "tax" does not appear in the bill. How does hiring an H1B avoid taxes?

3. Sep 3, 2015

### Student100

Lower wages = lower taxes paid by the employer. Employees with H1B visas tend to earn less than equivalent US workers. Also, depending on totalization agreements/level of schooling, the employee/employer may not pay Social Security or Medicare taxes at all, which obviously saves the employer money.

Then you have the problem of what the supply of cheap skilled labor does to the salaries of everyone in those fields. They tend to be pressured downward.

4. Sep 3, 2015

### Staff: Mentor

The only wage-connected tax that I know of paid by the employer is their half of the employee's payroll tax. It's difficult for me to see that as a tax avoidance scheme because it is only 6% of the wage (the other 94% savings is the lower wage itself!). What it could result in, however, is higher taxes associated with higher profits (33-39%). Or is there another, specific tax I'm missing?

5. Sep 3, 2015

### Student100

I had thought it was closer to 8 percent for SS and Medicare together. Then you have workers' comp payments, which vary depending on the job. Assuming most H1B employees are going to work in office environments, that would probably be close to ~1 percent. State and federal unemployment taxes probably won't come into play, since I'm assuming most of these employees would have salaries above the income cap. There are also benefit packages and yearly raises, but these are harder to look at and I'm just ignoring them.

Hadn't thought of the profit being taxed, since the employees earning the higher wages would have a lower effective tax rate than the corporate rate. Assuming that most of the H1B employees aren't from treaty countries, the US government may actually see more taxes from lower-waged employees, at least in the general fund.

I'm not against H1B visas, I just think there's a lot of abuse going on in the current framework. If this bill were to make it into law, it would only allow for more abuse. http://www.latimes.com/opinion/editorials/la-ed-visas-tech-workers-h1b-20150217-story.html

America already produces more STEM graduates than industry needs, so the idea that H1B needs to be expanded on the grounds that companies can't find enough skilled workers is kind of silly.

6. Sep 3, 2015

### mheslep

I think that's at the very least debatable, even assuming it's a given that employers are apt to exaggerate shortages. Industry does not per se require STEM graduates; industry requires specific STEM competencies. For example, one current, commonly heard complaint is with CS majors who obtain degrees knowing little about robust software development in teams and have zero experience with recent platforms (e.g. mobile Android, iPhone).

7. Sep 3, 2015

### SteamKing

Staff Emeritus

For 2014, each employee has deducted from his pay 6.2% of gross compensation under $117,000 for SS and 1.45% for Medicare (no income limits). The employer must match these deductions, so the total FICA comes to 12.4% for SS and 2.9% for Medicare, or 15.3% total. https://en.wikipedia.org/wiki/Federal_Insurance_Contributions_Act_tax

If instead of paying a US worker $100,000 a year in salary you can find an H1B visa holder for $50,000 a year, the employer not only saves $50,000 due to the gross salary differential, but he must pay only $3,825.00 for his portion of the FICA, instead of $7,650.00.
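(A worked check of those figures, added for clarity and not part of the original post — the employer's share of FICA is half of 15.3%, i.e. 7.65%:)

$$0.062 + 0.0145 = 0.0765, \qquad 0.0765 \times \$50{,}000 = \$3{,}825, \qquad 0.0765 \times \$100{,}000 = \$7{,}650.$$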
Unemployment insurance rates depend on the state in which the employer operates and the industry in which he is engaged. Also, if an H1B visa holder loses his job, his visa is subject to being revoked.

8. Sep 3, 2015

### Student100

I think if those competencies were in such large demand, wages would reflect it. Software development wages have remained nearly stagnant for the last decade. The linked page has a neat graph up to 2012.

Normally when demand rises, unemployment decreases and wages increase, to attract the best talent and retain it. Those with experience should be paid a premium for the needed skills, but that doesn't look like the case from PayScale: http://www.payscale.com/research/US/Job=Computer_Programmer/Salary

Late-career programmers aren't making nearly the premium you'd expect compared to new hires. (Assuming PayScale is accurate — it looks reasonably OK, but if there are problems with the source let me know.)

9. Sep 3, 2015

### mheslep

I doubt the H1Bs are coming in to the US to compete with the mean "computer programmer" category quoted in those pay scale stats. Rather, I expect they're coming in to compete at this level, where the prevailing wage has indeed gone astronomical:

10. Sep 3, 2015

### Student100

I find this hard to believe, and the statistics don't really support this as the norm in the industry. I doubt any H1Bs are commanding such a salary, or even commanding a salary close to average American workers'. The company basically owns them; without the job they lose the visa. It doesn't really place them in a great bargaining position.

11. Sep 3, 2015

### SteamKing

Staff Emeritus

Hmm... let's see, $500K salary from a startup or $3 million in cash & Google stock. It doesn't take an MBA to figure this one out.

But H1Bs are not merely being used as stalking horses to slay high-salaried programmers at startups. The famous layoffs at Disney were reportedly reversed after the Mouse garnered a lot of attention and no small amount of negative press over their plans to replace some IT workers with H1Bs, with the added insult that those being laid off were supposed to train their replacements before heading out the door. http://dailycaller.com/2015/06/12/disney-abc-cancels-plans-to-layoff-dozens-of-tech-workers/

Some tech workers were brought in from India to work on installing systems at an American business and paid as little as $1.21 an hour. http://www.mercurynews.com/business/ci_26778017/tech-company-paid-employees-from-india-little-1

Once these workers were finished, they were flown right back to India. No $15/hour for them.

But it's not just IT and electronics employers who are abusing the visa system. Large numbers of foreign workers are imported by shipyards, for instance, housed in company quarters on site, and work nearby on the vessels they are building or repairing. If the workers cause any trouble, they get fired summarily and put on the next plane out of town.

With record numbers of Americans out of the labor force, these abuses should not be tolerated.

12. Sep 3, 2015

### mheslep

As the linked story makes clear, and I wrote, those S. Valley jobs are not the norm. If for some reason you find that CEO unbelievable (??), confirmation is easily found elsewhere. And H1Bs don't lose their visa if they can quickly get another job, which somebody at this level likely can.

13. Sep 3, 2015

### SteamKing

Staff Emeritus

My point is, if you are going to recruit someone to jump ship from a good job, you don't start out by offering less money ...
Now, this startup may be the next Google, or it could be the next Napster, you just don't know ...

14. Sep 3, 2015

### mheslep

The latter example has to do with outrageous employer behavior and nothing to do with H1B; the Indian immigrants were not H1Bs but entered the country under false pretenses for a few months. If they had been H1Bs, the employer could not have gotten away with the illegal wages.

That said: i) I'm aware that there are abuses of H1B, and I'm skeptical of raising the cap to over 100K per year because Google's Schmidt thinks it's the thing to do. ii) Abolishing all H1Bs does not necessarily mean US jobs will be retained over a foreign national, as owners/founders can and do pack up and move overseas, or simply start the next Google 2.0 in some other country in the first place.

15. Sep 3, 2015

### SteamKing

Staff Emeritus

There's nothing to stop them from doing this now. Everybody knocks Bill Gates and Microsoft for whatever reason, but it still amazes me that he didn't abandon Redmond, WA years ago. He could have moved across the border to BC, or bought his own island (or small country) somewhere ... Heck, he could have built a big boat or two, put MS on the High Seas, and declared himself stateless. Why didn't he?

Why is Silicon Valley still operating in California, with the earthquakes and the droughts and the fires and the high cost of living and the high taxes and crumbling, almost Third-World infrastructure and feudal politics? Is the view of sunsets on the Pacific really that great?

Everybody wants US kids to grow up, study hard, go to college and get a STEM degree, and when all this is done, we say, "Sorry, but I'm gonna hire this H1B visa holder to fill that job you wanted. But, hey, now they're paying $15 an hour at Starbucks! Get your application in before they convert to robot baristas!"

16. Sep 4, 2015

### Student100

Have you been to that CEO's website? I don't think his credibility as someone knowledgeable about the industry is well established. Unlike his father, the Yale computer science professor, I don't think he even studied computer science, from the information I could find online.

The $3 million developers will always have good-paying jobs, whether H1B visas were increased to a million a year or abolished completely. Any great STEM worker who's proven himself as an innovator, or worth large sums of money, will never have to fret over employment; this is true of any field. It's the good-to-decent green graduates that H1Bs are replacing, at lower wages.

What I'm looking at here is average pay, pay which has remained stagnant for a decade. If there were a real shortage of supply, I don't think this would be the case. The average pay would be increasing as employers paid for the value of having experienced and loyal STEM workers (and as they bought out talent from other companies).

The current H1B visa program seems like a way to offshore onshore work. Tech companies still respect American innovation and universities, which is why they haven't all packed up and moved completely to India (and why foreign students still come here to study, paying large sums of money to get degrees from our universities). However, the allure of replacing decent-to-good entry STEM workers with decent-to-good foreign STEM workers at lower wages seems like a powerful motivation to me.

Upon termination, the H1B visa expires. If they find a new sponsor after termination, they still must leave the country and reenter with a valid visa.
This is from the US government website dealing with H1B.

19. Sep 4, 2015

### Student100

Employers have all the bargaining power when it comes to H1B visa holders.
https://codereview.stackexchange.com/questions/147251/calculating-the-difference-between-2-arrays/147313
Calculating the difference between 2 arrays

I made this function to calculate the difference between 2 arrays and return it in a third array. I'm using this function as part of stocktaking functionality I apply in my accounting system.

```php
function stockdiff($array2, $array1) {
    // Inputs:
    // $array2 is the stocktaking result, an array in the format $product_id => $quantity
    // $array1 is the stock according to the data in the system, in the same format
    foreach ($array2 as $key => $value) {
        if (isset($array1[$key])) {
            $result[$key] = $array2[$key] - $array1[$key];
        } else {
            $result[$key] = $array2[$key];
        }
    }
    $array1diff = array_diff_key($array1, $array2);
    foreach ($array1diff as $key => $value) {
        $result[$key] = -1 * $array1[$key];
    }
    return $result;
}
```

Case example

When the user does the stocktaking, its results are saved in $array2 in the format $product_id => $quantity. This is a stocktaking result example:

```php
$array2[2] = 500;
$array2[3] = 7;
$array2[1] = 302;
$array2[105] = 7000;
$array2[7] = 304;
$array2[8] = 20;
$array2[9] = 20;
$array2[11] = 20;
$array2[73] = 32;
$array2[21] = 35;
```

I then compare it against the stock quantities according to the data in the system, $array1, which might be something like this:

```php
$array1[1] = 30;
$array1[2] = 60;
$array1[3] = 202;
$array1[4] = 200;
$array1[7] = 0;
$array1[8] = 0;
$array1[9] = 0;
$array1[11] = 52;
$array1[21] = 70;
$array1[99] = 21;
```

I made this function to do that job. I need it to tell me the changes that happened to every product between the data in the system and the actual data coming from the stocktaking. For example, let's take product_id 2. In the system it was $array1[2] = 60; (60 units of product 2), and the stocktaking told us no, we found 500 units of product 2 ($array2[2] = 500;). So the function must return [2] => 440.

The function also needs to consider these points:

1. If there is a product in the stocktaking array ($array2) that has no matching product in the stock-in-system array ($array1), then the function will assume its whole quantity has been increased and return its quantity from $array2.
2. If there is a product in the stock-in-system array ($array1) that has no matching product in the stocktaking array ($array2), then the function will assume its whole quantity has been decreased and return -1 * quantity.

What do you think about the function?

1 Answer

Regarding the way to achieve this task I couldn't find any better strategy than yours. On the other hand, your code may be made cleaner and more readable by:

• merely following best practices first (don't put several statements on the same line)
• using more significant names (such as $initial and $final instead of $array1 and $array2, and $final_value and $initial_value rather than only $value)
• avoiding the declaration of useless variables (like $array1diff, used only once)
• using more direct expressions when possible (e.g. $array2[$key] is already available as $value, while -1 * $array1[$key] is merely -$array1[$key])
• using the ternary operator where it can simplify the code (see also the note below)

It results in reduced code:

```php
function stock_diff($initial, $final) {
    foreach ($final as $key => $final_value) {
        $result[$key] = $final_value - @$initial[$key] ?: 0;
    }
    foreach (array_diff_key($initial, $final) as $key => $initial_value) {
        $result[$key] = -$initial_value;
    }
    return $result;
}
```

NOTE: regarding the use of @ in the ternary operator, I know that many people automatically banish it as globally evil.
I don't agree with this too-general point of view: sure, it must be avoided in the many situations where it might lead to issues, but it's perfectly legal when used carefully and with discernment. In the case above, the only "error" it may hide is the one we know may happen (undefined index), so it stays safe. And it helps make the code more readable and compact: (isset($initial[$key]) ? $initial[$key] : 0) is replaced by @$initial[$key] ?: 0.

The only con is about performance: using @ works slightly slower than isset(). So avoid it when you know that a piece of code will be executed a significantly huge number of times.

• Thank you so much for your advice; I'm going to take every point into serious consideration. I also want to ask you about something regarding this line: foreach(array_diff_key($initial, $final) ... Isn't that going to call the array_diff_key() function for every iteration of the loop, or is the loop going to call it only at the first iteration? – Accountant م Nov 17 '16 at 19:04
• @Accountantم Glad to help. Regarding your question: no, the array_diff_key() function is called only once, at foreach() init time. On the other hand, your remark made me think to add a precision about performance with @: see my edited answer. – cFreed Nov 17 '16 at 20:37
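For anyone who wants to sanity-check the logic outside PHP, here is the same computation sketched in Python (my own illustration, not part of the original question or answer; the names are made up). The dict.get with a default of 0 plays the role that isset() and @ play in the PHP versions.

```python
def stock_diff(initial, final):
    """Difference between stocktaking counts and system stock, keyed by product id."""
    result = {pid: qty - initial.get(pid, 0) for pid, qty in final.items()}
    # Products in the system but absent from the stocktaking count as fully removed.
    result.update({pid: -qty for pid, qty in initial.items() if pid not in final})
    return result

initial = {1: 30, 2: 60, 3: 202, 4: 200, 7: 0, 8: 0, 9: 0, 11: 52, 21: 70, 99: 21}
final = {2: 500, 3: 7, 1: 302, 105: 7000, 7: 304, 8: 20, 9: 20, 11: 20, 73: 32, 21: 35}
diff = stock_diff(initial, final)
print(diff[2])    # 440, matching the product 2 example
print(diff[105])  # 7000: not in the system, so the whole quantity counts as new
print(diff[99])   # -21: not in the stocktaking, so the whole quantity counts as gone
```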
https://nebusresearch.wordpress.com/tag/rings/
## My All 2020 Mathematics A to Z: Zero Divisor

Jacob Siehler had several suggestions for this last of the A-to-Z essays for 2020. Zorn's Lemma was an obvious choice. It's got an important place in set theory, it's got some neat and weird implications. It's got a great name. The zero divisor is one of those technical things mathematics majors have to deal with. It never gets any pop-mathematics attention. I picked the less-travelled road and found a delightful scenic spot.

# Zero Divisor.

3 times 4 is 12. That's a clear, unambiguous, and easily-agreed-upon arithmetic statement. The thing to wonder is what kind of mathematics it takes to mess that up. The answer is algebra. Not the high school kind, with x's and quadratic formulas and all. The college kind, with group theory and rings.

A ring is a mathematical construct that lets you do a bit of arithmetic. Something that looks like arithmetic, anyway. It has a set of elements. (An element is just a thing in a set. We say "element" because it feels weird to call it "thing" all the time.) The ring has an addition operation. The ring has a multiplication operation. Addition has an identity element, something you can add to any element without changing the original element. We can call that '0'. The integers, or to use the lingo $Z$, are a ring (among other things).

Among the rings you learn, after the integers, is the integers modulo … something. This can be modulo any counting number. The integers modulo 10, for example, we write as $Z_{10}$ for short. There are different ways to think of what this means. The one convenient for this essay is that it's the integers 0, 1, 2, up through 9. And that the result of any calculation is "how much more than a whole multiple of 10 this calculation would otherwise be". So then 3 times 4 is now 2. 3 times 5 is 5; 3 times 6 is 8. 3 times 7 is 1, and doesn't that seem peculiar? That's part of how modulo arithmetic warns us that groups and rings can be quite strange things.

We can do modulo arithmetic with any of the counting numbers. Look, for example, at $Z_{5}$ instead. In the integers modulo 5, 3 times 4 is … 2. This doesn't seem to get us anything new. How about $Z_{8}$? In this, 3 times 4 is 4. That's interesting. It doesn't make 3 the multiplicative identity for this ring. 3 times 3 is 1, for example. But you'd never see something like that for regular arithmetic.

How about $Z_{12}$? Now we have 3 times 4 equalling 0. And that's a dramatic break from how regular numbers work. One thing we know about regular numbers is that if a times b is 0, then either a is 0, or b is 0, or they're both 0. We rely on this so much in high school algebra. It's what lets us pick out roots of polynomials. Now? Now we can't count on that.

When this does happen, when one thing times another equals zero, we have "zero divisors". These are anything in your ring that can multiply by something else to give 0. Is zero, the additive identity, always a zero divisor? … That depends on what the textbook you first learned algebra from said. To avoid ambiguity, you can write a "nonzero zero divisor". This clarifies your intentions and slows down your copy editing every time you read "nonzero zero". Or call it a "nontrivial zero divisor" or "proper zero divisor" instead. My preference is to accept 0 as always being a zero divisor. We can disagree on this.

What of zero divisors other than zero? Your ring might or might not have them. It depends on the ring.
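Hunting for zero divisors in the integers modulo something is easy to automate, if you'd like to experiment. A small brute-force sketch (an illustration of mine, not from the original essay):

```python
def zero_divisors(n):
    """Nonzero zero divisors of Z_n: elements a with some nonzero b where a*b = 0 (mod n)."""
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

print(zero_divisors(10))  # [2, 4, 5, 6, 8]
print(zero_divisors(12))  # [2, 3, 4, 6, 8, 9, 10]
```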
The ring of integers $Z$, for example, doesn't have any zero divisors except for 0. The ring of integers modulo 12, $Z_{12}$, though? Anything other than zero that isn't relatively prime to 12 is a zero divisor. So 2, 3, 4, 6, 8, 9, and 10 are zero divisors here. The ring of integers modulo 13, $Z_{13}$? That doesn't have any zero divisors, other than zero itself. In fact any ring of integers modulo a prime number, $Z_{p}$, lacks zero divisors besides 0.

Focusing too much on integers modulo something makes zero divisors sound like some curious shadow of prime numbers. There are some similarities. Whether a number is prime depends on your multiplication rule and what set of things it's in. Being a zero divisor in one ring doesn't directly relate to whether something's a zero divisor in any other. Knowing what the zero divisors are tells you something about the structure of the ring.

It's hard to resist focusing on integers-modulo-something when learning rings. They work very much like regular arithmetic does. Even the strange thing about them, that every result is from a finite set of digits, isn't too alien. We do something quite like it when we observe that three hours after 10:00 is 1:00. But many sets of elements can create rings. Square matrixes are the obvious extension. Matrixes are grids of elements, each of which … well, they're most often going to be numbers. Maybe integers, or real numbers, or complex numbers. They can be more abstract things, like rotations or whatnot, but they're hard to typeset. It's easy to find zero divisors in matrixes of numbers. Imagine, like, a matrix that's all zeroes except for one element, somewhere. There are a lot of matrices which, multiplied by that, will be a zero matrix, one with nothing but zeroes in it. Another common kind of ring is the polynomials. For these you need some constraint like the polynomial coefficients being integers-modulo-something. You can make that work.

In 1988 Istvan Beck tried to establish a link between graph theory and ring theory. We now have a usable standard definition of the zero-divisor graph. If $R$ is any ring, then $\Gamma(R)$ is the zero-divisor graph of $R$. (I know some of you think $R$ is the real numbers. No; that's a bold-faced $\mathbb{R}$ instead. Unless that's too much bother to typeset.) You make the graph by putting in a vertex for the elements in $R$. You connect two vertices a and b if the product of the corresponding elements is zero. That is, if they're zero divisors for one another. (In Beck's original form, this included all the elements. In modern use, we don't bother including the elements that are not zero divisors.)

Drawing this graph $\Gamma(R)$ makes tools from graph theory available to study rings. We can measure things like the distance between elements, or what paths from one vertex to another exist. What cycles — paths that start and end at the same vertex — exist, and how large they are. Whether the graphs are bipartite. A bipartite graph is one where you can divide the vertices into two sets, and every edge connects one thing in the first set with one thing in the second. What the chromatic number — the minimum number of colors it takes to make sure no two adjacent vertices have the same color — is. What shape does the graph have?

It's easy to think that zero divisors are just a thing which emerges from a ring. The graph theory connection tells us otherwise. You can make a potential zero divisor graph and ask whether any ring could fit that. And, from that, what we can know about a ring from its zero divisors.
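The zero-divisor graph itself is small enough to enumerate for a ring like $Z_{12}$. A hedged sketch along the same lines as the one above, listing the vertices and edges of $\Gamma(Z_{12})$ under the modern convention of keeping only the nonzero zero divisors:

```python
from itertools import combinations

n = 12
# Vertices: the nonzero zero divisors of Z_12.
vertices = [a for a in range(1, n) if any((a * b) % n == 0 for b in range(1, n))]
# Edges: pairs whose product is zero in Z_12.
edges = [(a, b) for a, b in combinations(vertices, 2) if (a * b) % n == 0]
print(vertices)  # [2, 3, 4, 6, 8, 9, 10]
print(edges)     # [(2, 6), (3, 4), (3, 8), (4, 6), (4, 9), (6, 8), (6, 10), (8, 9)]
```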
Mathematicians are drawn as if by an occult hand to things that let you answer questions about a thing from its "shape". And this lets me complete a cycle in this year's A-to-Z, to my delight. There is an important question in topology which group theory could answer. It's a generalization of the zero-divisor conjecture, a hypothesis about what fits in a ring based on certain types of groups. This hypothesis — actually, these hypotheses. There are a bunch of similar questions about what invariants called the $L^2$-Betti numbers can be. These we call the Atiyah Conjecture. This is because of work Michael Atiyah did in the cohomology of manifolds starting in the 1970s. It's work, I admit, I don't understand well enough to summarize, and hope you'll forgive me for that. I'm still amazed that one can get to cutting-edge mathematics research from this. It seems, at its introduction, to be only a subversion of how we find x for which $(x - 2)(x + 1) = 0$.

And this, I am amazed to say, completes the All 2020 A-to-Z project. All of this year's essays should be gathered at this link. In the next couple days I plan to check that they actually are. All the essays from every A-to-Z series, going back to 2015, should be at this link. I plan to soon have an essay about what I learned in doing the A-to-Z this year. And then we can look to 2021 and hope that works out all right. Thank you for reading.

## The Summer 2017 Mathematics A To Z: Prime Number

Gaurish, host of For the love of Mathematics, gives me another topic for today's A To Z entry. I think the subject got away from me. But I also like where it got.

# Prime Number.

There's something about '5' that you only notice when you're a kid first learning about numbers. You know that it's a prime number because it's equal to 1 times 5 and nothing else. You also know that once you introduce fractions, it's equal to all kinds of things. It's 10 times one-half and it's 15 times one-third and it's 2.5 times 2 and many other things. Why, you might ask the teacher, is it a prime number if it's got a million billion trillion different factors? And when every other whole number has as many factors? If you get to the real numbers it's even worse yet, although when you're a kid you probably don't realize that. If you ask, the teacher probably answers that it's only the whole numbers that count for saying whether something is prime or not. And, like, 2.5 can't be considered anything, prime or composite. This satisfies the immediate question. It doesn't quite get at the underlying one, though. Why do integers have prime numbers while real numbers don't?

To maybe have a prime number we need a ring. This is a creature of group theory, or what we call "algebra" once we get to college. A ring consists of a set of elements, and a rule for adding them together, and a rule for multiplying them together. And I want this ring to have a multiplicative identity. That's some number which works like '1': take something, multiply it by that, and you get that something back again. Also, I want this multiplication rule to commute. That is, the order of multiplication doesn't affect what the result is. (If the order matters then everything gets too complicated to deal with.) Let me say the things in the set are numbers. It turns out (spoiler!) they don't have to be. But that's how we start out.

Whether the numbers in a ring are prime or not depends on the multiplication rule. Let's take a candidate number that I'll call 'a' to make my writing easier.
If the only numbers whose product is 'a' are the pair of 'a' and the multiplicative identity, then 'a' is prime. If there's some other pair of numbers that give you 'a', then 'a' is not prime.

The integers — the positive and negative whole numbers, including zero — are a ring. And they have prime numbers just like you'd expect, if we figure out some rule about how to deal with the number '-1'. There are many other rings. There's a whole family of rings, in fact, so commonly used that they have shorthand. Mathematicians write them as "$Z_n$", where 'n' is some whole number. They're the integers, modulo 'n'. That is, they're the whole numbers from '0' up to the number 'n-1', whatever that is. Addition and multiplication work as they do with normal arithmetic, except that if the result is less than '0' we add 'n' to it. If the result is more than 'n-1' we subtract 'n' from it. We repeat that until the result is something from '0' to 'n-1', inclusive.

(We use the letter 'Z' because it's from the German word for numbers, and a lot of foundational work was done by German-speaking mathematicians. Alternatively, we might write this set as "$I_n$", where "I" stands for integers. If that doesn't satisfy, we might write this set as "$J_n$", where "J" stands for integers. This is because it's only very recently that we've come to see "I" and "J" as different letters rather than different ways to write the same letter.)

These modulo arithmetics are legitimate ones, good reliable rings. They make us realize how strange prime numbers are, though. Consider the set $Z_4$, where the only numbers are 0, 1, 2, and 3. 0 times anything is 0. 1 times anything is whatever you started with. 2 times 1 is 2. Obvious. 2 times 2 is … 0. All right. 2 times 3 is 2 again. 3 times 1 is 3. 3 times 2 is 2. 3 times 3 is 1. … So that's a little weird. The only product that gives us 3 is 3 times 1. So 3's a prime number here. 2 isn't a prime number: 2 times 3 is 2. For that matter even 1 is a composite number, an unsettling consequence.

Or then $Z_5$, where the only numbers are 0, 1, 2, 3, and 4. Here, there are no prime numbers. Each number is the product of at least one pair of other numbers. In $Z_6$ we start to have prime numbers again. But $Z_7$? $Z_8$? I recommend these questions to a night when your mind is too busy to let you fall asleep.

Prime numbers depend on context. In the crowded universe of all the rational numbers, or all the real numbers, nothing is prime. In the more austere world of the Gaussian Integers, familiar friends like '3' are prime again, although '5' no longer is. We recognize that as the product of $2 + \imath$ and $2 - \imath$, themselves now prime numbers.

So given that these things do depend on context, should we care? Or let me put it another way. Suppose we contact a wholly separate culture, one that we can't have influenced and one not influenced by us. It's plausible that they should have a mathematics. Would they notice prime numbers as something worth study? Or would they notice them the way we notice, say, pentagonal numbers, a thing that allows for some pretty patterns and that's about it? Well, anything could happen, of course. I'm inclined to think that prime numbers would be noticed, though. They seem to follow naturally from pondering arithmetic. And if one has thought of rings, then prime numbers seem to stand out. The way that $Z_n$ behaves changes in important ways if 'n' is a prime number. Most notably, if 'n' is prime (among the whole numbers), then we can define something that works like division on $Z_n$.
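That division claim is pleasant to poke at numerically. A small illustrative sketch (not from the original essay): list which elements of $Z_n$ have multiplicative inverses.

```python
def invertible(n):
    """Map each element of Z_n that has a multiplicative inverse to that inverse."""
    return {a: b for a in range(1, n) for b in range(1, n) if (a * b) % n == 1}

print(invertible(7))  # {1: 1, 2: 4, 3: 5, 4: 2, 5: 3, 6: 6} -- all of them: division works
print(invertible(6))  # {1: 1, 5: 5} -- only the units: no general division
```

Only the elements sharing no factor with n get an inverse, which is why prime moduli work out so well.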
If 'n' isn't prime (again), we can't. This stands out. There are a host of other intriguing results that all seem to depend on whether 'n' is a prime number among the whole numbers. It seems hard to believe someone could think of the whole numbers and not notice the prime numbers among them.

And they do stand out, as these reliably peculiar things. Many things about them (in the whole numbers) are easy to prove. That there are infinitely many, for example, you can prove to a child. And there are many things we have no idea how to prove. That there are infinitely many primes which are exactly two more than another prime, for example. Any child can understand the question. The one who can prove it will win what fame mathematicians enjoy. If it can be proved.

They turn up in strange, surprising places. Just in the whole numbers we find some patches where there are many prime numbers in a row (Forty percent of the numbers 1 through 10!). We can find deserts; we know of a stretch of 1,113,106 numbers in a row without a single prime among them. We know it's possible to find prime deserts as vast as we want. Say you want a gap between primes of at least size N. Then look at the numbers (N+1)! + 2, (N+1)! + 3, (N+1)! + 4, and so on, up to (N+1)! + N+1. None of those can be prime numbers. You must have a gap at least the size N. It may be larger; how do we know that (N+1)! + 1 is a prime number? No telling. Well, we can check. See if any prime number divides into (N+1)! + 1. This takes a long time to do if N is all that big. There are no formulas we know of that will make this easy or quick.

We don't call it a "prime number" if it's in a ring that isn't enough like the numbers. Fair enough. We shift the name to "prime element". "Element" is a good generic name for a thing whose identity we don't mean to pin down too closely. I've talked about the Gaussian Primes already, in an earlier essay and earlier in this essay. We can make a ring out of the polynomials whose coefficients are all integers. In that, $x^2 + 1$ is a prime. So is $x^2 - 2$. If this hasn't given you some ideas what other polynomials might be primes, then you have something else to ponder while trying to sleep. Thinking of all the prime polynomials is likely harder than you can do, though.

Prime numbers seem to stand out, obvious and important. Humans have known about prime numbers for as long as we've known about multiplication. And yet there is something obscure about them. If there are cultures completely independent of our own, do they have insights which make prime numbers not such occult figures? How different would the world be if we knew all the things we now wonder about primes?

## The Summer 2017 Mathematics A To Z: Gaussian Primes

Once more do I have Gaurish to thank for the day's topic. (There'll be two more chances this week, providing I keep my writing just enough ahead of deadline.) This one doesn't touch category theory or topology.

# Gaussian Primes.

I keep touching on group theory here. It's a field that's about what kinds of things can work like arithmetic does. A group is a set of things that you can add together. At least, you can do something that works like adding regular numbers together does. A ring is a set of things that you can add and multiply together.

There are many interesting rings. Here's one. It's called the Gaussian Integers. They're made of numbers we can write as $a + b\imath$, where 'a' and 'b' are some integers. $\imath$ is what you figure, that number that multiplied by itself is -1.
These aren't the complex-valued numbers, you notice, because 'a' and 'b' are always integers. But you add them together the way you add complex-valued numbers together. That is, $a + b\imath$ plus $c + d\imath$ is the number $(a + c) + (b + d)\imath$. And you multiply them the way you multiply complex-valued numbers together. That is, $a + b\imath$ times $c + d\imath$ is the number $(a\cdot c - b\cdot d) + (a\cdot d + b\cdot c)\imath$.

We created something that has addition and multiplication. It picks up subtraction for free. It doesn't have division. We can create rings that do, but this one won't, any more than regular old integers have division. But we can ask what other normal-arithmetic-like stuff these Gaussian integers do have. For instance, can we factor numbers?

This isn't an obvious one. No, we can't expect to be able to divide one Gaussian integer by another. But we can't expect to divide a regular old integer by another, not and get an integer out of it. That doesn't mean we can't factor them. It means we divide the regular old integers into a couple classes. There's prime numbers. There's composites. There's the unit, the number 1. There's zero. We know prime numbers; they're 2, 3, 5, 7, and so on. Composite numbers are the ones you get by multiplying prime numbers together: 4, 6, 8, 9, 10, and so on. 1 and 0 are off on their own. Leave them there. We can't divide any old integer by any old integer. But we can say an integer is equal to this string of prime numbers multiplied together. This gives us a handle by which we can prove a lot of interesting results.

We can do the same with Gaussian integers. We can divide them up into Gaussian primes, Gaussian composites, units, and zero. The words mean what they mean for regular old integers. A Gaussian composite can be factored into a product of Gaussian primes. Gaussian primes can't be factored any further.

If we know what the prime numbers are for regular old integers we can tell whether something's a Gaussian prime. Admittedly, knowing all the prime numbers is a challenge. But a Gaussian integer $a + b\imath$ will be prime whenever a couple simple-to-test conditions are true. First is if 'a' and 'b' are both not zero, but $a^2 + b^2$ is a prime number. So, for example, $5 + 4\imath$ is a Gaussian prime. You might ask, hey, would $-5 - 4\imath$ also be a Gaussian prime? That's also got components that are integers, and the squares of them add up to a prime number (41). Well-spotted. Gaussian primes appear in quartets. If $a + b\imath$ is a Gaussian prime, so is $-a - b\imath$. And so are $-b + a\imath$ and $b - a\imath$.

There's another group of Gaussian primes. These are the numbers $a + b\imath$ where either 'a' or 'b' is zero. Then the other one has to be, in absolute value, a regular old prime number that's also three more than a whole multiple of four. So '3' is a Gaussian prime, as is -3, and as is $3\imath$ and so is $-3\imath$.

This has strange effects. Like, '3' is a prime number in the regular old scheme of things. It's also a Gaussian prime. But familiar other prime numbers like '2' and '5'? Not anymore. Two is equal to $(1 + \imath) \cdot (1 - \imath)$; both of those terms are Gaussian primes. Five is equal to $(2 + \imath) \cdot (2 - \imath)$. There are similar shocking results for 13. But, roughly, the world of composites and prime numbers translates into Gaussian composites and Gaussian primes. In this slightly exotic structure we have everything familiar about factoring numbers.
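Those two conditions turn into a short program. A sketch of mine (an illustration, using trial-division primality testing, which is fine at this scale) that checks the examples above:

```python
def is_prime(n):
    """Ordinary primality by trial division; adequate for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_gaussian_prime(a, b):
    """Test a + b*i against the two conditions from the essay."""
    if a != 0 and b != 0:
        return is_prime(a * a + b * b)
    c = abs(a) + abs(b)          # absolute value of whichever part is nonzero
    return c % 4 == 3 and is_prime(c)

print(is_gaussian_prime(5, 4))    # True: 25 + 16 = 41 is prime
print(is_gaussian_prime(-5, -4))  # True: the whole quartet behaves alike
print(is_gaussian_prime(3, 0))    # True: 3 is prime and three more than a multiple of 4
print(is_gaussian_prime(2, 0))    # False: 2 = (1 + i)(1 - i)
print(is_gaussian_prime(5, 0))    # False: 5 = (2 + i)(2 - i)
```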
You might have some nagging thoughts. Like, sure, two is equal to $(1 + \imath) \cdot (1 - \imath)$. But isn't it also equal to $(1 + \imath) \cdot (1 - \imath) \cdot \imath \cdot (-\imath)$? One of the important things about prime numbers is that every composite number is the product of a unique string of prime numbers. Do we have to give that up for Gaussian integers?

Good nag. But no; the doubt is coming about because you've forgotten the difference between "the positive integers" and "all the integers". If we stick to positive whole numbers then, yeah, (say) ten is equal to two times five and no other combination of prime numbers. But suppose we have all the integers, positive and negative. Then ten is equal to either two times five or it's equal to negative two times negative five. Or, better, it's equal to negative one times two times negative one times five. Or to two times five times any even number of negative ones.

Remember that bit about separating 'one' out from the world of primes and composites? That's because the number one screws up these unique factorizations. You can always toss in extra factors of one, to taste, without changing the product of something. If we have positive and negative integers to use, then negative one does almost the same trick. We can toss in any even number of extra negative ones without changing the product. This is why we separate "units" out of the numbers. They're not part of the prime factorization of any numbers. For the Gaussian integers there are four units. 1 and -1, $\imath$ and $-\imath$. They are neither primes nor composites, and we don't worry about how they would otherwise multiply the number of factorizations we get.

But let me close with a neat, easy-to-understand puzzle. It's called the moat-crossing problem. In the regular old integers it's this: imagine that the prime numbers are islands in a dangerous sea. You start on the number '2'. Imagine you have a board that can be set down and safely crossed, then picked up to be put down again. Could you get from the start and go off to safety, which is infinitely far away, if your board is some fixed, finite length? No, you can't. The problem amounts to how big the gap between one prime number and the next largest prime number can be. It turns out there's no limit to that. That is, you give me a number, as small or as large as you like. I can find some prime number that sits more than your number below its successor. There are infinitely large gaps between prime numbers.

Gaussian primes, though? Since a Gaussian prime might have nearest neighbors in any direction? Nobody knows. We know there are arbitrarily large gaps. Pick a moat size; we can (eventually) find a Gaussian prime that's at least that far away from its nearest neighbors. But this does not say whether it's impossible to get from the smallest Gaussian primes — $1 + \imath$ and its companions $-1 + \imath$ and on — infinitely far away. We know there's a moat of width 6 separating the origin of things from infinity. We don't know that there's bigger ones.

You're not going to solve this problem. Unless I have more brilliant readers than I know about; if I have ones who can solve this problem then I might be too intimidated to write anything more. But there is surely a pleasant pastime, maybe a charming game, to be made from this. Try finding the biggest possible moats around some set of Gaussian prime islands. Ellen Gethner, Stan Wagon, and Brian Wick's A Stroll Through the Gaussian Primes describes this moat problem.
It also sports some fine pictures of where the Gaussian primes are and what kinds of moats you can find. If you don't follow the reasoning, you can still enjoy the illustrations.

## The End 2016 Mathematics A To Z: Quotient Groups

I've got another request today, from the ever-interested and group-theory-minded gaurish. It's another inspirational one.

## Quotient Groups.

We all know about even and odd numbers. We don't have to think about them. That's why it's worth discussing them some. We do know what they are, though. The integers — whole numbers, positive and negative — we can split into two sets. One of them is the even numbers, two and four and eight and twelve. Zero, negative two, negative six, negative 2,038. The other is the odd numbers, one and three and nine. Negative five, negative nine, negative one.

What do we know about numbers, if all we look at is whether numbers are even or odd? Well, we know every integer is either an odd or an even number. It's not both; it's not neither. We know that if we start with an even number, its negative is also an even number. If we start with an odd number, its negative is also an odd number. We know that if we start with a number, even or odd, and add to it its negative then we get an even number. A specific number, too: zero. And that zero is interesting because any number plus zero is that same original number.

We know we can add odds or evens together. An even number plus an even number will be an even number. An odd number plus an odd number is an even number. An odd number plus an even number is an odd number. And subtraction is the same as addition, by these lights. One number minus an other number is just one number plus negative the other number. So even minus even is even. Odd minus odd is even. Odd minus even is odd.

We can pluck out some of the even and odd numbers as representative of these sets. We don't want to deal with big numbers, nor do we want to deal with negative numbers if we don't have to. So take '0' as representative of the even numbers. '1' as representative of the odd numbers. 0 + 0 is 0. 0 + 1 is 1. 1 + 0 is 1. The addition is the same thing we would do with the original set of integers. 1 + 1 would be 2, which is one of the even numbers, which we represent with 0. So 1 + 1 is 0. If we've picked out just these two numbers each is the minus of itself: 0 – 0 is 0 + 0. 1 – 1 is 1 + 1. All that gives us 0, like we should expect.

Two paragraphs back I said something that's obvious, but deserves attention anyway. An even plus an even is an even number. You can't get an odd number out of it. An odd plus an odd is an even number. You can't get an odd number out of it. There's something fundamentally different between the even and the odd numbers.

And now, kindly reader, you've learned quotient groups.

OK, I'll do some backfilling. It starts with groups. A group is the most skeletal cartoon of arithmetic. It's a set of things and some operation that works like addition. The thing-like-addition has to work on pairs of things in your set, and it has to give something else in the set. There has to be a zero, something you can add to anything without changing it. We call that the identity, or the additive identity, because it doesn't change something else's identity. It makes sense if you don't stare at it too hard. Everything has an additive inverse. That is everything has a "minus", that you can add to it to get zero.

With odd and even numbers the set of things is the integers. The thing-like-addition is, well, addition.
I said groups were based on how normal arithmetic works, right? And then you need a subgroup. A subgroup is … well, it's a subset of the original group that's itself a group. It has to use the same addition the original group does. The even numbers are such a subgroup of the integers. Formally they make something called a "normal subgroup", which is a little too much for me to explain right now. If your addition works like it does for normal numbers, that is, "a + b" is the same thing as "b + a", then all your subgroups are normal subgroups. Yes, it can happen that they're not. If the addition is something like rotations in three-dimensional space, or swapping the order of things, then the order you "add" things in matters.

We make a quotient group by … OK, this isn't going to sound like anything. It's a group, though, like the name says. It uses the same addition that the original group does. Its set, though, that's itself made up of sets. One of the sets is the normal subgroup. That's the easy part. Then there's something called cosets. You make a coset by picking something from the original group and adding it to everything in the subgroup. If the thing you pick was from the original subgroup that's just going to be the subgroup again. If you pick something outside the original subgroup then you'll get some other set.

Starting from the subgroup of even numbers there's not a lot to do. You can get the even numbers and you get the odd numbers. Doesn't seem like much. We can do otherwise, though. Suppose we start from the subgroup of numbers divisible by 4. That's 0, 4, 8, 12, -4, -8, -12, and so on. Now there are four cosets we can make from that. We can start with the original set of numbers itself. Or we have 1 plus that set: 1, 5, 9, 13, -3, -7, -11, and so on. Or we have 2 plus that set: 2, 6, 10, 14, -2, -6, -10, and so on. Or we have 3 plus that set: 3, 7, 11, 15, -1, -5, -9, and so on. None of these others are subgroups, which is why we don't call them subgroups. We call them cosets.

These collections of cosets, though, they're the pieces of a new group. The quotient group. One of them, the normal subgroup you started with, is the identity, the thing that's as good as zero. And you can "add" the cosets together, in just the same way you can add "odd plus odd" or "odd plus even" or "even plus even".

For example. Let me start with the numbers divisible by 4. I will have so much better a time if I give this a name. I'll pick 'Q'. This is because, you know, quarters, quartet, quadrilateral, this all sounds like four-y stuff. The integers — the integers have a couple of names. 'I', 'J', and 'Z' are the most common ones. We get 'Z' from German; a lot of important group theory was done by German-speaking mathematicians. I'm used to it so I'll stick with that.

The quotient group 'Z / Q', read "Z modulo Q", has (it happens) four cosets. One of them is Q. One of them is "1 + Q", that set 1, 5, 9, and so on. Another of them is "2 + Q", that set 2, 6, 10, and so on. And the last is "3 + Q", that set 3, 7, 11, and so on. And you can add them together. 1 + Q plus 1 + Q turns out to be 2 + Q. Try it out, you'll see. 1 + Q plus 2 + Q turns out to be 3 + Q. 2 + Q plus 2 + Q is Q again.
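"Try it out" is easy to take literally. A quick sketch (an illustration of mine, using a finite window into Q, the multiples of 4):

```python
Q = {4 * k for k in range(-25, 26)}        # a finite window into the multiples of 4
one_plus_Q = {1 + q for q in Q}
sums = {x + y for x in one_plus_Q for y in one_plus_Q}
print(all(s % 4 == 2 for s in sums))       # True: (1 + Q) + (1 + Q) lands in 2 + Q

two_plus_Q = {2 + q for q in Q}
sums = {x + y for x in two_plus_Q for y in two_plus_Q}
print(all(s % 4 == 0 for s in sums))       # True: (2 + Q) + (2 + Q) is Q again
```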
We’ll denote them using some form that looks like “a + N”, or maybe “a N”, if ‘N’ was the normal subgroup and ‘a’ something that wasn’t in it. (Sometimes it’s more convenient writing the group operation like it was multiplication, because we do that by not writing anything at all, which saves us from writing stuff.) If we’re comfortable with the idea that “odd plus odd is even” and “even plus odd is odd” then we should be comfortable with adding together quotient groups. We’re not, not without practice, but that’s all right. In the Introduction To Not That Kind Of Algebra course mathematics majors take they get a lot of practice, just in time to be thrown into rings. Quotient groups land on the mathematics major as a baffling thing. They don’t actually turn up things from the original group. And they lead into important theorems. But to an undergraduate they all look like text huddling up to ladders of quotient groups. We’re told these are important theorems and they are. They also go along with beautiful diagrams of how these quotient groups relate to each other. But they’re hard going. It’s tough finding good examples and almost impossible to explain what a question is. It comes as a relief to be thrown into rings. By the time we come back around to quotient groups we’ve usually had enough time to get used to the idea that they don’t seem so hard. Really, looking at odds and evens, they shouldn’t be so hard. ## The End 2016 Mathematics A To Z: Kernel I told you that Image thing would reappear. Meanwhile I learned something about myself in writing this. ## Kernel. I want to talk about functions again. I’ve been keeping like a proper mathematician to a nice general idea of what a function is. The sort where a function’s this rule matching stuff in a set called the domain with stuff in a set called the range. And I’ve tried not to commit myself to saying anything about what that domain and range are. They could be numbers. They could be other functions. They could be the set of DVDs you own but haven’t watched in more than two years. They could be collections socks. Haven’t said. But we know what functions anyone cares about. They’re stuff that have domains and ranges that are numbers. Preferably real numbers. Complex-valued numbers if we must. If we look at more exotic sets they’re ones that stick close to being numbers: vectors made up of an ordered set of numbers. Matrices of numbers. Functions that are themselves about numbers. Maybe we’ll get to something exotic like a rotation, but then what is a rotation but spinning something a certain number of degrees? There are a bunch of unavoidably common domains and ranges. Fine, then. I’ll stick to functions with ranges that look enough like regular old numbers. By “enough” I mean they have a zero. That is, something that works like zero does. You know, add it to something else and that something else isn’t changed. That’s all I need. A natural thing to wonder about a function — hold on. “Natural” is the wrong word. Something we learn to wonder about in functions, in pre-algebra class where they’re all polynomials, is where the zeroes are. They’re generally not at zero. Why would we say “zeroes” to mean “zero”? That could let non-mathematicians think they knew what we were on about. By the “zeroes” we mean the things in the domain that get matched to the zero in the range. It might be zero; no reason it couldn’t, until we know what the function’s rule is. Just we can’t count on that. A polynomial we know has … well, it might have zero zeroes. 
Might have no zeroes. It might have one, or two, or so on. If it's an n-th degree polynomial it can have up to n zeroes. And if it's not a polynomial? Well, then it could have any conceivable number of zeroes and nobody is going to give you a nice little formula to say where they all are. It's not that we're being mean. It's just that there isn't a nice little formula that works for all possibilities. There aren't even nice little formulas that work for all polynomials. You have to find zeroes by thinking about the problem. Sorry.

But! Suppose you have a collection of all the zeroes for your function. That's all the points in the domain that match with zero in the range. Then we have a new name for the thing you have. And that's the kernel of your function. It's the biggest subset in the domain with an image that's just the zero in the range.

So we have a name for the zeroes that isn't just "the zeroes". What does this get us? If we don't know anything about the kind of function we have, not much. If the function belongs to some common kinds of functions, though, it tells us stuff.

For example. Suppose the function has domain and range that are vectors. And that the function is linear, which is to say, easy to deal with. Let me call the function 'f'. And let me pick out two things in the domain. I'll call them 'x' and 'y' because I'm writing this after Thanksgiving dinner and can't work up a cleverer name for anything. If f is linear then f(x + y) is the same thing as f(x) + f(y). And now something magic happens. If x and y are both in the kernel, then x + y has to be in the kernel too. Think about it. Meanwhile, if x is in the kernel but y isn't, then f(x + y) is f(y). Again think about it.

What we can see is that the domain fractures into two directions. One of them, the direction of the kernel, is invisible to the function. You can move however much you like in that direction and f can't see it. The other direction, perpendicular ("orthogonal", we say in the trade) to the kernel, is visible. Everything that might change changes in that direction.

This idea threads through vector spaces, and we study a lot of things that turn out to look like vector spaces. It keeps surprising us by letting us solve problems, or find the best-possible approximate solutions. This kernel gives us room to match some fiddly conditions without breaking the real solution. The size of the null space alone can tell us whether some problems are solvable, or whether they'll have infinitely large sets of solutions. In this vector-space construct the kernel often takes on another name, the "null space". This means the same thing. But it reminds us that superhero comics writers miss out on many excellent pieces of terminology by not taking advanced courses in mathematics.

Kernels also appear in group theory, whenever we get into rings. We're always working with rings. They're nearly as unavoidable as vector spaces. You know how you can divide the whole numbers into odd and even? And you can do some neat tricks with that for some problems? You can do that with every ring, using the kernel as a dividing point. This gives us information about how the ring is shaped, and what other structures might look like the ring. This often lets us turn proofs that might be hard into a collection of proofs on individual cases that are, at least, doable. Tricks about odd and even numbers become, in trained hands, subtle proofs of surprising results.

We see vector spaces and rings all over the place in mathematics.
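That invisible-direction idea is concrete enough to check numerically. A tiny sketch (an illustration; the function and vectors here are made up), with f a linear map from the plane to the real line:

```python
f = lambda v: v[0] + v[1]          # a linear map from R^2 to R
# Its kernel is the line of points (t, -t): everything f sends to zero.
u, w = (2.0, -2.0), (5.0, -5.0)    # two vectors in the kernel
print(f((u[0] + w[0], u[1] + w[1])) == 0)           # True: the kernel is closed under addition
print(f((3.0, 4.0)) == f((3.0 + 2.0, 4.0 - 2.0)))   # True: sliding along the kernel is invisible to f
```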
Some of that's selection bias. Vector spaces capture a lot of what's important about geometry. Rings capture a lot of what's important about arithmetic. We have understandings of geometry and arithmetic that transcend even our species. Raccoons understand space. Crows understand number. When we look to do mathematics we look for patterns we understand, and these are major patterns we understand. And there are kernels that matter to each of them.

Some mathematical ideas inspire metaphors to me. Kernels are one. Kernels feel to me like the process of holding a polarized lens up to a crystal. This lets one see how the crystal is put together. I realize writing this down that my metaphor is unclear: is the kernel the lens or the structure seen in the crystal? I suppose the function has to be the lens, with the kernel the crystallization planes made clear under it. It's curious I had enjoyed this feeling about kernels and functions for so long without making it precise. Feelings about mathematical structures can be like that.

## A Leap Day 2016 Mathematics A To Z: Isomorphism

Gillian B made the request that's today's A To Z word. I'd said it would be challenging. Many have been, so far. But I set up some of the work with "homomorphism" last time. As with "homomorphism" it's a word that appears in several fields and about different kinds of mathematical structure. As with homomorphism, I'll try describing what it is for groups. They seem least challenging to the imagination.

## Isomorphism.

An isomorphism is a kind of homomorphism. And a homomorphism is a kind of thing we do with groups. A group is a mathematical construct made up of two things. One is a set of things. The other is an operation, like addition, where we take two of the things and get one of the things in the set. I think that's as far as we need to go in this chain of defining things.

A homomorphism is a mapping, or if you like the word better, a function. The homomorphism matches everything in a group to the things in a group. It might be the same group; it might be a different group. What makes it a homomorphism is that it preserves addition.

I gave an example last time, with groups I called G and H. G had as its set the whole numbers 0 through 3 and as operation addition modulo 4. H had as its set the whole numbers 0 through 7 and as operation addition modulo 8. And I defined a homomorphism φ which took a number in G and matched it to the number in H which was twice that. Then for any a and b which were in G's set, φ(a + b) was equal to φ(a) + φ(b).

We can have all kinds of homomorphisms. For example, imagine my new φ₁. It takes whatever you start with in G and maps it to the 0 inside H. φ₁(1) = 0, φ₁(2) = 0, φ₁(3) = 0, φ₁(0) = 0. It's a legitimate homomorphism. Seems like it's wasting a lot of what's in H, though.

An isomorphism doesn't waste anything that's in H. It's a homomorphism in which everything in G's set matches to exactly one thing in H's, and vice-versa. That is, it's both a homomorphism and a bijection, to use one of the terms from the Summer 2015 A To Z. The key to remembering this is the "iso" prefix. It comes from the Greek "isos", meaning "equal". You can often understand an isomorphism from group G to group H as showing how they're the same thing. They might be represented differently, but they're equivalent in the lights you use.

I can't make an isomorphism between the G and the H I started with. Their sets are different sizes.
There's no matching everything in H's set to everything in G's set without some duplication. But we can make other examples.

For instance, let me start with a new group G. It's got as its set the positive real numbers. And it has as its operation ordinary multiplication, the kind you always do. And I want a new group H. It's got as its set all the real numbers, positive and negative. It has as its operation ordinary addition, the kind you always do.

For an isomorphism φ, take the number x that's in G's set. Match it to the number that's the logarithm of x, found in H's set. This is a one-to-one pairing: if the logarithm of x equals the logarithm of y, then x has to equal y. And it covers everything: all the positive real numbers have a logarithm, somewhere in the positive or negative real numbers.

And this is a homomorphism. Take any x and y that are in G's set. Their "addition", the group operation, is to multiply them together. So "x + y", in G, gives us the number xy. (I know, I know. But trust me.) φ(x + y) is equal to log(xy), which equals log(x) + log(y), which is the same number as φ(x) + φ(y). There's a way to see the positive real numbers being multiplied together as equivalent to all the real numbers being added together.

You might figure that the positive real numbers and all the real numbers aren't very different-looking things. Perhaps so. Here's another example I like, drawn from Wikipedia's entry on Isomorphism. It has as sets things that don't seem to have anything to do with one another.

Let me have another brand-new group G. It has as its set the whole numbers 0, 1, 2, 3, 4, and 5. Its operation is addition modulo 6. So 2 + 2 is 4, while 2 + 3 is 5, and 2 + 4 is 0, and 2 + 5 is 1, and so on. You get the pattern, I hope.

The brand-new group H, now, that has a more complicated-looking set. Its set is ordered pairs of whole numbers, which I'll represent as (a, b). Here 'a' may be either 0 or 1. 'b' may be 0, 1, or 2. To describe its addition rule, let me say we have the elements (a, b) and (c, d). Find their sum first by adding together a and c, modulo 2. So 0 + 0 is 0, 1 + 0 is 1, 0 + 1 is 1, and 1 + 1 is 0. That result is the first number in the pair. The second number we find by adding together b and d, modulo 3. So 1 + 0 is 1, and 1 + 1 is 2, and 1 + 2 is 0, and so on. So, for example, (0, 1) plus (1, 1) will be (1, 2). But (0, 1) plus (1, 2) will be (1, 0). (1, 2) plus (1, 0) will be (0, 2). (1, 2) plus (1, 2) will be (0, 1). And so on.

The isomorphism matches up things in G to things in H this way:

In G    φ(G), in H
0       (0, 0)
1       (1, 1)
2       (0, 2)
3       (1, 0)
4       (0, 1)
5       (1, 2)

I recommend playing with this a while. Pick any pair of numbers x and y that you like from G. And check their matching ordered pairs φ(x) and φ(y) in H. φ(x + y) is the same thing as φ(x) + φ(y) even though the things in G's set don't look anything like the things in H's.

Isomorphisms exist for other structures. The idea extends the way homomorphisms do. A ring, for example, has two operations which we think of as addition and multiplication. An isomorphism matches two rings in ways that preserve the addition and multiplication, and which match everything in the first ring's set to everything in the second ring's set, one-to-one. The idea of the isomorphism is that two different things can be paired up so that they look, and work, remarkably like one another.

One of the common uses of isomorphisms is describing the evolution of systems.
We often like to look at how some physical system develops from different starting conditions. If you make a little variation in how things start, does this produce a small change in how it develops, or does it produce a big change? How big? And the description of how time changes the system is, often, an isomorphism.

Isomorphisms also appear when we study the structures of groups. They turn up naturally when we look at things called "normal subgroups". The name alone gives you a good idea what a "subgroup" is. "Normal", well, that'll be another essay.

## A Leap Day 2016 Mathematics A To Z: Dedekind Domain

When I tossed this season's A To Z open to requests I figured I'd get some surprising ones. So I did. This one's particularly challenging. It comes from Gaurish Korpal, author of the Gaurish4Math blog.

## Dedekind Domain

A major field of mathematics is Algebra. By this mathematicians don't mean algebra. They mean studying collections of things on which you can do stuff that looks like arithmetic. There's good reasons why this field has that confusing name. Nobody knows what they are.

We've seen before the creation of things that look a bit like arithmetic. Rings are a collection of things for which we can do something that works like addition and something that works like multiplication. There are a lot of different kinds of rings. When a mathematics popularizer tries to talk about rings, she'll talk a lot about the whole numbers. We can usually count on the audience to know what they are. If that won't do for the particular topic, she'll try the whole numbers modulo something. If she needs another example then she talks about the ways you can rotate or reflect a triangle, or square, or hexagon and get the original shape back. Maybe she calls on the sets of polynomials you can describe. Then she has to give up on words and make do with pictures of beautifully complicated things. And after that she has to give up because the structures get too abstract to describe without losing the audience.

Dedekind Domains are a kind of ring that meets a bunch of extra criteria. There's no point my listing them all. It would take several hundred words and you would lose motivation to continue before I was done. If you need them anyway Eric W Weisstein's MathWorld dictionary gives the exact criteria. It also has explanations for all the words in those criteria.

Dedekind Domains, also called Dedekind Rings, are aptly named for Richard Dedekind. He was a 19th century mathematician, the last doctoral student of Gauss, and one of the people who defined what we think of as algebra. He also gave us a rigorous foundation for what irrational numbers are.

Among the problems that fascinated Dedekind was Fermat's Last Theorem. This can't surprise you. Every person who would be a mathematician is fascinated by it. We take our innings fiddling with cases and ways to show $a^n + b^n$ can't equal $c^n$ for interesting whole numbers a, b, c, and n. We usually go about this by saying, "Suppose we have the smallest a, b, and c for which this is true and for which n is bigger than 2". Then we do a lot of scribbling that shows this implies something contradictory, like an even number equals an odd, or that there's some set of smaller numbers making this true. This proves the original supposition was false. Mathematicians first learn that trick as a way to show the square root of two can't be a rational number. We stick with it because it's nice and familiar and looks relevant.
Most of us get maybe as far as proving there aren't any solutions for n = 3 or maybe n = 4 and go on to other work. Dedekind didn't prove the theorem. But he did find new ways to look at numbers.

One problem with proving Fermat's Last Theorem is that it's all about integers. Integers are hard to prove things about. Real numbers are easier. Complex-valued numbers are easier still. This is weird but it's so. So we have this promising approach: if we could prove something like Fermat's Last Theorem for complex-valued numbers, we'd get it up for integers. Or at least we'd be a lot of the way there. The one flaw is that Fermat's Last Theorem isn't true for complex-valued numbers. It would be ridiculous if it were true. But we can patch things up. We can construct something called Gaussian Integers. These are complex-valued numbers which we can match up to integers in a compelling way. We could use the tools that work on complex-valued numbers to squeeze out a result about integers.

You know that this didn't work. If it had, we wouldn't have had to wait for the 1990s for the proof of Fermat's Last Theorem. And the proof we did get hasn't anything to do with this stuff.

One of the problems keeping this kind of proof from working is factoring. Whole numbers are either prime numbers or the product of prime numbers. Or they're 1, ruled out of the universe of prime numbers for reasons I get to after the next paragraph. Prime numbers are those like 2, 5, 13, 37 and many others. They haven't got any factors besides themselves and 1. The other whole numbers are the products of prime numbers. 12 is equal to 2 times 2 times 3. 35 is equal to 5 times 7. 165 is equal to 3 times 5 times 11.

If we stick to whole numbers, then, these all have unique prime factorizations. 24 is equal to 2 times 2 times 2 times 3. And there are no other combinations of prime numbers that multiply together to give us 24. We could rearrange the numbers — 2 times 3 times 2 times 2 works. But it will always be a combination of three 2's and a single 3 that we multiply together to get 24.

(This is a reason we don't consider 1 a prime number. If we did consider 1 a prime number, then "three 2's and a single 3" would be a prime factorization of 24, but so would "three 2's, a single 3, and two 1's". Also "three 2's, a single 3, and fifteen 1's". Also "three 2's, a single 3, and one 1". We have a lot of theorems that depend on whole numbers having a unique prime factorization. We could add the phrase "except for the count of 1's in the factorization" to every occurrence of the phrase "prime factorization". Or we could say that 1 isn't a prime number. It's a lot less work to say 1 isn't a prime number.)

The trouble is that rings much like the Gaussian integers can lose that unique prime factorization. (The Gaussian integers themselves happen to keep it; close cousins, such as the numbers $a + b\sqrt{-5}$ with integer a and b, do not.) There are still prime numbers. But it's possible to get some numbers as a product of different sets of prime numbers. Among the numbers $a + b\sqrt{-5}$, for example, 6 is $2 \cdot 3$ and it's also $(1 + \sqrt{-5}) \cdot (1 - \sqrt{-5})$. And this point breaks a lot of otherwise promising attempts to prove Fermat's Last Theorem. And there's no getting around that, not for Fermat's Last Theorem.

Dedekind saw a good concept lurking under this, though. The concept is called an ideal. It's a subset of a ring that itself satisfies the rules for being a ring. And if you take something from the original ring and multiply it by something in the ideal, you get something that's still in the ideal. You might already have one in mind. Start with the ring of integers. The even numbers are an ideal of that.
Add any two even numbers together and you get an even number. Multiply any two even numbers together and you get an even number. Take any integer, even or not, and multiply it by an even number. You get an even number.

(If you were wondering: I mean the ideal would be a “ring without identity”. It’s not required to have something that acts like 1 for the purpose of multiplication. If we insisted on looking at the even numbers and the number 1, then we couldn’t be sure that adding two things from the ideal would stay in the ideal. After all, 2 is in the ideal, and if 1 also is, then 2 + 1 is a peculiar thing to consider an even number.)

It’s not just even numbers that do this. The multiples of 3 make an ideal in the integers too. Add two multiples of 3 together and you get a multiple of 3. Multiply two multiples of 3 together and you get another multiple of 3. Multiply any integer by a multiple of 3 and you get a multiple of 3. The multiples of 4 also make an ideal, as do the multiples of 5, or the multiples of 82, or of any whole number you like.

Odd numbers don’t make an ideal, though. Add two odd numbers together and you don’t get an odd number. Multiply an integer by an odd number and you might get an odd number, you might not.

And not every ring has an interesting ideal lurking within it. For example, take the integers modulo 3. In this case there are only three numbers: 0, 1, and 2. 1 + 1 is 2, uncontroversially. But 1 + 2 is 0. 2 + 2 is 1. 2 times 1 is 2, but 2 times 2 is 1 again. This is self-consistent. But it hasn’t got an ideal within it, apart from the trivial ones. There isn’t a smaller set, besides just {0}, for which the addition and the absorbing multiplication still work.

The multiples of 4 make an interesting ideal in the integers. They’re not just an ideal of the integers. They’re also an ideal of the even numbers. Well, the even numbers make a ring. They couldn’t be an ideal of the integers if they couldn’t be a ring in their own right. And the multiples of 4 — well, multiply any even number by a multiple of 4. You get a multiple of 4 again. This keeps on going. The multiples of 8 are an ideal for the multiples of 4, the multiples of 2, and the integers. Multiples of 16 and 32 make for even deeper nestings of ideals.

The multiples of 6, now … that’s an ideal of the integers, for all the reasons the multiples of 2 and 3 and 4 were. But it’s also an ideal of the multiples of 2. And of the multiples of 3. We can see the collection of “things that are multiples of 6” as a product of “things that are multiples of 2” and “things that are multiples of 3”. Dedekind saw this before us.

You might want to pause a moment while considering the idea of multiplying whole sets of numbers together. It’s a heady concept. Trying to do proofs with the concept feels at first like being tasked with alphabetizing a cloud. But we’re not planning to prove anything so you can move on if you like with an unalphabetized cloud.

A Dedekind Domain is a ring that has ideals like this. And the ideals come in two categories. Some are “prime ideals”, which act like prime numbers do. The non-prime ideals are the products of prime ideals. And while we might not have unique prime factorizations of numbers, we do have unique prime factorizations of ideals. That is, if an ideal is a product of some set of prime ideals, then it can’t also be the product of some other set of prime ideals. We get back something like unique factors.

This may sound abstract. But you know a Dedekind Domain. The integers are one. That wasn’t a given. Yes, we start algebra by looking for things that work like regular arithmetic does.
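If you would rather see the ideal property checked by machine than by hand, here is a small sketch, not part of the essay. It only samples a finite window of integers, so it is an illustration rather than a proof, and the helper names are mine.

```python
# Check that the multiples of n behave like an ideal of the integers,
# over a finite sample window.
def is_ideal_sample(n, window=range(-30, 31)):
    multiples = [k for k in window if k % n == 0]
    # Closed under addition: multiple + multiple is a multiple.
    closed = all((a + b) % n == 0 for a in multiples for b in multiples)
    # Absorbs multiplication: any integer times a multiple is a multiple.
    absorbs = all((r * a) % n == 0 for r in window for a in multiples)
    return closed and absorbs

print(is_ideal_sample(2))  # True: the even numbers
print(is_ideal_sample(3))  # True: the multiples of 3
print(is_ideal_sample(6))  # True: the multiples of 6

# The odd numbers fail at the first hurdle: 1 + 1 = 2 is not odd.
odds = [k for k in range(-9, 10) if k % 2 != 0]
print(all((a + b) % 2 != 0 for a in odds for b in odds))  # False
```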
But that doesn’t promise that regular old numbers will still satisfy us. We can, for instance, study things where the order matters in multiplication. Then multiplying one thing by a second gives us a different answer to multiplying the second thing by the first. Still, regular old integers are Dedekind domains and it’s hard to think of being more familiar than that.

Another example is the set of polynomials. You might want to pause for a moment here. Mathematics majors need a pause to start thinking of polynomials as being something kind of like regular old numbers. But you can certainly add one polynomial to another, and you get a polynomial out of it. You can multiply one polynomial by another, and you get a polynomial out of that. Try it. After that the only surprise would be that there are prime polynomials. But if you try to think of two polynomials that multiply together to give you “x + 1” you realize there have to be.

Other examples start getting more exotic. They’re things like the Gaussian integers I mentioned before. Gaussian integers are themselves an example of a structure called algebraic integers. Algebraic integers are — well, think of all the polynomials you can make out of integer coefficients, and with a leading coefficient of 1. So, polynomials that look like “x^3 − 4x^2 + 15x + 6” or the like. All of the roots of those, the values of x which make that expression equal to zero, are algebraic integers. Yes, almost none of them are integers. We know. But the algebraic integers are also a Dedekind Domain.

I’d like to describe some more Dedekind Domains. I am foiled. I can find some more, but explaining them outside the dialect of mathematics is hard. It would take me more words than I am confident readers will give me. I hope you are satisfied to know a bit of what a Dedekind Domain is. It is a kind of thing which works much like integers do. But a Dedekind Domain can be just different enough that we can’t count on factoring working like we are used to. We don’t lose factoring altogether, though. We are able to keep an attenuated version. It does take quite a few words to explain exactly how to set this up, however.

## Ring.

Early on in her undergraduate career a mathematics major will take a class called Algebra. Actually, Introduction to Algebra is more likely, but another Algebra will follow. She will have to explain to her friends and parents that no, it’s not more of that stuff they didn’t understand in high school about expanding binomial terms and finding quadratic equations. The class is the study of constructs that work much like numbers do, but that aren’t necessarily numbers.

The first structure studied is the group. That’s made of two components. One is a set of elements. There might be infinitely many of them — the real numbers, say, or the whole numbers. Or there might be finitely many — the whole numbers from 0 up to 11, or even just the numbers 0 and 1. The other component is an operation that works like addition. What we mean by “works like addition” is that you can take two of the things in the set, “add” them together, and get something else that’s in the set. It has to be associative: something plus the sum of two other things has to equal the sum of the first two things plus the third thing. That is, 1 + (2 + 3) is the same as (1 + 2) + 3. Also, by the rules of the kind of group we need here (an abelian group, if you want the jargon), the addition has to commute. First thing plus second thing has to be the same as second thing plus first thing. That is, 1 + 2 has the same value as 2 + 1 does.
Furthermore, there has to be something called the additive identity. It works like zero does in ordinary arithmetic. Anything plus the additive identity is that original thing again. And finally, everything in the group has something that’s its additive inverse. The thing plus the additive inverse is the additive identity, our zero.

If you’re lost, that’s all right. A mathematics major spends as much as four weeks in Intro to Algebra feeling lost here. But this is an example. Suppose we have a group made up of the elements 0, 1, 2, and 3. 0 will be the additive identity: 0 plus anything is that original thing. So 1 plus 0 is 1. 1 plus 1 is 2. 1 plus 2 will be 3. 1 plus 3 will be … well, make that 0 again. 2 plus 0 is 2. 2 plus 1 will be 3. 2 plus 2 will be 0. 2 plus 3 will be 1. 3 plus 0 will be 3. 3 plus 1 will be 0. 3 plus 2 will be 1. 3 plus 3 will be 2. Plus will look like a very strange word at this point.

All the elements in this have an additive inverse. Add 3 to 1 and you get 0. Add 2 to 2 and you get 0. Add 1 to 3 and you get 0. And, yes, add 0 to 0 and you get 0. This means you get to do subtraction just as well as you get to do addition.

We’re halfway there. A “ring”, introduced just as the mathematics major has got the hang of groups, is a group with a second operation. Besides being a collection of elements and an addition-like operation, a ring also has a multiplication-like operation. It doesn’t have to do much, as a multiplication. It has to be associative. That is, something times the product of two other things has to be the same as the product of the first two things times the third. You’ve seen that, though. 1 x (2 x 3) is the same as (1 x 2) x 3. And it has to distribute: something times the sum of two other things has to be the same as the sum of the something times the first thing and the something times the second. That is, 2 x (3 + 4) is the same as 2 x 3 plus 2 x 4.

For example, in the group we had before, 0 times anything will be 0. 1 times anything will be what we started with: 1 times 0 is 0, 1 times 1 is 1, 1 times 2 is 2, and 1 times 3 is 3. 2 times 0 is 0, 2 times 1 is 2, 2 times 2 will be 0 again, and 2 times 3 will be 2 again. 3 times 0 is 0, 3 times 1 is 3, 3 times 2 is 2, and 3 times 3 is 1. Believe it or not, this all works out. And “times” doesn’t get to look nearly so weird as “plus” does.

And that’s all you need: a collection of things, an operation that looks a bit like addition, and an operation that looks even more vaguely like multiplication.

Now the controversy. How much does something have to look like multiplication? Some people insist that a ring has to have a multiplicative identity, something that works like 1. The ring I described has one, but one could imagine a ring that hasn’t, such as the even numbers and ordinary addition and multiplication. People who want rings to have multiplicative identity sometimes use “rng” to speak — well, write — of rings that haven’t. Some people want rings to have multiplicative inverses. That is, anything except zero has something you can multiply it by to get 1. The little ring I built there hasn’t got one, because there’s nothing you can multiply 2 by to get 1. Some insist on multiplication commuting, that 2 times 3 equals 3 times 2.

Who’s right? It depends what you want to do. Everybody agrees that a ring has to have elements, and addition, and multiplication, and that the multiplication has to distribute across addition. The rest depends on the author, and the tradition the author works in.
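The addition and multiplication tables spelled out above are exactly arithmetic modulo 4, so the whole ring is small enough to check by brute force. A sketch with illustrative names, not part of the original essay:

```python
from itertools import product

ELEMENTS = range(4)
add = lambda a, b: (a + b) % 4
mul = lambda a, b: (a * b) % 4

# Addition is associative, and every element has an additive inverse.
assert all(add(a, add(b, c)) == add(add(a, b), c)
           for a, b, c in product(ELEMENTS, repeat=3))
assert all(any(add(a, b) == 0 for b in ELEMENTS) for a in ELEMENTS)

# Multiplication is associative and distributes over addition.
assert all(mul(a, mul(b, c)) == mul(mul(a, b), c)
           for a, b, c in product(ELEMENTS, repeat=3))
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a, b, c in product(ELEMENTS, repeat=3))

# But 2 has no multiplicative inverse: nothing times 2 gives 1 mod 4.
print([b for b in ELEMENTS if mul(2, b) == 1])  # -> []
```

The empty list at the end is the essay's point about the controversy: this perfectly good ring fails the "multiplicative inverses" requirement that some authors want.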
Mathematical constructs are things humans find interesting to study. The details of how they’re made will depend on what work we want to do. If a mathematician wishes to make clear that she expects a ring to have multiplication that commutes and to have a multiplicative identity she can say so. She would write that something is a commutative ring with identity. Or the context may make things clear. If you’re not sure, then you can suppose she uses the definition of “ring” that was in the textbook from her Intro to Algebra class sophomore year. It may seem strange to think that mathematicians don’t all agree on what a ring is. After all, don’t mathematicians deal in universal, eternal truths? … And they do; things that are proven by rigorous deduction are inarguably true. But the parts of these truths that are interesting are a matter of human judgement. We choose the bunches of ideas that are convenient to work with, and give names to those. That’s much of what makes this glossary an interesting project.
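As a coda, circling back to the claim in the Dedekind Domain entry that polynomials add, multiply, and factor like numbers: a quick check, not part of the original post, assuming the sympy library is available.

```python
from sympy import symbols, expand, factor

x = symbols('x')
p = x**2 + 3*x + 2
q = x - 1

print(expand(p + q))     # x**2 + 4*x + 1: still a polynomial
print(expand(p * q))     # x**3 + 2*x**2 - x - 2: still a polynomial
print(factor(p))         # (x + 1)*(x + 2): a product of "prime" factors
print(factor(x**2 + 1))  # x**2 + 1: irreducible over the rationals
```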
2021-03-02 07:56:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 48, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6410251259803772, "perplexity": 413.319264273515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363782.40/warc/CC-MAIN-20210302065019-20210302095019-00190.warc.gz"}
https://datascience.stackexchange.com/questions/88042/how-is-bayesian-risk-computed-to-prune-decision-trees
# How is bayesian risk computed to prune decision trees?

I've been trying to follow this paper on Bayesian Risk Pruning. I'm not very familiar with this type of pruning, but I'm wondering a few things:

(1) The paper describes risk-rates to be defined per example. We have $$R_k(a_i|x)=\sum\limits_{j=1,j \neq i}^{T_c} L_k(a_i|C_j)p_k(C_j|x)$$. $$L_k(a_i|C_j)$$ is defined to be the loss of predicting class $$C_i$$ (that is, taking action $$a_i$$) when the true class is $$C_j$$. $$p_k(C_j|x)$$ is the estimated probability of an example belonging to $$C_j$$.

Above is the decision tree that was produced from a C4.5 algorithm. Pruning occurs from left to right, bottom-up. My main question: How are the risk-rates found in the decision tree, such as for Node 3?

(2) There are also conflicting statements here: The first image states that if the parent risk-rate exceeds the total risk-rate of the leaves, then the parent is pruned to a leaf. However, the second claims that pruning occurs if the leaf risk-rate exceeds the parent's. To confirm: if the risk-rate of the parent is less than the risk-rate of the leaves under the subtree of the parent, then I would set the parent to be a leaf?

(3) From (1), loss would be 0-1 in the binary case. What could be a reasonable loss for multi-class output?

(4) From (1), would the estimated probability of $$C_j$$ be the proportion of $$C_j$$ in the partitioned output class at a node? For instance, at node 3, we're looking at output = [No].

(5) From (1), would the risk-rate be over all training examples?

• link is not available Jan 16, 2021 at 18:08
• @NikosM. I changed the link. Let me know if it works now. Jan 16, 2021 at 22:01
• first: the if condition in the algorithm is probably a typo, it should read if (Rp < Ri) convert parent node to leaf instead Jan 17, 2021 at 8:44
• Yup that makes sense. Thanks! And for the others? Jan 17, 2021 at 16:30

I'm going to try to answer my own question. To (4), I would say yes. For (5), I believe that for each node, there's a specific partition that falls into the node when following the branches down the decision tree. For instance, for Node 2, there are 3 instances (where a=1) that result in [Yes, No, No] for the partitioned output class at Node 2. For (1), I would calculate the risks over each of these relevant examples and sum them. Then at node 2: $$\frac{2}{3}(1)+\frac{1}{3}(1)+\frac{1}{3}(1) = \frac{4}{3}$$ is the Bayes risk, where I use 0-1 loss. Then I would use these estimated risks in the pruning algorithm. Can anyone corroborate?
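For what it's worth, the arithmetic in this tentative answer can be sketched in a few lines of Python. This only reproduces the calculation above (0-1 loss, with probabilities estimated by the class proportions at the node, per question (4)); it is not the paper's algorithm, and all names are illustrative.

```python
# Node 2's partition from the answer above: three examples, [Yes, No, No].
labels = ["Yes", "No", "No"]
n = len(labels)

# p(C_j | x) estimated by the class proportions at the node:
p = {c: labels.count(c) / n for c in set(labels)}  # {'Yes': 1/3, 'No': 2/3}

# For each example, sum L(a_i | C_j) * p(C_j | x) over the classes C_j
# other than the example's own class, with L = 1 under 0-1 loss:
risk = sum(sum(p[c] for c in p if c != y) for y in labels)
print(risk)  # 2/3 + 1/3 + 1/3 = 4/3, matching the figure in the answer
```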
2022-08-12 05:15:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6713465452194214, "perplexity": 836.7586437216818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00166.warc.gz"}
https://www.mersenneforum.org/showthread.php?s=cea5da68e1179bfad7f9155c227b70e1&t=13255
mersenneforum.org Aliquot Termination Question - Largest Prime?

2010-04-05, 00:31 #1
EdH "Ed Hall" Dec 2009 Adirondack Mtns 5,261 Posts

Aliquot Termination Question - Largest Prime?

Sorry if I should be able to easily locate this, but I'm wondering what the largest prime termination is for any sequence. All the curves I've reviewed seem to decrease considerably prior to terminating in a prime. Is there a mathematical explanation for this behavior, other than probability?

2010-04-05, 02:22 #2
TimSorbet Account Deleted "Tim Sorbera" Aug 2006 San Antonio, TX USA 10B7₁₆ Posts

Quote: Originally Posted by EdH — Sorry if I should be able to easily locate this, but I'm wondering what the largest prime termination is for any sequence.

This is far from conclusive, but see http://factordb.com/search.php?so=1&...limit=100&ew=1 The largest of those that actually terminates in a prime (darn bugs/hacks...) is 1923540, with a P14. The largest one that started below 1000000 is 891210, with a P10.

Quote: Originally Posted by EdH — All the curves I've reviewed seem to decrease considerably prior to terminating in a prime. Is there a mathematical explanation for this behavior, other than probability?

Yes (well, it's still about probability, but of a different sort than the chances of e.g. a 100 digit number being prime). IIRC, to become odd (and so have a chance of terminating in a prime besides 2), the line (besides the 2^x factor) needs to be a square (whether p^2 or p^4*q^2 or what). This grows extremely unlikely as the sequence grows past 5-20 digits. Hence nearly all prime terminations happen when the sequence is very small.

Last fiddled with by TimSorbet on 2010-04-05 at 02:25

2010-04-05, 03:41 #3
RichD Sep 2008 Kansas 2^5·7·17 Posts

I questioned the same thing. Why is a multiple of 2 always in the next factorization sequence? Thereby the sum being (most likely) even. So I put pen to paper and came up with the following - FWIW.

Assume the factors are of the form 2^n * p1 * p2 * ..., with n > 0 and possible p's being an odd number from 3 to X. It doesn't make any difference if pX is squared, it's just another "odd" p in this explanation. Any factor or multiple of 2 will always be even. These can be excluded. The need is to focus on the odd p's to see if the total sum will have a chance of being odd (and possibly prime).

Assuming the sequence has one pX then the sum of the odd factors will be p1 + 1, or an even number. Bummer! (2)

Let's assume the sequence has two pX. The sum would be p1 + p2 + p1*p2 + 1. Again an even number of odds! (4)

With three odd p's the total sum would be p1 + p2 + p3 + p1*p2 + p1*p3 + p2*p3 + p1*p2*p3 + 1. Darn, again an even number of odd primes! (8)

I'm sure you can see the sequence by now. Not until the numbers approach a very small number (as Mini-Geek pointed out) is there a chance of a prime. The best down driver is 2^1 with no small p. This will cut the next number nearly in half. This exercise is left to the reader.

2010-04-05, 07:21 #4
10metreh Nov 2008 2×3^3×43 Posts

There is a formula for the aliquot sum of a number: If the prime factorization of N is p^a * q^b * r^c ..., with p, q, r etc. all prime, then its aliquot sum is:

$\frac{p^{a+1}-1}{p-1} \cdot \frac{q^{b+1}-1}{q-1} \cdot \frac{r^{c+1}-1}{r-1} \cdots - N$

From this it is easy to prove many results about sums of divisors, such as the (very obvious) even number one.
Another one that can be easily proved with this formula is that when a term in a sequence has a factor of 2 raised to an even power but no factor of 3, then the next term can acquire a factor of 3 if and only if there is no prime factor (other than 2) of the form 3n-1. This is left as an exercise to the reader.

And as an answer to the original post, the largest known prime termination is that of sequence 2^43112609-1.

Last fiddled with by 10metreh on 2010-04-05 at 07:30

2010-04-05, 11:59 #5
TimSorbet Account Deleted "Tim Sorbera" Aug 2006 San Antonio, TX USA 10267₈ Posts

Quote: Originally Posted by 10metreh — And as an answer to the original post, the largest known prime termination is that of sequence 2^43112609-1.

Nope, I've got a larger one: 2^43112609. It terminates in 2^43112609-1 at index 1. (Of course, the aliquot sum of 2^n is 2^n-1.) But I think we were all referring to examples that aren't that trivial...

10metreh: I thought we were talking about the highest prime, not the highest sequence.

Mini-Geek: He did indeed refer to "the largest prime termination", but you said "is that of sequence ...", so I thought you were talking about the largest sequence, not the largest prime. On closer reading, and with your response, it seems I was mistaken.

Quote: Originally Posted by RichD — The best down driver is 2^1 with no small p.

Except for odd numbers, of course. They drop rather quickly.

Last fiddled with by TimSorbet on 2010-04-05 at 12:22

2010-04-05, 14:57 #6
EdH "Ed Hall" Dec 2009 Adirondack Mtns 5,261 Posts

Thank you for the replies. I think I have a handle on it. And, I do realize that any large prime would equal itself, but I was more so thinking of a sequence as having more than one iteration, a requirement which, of course, 2^43112609 meets, although, again, not quite what I was interested in. However, I appreciate all the answers and thank you again. Take Care, Ed

2010-04-06, 00:12 #7
RichD Sep 2008 Kansas 111011100000₂ Posts

Quote: Originally Posted by Mini-Geek — Except for odd numbers, of course. They drop rather quickly.

Ah, yes. Especially when p1 and p2 are not "near" each other. (Also meaning p1 can not be squared.) How often does this appear in the wild, especially after index 10?
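10metreh's formula above is just σ(N) − N, the sum-of-divisors function minus the number itself. Here is a small sketch of iterating it to produce aliquot sequences; this is not from the thread, it assumes sympy's divisor_sigma is available, and the function names are mine.

```python
from sympy import divisor_sigma  # assumed available

def aliquot_step(n):
    """Sum of proper divisors: sigma(n) - n."""
    return divisor_sigma(n) - n

def aliquot_sequence(n, limit=50):
    seq = [n]
    while seq[-1] > 1 and len(seq) < limit:
        seq.append(aliquot_step(seq[-1]))
    return seq

print(aliquot_sequence(12))  # [12, 16, 15, 9, 4, 3, 1]: hits the prime 3, then 1
print(aliquot_step(2**5))    # 31 = 2^5 - 1, echoing Mini-Geek's 2^n observation
```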
2023-02-07 19:21:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6051878333091736, "perplexity": 1562.66842525987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500628.77/warc/CC-MAIN-20230207170138-20230207200138-00036.warc.gz"}
https://en.wikipedia.org/wiki/Lambertian_reflection
Lambertian reflectance

Lambertian reflectance is the property that defines an ideal "matte" or diffusely reflecting surface. The apparent brightness of a Lambertian surface to an observer is the same regardless of the observer's angle of view. More technically, the surface's luminance is isotropic, and the luminous intensity obeys Lambert's cosine law. Lambertian reflectance is named after Johann Heinrich Lambert, who introduced the concept of perfect diffusion in his 1760 book Photometria.

Examples

Unfinished wood exhibits roughly Lambertian reflectance, but wood finished with a glossy coat of polyurethane does not, since the glossy coating creates specular highlights. Not all rough surfaces are Lambertian reflectors, but this is often a good approximation when the characteristics of the surface are unknown. Spectralon is a material which is designed to exhibit an almost perfect Lambertian reflectance.

Use in computer graphics

In computer graphics, Lambertian reflection is often used as a model for diffuse reflection. This technique causes all closed polygons (such as a triangle within a 3D mesh) to reflect light equally in all directions when rendered. In effect, a point rotated around its normal vector will not change the way it reflects light. However, the point will change the way it reflects light if it is tilted away from its initial normal vector since the area is illuminated by a smaller fraction of the incident radiation.[1]

The reflection is calculated by taking the dot product of the surface's normal vector, $\mathbf{N}$, and a normalized light-direction vector, $\mathbf{L}$, pointing from the surface to the light source. This number is then multiplied by the color of the surface and the intensity of the light hitting the surface:

$I_D = (\mathbf{L} \cdot \mathbf{N}) \, C \, I_L$,

where $I_D$ is the intensity of the diffusely reflected light (surface brightness), $C$ is the color and $I_L$ is the intensity of the incoming light. Because $\mathbf{L} \cdot \mathbf{N} = |\mathbf{L}||\mathbf{N}|\cos\alpha = \cos\alpha$, where $\alpha$ is the angle between the directions of the two vectors, the intensity will be the highest if the normal vector points in the same direction as the light vector ($\cos(0)=1$, the surface will be perpendicular to the direction of the light), and the lowest if the normal vector is perpendicular to the light vector ($\cos(\pi/2)=0$, the surface runs parallel with the direction of the light).

Lambertian reflection from polished surfaces is typically accompanied by specular reflection (gloss), where the surface luminance is highest when the observer is situated at the perfect reflection direction (i.e. where the direction of the reflected light is a reflection of the direction of the incident light in the surface), and falls off sharply. This is simulated in computer graphics with various specular reflection models such as Phong, Cook–Torrance, etc.

Other waves

While Lambertian reflectance usually refers to the reflection of light by an object, it can be used to refer to the reflection of any wave. For example, in ultrasound imaging, "rough" tissues are said to exhibit Lambertian reflectance.
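A minimal sketch of the formula above, assuming numpy is available. The clamp to zero for back-facing light is a common rendering convention rather than part of the article's equation, and the function names are illustrative.

```python
import numpy as np

def lambert(normal, light_dir, color, light_intensity):
    """I_D = (L . N) * C * I_L, with L and N normalized and the dot
    product clamped at zero so light from behind contributes nothing."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return max(np.dot(l, n), 0.0) * color * light_intensity

surface_color = np.array([0.8, 0.2, 0.2])
# Light along the normal (alpha = 0): full brightness.
print(lambert(np.array([0, 0, 1.0]), np.array([0, 0, 1.0]), surface_color, 1.0))
# Light perpendicular to the normal (alpha = pi/2): zero.
print(lambert(np.array([0, 0, 1.0]), np.array([1.0, 0, 0]), surface_color, 1.0))
```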
2016-09-26 03:53:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8278375267982483, "perplexity": 446.35582389310827}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660602.38/warc/CC-MAIN-20160924173740-00094-ip-10-143-35-109.ec2.internal.warc.gz"}
https://anthony-tan.com/tags/algorithm/
## From Linear Regression to Linear Classification

Preliminaries: An Introduction to Linear Regression; A Simple Linear Regression; Bayesian theorem; Feature extraction.

Recall Linear Regression: The goal of a regression problem is to find a function, or hypothesis, that, given an input $$\mathbf{x}$$, can make a prediction $$\hat{y}$$ to estimate the target. Both the target $$y$$ and the prediction $$\hat{y}$$ here are continuous. They have the properties of numbers: Consider 3 inputs $$\mathbf{x}_1$$, $$\mathbf{x}_2$$ and $$\mathbf{x}_3$$ and their corresponding targets $$y_1=0$$, $$y_2=1$$ and $$y_3=2$$....

February 17, 2020 · (Last Modification: April 28, 2022) · Anthony Tan
2022-09-26 03:45:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7387902140617371, "perplexity": 569.8637539346993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00343.warc.gz"}
https://en.khanacademy.org/math/multivariable-calculus/thinking-about-multivariable-function/ways-to-represent-multivariable-functions/a/transformations
# Transformations

Here we see how to think about multivariable functions through movement and animation.

## The idea of transformations

In all of our methods for visualizing multivariable functions, the goal is to somehow see the connection between the input and the output of a function.

• With graphs, this means plotting points whose coordinates include both input and output information.
• With contour maps this means marking which input values will go to certain output values.
• With parametric functions, you mark where the input lands in the output space.
• With vector fields you plot the output as a vector whose tail sits at the input.

The thought behind transformations is to simply watch (or imagine) each input point moving to its corresponding output point. It can be a bit of a mind-warp to view functions as transformations if you never have before, so if it feels confusing at first, that's okay. To whet your appetite for what this might look like, here's a video from the parametric surface article which shows how a certain function transforms a square into a torus (doughnut shape):

[video: a square transforming into a torus]

## Concept over precision

Thinking about functions as transformations can be very powerful for a few reasons:

• We are not constrained as much by dimension. Both the input and the output can have either one, two or three dimensions, and there will be a way to concretely think about what the function is doing. Even when the dimensions are too big to look at, thinking in terms of a transformation at least allows for a vague idea of what's happening in principle. For example, we can know that a function from 100-dimensional space to 20-dimensional space is "flattening" down 80 dimensions, perhaps analogous to squishing three-dimensional space onto the line.
• This idea generalizes more easily to functions with different types of inputs and outputs, such as functions of the complex numbers, or functions that map points of the sphere onto the xy-plane.
• Understanding functions in this capacity will make it easier to see the connections between multivariable calculus and linear algebra.

However, with all that said, it should be stressed that transformations are most powerful as an understanding of what functions do, not as a precise description. It would be rare to learn the properties of a given function by observing what it looks like as a transformation.

## Example 1: From line to line

Let's start simple, with a single-variable function:

f(x) = x² − 3

Consider all the input-output pairs.

x (input) | x² − 3 (output)
−2 | 1
−1 | −2
0 | −3
1 | −2
2 | 1
⋮ | ⋮

What would it look like for all the inputs on the number line to slide over onto their corresponding outputs?
If we pictured the input space as one number line, and the output space as another number line, we might get a motion like this:

[video: inputs sliding from one number line to another]

Alternatively, since in this case the input space and output space are really the same thing, a number line, we could think of the line transforming onto itself, dragging each point x to where the point x² − 3 started off, like this:

[video: the number line transforming onto itself]

## Example 2: From line to plane

Now let's take a function with a one-dimensional input and a two-dimensional output, like

f(x) = (cos(x), (x/2)·sin(x))

Again we consider all input-output pairs.

Inputs x | Outputs (cos(x), (x/2)·sin(x))
0 | (1, 0)
π/2 | (0, π/4)
π | (−1, 0)
⋮ | ⋮

Imagine all possible inputs on the number line sliding onto their corresponding outputs. This time, since the outputs have two coordinates, they live in the xy-plane.

[video: the number line sliding into a curve in the xy-plane]

Notice, the final image of the warped and twirled number line inside the xy-plane is what we would have drawn if we interpreted f as a parametric function, but this time, we can actually see which input points end up where on the final curve. Let's take a moment to watch it again and follow some specific inputs as they move to their outputs.

0 → f(0) = (cos(0), 0·sin(0)) = (1, 0)
π/2 → f(π/2) = (cos(π/2), (π/4)·sin(π/2)) = (0, π/4)
π → f(π) = (cos(π), (π/2)·sin(π)) = (−1, 0)

[video: following the highlighted inputs]

## Example 3: Simple plane to plane transformation

Consider a 90° rotation of the plane (arrows are pictured just to help follow the transformation):

[video: a 90° rotation of the plane]

This could be considered a way to visualize a certain function with a two-dimensional input and a two-dimensional output. Why? This transformation moves points in two-dimensional space to other points in two-dimensional space. For example, the point that starts at (1, 0) ends at (0, 1). The point that starts at (1, 2) ends at (−2, 1), etc. The function describing this transformation is

f(x, y) = (−y, x)

For any given point, like (3, 4), this function f tells you where that point lands after you rotate the plane 90° counterclockwise (in this case (−4, 3)).

## Example 4: More complicated plane to plane transformation

Now let's look at a more complicated function with a two-dimensional input and a two-dimensional output: f(x, y) = (x² + y², x² − y²).
Each input is a point on the plane, such as (1, 2), and it moves to another point on the plane, such as (1² + 2², 1² − 2²) = (5, −3). When we watch every point on the plane slide over to its corresponding output point, it looks as if a copy of the plane is morphing:

[video: the plane morphing under f]

Notice, all the points end up on the right side of the plane. This is because the first coordinate of the output is x² + y², which must always be positive.

Challenge question: In the transformation above, representing the function f(x, y) = (x² + y², x² − y²), notice that all points end up in the sideways-V-shaped region between the lines x = y and x = −y. Which of the following numerical facts explains this?

## Example 5: From plane to line

Next think of a function with a two-dimensional input and a one-dimensional output:

f(x, y) = x² + y²

The corresponding transformation will squish the xy-plane onto the number line.

[video: the plane squishing onto the number line]

Such squishification can make it hard to follow everything that's going on, so for the sake of a precise and clear description, you would be better off using a graph or a contour map. Nevertheless, it can be a helpful concept to keep in the back of your mind that what a function from two dimensions to one dimension does is squish the plane onto the line in a certain way. For instance, this gives a new way to interpret the level sets in a contour map: they are all the points of the plane which scrunch together into a common point on the line.

## Example 6: From plane to space

Functions with a two-dimensional input and three-dimensional output map the plane into three-dimensional space. For instance, such a transformation might look like this (the red and blue lines are just to help keep track of what happens to the x and y directions):

[video: the plane mapping into three-dimensional space]

Analogous to the one-to-two dimensions example above, our final image reflects the surface we would get by interpreting the function as a parametric function.

## Example 7: From space to space

Functions from three dimensions to three dimensions can be seen as mapping all three-dimensional space onto itself. With this many variables, actually looking at the transformation can be a combination of horrifying, beautiful, and confusing. For instance, consider this function:

f(x, y, z) = (yz, xz, xy)

Here's what it looks like as a transformation.

[video: three-dimensional space transforming under f]

It might be pretty, but it's a serious spaghettified mess to actually try to follow.

## Final thoughts

Transformations can provide wonderful ways to interpret properties of a function once you learn them. For instance, constant functions squish their input space to a point, and discontinuous functions must tear apart the input space during the movement. These physical interpretations can become particularly helpful as we venture into the topics of multivariable calculus, in which one runs the risk of learning concepts and operations symbolically without an underlying understanding of what's happening.
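If it helps to see examples 3 and 4 numerically, here is a small sketch (mine, not part of the article) that applies both maps to sample points and spot-checks the sideways-V claim from the challenge question.

```python
import numpy as np

def rotate90(p):          # example 3: f(x, y) = (-y, x)
    x, y = p
    return (-y, x)

def squash(p):            # example 4: f(x, y) = (x^2 + y^2, x^2 - y^2)
    x, y = p
    return (x * x + y * y, x * x - y * y)

print(rotate90((1, 0)))   # (0, 1)
print(rotate90((1, 2)))   # (-2, 1)
print(squash((1, 2)))     # (5, -3)

# Every output of squash satisfies x >= |y|, because x^2 + y^2 is at
# least as big as |x^2 - y^2|. Sample 1000 random inputs to confirm:
pts = np.random.uniform(-3, 3, size=(1000, 2))
outs = np.array([squash(p) for p in pts])
print(np.all(outs[:, 0] >= np.abs(outs[:, 1])))  # True
```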
## Want to join the conversation?

• I would like to suggest implementing the idea of the Jacobian determinant in this article, to introduce the unit area or volume of a transformed object.
• Can someone explain to me better what exactly the reflection question is? I don't understand it, although I feel like I understand the material.
• Can anyone explain a little bit more about the challenge question? I guessed it correctly, but couldn't understand it even with the answer.
• The 𝑥-coordinate of the output, 𝑥² + 𝑦², is always greater than or equal to 0, which means that the output will always be on or to the right of the 𝑦-axis. The absolute value of the 𝑦-coordinate of the output, |𝑥² − 𝑦²|, is always less than or equal to the 𝑥-coordinate of the output, which means that the output's distance to the 𝑥-axis is never greater than its distance to the 𝑦-axis. Thereby the output must be below the line 𝑦 = 𝑥 and above the line 𝑦 = −𝑥.
• What about transformations from space to plane, from line to space, and from space to line?
• Yes, they are possible. But they might've been left out of this article for the sake of brevity.
• If we consider a function from 3-dimensional space to the 1-d real line (like a scalar-valued f(x,y,z) kind of function) and look at it as a transformation, we will see a 3d space being squeezed to a line. Similarly, in a 1-input 3-output vector-valued function we will see the real line take a curvy shape in 3-space. But is it possible to devise a function from 1D space to 3D space that will convert a line to a surface? Intuitively it seems unlikely to have such a vector-valued function, but is it really mathematically impossible?
• Nice question! :-) After thinking about it for a while I've come to the conclusion that it is NOT possible. Anytime you only have one argument in your function/transformation, then as you change that argument you will draw a curve in the N-dimensional space, depending on the dimensions of your output vector. Now I suppose we could write another mathematical function that takes this output and creates a surface using its outputs, but I can't think of a way to create a surface by itself. So as far as I can tell we would need (at least) a function with a 2-dimensional domain (2 operands). If anyone else has additional thoughts feel free to share them with us, but this makes sense to me at least. Hope this helps! - Convenient Colleague
2023-03-29 08:19:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 69, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5460142493247986, "perplexity": 1230.2635389852155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00654.warc.gz"}
https://rationalwiki.org/wiki/index.php?title=User_talk:Radioactive_afikomen&diff=prev&oldid=302710
# User talk:Stabby the Misanthrope

Talk to me Archives for this talk page: <1>, <2>, <3>, <4>, <5>, <6>, <7>, <8>, <9>, <10>, <11>, <12>, <13>, <14>, <15>, <16>, <17>, (new)

## Are you back?

Huh? PFoster 19:44, 19 December 2008 (EST) I don't know. Radioactive afikomen Please ignore all my awful pre-2014 comments. 01:52, 20 December 2008 (EST) Well, either way, nice to see you around again. I hope all is well with you. What's Hebrew for "hugs 'n' kisses"? ħuman 03:33, 20 December 2008 (EST) "Shalom" is close to what you mean. Radioactive afikomen Please ignore all my awful pre-2014 comments. 07:25, 20 December 2008 (EST) Ah, very well, then. Shalom! ħuman 14:47, 20 December 2008 (EST)

## A very important e-mail...

You know the drill....PFoster 16:48, 27 December 2008 (EST)

## Nazism/Fascism...

I assume that when you're done splitting the cats they'll become blue links again, and the old cat will go bye-bye? ħuman 23:17, 3 January 2009 (EST) Correct. Care to help me? Radioactive afikomen Please ignore all my awful pre-2014 comments. 23:19, 3 January 2009 (EST) Ah, nevermind, I'm done. Radioactive afikomen Please ignore all my awful pre-2014 comments. 23:31, 3 January 2009 (EST) I figured you'd "complete" the task you set for yourself, I guess I was just checking. I'd help, but, as you pointed out, you're done. You're pretty efficient when you get going, and it wasn't a huge task (hehe, I remember you racing against the "footnotes" bot, I think you either tied or won a small victory on the race from A to M vs. Z to M?) ħuman 00:08, 4 January 2009 (EST) I think it was a tie.  : ) And thank you. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:12, 4 January 2009 (EST) Hehe... yah, probably. Very close, either way. And you're welcome. New project ;) Alphabetize the categories on every article! Isn't it time we did that? Food for thought = brain worms? ħuman 00:16, 4 January 2009 (EST) Alphabetize the categories? *shudders* That's crazy talk. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:23, 4 January 2009 (EST) I was about to explain how I intended to continue arranging categories using my logical yet highly arbitrary rubric, but now that I think about it, alphabetizing them makes much more sense. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:23, 4 January 2009 (EST)

## Kindly Demote Me

I know that this will get people bickering. I have hit upon the key. And yes, I'm serious. --"ConservapediaUndergroundResistor"feline fanatic 17:55, 4 January 2009 (EST) What a coincidence—I was planning to demote you today anyways!  : D Radioactive afikomen Please ignore all my awful pre-2014 comments. 17:57, 4 January 2009 (EST) Why thank you. --"ConservapediaUndergroundResistor"feline fanatic 17:58, 4 January 2009 (EST)

## Best random quote from your sig.

Hiya, Netharian! Welcome to the international pussy of icicles!--Netharian 19:15, 5 January 2009 (EST)

## Mathematics Articles

I see you've taken notice of my work. I recently uploaded some images and used them in a few articles, but for some reason I cannot upload any more images. Do you know what could be causing this? thescaryworker 21:12, 5 January 2009 (EST) New users have the number of images they can upload limited, in order to head off vandalism. You can either wait until tomorrow, or... I can make you a sysop. (Seeing all the hard work you're doing, I will do so if you ask.) Radioactive afikomen Please ignore all my awful pre-2014 comments.
21:41, 5 January 2009 (EST) Thanks, that would be great. Currently I'm coming over from Conservapedia's moronic work environment and bringing my articles with me. I'm working in conjunction with a few people who recently left CP's diminishing ranks of mathematics editors. thescaryworker 21:50, 5 January 2009 (EST) Actually the uploads are shut down because I am in the process of moving the site to a new server. Everything will be back to normal as soon as we are moved. tmtoulouse 21:53, 5 January 2009 (EST) *shrugs* Oh well. But regardless, Scaryworker, you've been "demoted"—check your talk page. Radioactive afikomen Please ignore all my awful pre-2014 comments. 21:57, 5 January 2009 (EST)

## Um...You Doing Alright Over Here???

Heavens to betsy. I felt obligated to log on just to talk you down. Everything's going to be okay, man. Keep your eyes on the prize. (Or something.) A Writer of Vaudevilles 23:54, 6 January 2009 (EST) Oh, thank you for talking to me. It was getting so lonely here. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:11, 7 January 2009 (EST) Aloha, hello, holla, and shalom. ħuman 00:54, 7 January 2009 (EST) Hi J.! Can I use your full first name, or is that still "private"? ħuman 01:16, 7 January 2009 (EST) Feel free to call me Jacob. (No "Jake", though—only my parents get to call me that.) Radioactive afikomen Please ignore all my awful pre-2014 comments. 01:23, 7 January 2009 (EST) Thanks. And, yes, whenever there is a potential nickname issue I always ask first what a person prefers to be called, and then respect it. ħuman 01:26, 7 January 2009 (EST) Thanks, Huw. That's very considerate of you. Radioactive afikomen Please ignore all my awful pre-2014 comments. 01:30, 7 January 2009 (EST)

## The hell?

THE HELL>>>??>?>??/ You really don’t remember me, do you, Lieutenant? Lance Corporal John V. Dempski? You don’t remember the tarantulas? That warehouse on Mt. Kilamanjaro? You fucking S.O.B. You set me up and you killed my wife and you broke my spine and you DON’T REMEMBER MY NAME?!? THE FUCKING SPIDERS, MAN! REMEMBER THE FUCKING SPIDERS???The electrocutioner 04:43, 8 January 2009 (EST)

## I ...

... have ditched that facebook account. I'd forgotten it was there until they emailed me that you'd "written on my wall". I don't really think it's my thing. Toast 22:19, 8 January 2009 (EST)

## Side-by-side-by-side

I did one for Conservapedia:Quantifying Mental Strength. - User $\sum_{n=0}^{\infty}(-1)^{n}(2n+1)^{-1}$ 02:00, 11 January 2009 (EST) I saw that, but I'd never actually processed that it used three columns instead of two. It probably would've been easier if I'd just copied that instead of trying to splice a third column into the two-column version on my own. Radioactive afikomen Please ignore all my awful pre-2014 comments. 02:04, 11 January 2009 (EST)

## Old server

Jeeves has stolen your account on the old server; I would advise you to change your password. Phantom Hoover 07:47, 11 January 2009 (EST) Yeah, paranoia on my part. I was panicking somewhat. Phantom Hoover 12:41, 11 January 2009 (EST) Yes you have. I would like my password and my email returned to what they were. I would also like an apology. It doesn't matter that you did it on the old RationalWiki—what you did was a gross violation of privacy. Radioactive afikomen Please ignore all my awful pre-2014 comments. 17:43, 11 January 2009 (EST) I hope you liked my little present to you and other post RW1ers. ħuman 23:42, 11 January 2009 (EST) The password is foobar.
Phantom Hoover 13:07, 12 January 2009 (EST)

## Standards thing

I meant to write this earlier, but now will do. Thanks for going to all the effort of restructuring the new version of the community standards. Maybe it's just me, but I like to see the "old" next to the "new" so I can "think" about it. I appreciate your efforts, and I hope other users don't take the sparks that flew along the way as personal. Well, at least, I don't, and I hope you don't. Shalom (I don't mean to abuse the word) - or at, at least, thank you. ħuman 23:24, 13 January 2009 (EST) Aw, shucks, Huw... Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:46, 14 January 2009 (EST) Hugs, Jacob. We're on the same side, after all. ħuman 01:01, 14 January 2009 (EST)

## Emails

I apologize for filling your inbox with duplicate emails. Phantom Hoover 16:58, 14 January 2009 (EST) Eh? What emails? Radioactive afikomen Please ignore all my awful pre-2014 comments. 14:53, 15 January 2009 (EST) I used the send email form many times, because the page wasn't loading and I wasn't sure if they had gone through. If there were no emails, then the above post never happened. Phantom Hoover 14:58, 15 January 2009 (EST) This message will self-destruct in 5... 4... 3... Radioactive afikomen Please ignore all my awful pre-2014 comments. 15:08, 15 January 2009 (EST)

## Help (other than psychological)

Hi, I still prefer that you kill the redirects to this page in your userspace, though—double redirects don't work. I'm new to this janitor job and I don't know how to find the double redirects to stamp them out. Would you mind walking me through it? Thanks. --UnicornTapestry 04:55, 24 January 2009 (EST) Of course I don't mind. ^_^ It's easy: just go to Special:DoubleRedirects, and you'll find every double redirect listed there. Of the handful we have, most are there on purpose (like the "Circular reasoning" one). The page I was prompting you to delete is listed there: User:UnicornTapestry/sandbox/Goatism. Another page you should clear out is User talk:UnicornTapestry/sandbox/Goatism—it's not a double redirect (and hence not listed with the others), but it is a redirect no one will ever use. Radioactive afikomen Please ignore all my awful pre-2014 comments. 05:08, 24 January 2009 (EST) Thanks! Easy once you've shown me how. --UnicornTapestry 05:35, 24 January 2009 (EST) Just a reminder, Unicorn: you're not supposed to delete talk pages with content in them. Ever. Just so you know, I already resurrected it. Radioactive afikomen Please ignore all my awful pre-2014 comments. 05:48, 24 January 2009 (EST) Thanks for the save, there. I realized I'd made a mistake after I'd done it. --UnicornTapestry 05:50, 24 January 2009 (EST) No harm done in the end  : ) Radioactive afikomen Please ignore all my awful pre-2014 comments. 05:53, 24 January 2009 (EST)

## Recatting

Could you please make yourself a bot while recatting things - you're cluttering up recent changes. Phantom Hoover 05:37, 25 January 2009 (EST) Is that a serious request? Radioactive afikomen Please ignore all my awful pre-2014 comments. 05:37, 25 January 2009 (EST) A little Uncle Ed there on the girl recat. - User $\sum_{n=0}^{\infty}(-1)^{n}(2n+1)^{-1}$ 05:39, 25 January 2009 (EST) If I were Ed Poor I would've recatted them as "organisms"  : ) Radioactive afikomen Please ignore all my awful pre-2014 comments. 05:44, 25 January 2009 (EST) Yes. Why wouldn't it be? Phantom Hoover 05:45, 25 January 2009 (EST) It just struck me as a silly request.
Could you tell me how it makes things difficult for you? Radioactive afikomen Please ignore all my awful pre-2014 comments. 05:52, 25 January 2009 (EST) It makes it harder to find interesting edits and is a general nuisance. Phantom Hoover 05:54, 25 January 2009 (EST) I understand, Phantom. So I'm sorry to say that I am not willing to change myself into a bot whenever I feel like mass recatting. I do not feel I am out of line to ask you to just bear with it. Radioactive afikomen Please ignore all my awful pre-2014 comments. 06:00, 25 January 2009 (EST) Turning human editors into bots is pretty dubious. Are you sure we should be doing it? WēāŝēīōīďMethinks it is a Weasel 16:18, 25 January 2009 (EST) As long as they get turned back before they need to eat (or drink) again, the lasting effects are negligible. ħuman 19:11, 25 January 2009 (EST) Phantom Hoover (as he requested, and I granted) or anyone else is free to bot themselves when making repetitive edits. I view it as a matter of personal preference, and I will not do so myself. Radioactive afikomen Please ignore all my awful pre-2014 comments. 19:18, 25 January 2009 (EST)

## "Last one for the night"

It's quarter to twelve in the morning for me. Phantom Hoover 06:43, 25 January 2009 (EST) It was 3:23 in the morning for me. Radioactive afikomen Please ignore all my awful pre-2014 comments. 16:03, 25 January 2009 (EST) My goat. Are you a vampire? Phantom Hoover 16:09, 25 January 2009 (EST)

### OMG censorship!

Nah, he has no resemblance to anything otherkin in the least- if he was a vampire, I'd know. --"ConservapediaUndergroundResistorfeline fanatic 18:04, 25 January 2009 (EST) No, just unemployed.  : ) Radioactive afikomen Please ignore all my awful pre-2014 comments. 18:34, 25 January 2009 (EST) Told you. --"ConservapediaUndergroundResistorfeline fanatic 18:34, 25 January 2009 (EST) So you're an unemployed vampire? Stupid cubic Phantom! 12:38, 26 January 2009 (EST) Kind of. Except without the "vampire" part. Radioactive afikomen Please ignore all my awful pre-2014 comments. 20:47, 27 January 2009 (EST)

## CUR

CUR is currently being dragged across RationalWiki:Administrative Abuse; as you seem largely uninterested, and since you're around now, could you make the final decision? - User $\sum_{n=0}^{\infty}(-1)^{n}(2n+1)^{-1}$ 22:21, 27 January 2009 (EST)
https://tianjara.net/blog/tags/ubuntu/
# tianjara.net | Andrew Harvey's Blog

### Entries tagged "ubuntu"

26th September 2009

I tried out Gnome Shell today. (And it didn't break everything! I followed their instructions; it built and ran fine, and when I killed it, my normal environment with the normal Gnome Panel and Compiz went back to normal.) It's shaping up nicely; there are many good things, and I think it's a great effort by everyone behind it. (Just a warning: I don't know all the technical details behind everything here, so please excuse me if I miss something or don't use the correct terminology. This review is just from my perspective; it is not a proper usability evaluation, nor have I looked at which is better engineered or anything too technical.)

(Figure: My Desktop Running Gnome Shell)

(Figure: My Desktop Running Gnome)

The obvious difference is that there is no bottom panel in Gnome Shell, and the top panel is different (but it's still in development, of course, so a later version may make more use of it).

## My Current Work-flow

### Window Management

Normally in Gnome I use Compiz a lot to help me manage my open windows. Compiz/Compiz Fusion has a lot of plugins, but over time I've found a few which I really like and use all the time.

If I have a bunch of windows in one workspace and I want to switch to another, I usually use Scale (shortcut: Super + Tab), although I still sometimes use the bottom taskbar, and I always use that taskbar when the window is minimised (because Compiz can't access minimised windows' pixmaps, they don't appear in Scale, unfortunately; this is a real killer). I can also right-click on a window in this view to close it. That makes it really easy and fast to kill a heap of windows that I have finished with, which shrinks my search space when changing windows.

(Figure: The Compiz Fusion Scale Plugin)

To change workspaces I use Expo (shortcut: Super + E), though I don't actually use more than one workspace all that often, even though I think I should. The other great thing is that I can drag windows from one workspace to another while in Expo.

(Figure: Compiz Expo Plugin)

Some other shortcuts I use for window management very frequently are:

• Alt + Left Mouse to move a window (with the great wobbly-windows effect)
• Alt + Middle Mouse to resize a window
• Alt + Right Mouse to close a window
• Super + Scroll to zoom in
• Ctrl + Alt + (1-9) on the keypad to place a window in a grid. This is great for getting, say, a terminal to run your program next to the editor with the code. It gives me the benefits of a tiling window manager such as xmonad (although changing focus between two side-by-side windows is not as easy as it would be in xmonad), or an arrangement similar to what you can get using Terminator.
• Super + Shift + (Up Arrow/Right Arrow) to extend a window to its maximum extent in the vertical/horizontal direction.

I keep making refinements to this, but it works very well for me as it is.

### Application Starting

In Gnome I use the Panel's Run Application dialogue (see pic) (with the shortcut Alt + ) and the terminal (with the shortcut Ctrl + ) to start new applications. Those shortcuts really make things easier and faster.
[caption id="attachment_815" align="aligncenter" width="450" caption="The Panel's Run Application Dialogue"][/caption] The run dialogue is good. I can run programs like firefox, gedit just as you would in the terminal but it means I don't have to have a terminal open or open one first (its all amount maximising efficiency, so I can get to where I want to be as fast as possible). Also I can enter locations such as /etc/whatever and Nautilus will be opened to that location. That text box has tab completion (and it actually shows the suggestions) which makes things easier and faster. ## In Gnome Shell [caption id="attachment_821" align="aligncenter" width="450" caption="Gnome Shell Activity Mode (sorry, I'm not sure what its actually called)"][/caption] ### Window Management In Gnome Shell (it uses Metacity not Compiz) you can do all your window management and application starting through the Activities mode. Which can be started either by the Super key, clicking Activities, or dragging the mouse to the top left edge (although it seems I must go to the exact 0,0 pixel not 0,1 or 1,0 which is a bit annoying). This is good it gives the user some choice they may happen to have their hand near Super so they use that, or they may only using the mouse so they can use that (actually I will set up Compiz Scale to work with both Super Tab and a top left mouse move). On the down side, Gnome Shell did not seem to be as fast and responsive as similar Compiz tools. What I mean is that on my system where the Scale tool is fast, as in the windows move smoothly and quickly, when I go into the Activities mode its has a small delay (less than a second, but its still annoying) and its seems a bit jumpy and jerky when everything is moving. But of course its still in development so I'm not going to criticise this. Apart from this, it seems just like Compiz Expo + Scale together. This activity mode window management is good, but there are some small things like I can't seem to close windows from this activity mode (like I can in Compiz's Scale), but I can move windows from one workspace to another in Gnome Shell just like in Compiz's Expo. Also it can also be annoying to have Scale and Expo mixed together (of course I can just just Alt + Tab or move windows around so I can focus on another, but I don't really like that idea). Unlike Compiz/Gnome's multiple workspaces, in Gnome shell you can add these dynamically. Which I think is a better idea than the static type that normal Gnome/Compiz uses. [caption id="attachment_822" align="aligncenter" width="450" caption="Gnome Shell allows you to dynamically add/remove workspaces"][/caption] Things seems to be shifting towards emphasising multiple workspaces. What I need to try to remember to do is USE these multiple workspaces, grouping windows together where they group nicely, instead of just putting everything in one workspace. Window managers could help me with this, like they could remember that I often have Firefox on workspace 2, so when I run it automatically put it there and switch to workspace 2. I haven't tried this, so I don't know if it would help me, or just frustrate me by doing what I don't want every time. I'm not even sure if Compiz can do this anyway. I'm not sure where dock's like Avant or Cairo fit in, but I never really found them to make things easier. ### Application Starting The other noticeable thing in Gnome Shell is that bar on the left. 
In normal Gnome you have your menu bar, which has Applications, Places and System (which I wish I could easily shorten to Apps, Places and Sys to save space). Given I have this new user thing on the right where I can shutdown/logout/suspend/hibernate from, the only real thing I use System for is the Preferences and Administration. Yet I can never remember whether what I want is in Administration or Preferences. I recently discovered this system preferences thing which just puts it all in one window, categorised into appropriate groups. I'm sure some find the two lists easier and some find the single window easier. When I scan a list with my eyes I just go up/down, but when I scan a grid my eyes wander all over the place with no apparent system, so it's probably a more random search than a well-defined one. There are heaps of things you could test (we looked at some in my HCI course) to try to make the grid layout faster, but nonetheless I think I like the grid better.

I use the Places bar often, and I think the Gnome Shell implementation makes things easier, as the places are listed in two columns, unlike traditionally, where the number of bookmarks shown is limited and I need to navigate to a sub-menu to show them all. It seems I can't change the size proportions of those three sections on the side, but again, it's still in development.

You could look at this a number of ways, but because the panels are gone, if you are using a full-screen application you can focus on that, with nothing cluttering the edge or distracting you from your task at hand. Traditionally everything is layered: you have panels, then window decorations, then menu bars, status bars, tabs (in Firefox); removing all that so that you just have the task at hand in your vision can be a great thing (yes, I know there is a full-screen feature in Firefox, and you can set Gnome panels to hide). When you are working in a browser it's up to the web site (unless you have the time to write some Greasemonkey scripts) to let you remove outside clutter, yet many application-like web sites allow you to do this (Alt + Shift + G when editing in WordPress, u in Google Reader (to some degree)). Anyway, that is moving away a bit from the topic of this post.

At the bottom of the left bar, you have recent documents. I use recent documents very, very rarely (as in the shortcuts to them, not the documents themselves). Although, I still think that a well-designed system for access to recent documents, integrated with some kind of search capability, would be very useful for me, and I would use it often. However, I am yet to find such a system that I like. The concept in my mind is something like the Lifestream design that Wei Zhou blogged about: an interface where time is on the horizontal axis, where you could change the scale and location of this view easily, and view related things such as the weather at that particular time, your location if you have a GPS-enabled laptop, etc. Also, it should be integrated with a good filter feature (anything such as file type, file size, location, tags...) that lets you narrow down your search space. Something like that is what I have in mind as a great use of a "recent documents" feature. GNOME Zeitgeist looks like it may address some of this.

Lastly, the top section is the application launcher.

(Figure: Gnome Shell Menu)

The actual menu is in some ways much better than the normal Gnome menu. Larger icons and a short description of each application are good.
When I open the Gnome menu bar, I never need to see what's on my screen in order to make my selection from the menu bar (and if I forget what I wanted to start, I can always close it and then open it again). You have the whole screen, so you may as well use it, and Gnome Shell seems better in this respect. The bad thing is I don't like the use of pages: if not everything fits in one column, you have to change pages at the bottom. Instead, you should be able to scroll through the options with the mouse wheel, or the ones that don't fit should go in another column to the right (like Windows XP can do, and yes, I used to use Windows XP).

The search box above this doesn't behave like the traditional Gnome Panel's Run Application dialogue. For example, I can't type a file path, and typing gedit then Enter won't take me where I want to go (gedit). Instead, it takes me to some other entry I have defined in the menu bar. Now, I can see some reasons why this could be better. Really, I want to launch any executable file in my PATH, but a user who doesn't use the terminal probably doesn't want this. An option so that the user can choose how they want it to behave would be better, I think.

(Figure: Gnome Shell's search box doesn't behave as I expected.)

Having all my icon application starters in the top Gnome Panel was nice, but there is no reason those can't be added to Gnome Shell; again, it's still in development. Although, now that I've been using the interface for an hour or so, I think they may create more clutter. Actually, I would prefer that the top panel bar in Gnome Shell only appear in the Activities mode (but still recognise the top-left mouse gesture). That may be scary for newbies (hey, I got intimidated the first time I used Blackbox; I couldn't work out that right-clicking on the desktop gave me a menu), so an option would be much better.

Another thing I wanted to mention: I use Firefox a lot, and a lot of the concepts and issues with window management can be applied to tabs in a browser. The folks over at Mozilla are working on this, so I'm eager to see what they come up with. As more and more things are done through HTML web pages, it just means I'm going to have more and more tabs open that I need to manage and navigate. Like starting a new application in a desktop environment, you often start a new task (web page/tab) in a web browser. I've been using Ubiquity for a while now and I find it really good. Although they are up to release 0.5, I'm still using 0.1.9rc6. Although I can think of many improvements, it's still really efficient at starting new tasks.

Oh, and in case you were wondering from my screenshots, I'm using the orange-theme (orange-theme - 1.3.0.jaunty.ppa2+nmu1) from https://launchpad.net/~bisigi/+archive/ppa/+packages.

Tags: computing, ubuntu.

24th September 2009

A feature that I thought was very much lacking from the Compiz Fusion Scale plugin (as shown)...

(Figure: Compiz Scale Plugin (1))

...was that I could not seem to close windows in this view. After some investigation, you can. I had not noticed that the Scale Addons tool under Utility in CCSM (CompizConfig Settings Manager) is related to the Scale tool under Window Management. Under bindings in the Scale Addons tool there is a Close Window binding. It turns out I had two problems:
1. I could not grab the mouse button the way you can grab a keyboard combination when setting new bindings, so I didn't know which was Button 1, 2, 3 and so on. It turns out Button 1 is the left mouse button, Button 2 is the scroll (middle) button, and Button 3 is the right mouse button (another common model would be Button 2 for right and Button 3 for middle; the xev utility will print the button number when you click inside its window). I could have just used some trial and error, but because of problem 2 I wasn't sure whether the mouse bindings were the problem or something else.

2. I had "Key Bindings Toggle Scale Mode" in the Scale plugin turned off (i.e. when I initiated the window picker using Super + Tab, I had to keep holding Super to keep all the windows up, and letting go of Super would select the selected window). As such, when in my mind I thought I wanted right-click to close the window, I really needed to set the binding to Super + right-click.

Tags: ubuntu, usability.

29th August 2009

(Warning: I really have no idea what I'm talking about here, especially the tty stuff.)

Yesterday Compiz crashed, probably segfaulted. I managed to restart it, but then Cairo (which Compiz was using) segfaulted, and I ended up restarting the X server. But just now Compiz crashed again. Most of the time when it crashes all is well: it falls back to the default window manager without the flashy Compiz effects. This time it didn't; I couldn't move windows at all and there were no decorations. I could click launch icons on my top Gnome panel, but whenever I opened a terminal I could not enter any text, although I could still interact with it with the mouse (i.e. use its menu bars). I ended up (since my web browser was still displaying stuff) typing compiz into a text area in Firefox and selecting it, so I could middle-click paste it. Then I created a custom application launcher in my Gnome panel, middle-click pasted the command compiz, and then I could click that launcher to start Compiz again. All was fixed.

But there is something I still can't find the answer to. I tried going into a different tty using Ctrl+Alt+F1, but trying to run compiz in tty1 failed because it was "unable to open display" (with an empty display name) and also "no xterm found". What I wanted to do was start a process in tty1 for tty7. I have no clue how to do that. Any ideas? I endeavour to learn more about the X Window System. Oh, and if all else failed I could have restarted the X server, but then I'd lose a lot of stuff in RAM (such as things I haven't saved).
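One common answer to that question, as a minimal sketch: X clients find their server through the DISPLAY environment variable, and on a single-seat machine the session on tty7 is normally display :0. This assumes you are the same user who owns that X session; otherwise XAUTHORITY would also have to point at the session's cookie file.

```sh
# From the text console (tty1), point the client at the X server
# running on tty7 (normally display :0) and run it in the background.
DISPLAY=:0 compiz --replace &

# Switch back to the graphical session; pressing Ctrl+Alt+F7 does
# the same thing and doesn't need root.
sudo chvt 7
```

Tags: linux, ubuntu.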
7th March 2009

My Boot Times... (so that I can compare them with 9.04 when it comes out)

### Ubuntu 8.10

0.00 Power on
0.06 Start GPU Memory Test (512MB)
0.15 Start Motherboard Loading Screen
0.22 End Motherboard Loading Screen (shows some hardware config)
0.30 GRUB Menu Displayed
0.32 OS Load (I have GRUB set on a 2 sec timeout)
1.15 Login Screen Ready
1.55 GNOME Desktop ready with taskbars loaded (OS loaded and responsive)

### Windows XP

0.00 Power on
0.06 Start GPU Memory Test (512MB)
0.15 Start Motherboard Loading Screen
0.22 End Motherboard Loading Screen (shows some hardware config)
0.30 GRUB Menu Displayed
0.32 OS Load (I have GRUB set on a 2 sec timeout)
1.17 Login Screen Ready
1.47 Background Loaded
2.02 Taskbar Loaded
2.17 OS Fully Loaded and Responsive

### Summary

Hardware Level - 24 sec
GRUB - 6 sec
XP OS Level - 1 min 45 sec
Ubuntu OS Level - 1 min 23 sec

Tags: ubuntu.

31st January 2009

Here is a bash script I wrote to back up some of my user preferences and configuration for my Ubuntu 8.10 installation, plus some of my apps' settings.

```sh
#!/bin/sh
# Will put all backups in ./backups/YYYYMMDD_HHMM/
DATETIME=$(date +%Y%m%d_%H%M)
USER="YOURUSERNAME"
DEST="backups/$DATETIME"

mkdir "backups"
mkdir "backups/$DATETIME"

# computer setup
cp /boot/grub/menu.lst "$DEST/menu.lst"
cp /etc/X11/xorg.conf "$DEST/xorg.conf"
cp /etc/fstab "$DEST/fstab"

# user files/profiles
## files
cp "/home/$USER/.gnome2/stickynotes_applet" "$DEST/stickynotes_applet"
cp "/home/$USER/.gtk-bookmarks" "$DEST/.gtk-bookmarks"

## folders
tar -cvvf "$DEST/purple.tar" "/home/$USER/.purple/"
tar -cvvf "$DEST/mozilla.tar" --exclude-tag-all='_CACHE_MAP_' --exclude='urlclassifier3.sqlite' "/home/$USER/.mozilla/"
tar -cvvf "$DEST/Templates.tar" "/home/$USER/Templates/"
tar -cvvf "$DEST/gconf.tar" "/home/$USER/.gconf/"
tar -cvvf "$DEST/gnome2.tar" "/home/$USER/.gnome2/"
tar -cvvf "$DEST/gnome-color-chooser.tar" "/home/$USER/.gnome-color-chooser/"
```

Tags: ubuntu.
http://molecularmodelingbasics.blogspot.com/2013/10/chemistry-assignments-that-use-molecule.html
## Sunday, October 13, 2013

### Chemistry assignments that use Molecule Calculator (MolCalc)

1. One of the reviewers of our J. Chem. Ed. paper on MolCalc included the following tutorial: Molecular Orbital Calculations of Molecules I. Diatomics, Triatomics and Reactions

2. n-Butane can exist in two different conformations, called gauche and anti (Google butane and conformation). Use Molecule Calculator to estimate the fraction of molecules in the gauche conformation at 25 $^\circ$C. $\Delta H^\circ$ can be computed as the difference in heats of formation.

3. Estimate $\Delta H^\circ$ for the following reaction at 25 $^\circ$C:

   NH$_2$CHO + H$_2$O $\rightleftharpoons$ NH$_3$ + HCOOH

   a. using bond energies
   b. using Molecule Calculator

4. How does the molecular structure determine the rotational entropy? Find out by constructing a molecule with the largest possible rotational entropy using Molecule Calculator. The largest value I could find was 133 J/(mol K). Can you beat that?

5. How well do the simple solvation models work?

   a. Estimate the solvation energy of NH$_4^+$ using MolCalc.
   b. What is the polar solvation energy of NH$_4^+$ in water at 25 $^\circ$C, assuming that it is spherical?

6. Why do ionic compounds dissolve in water? Use MolCalc to estimate $\Delta G^\circ$ at 25 $^\circ$C for the following equilibrium:

   N(CH$_3$)$_4^+\cdot$Cl$^-$ $\rightleftharpoons$ N(CH$_3$)$_4^+$ + Cl$^-$

   a. in the gas phase
   b. in aqueous solution

7. Solvent screening: charge-charge interactions are weaker in aqueous solution than in the gas phase. Compute the difference in $G^\circ$ at 25 $^\circ$C between these two molecules using MolCalc:

   a. in the gas phase
   b. in aqueous solution

8. Build a molecule with a solvation energy that is as close to 0 as possible. The closest I got is -1.3 kJ/mol. How close can you get?
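For problem 2, one way the estimate can be set up (a sketch, assuming as a simplification that the entropy difference between the conformers comes only from the twofold degeneracy of gauche, so the $\Delta H^\circ$ from MolCalc's heats of formation is the only computed input):

$\frac{n_{gauche}}{n_{anti}} = 2e^{-\Delta H^\circ/RT}$, so that $f_{gauche} = \frac{2e^{-\Delta H^\circ/RT}}{1 + 2e^{-\Delta H^\circ/RT}}$

As an illustrative check (not MolCalc output): with $\Delta H^\circ \approx 4$ kJ/mol and $RT \approx 2.5$ kJ/mol at 25 $^\circ$C, this gives $f_{gauche} \approx 0.3$.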
http://ndlib.readthedocs.io/en/latest/custom/custom.html
# Custom Model Definition

NDlib exposes a set of built-in diffusion models (epidemic, opinion dynamics, dynamic network): how can novel ones be described? To answer that question we developed a syntax for compositional model definition.

## Rationale

At a higher level of abstraction, a diffusion process can be synthesized into two components:

• the available statuses, and
• the transition rules that connect them.

All models in NDlib assume an agent-based, discrete-time simulation engine. During each simulation iteration, all the nodes in the network are asked to (i) evaluate their current status and (ii) (eventually) apply a matching transition rule. The last step of this process can be easily decomposed into atomic operations that we call compartments.

Note: NDlib exposes two classes for defining custom diffusion models:

• CompositeModel describes diffusion models for static networks
• DynamicCompositeModel describes diffusion models for dynamic networks

To avoid redundant documentation, here we discuss only the former class, the latter behaving alike.

## Compartments

We adopt the concept of a compartment to identify all those atomic conditions (i.e. operations) that describe (part of) a transition rule. The execution of a compartment returns either True (condition satisfied) or False (condition not satisfied). Several compartments can be described, each capturing one atomic operation. To cover the main scenarios we define three families of compartments, as well as some operations to combine them.

### Node Compartments

In this class fall all those compartments that evaluate conditions tied to node status/features. They model stochastic events as well as deterministic ones.

### Edge Compartments

In this class fall all those compartments that evaluate conditions tied to edge features. They model stochastic events as well as deterministic ones.

### Time Compartments

In this class fall all those compartments that evaluate conditions tied to temporal execution. They can be used to model, for instance, lagged events as well as triggered transitions.

## Compartments Composition

Compartments can be chained in multiple ways so as to describe complex transition rules. In particular, a transition rule can be seen as a tree whose nodes are compartments and whose edges are connections among them:

• The initial node status is evaluated at the root of the tree (the master compartment).
• If the operation described by that compartment is satisfied, the condition of (one of) its child compartments is evaluated.
• If a path from the root to one leaf of the tree is completely satisfied, the transition rule applies and the node changes its status.

Compartments can be combined following two criteria: cascading composition and conditional composition. A rule can be defined by employing all possible combinations of the two.
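The tree-evaluation scheme just described can be summarised with a small illustrative sketch (the attribute names `test` and `children` are hypothetical, not NDlib's internal API):

```python
# Illustrative sketch only: 'test' and 'children' are hypothetical
# names, not NDlib internals.
def rule_applies(node, graph, compartment):
    """A rule fires when some root-to-leaf path of compartments is satisfied."""
    if not compartment.test(node, graph):   # atomic condition at this tree node
        return False
    if not compartment.children:            # leaf reached: whole path satisfied
        return True
    # otherwise, any satisfied child subtree completes a path
    return any(rule_applies(node, graph, child) for child in compartment.children)
```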
## Examples

Here are some examples of models implemented using compartments.

### SIR

```python
import networkx as nx
import ndlib.models.ModelConfig as mc
import ndlib.models.CompositeModel as gc
import ndlib.models.compartments.NodeStochastic as ns

# Network generation
g = nx.erdos_renyi_graph(1000, 0.1)

# Composite model instantiation
model = gc.CompositeModel(g)

# Model statuses
model.add_status("Susceptible")
model.add_status("Infected")
model.add_status("Removed")

# Compartment definition
c1 = ns.NodeStochastic(0.02, triggering_status="Infected")
c2 = ns.NodeStochastic(0.01)

# Rule definition
model.add_rule("Susceptible", "Infected", c1)
model.add_rule("Infected", "Removed", c2)

# Model initial status configuration
config = mc.Configuration()
config.add_model_parameter('percentage_infected', 0.1)

# Simulation execution
model.set_initial_status(config)
iterations = model.iteration_bunch(5)
```
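As a sketch of cascading composition (hedged: the `composed` keyword of `NodeStochastic`, which chains a child compartment to be evaluated when the parent's test succeeds, follows the pattern in the NDlib compartment documentation; check it against the installed version), the rule below fires only when the whole chain is satisfied:

```python
import networkx as nx
import ndlib.models.ModelConfig as mc
import ndlib.models.CompositeModel as gc
import ndlib.models.compartments.NodeStochastic as ns

g = nx.erdos_renyi_graph(1000, 0.1)
model = gc.CompositeModel(g)

model.add_status("Susceptible")
model.add_status("Infected")

# Cascading composition: c1 is the master compartment; if its test
# succeeds, the chained compartment c2 is evaluated, and the rule
# applies only when the whole root-to-leaf path is satisfied.
c2 = ns.NodeStochastic(0.2)
c1 = ns.NodeStochastic(0.5, triggering_status="Infected", composed=c2)

model.add_rule("Susceptible", "Infected", c1)

config = mc.Configuration()
config.add_model_parameter('percentage_infected', 0.1)
model.set_initial_status(config)
iterations = model.iteration_bunch(5)
```

Conditional composition works analogously, but selects which child compartment to evaluate depending on the outcome of the parent's test.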