http://the-user.org/category/computing/hardware
## Archive for the ‘Hardware’ Category ### MeeGo? Where are you? Monday, July 9th, 2012 There is a rumour that a small Finnish company called Jolla, led by ex-Nokia employees, is trying to revive MeeGo (here, here and here). Is there a new chance for a real GNU/Linux mobile operating system with great hardware support, free as in freedom, not free as in Android? And they want to use Qt instead of this hip, kinky HTML5+JavaScript (cf. Tizen). Sounds like really good news, but unfortunately I am not that confident. There have been so many setbacks. Maybe it will stay a dream for the near future. What are your opinions about it? PS: Yes, I know about Mer. However, I cannot judge its current state; of course I hope there will not be too much fragmentation between Mer/MeeGo/Jolla/Tizen. ### Old Regression by Leonardo da Pisa Saturday, June 11th, 2011 After reading this blog post I thought a bit about endianness (big-endian is just bad), and while having a shower a theory came to mind: Maybe the Arabs had little-endian integers (meaning least-significant digit first) but wrote (and still do) from right to left (meaning least-significant digit at the right). And when Leonardo da Pisa (Fibonacci) brought Arabic numerals to Europe, he wrote in the same style, not flipping the digits, hence establishing big-endian. In fact I could verify that with Wikipedia. But I also noticed that this “bug” had been there before: Indians write from left to right (Wikipedia told me about a coin in Brahmi written from right to left, but that was before there were any numerals), and they have always used big-endian. Thus the Arabs fixed that issue (maybe not knowingly), but stupid Europeans did not get why big-endian is stupid. Furthermore, big-endian numerals look more like those stupid Roman numerals, and our usual way of vocalising them is like in Roman times.
And because of Leonardo da Pisa there are those stupid architectures using big-endian representation (fortunately not x86 or amd64), causing non-portability, byte-order marks and all that stupid stuff. And left-shifts could actually be left-shifts and right-shifts could be right-shifts. Short list of arguments for little-endian: • The value of a digit d at position i is simply d·b**i (b is the base). That would obviously be the most natural representation if you implemented integers using bit-arrays. It does not depend on the length; no look-ahead is required. • You can simply add numbers from left to right (no right-alignment for summation). • For radix sort you can begin at the left. • Simple casts between longer and shorter integers without moving any bits. • You do not need words like “hundred”, “ten-million”, “billiard” etc., because you can interpret a sequence online without look-ahead. • Repeated modulo and integer division by the base gives the little-endian representation. • The least-significant bits carry the more interesting number-theoretical information. Well, big-endian is more like lexicographic order, although I am not sure whether that is clearly better for natural languages. For division you have to start with the most-significant digit, but—hey—division is obviously not as important as all the other operations where you start with the least-significant digit. Of course little-endian is not always a good notation; for measurements one should use floating-point numbers (in a decimal world called “scientific notation”), and the mantissa should start with the most-significant digit, after the exponent, to avoid look-ahead (unlike the usual scientific notation). If Leonardo da Pisa had thought a bit about what he was doing, there would not be all those drawbacks! Just my thoughts about that regression. ### Fun with a Wacom-Tablet and openSuSE Sunday, February 20th, 2011 Fun! Graphics tablets! Oh, wait, why did I mention the distribution?
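The "no look-ahead" argument can be sketched in a few lines (my own sketch, not from the original post): with the least-significant digit first, a digit stream can be evaluated online one digit at a time, and repeated modulo/integer division recovers exactly that representation.

```python
# Little-endian positional value: d * base**i summed online,
# without knowing the total number of digits in advance.
def le_value(digits, base=10):
    value, weight = 0, 1
    for d in digits:
        value += d * weight
        weight *= base
    return value

# Inverse direction: repeated modulo and integer division by the base
# emits the digits in little-endian order, as the post argues.
def le_digits(n, base=10):
    digits = []
    while True:
        digits.append(n % base)
        n //= base
        if n == 0:
            return digits

# 345 written little-endian is [5, 4, 3]
assert le_value([5, 4, 3]) == 345
assert le_digits(345) == [5, 4, 3]
```

Note also that truncating the digit list recovers the cast-to-a-shorter-integer argument: the low digits simply stay in place.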
And The User is not one of those great artists using Krita, he is a clumsy nerd. May it be irony? Maybe, but there has actually been some fun. Okay, I started at 1:00 last night; I wanted to try a “Wacom Intuos2 9×12” (per xsetwacom list, I guess it refers to the size) with openSuSE (of course, I do not use any non-GNU/Linux system). Well, I had some weird problems: First I could just move the cursor, no clicks. Then I installed some stuff and could use the pen as a mouse, but without pressure detection or anything like that, and with an awkward behaviour: After having drawn a line (i.e. after releasing) the cursor no longer moved until pressing again or lifting the pen a few centimetres; drawing lines, hatching etc. are of course not possible that way. So I continued playing around. xsetwacom could not recognize the tablet; openSuSE’s xinput version has this bug, so I was very confused, although it is only a bug in the output and does not affect xsetwacom. I upgraded X to version 7.6 using this repository, but then fglrx failed, with ugly backtraces at startup. I started in failsafe mode without fglrx, and after a short time the tablet worked with Krita, with different pressures etc. It was 4:30, I was quite tired, and I went to sleep. But of course I wanted to get fglrx back. I know it is a proprietary driver, but without it 3D is terrible, and with fglrx my battery life is one hour longer (without fglrx only 90 minutes or something like that). I downgraded X back to version 7.5, but after some time (maybe two hours of useless recompiling, reinstalling of drivers and rebooting) I noticed that ATI provides drivers for X.org 7.6 on their website; unfortunately they no longer provide official openSuSE repositories, so I had an unofficial, outdated fglrx installed.
Now I was confident, it had already worked with 7.6: upgrading, running the official ATI driver installation script (it even generates an rpm, nice)… It did not work. I tried some source version of the wacom kernel module and the xf86 input driver, but that did not work either. But finally I noticed that xorg-x11-driver-input had not been updated, probably because of the dependencies of the unofficial wacom-driver rpm. And finally everything worked some minutes after 17:00 (I had been afk for a few hours, and do not forget sleeping, so it took less than 16 hours ;)). It is awesome! The tablet is awesome! Krita is awesome! My drawing skills are awesome, ehh, not awesome! Long story short, for those of you wanting to use a Wacom tablet with openSuSE 11.3: • zypper ar “http://download.opensuse.org/repositories/X11:/XOrg/openSUSE_11.3/X11:XOrg.repo” • zypper dup • Make sure all xorg-x11 packages are now up to date • Install wacom-kmp-desktop (for the desktop kernel) and xorg-x11-driver-input-wacom from some repositories; have a look at http://software.opensuse.org • Alternatively visit http://linuxwacom.com and install the drivers from source (git://linuxwacom.git.sourceforge.net/gitroot/linuxwacom/xf86-input-wacom, git://linuxwacom.git.sourceforge.net/gitroot/linuxwacom/linuxwacom) • Reboot; everything should work now. kcm_tablet does not work for me; maybe it will magically work after some rebooting, but for now it does not detect the tablet. However, the standard configuration is okay and I can still use xsetwacom for configuring the device. I do not want to tell you about my attempts with UDBA graphics-driver installation and the long startup times of fglrx. My first work I have stored (the bamboo is a Krita default brush :D).
2018-08-21 06:06:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5204566121101379, "perplexity": 3018.716003202275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217970.87/warc/CC-MAIN-20180821053629-20180821073629-00579.warc.gz"}
https://journal.kib.ac.cn/EN/abstract/abstract30942.shtml
Plant Diversity ›› 1996, Vol. 18 ›› Issue (03): 1-3. • Articles • ### STUDY ON THE KARYOTYPES OF TWO SPECIES AND ONE VARIETY IN RANUNCULUS FROM CHINA LIAO Liang, XU Ling-Ling, FANG Liang 1. Institute of Biology, Jiujiang Teachers College, Jiangxi 332000 • Online: 1996-06-25 Published: 1996-06-25 Abstract: In the present paper, karyological studies were carried out on two species and one variety of Ranunculus L. from China. Their chromosome numbers and karyotypes are all reported here for the first time. Ranunculus japonicus Thunb. var. ternatifolius L. Liao shows two different karyotypes, i.e. a common type 2n = 2x = 14 = 6m+4sm+4st(2SAT) and a heterozygous type 2n = 2x = 14 = 6m+4sm+4st(1SAT). The former karyotype is characterised by a larger satellite. The variations of the latter karyotype are the occurrence of heteromorphous homologous chromosomes in the second pair, and the sixth-pair SAT chromosome possesses only one satellite. The karyotype formula of Ranunculus dopsus DC. is 2n = 4x = 32 = 10m+8sm+12st(2SAT)+2t(2SAT). It consists of two different genomes: in one the satellite is far larger than the short arm, in the other the satellite is smaller than the short arm. The karyotype formula of Ranunculus silerifolius Levl. is 2n = 4x = 32 = 8m+6sm+18st(4SAT). It consists of four chromosome sets which are basically similar in karyotype. Key words: Ranunculus japonicus var. ternatifolius
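As a quick arithmetic cross-check of the karyotype formulas quoted above (my own check, not part of the paper): in a formula such as 2n = 4x = 32 = 10m+8sm+12st+2t, the per-type chromosome counts must sum to the diploid number 2n.

```python
# Verify that chromosome counts by type (m, sm, st, t) sum to 2n
# for each karyotype formula reported in the abstract.
def karyotype_sums_to(two_n, counts):
    return sum(counts.values()) == two_n

assert karyotype_sums_to(14, {"m": 6, "sm": 4, "st": 4})            # R. japonicus var. ternatifolius
assert karyotype_sums_to(32, {"m": 10, "sm": 8, "st": 12, "t": 2})  # "R. dopsus" (name as given)
assert karyotype_sums_to(32, {"m": 8, "sm": 6, "st": 18})           # R. silerifolius
```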
2023-02-01 02:40:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20546863973140717, "perplexity": 10897.073418945356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499899.9/warc/CC-MAIN-20230201013650-20230201043650-00376.warc.gz"}
https://dsp.stackexchange.com/questions/58024/question-about-sampling
Is it possible to represent an aperiodic signal using an array of N samples? I am confused about this because you obviously have to window a function in the time domain to sample it. Now what happens with aperiodic signals? • is this really a question about the DFT and the inherent periodicity that comes from that? – robert bristow-johnson Apr 30 at 22:08 • Hi, no. It's just a question about an aperiodic signal of infinite duration, and how we can sample it N times. – Nash Brewer Apr 30 at 22:10 Whether the signal is periodic or not is largely irrelevant to sampling. What matters is the bandwidth. See this answer: https://dsp.stackexchange.com/a/10339/11256 There are two main cases: • If the signal is aperiodic and of infinite duration (for example, Gaussian noise), then $N$ samples will always be insufficient for reconstruction. • If the signal is aperiodic and of finite duration, then in theory its bandwidth is infinite and it cannot be reconstructed from any set of samples. However, many practical signals tend to zero as $t \rightarrow \pm\infty$, and their spectra also tend to zero as $f \rightarrow \pm\infty$. In this case, you can obtain a reconstructed signal that is very close to the original, and for engineering applications this is more than enough. • thank you. I am interested in the proof that aperiodic, finite-duration signals have infinite bandwidth; if you have the time, can you suggest where I can find a proof for this? – Nash Brewer Apr 30 at 22:09 • is this a continuous-time signal that is finite in duration? or is the signal already sampled but the number of non-zero samples is finite? – robert bristow-johnson Apr 30 at 22:12 • Hi, this is just a continuous signal that is aperiodic and infinite in time. From what I learned we have to take a finite window of this signal to sample it, so my question was how we can sample such a signal. The answer provided above was that for an aperiodic signal, N samples will always be insufficient for reconstruction.
In the case of an aperiodic energy signal, the bandwidth will be infinite, but it can be reconstructed as an approximation. This is what I understand now. – Nash Brewer Apr 30 at 22:19 • @ArmaArmedAssualt Here's an intuitive explanation of why you can't reconstruct a finite-duration signal from its samples: Say you sample from $t=-\infty$ to $\infty$ at sampling rate $f_s$. The signal starts at an unknown time. When the signal starts, you'll have a sample that is zero followed by another that is not zero. However, you don't know when the signal started; it could have been at any time between the two samples. Without that knowledge, exact reconstruction is obviously impossible. – MBaz May 1 at 1:28
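The "very close to the original" case in the answer can be illustrated numerically (my own sketch with an assumed setup, not from the thread): a Gaussian pulse is aperiodic and strictly neither time- nor band-limited, but its spectrum decays so fast that Shannon sinc interpolation from a finite set of samples is essentially exact.

```python
import numpy as np

fs = 8.0                         # sampling rate (samples per unit time)
n = np.arange(-64, 65)           # a finite set of sample indices
x = lambda t: np.exp(-t**2)      # aperiodic signal, non-zero for all t
samples = x(n / fs)

def reconstruct(t):
    """Shannon sinc interpolation from the finite sample set:
    x_hat(t) = sum_n x(n/fs) * sinc(fs*t - n), np.sinc being normalized."""
    return np.sum(samples * np.sinc(fs * t - n))

# Evaluate at an off-grid instant: the error is tiny because both the
# out-of-band spectral energy and the truncated tail samples are negligible.
t_test = 0.3
err = abs(reconstruct(t_test) - x(t_test))
assert err < 1e-6
```

The same experiment with a rectangular pulse instead of a Gaussian would show a visibly larger error, matching the "finite duration ⇒ infinite bandwidth" half of the answer.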
2019-08-18 13:58:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7788390517234802, "perplexity": 421.21976492777054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00446.warc.gz"}
http://www.gradesaver.com/textbooks/math/algebra/college-algebra-7th-edition/chapter-1-equations-and-graphs-section-1-8-solving-absolute-value-equations-and-inequalities-1-8-exercises-page-153/30
## College Algebra 7th Edition $x=-4$ $|x+4|\leq 0$ (The inequality can hold only with equality, since an absolute value cannot be negative.) $|x+4|=0$ $x+4=\pm 0=0$ $x=0-4=-4$
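The solution above can be sanity-checked numerically (my own check, not from the textbook): over any range of integers, $x=-4$ is the only value with $|x+4|\leq 0$.

```python
# Brute-force check that |x + 4| <= 0 forces x = -4:
# an absolute value is never negative, so the inequality means equality.
solutions = [x for x in range(-10, 11) if abs(x + 4) <= 0]
assert solutions == [-4]
```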
2018-04-21 13:48:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6384376883506775, "perplexity": 1611.0561053009521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945222.55/warc/CC-MAIN-20180421125711-20180421145711-00048.warc.gz"}
https://www.lmfdb.org/EllipticCurve/Q/55440/eh/
Properties Label 55440.eh Number of curves 4 Conductor 55440 CM no Rank 1 Graph Related objects Show commands for: SageMath
sage: E = EllipticCurve("55440.eh1")
sage: E.isogeny_class()
Elliptic curves in class 55440.eh
sage: E.isogeny_class().curves
LMFDB label | Cremona label | Weierstrass coefficients | Torsion structure | Modular degree | Optimality
55440.eh1 | 55440en4 | [0, 0, 0, -4750707, 3985475506] | [2] | 1179648 |
55440.eh2 | 55440en2 | [0, 0, 0, -305427, 58515154] | [2, 2] | 589824 |
55440.eh3 | 55440en1 | [0, 0, 0, -72147, -6476654] | [2] | 294912 | $$\Gamma_0(N)$$-optimal
55440.eh4 | 55440en3 | [0, 0, 0, 407373, 291030514] | [2] | 1179648 |
Rank
sage: E.rank()
The elliptic curves in class 55440.eh have rank $$1$$.
Modular form 55440.2.a.eh
sage: E.q_eigenform(10)
$$q + q^{5} + q^{7} - q^{11} + 2q^{13} - 2q^{17} - 4q^{19} + O(q^{20})$$
Isogeny matrix
sage: E.isogeny_class().matrix()
The $$i,j$$ entry is the smallest degree of a cyclic isogeny between the $$i$$-th and $$j$$-th curve in the isogeny class, in the LMFDB numbering.
$$\left(\begin{array}{rrrr} 1 & 2 & 4 & 4 \\ 2 & 1 & 2 & 2 \\ 4 & 2 & 1 & 4 \\ 4 & 2 & 4 & 1 \end{array}\right)$$
Isogeny graph
sage: E.isogeny_graph().plot(edge_labels=True)
The vertices are labelled with LMFDB labels.
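A small consistency check on the data above (my own, using the standard discriminant formula rather than an LMFDB computation): the curves here are in short Weierstrass form $y^2 = x^3 + ax + b$, whose discriminant is $-16(4a^3 + 27b^2)$, and every prime dividing the conductor $55440 = 2^4\cdot 3^2\cdot 5\cdot 7\cdot 11$ must divide the discriminant.

```python
# Discriminant of a short Weierstrass model y^2 = x^3 + a*x + b.
def short_weierstrass_disc(a, b):
    return -16 * (4 * a**3 + 27 * b**2)

# Curve 55440.eh3, coefficients [0, 0, 0, -72147, -6476654] from the table.
disc = short_weierstrass_disc(-72147, -6476654)

# The primes of bad reduction (those dividing the conductor 55440)
# must all divide the discriminant.
assert all(disc % p == 0 for p in (2, 3, 5, 7, 11))
```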
2020-04-05 10:55:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9337833523750305, "perplexity": 7596.397290329542}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371576284.74/warc/CC-MAIN-20200405084121-20200405114121-00400.warc.gz"}
http://math.stackexchange.com/questions/175474/practical-question-about-developable-surface
Here is my question: Given two regular plane curves (let's say $\mathcal{C}^1$) in 3D space, is there always a developable surface which contains both curves? Thanks, anders - Sure. If you have parametric equations for both your curves, and the parameter ranges for both curves are the same, you can then consider the surface drawn out by a moving straight line whose endpoints are on the two curves. – J. M. Jul 26 '12 at 16:44 Do you really want the surface to contain the curves, or to have them as parts of the boundary? Are the curves disjoint? Is the surface required to be embedded (without self-intersections)? – user31373 Jul 26 '12 at 21:16 @J.M.: It seems to me you are proving that you can build a ruled surface from two curves, but nothing proves that such a surface is developable. – anderstood Oct 13 '13 at 16:39
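J.M.'s construction can be sketched concretely (my own code, with two assumed example curves): the ruled surface is the union of the straight segments joining $c_1(t)$ to $c_2(t)$. As anderstood's comment notes, this only guarantees a ruled surface; developability (zero Gaussian curvature) is a further condition that this construction does not ensure in general.

```python
import numpy as np

# Two example curves sharing the parameter t (assumed for illustration):
def c1(t):
    return np.array([np.cos(t), np.sin(t), 0.0])   # circle in the plane z = 0

def c2(t):
    return np.array([np.cos(t), np.sin(t), 1.0])   # same circle lifted to z = 1

def ruled_surface(t, s):
    """Point at ruling parameter s in [0, 1] on the segment c1(t) -> c2(t)."""
    return (1.0 - s) * c1(t) + s * c2(t)

# s = 0 and s = 1 recover the two boundary curves, so the surface
# contains both of them.
t = 0.7
assert np.allclose(ruled_surface(t, 0.0), c1(t))
assert np.allclose(ruled_surface(t, 1.0), c2(t))
```

This particular pair happens to sweep out a cylinder, which is developable; pairing curves whose rulings twist (e.g. two skew circles) generally yields a non-developable ruled surface, which is the gap anderstood points out.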
2016-05-25 09:38:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.754531979560852, "perplexity": 372.51076019098224}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274324.89/warc/CC-MAIN-20160524002114-00075-ip-10-185-217-139.ec2.internal.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?jrnid=sm&wshow=issue&year=1976&volume=142&volume_alt=100&issue=1&issue_alt=5&option_lang=eng
Mat. Sb., 1976, Volume 100(142), Issue 1(5). Contents:
The Twenty-fifth Congress of the Communist Party of the Soviet Union, 3
On the boundary values of solutions of second-order elliptic equations, V. P. Mikhailov, 5
On a countable base and the conjugate class of a topological space, P. L. Ul'yanov, 14
On the completeness of derived chains, G. V. Radzievskii, 37
Uniformization of algebraic curves by discrete arithmetic subgroups of $PGL_2(k_w)$ with compact quotients, I. V. Cherednik, 59
Some estimates for power quasipolynomials, I. B. Simonenko, 89
On functionals with an infinite number of critical values, V. S. Klimov, 102
On unitary representations of the group $C_0^\infty(X, G)$, $G=SU_2$, R. S. Ismagilov, 117
Independent systems of defining relations for a free periodic group of odd exponent, V. L. Shirvanyan, 132
Rational approximations of holomorphic functions with singularities of finite order, E. M. Chirka, 137
Estimates of the norm of the holomorphic components of functions meromorphic in domains with a smooth boundary, L. D. Grigoryan, 156
2020-07-04 03:30:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3101401627063751, "perplexity": 2731.569923686094}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883961.50/warc/CC-MAIN-20200704011041-20200704041041-00286.warc.gz"}
http://claesjohnson.blogspot.com/2011/03/what-is-2nd-law-of-radiation-2.html
## Monday, 7 March 2011 ### What is the 2nd Law of Radiation 2? The mysterious unphysical fictitious effect of "backradiation" underlying climate alarmism. In the previous post I asked for a 2nd Law of Radiation, but could not find any such law in the literature. The first place to look for a 2nd Law is of course Stefan-Boltzmann's Law (SB) • Q = sigma T^4 supposedly giving the heat energy radiated by a blackbody of temperature T, where sigma is Stefan-Boltzmann's constant. In this form SB does not contain any 2nd Law, because there is just one body, and a 2nd Law is about transfer of heat energy between different bodies. SB is derived by summation over frequencies from Planck's Law (P) • R_f = gamma f^2 · C(f) where f is wave frequency and C(f) represents an exponential cut-off to zero for large frequencies f, scaling with T. The P and SB laws are derived for a blackbody surrounded by a medium of temperature 0 K. So what do we find as concerns radiative heat transfer between two bodies, body 1 and body 2, of temperatures T1 and T2 (above 0 K)? Well, Planck's derivation only concerns one body radiating into a surrounding medium at 0 K, and thus says nothing about two bodies. But in the engineering literature you find the formulas • Q12 = sigma T1^4 - sigma T2^4, Q21 = sigma T2^4 - sigma T1^4 = -Q12 for the heat transfer Q12 from body 1 to body 2 and the heat transfer Q21 from body 2 to body 1, with opposite signs. For example, if T1 is larger than T2, then Q12 is positive and Q21 is negative. But you do not find a derivation of these formulas along the lines of Planck's derivation of his one-body law based on statistical mechanics. The engineering formula for the heat exchange between two bodies appears to be ad hoc and as such mysterious. Anything ad hoc is mysterious.
The danger of using mysterious ad hoc formulas is that they invite your defenseless mind into mysterious physics, as follows: If we stare intensively at the formula • Q12 = sigma T1^4 - sigma T2^4 for a long enough time, assuming that T1 (hot) is larger than T2 (cold), then we could come to envision heat transfer in two directions: a major amount sigma T1^4 transferred from hot to cold and a minor amount sigma T2^4 transferred from cold to hot. We could be (mis)led to the mysterious concept of "backradiation" underlying climate alarmism, with a cold atmosphere being capable of transferring heat energy to a warm Earth surface. But this is all deception: The formula Q12 = sigma T1^4 - sigma T2^4 is incorrectly stated, and should be stated as • Q12 = sigma (T1^4 - T2^4) where T1 is larger than T2. In this form there is only a net transfer from hot to cold, and there is no backradiation. This formula can be seen as a 2nd Law of Radiation. A theoretical derivation without statistics is given in my article Computational Blackbody Radiation in the book Slaying the Sky Dragon, further expounded in the upcoming book Mathematical Physics of Blackbody Radiation. How come, then, that the formula is stated in the form Q12 = sigma T1^4 - sigma T2^4, suggesting mysterious unphysical effects to the defenseless mind? Because it is an ad hoc formula without theoretical derivation, and for such a formula anything is possible as imagination. Interpreting mathematical formulas without theoretical derivation opens the door to a form of scientific mathematical mysticism widely practiced in the modern physics of quantum mechanics and relativity theory: The idea is to jot down a mathematical equation (formula) and then start to interpret it as physics. This is to be compared with classical rational physics, where you first envision some physics and then describe it by mathematical equations.
Examples of modern-physics ad hoc mysticism: • The Lorentz transformations of special relativity (weird unphysical physics). • Schrödinger's linear multidimensional wave function (statistical unphysical interpretation). Example of rational classical physics: • Maxwell's equations, with electric and magnetic fields satisfying similar equations allowing electromagnetic waves. In the next post I continue with an analysis of the mysticism of modern physics.
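The algebra of the two ways of writing the heat-exchange formula can be checked numerically (my own illustration; temperatures are assumed example values, and the code takes no position on the physical interpretation debated above): sigma T1^4 - sigma T2^4 and sigma (T1^4 - T2^4) are the same quantity, positive exactly when body 1 is hotter.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_flux(t1, t2):
    """Net radiative heat flux (W/m^2) from body 1 to body 2,
    in the factored form Q12 = sigma * (T1^4 - T2^4)."""
    return SIGMA * (t1**4 - t2**4)

t1, t2 = 300.0, 250.0                       # example temperatures (K)
q12 = net_radiative_flux(t1, t2)

assert q12 > 0                              # net flow from hot to cold
assert net_radiative_flux(t2, t1) == -q12   # Q21 = -Q12
# Both written forms agree up to rounding:
assert math.isclose(SIGMA * t1**4 - SIGMA * t2**4, q12)
```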
2017-11-25 09:42:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8241497278213501, "perplexity": 1644.6473300535226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809746.91/warc/CC-MAIN-20171125090503-20171125110503-00584.warc.gz"}
https://dolafapawojapawe.sunshinesteaming.com/extensions-of-results-concerning-the-derivatives-of-an-algebraic-function-of-a-complex-variable-book-4464mo.php
# Extensions of results concerning the derivatives of an algebraic function of a complex variable ## by Samuel Beatty Published by The University Library: pub. by the librarian in [Toronto]. Written in English. Subjects: • Algebraic functions. Edition Notes: Cover-title. Classifications: Series University of Toronto studies, no. 1; LC Classifications QA1 .T8 no. 1; Pagination 24 p.; Number of Pages 24; Open Library OL6581143M; LC Control Number 15022692; OCLC/WorldCat 19321324. In mathematical analysis, and applications in geometry, applied mathematics, engineering, natural sciences, and economics, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. In a similar vein, the Taylor series for the real exponential and trigonometric functions show how to extend these definitions to include complex numbers: just use the same series but replace the real variable x by the complex variable z. This idea leads to complex-analytic functions as an extension of real-analytic ones. Table of Contents: Preface v; 1 The Complex Plane 1; Complex Arithmetic 1; The Real Numbers. Chapter 2. Functions of a Complex Variable: 5. The Concept of a Most General (Single-valued) Function of a Complex Variable; 6. Continuity and Differentiability; 7. The Cauchy-Riemann Differential Equations. Section II. Integral Theorems. Chapter 3. The Integral of a Continuous Function; 8.
Definition of the Definite Integral. NDSolve gives results in terms of InterpolatingFunction objects. NDSolve[eqns, u[x], {x, xmin, xmax}] gives solutions for u[x] rather than for the function u itself. Differential equations must be stated in terms of derivatives such as u'[x], obtained with D, not total derivatives obtained with Dt. These series involve samples of functions and their partial derivatives. In the case of functions of one variable, f has an extension onto C^n to an entire function. ### Extensions of results concerning the derivatives of an algebraic function of a complex variable by Samuel Beatty Dissertation: Extensions of Results Concerning the Derivatives of an Algebraic Function of a Complex Variable. Advisor: John Charles Fields. Students: Name School Year Descendants; Fisher, Mary: University of Toronto: There are some standard results with algebraic functions, and they are used as formulas in differential calculus to find the differentiation of algebraic functions. Derivative of a Constant: the derivative of any constant with respect to a variable is equal to zero, $\dfrac{d}{dx}(c) = 0$. For the derivative of a function f: C → C, the algebraic definition and manipulation of derivatives follows the pattern of the results for real-valued functions in Chapter 3.
4. The Calculus of Complex Functions. Complex Algebra: the angle θ (= tan⁻¹(y/x)) is labeled the argument or phase of z. As a result that is suggested (but not rigorously proved) in that section, we have the very useful polar representation $z = re^{i\theta}$. In order to prove this identity, we use $i^3 = -i$, $i^4 = 1$, etc. in the Taylor expansion of the exponential and trigonometric functions and separate the even and odd terms. 6. Complex Derivatives. We have studied functions that take real inputs and give complex outputs (e.g., complex solutions to the damped harmonic oscillator, which are complex functions of time). For such functions, the derivative with respect to its real input is much like the derivative of a real function. There is a standard result in complex analysis which shows that $\mathbb{C}$ is algebraically closed, and one can then show that every field has an algebraically closed extension field. Definition: An extension field E of a field F is an algebraic extension of F if every element in E is algebraic over F. Example: $\mathbb{Q}(\sqrt{2})$ and $\mathbb{Q}(\sqrt{3})$ are algebraic extensions of $\mathbb{Q}$; $\mathbb{R}$ is not an algebraic extension of $\mathbb{Q}$. Functions of One Complex Variable, Third Edition, Lars V. Ahlfors, Professor of Mathematics, Emeritus: Definition and Properties of Algebraic Functions; Behavior at the Critical Points; Picard's Theorem. This is no more than an introduction to the basic methods and results of complex function theory. Now consider a complex-valued function f of a complex variable. We say that f is continuous at $z_0$ if, given any $\varepsilon > 0$, there exists a $\delta > 0$ such that $|f(z) - f(z_0)| < \varepsilon$ whenever $|z - z_0| < \delta$. I know the formal definition of a derivative of a complex valued function, and how to compute it (same as how I would for real-valued functions), but after doing some problems, I feel as if I could. Also, differential equations of infinite order play a role in investigating theta functions. The chapter discusses some results concerning the existence of local holomorphic solutions of a differential equation of infinite order $Pu = f$, f being a given holomorphic function. Various theorems are also proven. 
This book is a revision of the seventh edition. That edition has served, just as the earlier ones did, as a textbook for a one-term introductory course in the theory and application of functions of a complex variable. This new edition preserves the. The primary function is indicated on the key and the secondary function is displayed above it. Press [2nd] to activate the secondary function of a given key. Notice that 2nd appears as an indicator on the screen. To cancel it before entering data, press [2nd] again. For example, [2nd] [√] 25 gives the result 5. The Derivative. The derivative of a function is the limit of the ratio of the incremental change of the dependent variable to the incremental change of the independent variable, as the change of the independent variable approaches zero. For the function y = f(x), the derivative is symbolized by y′ or dy/dx, where y is the dependent variable and x the independent variable. Section: Proof of Various Derivative Properties. In this section we're going to prove many of the various derivative facts, formulas and/or properties that we encountered in the early part of the Derivatives chapter. Not all of them will be proved here and some will only be proved for special cases, but at least you'll see that some of them aren't just pulled out of the air. Complex Variable Class Notes. 6. Holomorphic functions, Cauchy-Riemann equations, and harmonic functions. Definition: $f \in C^1(U)$ is holomorphic (analytic) if $\frac{\partial f}{\partial \bar{z}} = 0$ at every point of U. Remark: A polynomial is holomorphic if and only if it is a function of z alone. A method to approximate derivatives of real functions using complex variables, which avoids the subtractive cancellation errors inherent in the classical derivative approximations, is described. Numerical examples illustrating the power of the approximation are presented. 
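The complex-variable method described above is the complex-step approximation. A minimal Python sketch of the idea (my own code, not taken from the paper): for a real-analytic f, $f'(x) \approx \operatorname{Im}(f(x + ih))/h$, with no subtraction of nearly equal quantities.

```python
import cmath

# Complex-step derivative approximation: f'(x) ≈ Im(f(x + ih)) / h.
# Unlike the finite difference (f(x+h) - f(x)) / h, there is no
# subtraction of nearly equal numbers, so h can be made tiny
# (even 1e-30) without cancellation error.
def complex_step_derivative(f, x, h=1e-30):
    return f(x + 1j * h).imag / h

# Example: derivative of sin at 0 is cos(0) = 1
print(complex_step_derivative(cmath.sin, 0.0))  # → 1.0
```

Note that f must accept complex arguments, so library functions come from cmath rather than math.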
In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is, at every point of its domain, complex differentiable in a neighborhood of the point. The existence of a complex derivative in a neighbourhood is a very strong condition, for it implies that any holomorphic function is actually infinitely differentiable and equal, locally, to its own Taylor series. For a matrix ${\bf A}(z)$ whose entries are complex valued functions of a complex variable z, results are presented concerning derivatives of an eigenvector ${\bf x}(z)$ of ${\bf A}(z)$ associated with a simple eigenvalue $\lambda (z)$ when ${\bf x}(z)$ is restricted to satisfy a constraint of the form $\sigma ({\bf x}(z)) = 1$ where $\sigma$ is a rather arbitrary scaling function. A function field governs the abstract algebraic aspects of an algebraic curve. Before proceeding to the geometric aspects of algebraic curves in the next chapters, we present the basic facts on function fields. In particular, we concentrate on algebraic function fields of one variable and their extensions, including constant field extensions. The chapter reviews the definitions of algebraic and transcendental functions. A function f(x) is said to be algebraic if a polynomial P(x, y) in the two variables x, y can be found with the property that P(x, f(x)) = 0 for all x for which f(x) is defined. Functions that are not algebraic are called transcendental functions. In mathematics, precisely in the theory of functions of several complex variables, Hartogs's extension theorem is a statement about the singularities of holomorphic functions of several variables. Informally, it states that the support of the singularities of such functions cannot be compact, therefore the singular set of a function of several complex variables must 'go off to infinity' in some direction. 
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input. Formal definitions, first devised in the early 19th century, are given below. Informally, a function f assigns an output f(x) to every input x. We say the function has a limit L at an input p: this means f(x) gets closer and closer to L as x moves closer and closer to p. The derivative of a complex valued function f(x) = u(x) + iv(x) is defined by simply differentiating its real and imaginary parts: (10) f′(x) = u′(x) + iv′(x). Again, one finds that the sum, product and quotient rules also hold for complex valued functions. Theorem. If f, g: I → $\mathbb{C}$ are complex valued functions which are differentiable.
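The component-wise rule in equation (10) is easy to check concretely. A small sketch of mine (the function choice is illustrative, not from the text): for $f(x) = e^{ix} = \cos x + i \sin x$, differentiating the real and imaginary parts gives $f'(x) = -\sin x + i\cos x = i f(x)$.

```python
import math

# A complex-valued function of a REAL variable, f(x) = u(x) + i v(x),
# is differentiated component-wise: f'(x) = u'(x) + i v'(x).
# Example: f(x) = e^{ix} = cos x + i sin x, whose derivative
# -sin x + i cos x equals i*f(x).
def f(x):
    return complex(math.cos(x), math.sin(x))   # e^{ix}

def f_prime(x):
    # u'(x) = -sin x,  v'(x) = cos x
    return complex(-math.sin(x), math.cos(x))

x = 0.7
print(f_prime(x), 1j * f(x))  # the two values coincide
```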
http://ndl.iitkgp.ac.in/document/RWhvdjcwams4cU5LZVhWTDNKbXVDMVgrang0b2R2VXdwRXk1TW5DU2pQeVpyRURkc3E1UU5jbDZ6cEQxanhOVQ
### Wave packet dynamics in hole Luttinger systems Access Restriction: Open Author: Demikhovskii, V. Ya ♦ Maksimova, G. M. ♦ Frolova, E. V. Source: arXiv.org Content type: Text File Format: PDF Date of Submission: 2009-12-02 Language: English Subject Domain (in DDC): Computer science, information & general works ♦ Natural sciences & mathematics ♦ Physics Subject Keyword: Condensed Matter - Mesoscale and Nanoscale Physics ♦ physics:cond-mat Abstract: For hole systems with an effective spin 3/2 we analyzed analytically and numerically the evolution of wave packets with different initial polarizations. The dynamics of such systems is determined by the $4\times 4$ Luttinger Hamiltonian. We work in the space of arbitrary superpositions of light- and heavy-hole states of the "one-particle system". For 2D packets we obtained the analytical solution for the components of the wave function and analyzed the space-time dependence of probability densities as well as angular momentum densities. Depending on the value of the parameter $a = k_0 d$ ($k_0$ is the average momentum vector and $d$ is the packet width), two scenarios of evolution are realized. For $a \gg 1$ the initial wave packet splits into two parts and the coordinates of the packet center experience transient oscillations or {\it Zitterbewegung} (ZB), as for other two-band systems. In the case when $a \ll 1$ the distribution of probability density at $t > 0$ remains almost cylindrically symmetric and ripples arise at the circumference of the wave packet. The ZB in this case is absent. We evaluated and visualized for different values of the parameter $a$ the space-time dependence of angular momentum densities, which have a multipole structure. It was shown that the average momentum components can precess in the absence of external or effective magnetic fields due to the interference of the light- and heavy-hole states. For localized initial states this precession has a transient character. 
Educational Use: Research · Learning Resource Type: Article · Page Count: 9
https://www.rayshader.com/reference/render_movie.html
Renders a movie using the av or gifski packages. Moves the camera around a 3D visualization using either a standard orbit, or accepts vectors listing user-defined values for each camera parameter. If the latter, the values must be equal in length to frames (or of length 1, in which case the value will be fixed). render_movie( filename, type = "orbit", frames = 360, fps = 30, phi = 30, theta = 0, zoom = NULL, fov = NULL, title_text = NULL, title_offset = c(20, 20), title_color = "black", title_size = 30, title_font = "sans", title_bar_color = NULL, title_bar_alpha = 0.5, image_overlay = NULL, vignette = FALSE, vignette_color = "black", vignette_radius = 1.3, title_position = "northwest", audio = NULL, progbar = interactive(), ... ) ## Arguments filename Filename. If not appended with .mp4, it will be appended automatically. If the file extension is gif, the gifski package will be used to generate the animation. type Default orbit, which orbits the 3D object at the user-set camera settings phi, zoom, and fov. Other options are oscillate (sine wave around theta value, covering 90 degrees), or custom (which uses the values from the theta, phi, zoom, and fov vectors passed in by the user). frames Default 360. Number of frames to render. fps Default 30. Frames per second. Recommend either 30 or 60 for web. phi Defaults to the current view. Azimuth values, in degrees. theta Defaults to the current view. Theta values, in degrees. zoom Defaults to the current view. Zoom value, between 0 and 1. fov Defaults to the current view. Field of view values, in degrees. title_text Default NULL. Text. Adds a title to the movie, using magick::image_annotate. title_offset Default c(20,20). Distance from the top-left (default, gravity direction in image_annotate) corner to offset the title. title_color Default black. Font color. title_size Default 30. Font size in pixels. title_font Default sans. 
String with font family such as "sans", "mono", "serif", "Times", "Helvetica", "Trebuchet", "Georgia", "Palatino" or "Comic Sans". title_bar_color Default NULL. If a color, this will create a colored bar under the title. title_bar_alpha Default 0.5. Transparency of the title bar. image_overlay Default NULL. Either a string indicating the location of a png image to overlay over the whole movie (transparency included), or a 4-layer RGBA array. This image will be resized to the dimension of the movie if it does not match exactly. vignette Default FALSE. If TRUE or numeric, a camera vignetting effect will be added to the image. 1 is the darkest vignetting, while 0 is no vignetting. If vignette is a length-2 vector, the second entry will control the blurriness of the vignette effect. vignette_color Default "black". Color of the vignette. vignette_radius Default 1.3. Radius of the vignette, as a proportion of the image dimensions. title_position Default northwest. Position of the title. audio Default NULL. Optional file with audio to add to the video. progbar Default TRUE if interactive, FALSE otherwise. If FALSE, turns off progress bar. Will display a progress bar when adding an overlay or title. ... Additional parameters to pass to magick::image_annotate. ## Examples if(interactive()) { filename_movie = tempfile() #By default, the function produces a 12 second orbit at 30 frames per second, at 30 degrees azimuth. # \donttest{ montereybay %>% sphere_shade(texture="imhof1") %>% plot_3d(montereybay, zscale=50, water = TRUE, watercolor="imhof1", waterlinecolor="white", waterlinealpha=0.5) #Un-comment the following to run: #render_movie(filename = filename_movie) # } filename_movie = tempfile() #You can change to an oscillating orbit. The magnification is increased and azimuth angle set to 30. #A title has also been added using the title_text argument. 
# \donttest{ #Un-comment the following to run: #render_movie(filename = filename_movie, type = "oscillate", # frames = 60, phi = 30, zoom = 0.8, theta = -90, # title_text = "Monterey Bay: Oscillating") # } filename_movie = tempfile() #Finally, you can pass your own set of values to the #camera parameters as a vector with type = "custom". phivechalf = 30 + 60 * 1/(1 + exp(seq(-7, 20, length.out = 180)/2)) phivecfull = c(phivechalf, rev(phivechalf)) thetavec = -90 + 45 * sin(seq(0,359,length.out = 360) * pi/180) zoomvec = 0.45 + 0.2 * 1/(1 + exp(seq(-5, 20, length.out = 180))) zoomvecfull = c(zoomvec, rev(zoomvec)) # \donttest{ #Un-comment the following to run #render_movie(filename = filename_movie, type = "custom", # frames = 360, phi = phivecfull, zoom = zoomvecfull, theta = thetavec) rgl::rgl.close() # } }
https://socratic.org/questions/5a2eb9c9b72cff567e6786af#538805
# Question #786af Jan 21, 2018 $\frac{1}{4}\left(4a^3 + 6a^2 + 4a + 1\right)$ #### Explanation: I'm interpreting the question as: $\lim_{n \to \infty} h\left((a + 0h)^3 + (a + 1h)^3 + \cdots + (a + (n-1)h)^3\right) = \lim_{n \to \infty} \sum_{k=0}^{n-1} h \, f(a + kh)$ where $f(x) = x^3$ and $h = \frac{(a+1) - a}{n} = \frac{1}{n}$ (so $h \to 0$ as $n \to \infty$). This is a left Riemann sum for the function $f(x) = x^3$ on the interval from $a$ to $a + 1$. We can therefore evaluate the limit by evaluating the equivalent definite integral: $\int_a^{a+1} x^3 \, dx = \left[\frac{1}{4} x^4\right]_a^{a+1} = \frac{1}{4}\left((a+1)^4 - a^4\right)$ $= \frac{1}{4}\left(a^4 + 4a^3 + 6a^2 + 4a + 1 - a^4\right)$ $= \frac{1}{4}\left(4a^3 + 6a^2 + 4a + 1\right)$
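The answer can be sanity-checked numerically: the left Riemann sum with a large number of subintervals should approach $\frac{1}{4}\left((a+1)^4 - a^4\right)$. A quick Python check of mine (not part of the original answer):

```python
# Numerical check: the left Riemann sum of x^3 on [a, a+1] with n
# subintervals approaches ((a+1)^4 - a^4) / 4 as n grows.
def left_riemann_sum(a, n):
    h = 1.0 / n  # subinterval width on [a, a+1]
    return sum(h * (a + k * h) ** 3 for k in range(n))

def closed_form(a):
    return ((a + 1) ** 4 - a ** 4) / 4

a = 2.0
print(left_riemann_sum(a, 100_000), closed_form(a))  # ≈ 16.2499, 16.25
```

The left sum slightly underestimates here since $x^3$ is increasing on the interval; the gap shrinks like $1/n$.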
https://blender.stackexchange.com/questions/117151/how-can-i-connect-many-faces-across-many-vertices/117152
How can I connect many faces across many vertices? I have these vertices across from each other and I want to connect many faces instead of just one big face (because that makes it look weird). • As there are enough vertices in order to form quads, you can use the F2 addon included in Blender by default – Mr Zak Aug 29 '18 at 0:11 If you select both the edge loops, press W to bring up the Specials menu and then select Bridge Edge Loops or press E. This will draw faces between the edge loops. If they have the same number of verts, the faces will be quads.
https://www.studysmarter.us/textbooks/physics/matter-interactions-4th-edition/the-fundamental-interactions/37p-two-thin-hollow-plastic-spheres-about-the-size-of-a-ping/
37P | Expert-verified | Found in: Page 125 ### Matter & Interactions Book edition: 4th edition · Author(s): Ruth W. Chabay, Bruce A. Sherwood · Pages: 1135 · ISBN: 9781118875865 # Two thin hollow plastic spheres, about the size of a ping-pong ball, with masses ${m}_{1} = {m}_{2} = 2\times{10}^{-3}\ \mathrm{kg}$, have been rubbed with wool. Sphere 1 has a charge of ${q}_{1} = -2\times{10}^{-9}\ \mathrm{C}$ and is at location $\left(0.50, -0.20, 0\right)$. Sphere 2 has a charge of ${q}_{2} = -4\times{10}^{-9}\ \mathrm{C}$ and is at location $\left(-0.40, 0.40, 0\right)$. It will be useful to draw a diagram of the situation, including the relevant vectors. a) What is the relative position vector $\vec{r}$ pointing from $q_1$ to $q_2$? b) What is the distance between $q_1$ and $q_2$? c) What is the unit vector $\hat{r}$ in the direction of $\vec{r}$? d) What is the magnitude of the gravitational force exerted on $q_2$ by $q_1$? e) What is the (vector) gravitational force exerted on $q_2$ by $q_1$? f) What is the magnitude of the electric force exerted on $q_2$ by $q_1$? g) What is the (vector) electric force exerted on $q_2$ by $q_1$? h) What is the ratio of the magnitude of the electric force to the magnitude of the gravitational force? i) If the two masses were four times farther apart (that is, if the distance between the masses were $4|\vec{r}|$), what would the ratio of the magnitude of the electric force to the magnitude of the gravitational force be? 
• a) The relative position vector is $\left(-0.90\mathrm{m}\right)\stackrel{^}{\mathrm{i}}+\left(+0.60\mathrm{m}\right)\stackrel{^}{\mathrm{j}}+0\stackrel{^}{\mathrm{k}}$. • b) the distance between the points ${\mathrm{q}}_{1}\mathrm{and}{\mathrm{q}}_{2}$ is $1.081\mathrm{m}$. • c) the unit vector is $\left(-0.832\mathrm{m}\right)\stackrel{^}{\mathrm{i}}+\left(0.555\mathrm{m}\right)\stackrel{^}{\mathrm{j}}$. • d) the magnitude of the gravitational force is $2.283×{10}^{-16}\mathrm{N}$. • e) the (vector) gravitational force exerted on ${\mathrm{q}}_{2}\mathrm{by}{\mathrm{q}}_{1}$is $\left(-1.899×{10}^{-16}\mathrm{N}\right)\stackrel{^}{\mathrm{i}}+\left(1.266×{10}^{-16}\mathrm{N}\right)\stackrel{^}{\mathrm{j}}$. • f) the magnitude of the electric force exerted on ${\mathrm{q}}_{2}\mathrm{by}{\mathrm{q}}_{1}$ is $6.154×{10}^{-8}\mathrm{N}$. • g) the (vector) electric force exerted on ${\mathrm{q}}_{2}\mathrm{by}{\mathrm{q}}_{1}$ is role="math" localid="1661249258040" $\left(5.12×{10}^{-9}\mathrm{N}\right)\stackrel{^}{\mathrm{i}}+\left(3.415×{10}^{-9}\mathrm{N}\right)\stackrel{^}{\mathrm{j}}$. • h) the ratio of the magnitude of the electric force to the magnitude of the gravitational force is $2.69×{10}^{8}$. • i) The ratio of the magnitude of the electric force and the gravitational force is $2.69×{10}^{8}$. See the step by step solution ## Step 1: Identification of the given data The given data can be listed below as: • The mass of the hollow plastic spheres is $2×{10}^{-3}\mathrm{kg}$. • The charge of sphere 1 is $-2×{10}^{-9}\mathrm{C}$. • The first sphere is at the location $\left(0.50,-0.20,0\right)$. • The charge of sphere 2 is $-4×{10}^{-9}\mathrm{C}$. • The first sphere is at the location $\left(-0.40,0.40,0\right)$. 
## Step 2: Significance of the Newton’s gravitational NS Coulomb’s law in identifying the forces Newton’s gravitational law states that the particle in the field of another particle feels a force that is directly proportional to the product of the mass and inversely proportional to the square of the distance. Coulomb’s law states that the unlike charges attract and like charges repel each other. The equation of the gravitational and Coulomb’s equation give the gravitational and the electric force along with the position, distance, unit vector, and the ratio of the gravitational and electric force. ## Step 3: Determination of the gravitational and the electric force along with the position and the unit vector and the ratio of the electric and gravitational forces The free-body diagram of the two particles and the vectors have been provided below. a) According to Newton’s laws, the position vector for the sphere 1 can be expressed as: $\stackrel{\to }{{\mathrm{r}}_{1}}=\left(0.50\mathrm{m}\right)\stackrel{^}{\mathrm{i}}-\left(0.20\mathrm{m}\right)\stackrel{^}{\mathrm{j}}+\left(0\right)\stackrel{^}{\mathrm{k}}$ Similarly, the position vector for the sphere 2 can be expressed as: $\stackrel{\to }{{\mathrm{r}}_{2}}=-\left(0.40\mathrm{m}\right)\stackrel{^}{\mathrm{i}}+\left(0.40\mathrm{m}\right)\stackrel{^}{\mathrm{j}}+\left(0\mathrm{m}\right)\stackrel{^}{\mathrm{k}}$ Hence, the position vector from is described as: $\stackrel{\to }{\mathrm{r}}=\stackrel{\to }{{\mathrm{r}}_{2}}-\stackrel{\to }{{\mathrm{r}}_{1}}\phantom{\rule{0ex}{0ex}}\stackrel{\to }{\mathrm{r}}=\left(-\left(0.40\mathrm{m}\right)\stackrel{^}{\mathrm{i}}+\left(0.40\mathrm{m}\right)\stackrel{^}{\mathrm{j}}+\left(0\mathrm{m}\right)\stackrel{^}{\mathrm{k}}\right)-\left(\left(0.50\mathrm{m}\right)\stackrel{^}{\mathrm{i}}-\left(0.20\mathrm{m}\right)\stackrel{^}{\mathrm{j}}+\left(0\mathrm{m}\right)\stackrel{^}{\mathrm{k}}\right)\phantom{\rule{0ex}{0ex}}\stackrel{\to 
}{\mathrm{r}}=-\left(0.90\mathrm{m}\right)\stackrel{^}{\mathrm{i}}+\left(0.60\mathrm{m}\right)\stackrel{^}{\mathrm{j}}+\left(0\mathrm{m}\right)\stackrel{^}{\mathrm{k}}$ Thus, the relative position vector is $-\left(0.90\mathrm{m}\right)\stackrel{^}{\mathrm{i}}+\left(0.60\mathrm{m}\right)\stackrel{^}{\mathrm{j}}+\left(0\mathrm{m}\right)\stackrel{^}{\mathrm{k}}$. b) The position vector can help calculate the distance between the particles. Hence, the distance between the points ${\mathrm{q}}_{1}\mathrm{and}{\mathrm{q}}_{2}$ can be expressed as: $\left|\stackrel{\to }{\mathrm{r}}\right|=\sqrt{{\left({\mathrm{r}}_{\mathrm{x}}\right)}^{2}+{\left({\mathrm{r}}_{\mathrm{y}}\right)}^{2}+{\left({\mathrm{r}}_{\mathrm{z}}\right)}^{2}}\phantom{\rule{0ex}{0ex}}$ Substituting the values in the above equation, we get- $\left|\stackrel{\to }{\mathrm{r}}\right|=\sqrt{{\left(0.90\mathrm{m}\right)}^{2}+{\left(0.60\mathrm{m}\right)}^{2}+{\left(0\mathrm{m}\right)}^{2}}\phantom{\rule{0ex}{0ex}}\left|\stackrel{\to }{\mathrm{r}}\right|=1.081\mathrm{m}$ Thus, the distance between the points ${\mathrm{q}}_{1}\mathrm{and}{\mathrm{q}}_{2}$ is $1.081\mathrm{m}$. c) From Newton’s gravitational law, the unit vector can be expressed as: $\stackrel{^}{\mathrm{r}}=\frac{\stackrel{\to }{\mathrm{r}}}{\left|\stackrel{\to }{\mathrm{r}}\right|}$ Here, $\stackrel{^}{\mathrm{r}}$ is the unit vector, $\stackrel{\to }{\mathrm{r}}$ is the position vector, and $\left|\stackrel{\to }{\mathrm{r}}\right|$ is the distance between the points ${\mathrm{q}}_{1}\mathrm{and}{\mathrm{q}}_{2}$. 
Substituting the values in the above equation and using the value from the equation , we get- $\stackrel{^}{\mathrm{r}}=\frac{\left(-0.90\mathrm{m}\right)\stackrel{^}{\mathrm{i}}+\left(0.60\mathrm{m}\right)\stackrel{^}{\mathrm{j}}+\left(0\mathrm{m}\right)\stackrel{^}{\mathrm{k}}}{1.081\mathrm{m}}\phantom{\rule{0ex}{0ex}}\stackrel{^}{\mathrm{r}}=\left(-0.832\right)\stackrel{^}{\mathrm{i}}+\left(0.555\right)\stackrel{^}{\mathrm{j}}+\left(0\right)\stackrel{^}{\mathrm{k}}$ Thus, the unit vector is $\left(-0.832\right)\stackrel{^}{\mathrm{i}}+\left(0.555\right)\stackrel{^}{\mathrm{j}}$. d) From Newton’s gravitational law, the gravitational force exerted on the sphere q2 can be expressed as: $\mathrm{F}=\frac{{\mathrm{Gm}}_{1}{\mathrm{m}}_{2}}{{\mathrm{r}}^{2}}............\left(2\right)$ Here, F is the gravitational force, G is the gravitational constant, ${\mathrm{m}}_{1}\mathrm{and}{\mathrm{m}}_{2}$ are the mass of the spheres that are $2×{10}^{-3}\mathrm{kg}$, and r is the distance amongst them. Substituting the values in the above equation, we get- $\begin{array}{rcl}\mathrm{F}& =& \frac{\left(6.67×{10}^{-11}\mathrm{N}.{\mathrm{m}}^{2}/{\mathrm{kg}}^{2}\right)×{\left(2×{10}^{-3}\mathrm{kg}\right)}^{2}}{{\left(1.081\mathrm{m}\right)}^{2}}\\ & =& \left(\frac{6.67×{10}^{-11}×{\left(2×{10}^{-3}\right)}^{2}}{{\left(1.081\right)}^{2}}\right).\left(\frac{1\mathrm{N}.{\mathrm{m}}^{2}/{\mathrm{kg}}^{2}×1\mathrm{kg}}{1{\mathrm{m}}^{2}}\right)\\ & =& \frac{26.68×{10}^{-17}}{1.168561}.\left(1\mathrm{N}\right)\\ \mathrm{F}& =& 2.283×{10}^{-16}\mathrm{N}\end{array}$ Thus, the magnitude of the gravitational force is . e) From Newton’s gravitational law, the vector gravitational force exerted on ${\mathrm{q}}_{2}\mathrm{and}{\mathrm{q}}_{1}$ is expressed as: $\stackrel{\to }{\mathrm{F}}=\frac{{\mathrm{Gm}}_{1}{\mathrm{m}}_{2}}{{\mathrm{r}}^{2}}\stackrel{^}{\mathrm{r}}$ Here, $\stackrel{^}{\mathrm{r}}$ is the unit vector exerted on ${\mathrm{q}}_{2}\mathrm{by}{\mathrm{q}}_{1}$ . 
Substituting the values in the above equation, we get- $\stackrel{\to }{\mathrm{F}}=2.283×{10}^{-16}\mathrm{N}×\left(\left(-0.832\right)\stackrel{^}{\mathrm{i}}+\left(0.555\right)\stackrel{^}{\mathrm{j}}\right)\phantom{\rule{0ex}{0ex}}\stackrel{\to }{\mathrm{F}}=\left(-1.899×{10}^{-16}\mathrm{N}\right)\stackrel{^}{\mathrm{i}}+\left(1.266×{10}^{-16}\mathrm{N}\right)\stackrel{^}{\mathrm{j}}$ Thus, the (vector) gravitational force exerted on ${\mathrm{q}}_{2}\mathrm{by}{\mathrm{q}}_{1}$ is $\left(-1.899×{10}^{-16}\mathrm{N}\right)\stackrel{^}{\mathrm{i}}+\left(1.266×{10}^{-16}\mathrm{N}\right)\stackrel{^}{\mathrm{j}}$. f) According to Coulomb’s law, the magnitude of the electric force can be expressed as: $\mathrm{F}=\mathrm{k}\frac{{\mathrm{q}}_{1}{\mathrm{q}}_{2}}{{\mathrm{r}}^{2}}.......\left(3\right)$ Here, F is the magnitude of the electric force, k is the Coulomb’s constant that is about $8.99×{10}^{9}\mathrm{N}.{\mathrm{m}}^{2}/{\mathrm{C}}^{2}$, ${\mathrm{q}}_{1}\mathrm{and}{\mathrm{q}}_{2}$ are the charges of the masses, and r is the distance amongst them. Substituting the values in the above equation (3), we get- $\begin{array}{rcl}\mathrm{F}& =& 8.99×{10}^{9}\mathrm{N}.{\mathrm{m}}^{2}/{\mathrm{C}}^{2}×\frac{\left(-2×{10}^{-9}\mathrm{C}\right)×\left(-4×{10}^{-9}\mathrm{C}\right)}{{\left(1.081\mathrm{m}\right)}^{2}}\\ & =& \left(8.99×{10}^{9}×\frac{8×{10}^{-18}}{1.16586}\right).\left(1\mathrm{N}.{\mathrm{m}}^{2}/{\mathrm{C}}^{2}×\frac{1{\mathrm{C}}^{2}}{1{\mathrm{m}}^{2}}\right)\\ & =& \frac{7.192×{10}^{-8}}{1.168561}.\left(1\mathrm{N}\right)\\ \mathrm{F}& =& 6.154×{10}^{-8}\mathrm{N}\end{array}$ Thus, the magnitude of the electric force exerted on ${\mathrm{q}}_{2}\mathrm{by}{\mathrm{q}}_{1}$ is $6.154×{10}^{-8}\mathrm{N}$. 
g) From Coulomb’s law, the vector electric force exerted on ${\mathrm{q}}_{2}\mathrm{by}{\mathrm{q}}_{1}$ is expressed as: $\stackrel{\to }{\mathrm{F}}=\frac{{\mathrm{q}}_{1}{\mathrm{q}}_{2}}{{\mathrm{r}}^{2}}\stackrel{^}{\mathrm{r}}$ Here, $\stackrel{^}{\mathrm{r}}$ is the unit vector exerted on ${\mathrm{q}}_{2}\mathrm{by}{\mathrm{q}}_{1}$. Substituting the values in the above equation, we get- $\stackrel{\to }{\mathrm{F}}=6.154×{10}^{-9}\mathrm{N}×\left(-0.832\right)\stackrel{^}{\mathrm{i}}+\left(0.555\right)\stackrel{^}{\mathrm{j}}\phantom{\rule{0ex}{0ex}}\stackrel{\to }{\mathrm{F}}=\left(5.12×{10}^{-9}\mathrm{N}\right)\stackrel{^}{\mathrm{i}}+\left(3.145×{10}^{-9}\right)\stackrel{^}{\mathrm{j}}$ Thus, the (vector) electric force exerted on ${\mathrm{q}}_{2}\mathrm{by}{\mathrm{q}}_{1}$ is $\left(5.12×{10}^{-9}\mathrm{N}\right)\stackrel{^}{\mathrm{i}}+\left(3.415×{10}^{-9}\right)\stackrel{^}{\mathrm{j}}$. h) The ratio of the magnitude of the electric force to the magnitude of the gravitational force can be expressed as: $\mathrm{ratio}=\frac{6.154×{10}^{-8}\mathrm{N}}{2.283×{10}^{-16}\mathrm{N}}\phantom{\rule{0ex}{0ex}}\mathrm{ratio}=2.69×{10}^{8}\mathrm{N}$ Thus, the ratio of the magnitude of the electric force to the magnitude of the gravitational force is $2.69×{10}^{8}$. 
i) If the distance between the charges becomes $4r$, then

$$|\vec{r}| = 4\times1.081\,\mathrm{m} = 4.324\,\mathrm{m}$$

Per equation (3), for the distance $4r$ and charges $q_1$ and $q_2$, the magnitude of the electric force will be

$$F = 8.99\times10^{9}\,\mathrm{N\,m^2/C^2}\times\frac{\left|(-2\times10^{-9}\,\mathrm{C})\times(-4\times10^{-9}\,\mathrm{C})\right|}{(4.324\,\mathrm{m})^2} = \frac{8.99\times10^{9}\times 8\times10^{-18}}{18.697}\,\mathrm{N} = 3.846\times10^{-9}\,\mathrm{N}$$

From equation (2), for the distance $4r$, the magnitude of the gravitational force is

$$F = \frac{(6.67\times10^{-11}\,\mathrm{N\,m^2/kg^2})\times(2\times10^{-3}\,\mathrm{kg})^2}{(4.324\,\mathrm{m})^2} = 1.4269\times10^{-17}\,\mathrm{N}$$

Then the ratio of the magnitudes of the electric and gravitational forces is

$$\text{ratio} = \frac{3.846\times10^{-9}\,\mathrm{N}}{1.4269\times10^{-17}\,\mathrm{N}} = 2.69\times10^{8}$$

Thus, the ratio of the magnitude of the electric force to the magnitude of the gravitational force is again $2.69\times10^{8}$: it is unchanged because both forces scale as $1/r^2$, so the distance dependence cancels.
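As a cross-check, the magnitudes and the ratio above can be recomputed with a short script (a sketch using the constants quoted in the solution; the small difference in the last digit of the ratio is rounding):

```python
import math

# Constants and values as quoted in the solution
k = 8.99e9        # Coulomb's constant, N*m^2/C^2
G = 6.67e-11      # gravitational constant, N*m^2/kg^2
q1, q2 = -2e-9, -4e-9   # charges, C
m = 2e-3          # each mass, kg (per equation (2))
r = 1.081         # separation, m

F_e = k * abs(q1 * q2) / r**2   # Coulomb magnitude
F_g = G * m * m / r**2          # gravitational magnitude
ratio = F_e / F_g

print(f"F_e = {F_e:.3e} N")     # about 6.154e-08 N
print(f"F_g = {F_g:.3e} N")     # about 2.283e-16 N
print(f"ratio = {ratio:.3e}")   # about 2.7e+08

# Quadrupling the distance leaves the ratio unchanged, since both
# forces fall off as 1/r^2 and the distance dependence cancels:
r4 = 4 * r
ratio4 = (k * abs(q1 * q2) / r4**2) / (G * m * m / r4**2)
assert math.isclose(ratio, ratio4)
```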
https://gamedev.stackexchange.com/questions/151672/unity-generates-changes-in-github-on-saving-scene
# Unity generates changes in GitHub on saving scene

So, here is my problem. First I have a look at the changes in the repository which I am working with. Initially there are no changes. Then I open up Unity, I don't make any changes and just save the scene. I then go to GitHub Desktop and suddenly there are a lot of changes appearing which I did not make. How can I avoid that? The files which are getting changed are .prefab files.

• Have you committed and fetched at least once? – user100681 Dec 3 '17 at 12:55
• @GabrieleVierti, yes I did. – trafalgarLaww Dec 3 '17 at 13:06
• @GabrieleVierti, every time I need to commit I have to go through all those unnecessary changes and choose and commit only those that were generated by me. – trafalgarLaww Dec 3 '17 at 13:07
• @GabrieleVierti, and there seem to appear more and more changes after saving the scene. I mean, every time I commit, the next time I save the scene there are a few more unknown changes. – trafalgarLaww Dec 3 '17 at 13:08

Those changes seem to be of .meta file contents, and/or scene stuff [Update: As pointed out in the comments, they seem to be prefab data. But the issue and solution are the same]. It's not unthinkable that some floats will have their least significant bits changed due to floating-point precision errors. You can just ignore the fact that those changes exist, and commit them. Don't worry... A position or rotation changed by 1e-6 (0.000001; BTW, yours are, like, 1e-12+ [0.000000000001]) is not going to make any noticeable difference in most games (in fact, Unity considers anything with a difference under 1e-5 as "equal", if you look at the source code). Your game would need to be a game like KSP, trying to shoot something across the solar system to hit a relatively tiny target, in order for those to make a noticeable difference.
=D

• @immibis Well, you don't get much of a choice; .meta files are necessary for the project to work correctly, AFAIK; not committing those would mean that your remote repo does not represent the same as your local repo(s), breaking the primary purpose of having version control in the first place. And even if that weren't the case, it would give you hell later and/or when working on teams. This is why basically all VCS tutorials for Unity tell you to configure Unity so that it makes .meta files plain text, so that VCS can work with them easily (because you need to commit them). – XenoRo Dec 3 '17 at 22:50
• @immibis I meant that you can't .gitignore the files, and exactly because the changes are ~1e-6 and will have no noticeable result in-game, you're better off just committing them. --- If you want to waste effort on reviewing which changes to commit and which not to, I can't understand the logic in that, but do whatever you feel like. 🤷 (BTW, the bit about "moving objects on the order of 1e-6" is not the question, that's my answer) – XenoRo Dec 3 '17 at 23:13
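The floating-point effect described in the answer can be sketched quickly: storing positions as 32-bit floats means many values cannot round-trip exactly, though the drift is far below any gameplay-relevant tolerance. The `approximately` helper below is a stand-in for Unity's reported under-1e-5 equality check, not Unity's actual API.

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python double to the nearest 32-bit float (what a .prefab stores)."""
    return struct.unpack("f", struct.pack("f", x))[0]

original = 0.1                 # not exactly representable in binary
stored = to_float32(original)  # value after a save/load round trip
drift = abs(stored - original)
print(f"drift = {drift:.2e}")  # tiny, on the order of 1e-9

def approximately(a: float, b: float, eps: float = 1e-5) -> bool:
    """Tolerance comparison in the spirit of Unity's 'approximately equal'."""
    return abs(a - b) < eps

assert approximately(stored, original)   # the diff is invisible in-game
```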
https://crypto.stackexchange.com/questions/19658/use-of-elgamal-encryption-for-signature-generation
# Use of ElGamal encryption for signature generation

If RSA (textbook RSA, that is) generates a digital signature by using the sender's private key, couldn't any cryptosystem capable of asymmetric encryption (the only two that come to mind are RSA and ElGamal) do the same? For example, I've read that ElGamal is encryption-only. Why can't ElGamal encryption (not DSA) do the same that RSA does?

Because the function used for RSA encryption and decryption is commutative. This means that, given a secret key $sk$ and a public key $pk$, for all messages $m$ you have $$D(E(m,pk),sk)=E(D(m,sk),pk)=m.$$ That is, first encrypting a message with the public key and then decrypting the resulting ciphertext with the corresponding secret key yields the same as first decrypting the message with the secret key and then encrypting the result with the corresponding public key.
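The commutativity is easy to see with a toy textbook-RSA sketch (tiny illustrative primes, no padding; real RSA signing would also hash the message first):

```python
# Toy textbook RSA: signing is "decrypting" with the secret key, and
# verification is "encrypting" with the public key, which works only
# because the two operations commute.
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # secret exponent (modular inverse, Python 3.8+)

def E(m: int) -> int:        # encrypt / verify with the public key
    return pow(m, e, n)

def D(c: int) -> int:        # decrypt / sign with the secret key
    return pow(c, d, n)

m = 42
assert D(E(m)) == m          # ordinary encryption round trip
assert E(D(m)) == m          # signature round trip: same identity
```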
http://tug.org/pipermail/texhax/2013-December/020755.html
# [texhax] Footnote margin question (redux)

Douglas McKenna doug at mathemaesthetics.com
Wed Dec 18 03:13:27 CET 2013

On Sept 10, on this list (see <http://tug.org/pipermail/texhax/2013-September/020515.html>), I asked the question below but received no answers. So I'll try one more time. To wit:

> This must be simple, but scouring the internet hasn't helped.
>
> I'm placing text on a page in landscape orientation.
> I simply want to make a footnote's right margin conform to my
> text's right margin, when that text's right margin has been
> adjusted away from the right side of the page.
>
> Here's a MWE (FWIW, I'm using TeXLive 2010, LaTeX2e, pdftex).
> It shows the problem at the bottom of the last page, and asks the
> questions I seek the answers to ...

Here's the same MWE code again, unquoted and ready for a quick cut-n-paste:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass{article}

\usepackage[paperwidth=11in, paperheight=8.5in, margin=1in]{geometry}
\usepackage{changepage}

\title{FOOTNOTE MARGIN MYSTERY:~A MWE}
%\author{}
\date{}

\begin{document}
\maketitle
\thispagestyle{empty}
\clearpage

This is the first page's text:~blah, blah, blah, blah, blah, blah, blah,
blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah,
blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah,
blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah,
blah, blah, blah, blah, blah.\footnote{This is a longish footnote for the
first page's text that word-wraps at the right margin. {\bf Notice that
the right margin of this footnote is on the right side of the landscape
page, as expected.}}

\clearpage

Now we continue with some text on the second page of the document.
Notice that the right margin on this landscape page is now inset by 3
inches, because we've used the {\tt adjustwidth} environment (from the
{\tt changepage} package) to reduce the right margin by 3
inches.\footnote{But this footnote, otherwise similar to what was on the
first page, doesn't seem to be subject to the right-margin reduction we
thought was in effect via the {\tt adjustwidth} environment to which the
main text is currently subject. {\bf WHY?? How does one conform the
footnote right margin to the 3-inch inset right margin that the text
has??}}

\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
https://www.greencarcongress.com/2007/12/shell-and-hr-bi.html
## Shell and HR Biopetroleum Form Joint Venture for Algal Biofuel Production

##### 11 December 2007

Royal Dutch Shell plc and HR Biopetroleum, a Hawaii-based algal biofuels company, have formed a joint venture—Cellana—to build a pilot facility in Hawaii to grow marine algae and produce vegetable oil for conversion into biofuel. Shell will have the majority share in Cellana. Construction of the demonstration facility on the Kona coast of Hawaii Island will begin immediately. The site, leased from the Natural Energy Laboratory of Hawaii Authority (NELHA), is near existing commercial algae enterprises, primarily serving the pharmaceutical and nutrition industries. The facility will grow only non-modified, marine microalgae species in open-air ponds using proprietary technology. Algae strains used will be indigenous to Hawaii or approved by the Hawaii Department of Agriculture. Protection of the local environment and marine ecosystem has been central to facility design. Once the algae are harvested, the vegetable oil will be extracted. The facility's small production volumes will be used for testing. An academic research programme will support the project, screening natural microalgae species to determine which ones produce the highest yields and the most vegetable oil. The program will include scientists from the Universities of Hawaii, Southern Mississippi and Dalhousie, in Nova Scotia, Canada. An advantage of algae is their rapid growth. They can double their mass several times a day and produce at least 15 times more oil per hectare than alternatives such as rape, palm, soya or jatropha. Moreover, facilities can be built on coastal land unsuitable for conventional agriculture. Over the long term, algae cultivation facilities also have the potential to capture waste CO2 directly from industrial facilities such as power plants. The Cellana demonstration will use bottled CO2 to explore this potential.
Algae have great potential as a sustainable feedstock for production of diesel-type fuels with a very small CO2 footprint. This demonstration will be an important test of the technology and, critically, of commercial viability. —Graeme Sweeney, Shell Executive Vice President Future Fuels and CO2

Well, good luck to 'em. Algae works, but it doesn't work cost-effectively. http://greyfalcon.net/algae4 http://greyfalcon.net/algae If $30 per gallon were a viable option, it'd be no problem.

The main thing is that with big players now working on this, perhaps someone will find a production process that can significantly reduce costs. I'm hopeful that someone will come up with something, but cost is the biggest issue with this technology at this point. Nobody has a full-up commercial pilot facility yet, so nobody is certain what the final cost will be.

==perhaps someone will find a production process that can significantly reduce costs==

Not likely. The major limitations aren't engineering hurdles. They are the raw physics associated with photosynthesis itself. What's more, if you have open ponds with lots of sunlight, that's a lot of water being used. And if it's open ponds, then you can't really pump in much CO2 without a huge percentage of it leaking out. Let's just say it's an uphill battle.

"Nobody has a full-up commercial pilot facility yet, so nobody is certain what the final cost will be."

Not true... There are several commercial-scale pilot facilities up & running. Bio-King & Solazyme are two of them.

@ Greyflcn I think the low-cost approach being explored to date is essentially long plastic bags through which gases and nutrients are sent. The bags lay flat to maximize sun capture and minimize structure costs. It could be a cotton field or a pineapple field at other times of year. There's plenty of water in some parts of Hawaii, and sun.
I've seen your analysis, and I wonder if you're not counting co-products like ethanol, protein for animal feed, flue gas sequestration credits, or somesuch? Certainly in the tropics the PAR should be higher. Some of the most logical cultivation approaches use cheap bags and PVC and leverage standard farm equipment. Some fields need to be left fallow now and then anyway, so perhaps this is "found money" in such cases. I think if you look at a bigger system (much like the dairy in Arizona fermenting the cow dung into ethanol, recapturing the methane to run the distillation boiler, and planning to add algae cultivation to clean up the runoff) the ROI and EROEI may be considerably more attractive.

If the research pans out, they should negotiate a cheap lease for the whole island of Kahoolawe, which is uninhabited and devoid of vegetation and was used by the US Navy for bombing practice for several decades. A taxpayer-funded effort to clean up unexploded ordnance was incomplete, so Shell would have to spend some money on finishing the job. The pond terraces required for algaculture could be produced quite crudely using explosives and lined with plastics. Solar-powered water pumps, harvesting, logistics and processing equipment on the island would provide "offshore" jobs for the state of Hawaii, so the product loaded onto tankers is finished biodiesel with only a couple of percent of dino-juice to prevent biological contamination.

Hey guys, check out Valcent Products Inc.'s Vertigro technology to grow algae for biofuel production and other uses. They use a closed-loop system that wastes no water; the only water lost is to the algae. The algae is grown in vertical bags in greenhouses to increase density. They say they can put these facilities anywhere. They already built their first test facility in Texas and are going to release results. Here's a link: http://www.valcent.net/s/Ecotech.asp?ReportID=182039 They have a similar system to grow produce.
There are a number of as yet untapped and poorly understood potential benefits that await a properly costed system. Consider the analogy of compound extraction. Water is a necessary component of algae farming, reticulated water supply, sanitation, sewage treatment and conventional coal-powered power stations, among others, so it can be expected to exist in some form around areas of population, as these are prerequisites for same. A model algae plant will look at this and try (there is no technical obstacle) to touch base at all these points. Pre-treated sewage with solids removed (put that in a barge) is now a transport medium, carrying recyclable high-nutrient water to the algae ponds. The ponds may be a series of "billabongs" en route to the nearby coal power station. Coal power and others require a large body of water for cooling. Algae require, depending on species, a stabilized temperature for maximum growth, plus nutrients, CO2 and sunlight, again depending on variety. The very worst outcome from any half-baked version will be to supply clean water to the outflow. Once upon a world we would have dreamed of such a good outcome. In the pollution-stressed, greenhouse, finite world we find ourselves in now, people seem to have forgotten that there are a lot of peripheral issues re proper stewardship and minimising other environmental damage.

A second stage in this compound algae factory bioremediation project will see the selected standard algae forms providing ~20% by weight of oil (given high-yield extraction methods), with a large residual stream of high-quality papermaking material (more than a guess) or high-protein feedstock for animals at 20%+. These numbers are a guide only and, depending on the level of extraction and the area of focus, will trend more to one by-product or another. - nb Valuable by-product.

Stage three sees further selection of spp. with a view to extraction of toxic and later useful compounds from the waste stream - now a valuable resource. - Phytomining. This resource will be carrying in a percentage of all the minerals, nutrients and manufactured compounds, including some very toxic, that are now almost without exception finding their way to ocean outfalls here in Aus, and much more commonly on a world stage into the next downstream population's water supply. This third stage would be sited at the various "billabongs" along the canal, each of these little labs charged with assuring the suitability for next use, packing and marketing of their component, and even the sale or further product development.

This would leave high-quality water ready for reintroduction for downstream use, or further treatment as necessary for general consumption, or possibly a reverse-osmosis membrane or other technology applied to enable a high-quality supply. This model demonstrates multiple testing steps which should assure consistent quality assurance. The sequestration of CO2 is not to be measured as such, but any study in that context will reveal a 50% saving if CO2 is removed from flue gas (we don't know how much can be absorbed without further study and spp. analysis). This 50% figure is derived from the fact that the CO2 gets to go around again. Students from the local high schools, colleges, prisons, universities, or regular council employees would all be capable of understanding the system and the particular aspect they are responsible for, much like the local sanitation boards are now. All criticism welcome.

Two viable algae production systems:

(1) Aquaflow Bionomic Corporation, demonstrating the harvesting of wild algae from open sewage ponds at the Marlborough, New Zealand municipal sewage disposal plant. Aquaflow is building a 250,000 gallon per year algae-based biodiesel plant in Blenheim, New Zealand.

(2) Utah State University, in association with Andigen, Inc., has integrated an algae production system into a farm manure operation. The manure waste is anaerobically digested to produce methane and electric power.
Liquid effluent from the digester is used to grow the algae, which disposes of the CO2, phosphates, and nitrogenous waste. The algae is then used to provide onsite feed for the animals, which produce meat or dairy products. Surplus algae is converted into algae pellets or biofuels. The undigested manure solids can be used or sold as fertilizer. A dairy farm integrated with an algae farm produces milk, methane, electric power, animal feed, fertilizer, and biomass feedstock for pellets and biofuels. Integrating algae production with manure and effluent disposal is going to put algae on the map.

Can the web administrator put a stop to this spammer who keeps popping up?

The economics of the algae farms should be compared to the cost of a bio-diesel plant -- which is approximately $1 per gallon -- The Daily Telegraph quotes Shell as claiming 60 tons of diesel per hectare. That works out at 1.23 barrels per hectare per day. It's not clear whether the process needs pumped CO2, but I suspect at that rate it can take it from the atmosphere. That makes 650,000 km2 to replace all the world's oil. That's actually conceivable, unlike doing it with maize. Good news for Western Australia and Namibia property prices.

And GreenFuel Technologies has an active demo going in Arizona - moving to a coal fired power plant in Four Corners NM. http://www.azcentral.com/news/green/articles/1004fourcorners-ON.html

Efforts should be underway to develop marine algal species with high lipid yields near coastal zones - thereby addressing the water issues. The potential yields on this technology should be enough to invigorate research to lower the cost factors. Contamination continues to be an issue, especially with open air ponds. This was what appeared to tarnish the DOE Aquatic Species studies of 20 years ago. Both Vertigro (Valcent) and GreenFuel are using the closed bag format to combat the contamination issue. On a much larger scale I would imagine that standard panels (4 x 8') of clear acrylic with etched channels for liquid flow could replace the PV bags. These sheets would be assembled in a modular format and readily oriented for best sun exposure. To some degree these panels could function as building facade - greening the structure both aesthetically and literally.
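The Telegraph-derived figures quoted in the comments above check out on the back of an envelope (assumed conversion factors: diesel at roughly 0.85 kg/L, 159 L per barrel, and 2007-era world oil demand of roughly 80 million barrels per day):

```python
TONNES_PER_HA_YEAR = 60      # Shell's claimed algal diesel yield
KG_PER_LITRE = 0.85          # assumed diesel density
LITRES_PER_BARREL = 159.0

litres_per_ha_year = TONNES_PER_HA_YEAR * 1000 / KG_PER_LITRE
bbl_per_ha_day = litres_per_ha_year / LITRES_PER_BARREL / 365
print(f"{bbl_per_ha_day:.2f} bbl/ha/day")   # close to the quoted 1.23

WORLD_DEMAND_BBL_DAY = 80e6                 # assumed world oil demand
ha_needed = WORLD_DEMAND_BBL_DAY / bbl_per_ha_day
km2_needed = ha_needed / 100                # 100 ha per km^2
print(f"{km2_needed:,.0f} km^2")            # in the ballpark of 650,000
```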
And some R&D should go into predator elimination. A simple organism added to the algal soup that eats predators before they eat the friendly algae. All in all there appears to be real progress. Given liquid fuels will be needed for heavy lifting in the foreseeable future - this avenue is of primary importance.
http://www.helpteaching.com/questions/Rational_and_Irrational_Numbers/Grade_7
# Seventh Grade (Grade 7) Rational and Irrational Numbers Questions

You can create printable tests and worksheets from these Grade 7 Rational and Irrational Numbers questions! Select one or more questions using the checkboxes above each question. Then click the add selected questions to a test button before moving to another page.

The number 0.6 is
1. Rational
2. Irrational

A number with no fractional part is a(n):
1. Absolute Value
3. Integer
4. Rational Number

Which of the following rational numbers are ordered from least to greatest?
1. 6.8, 0, $-9$
2. $-5$, 59%, $3/5$
3. $4/5$, 0.09, 83%
4. $1/8$, $-6$, 0.37

List all classifications for the number 5.
1. rational, integer, whole number
2. rational, whole number, natural number
3. rational, whole number, natural number, integer
4. whole number, natural, integer

Consists of all positive whole numbers, negative whole numbers, and zero.
1. set of natural numbers
2. set of 10s
3. set of whole numbers
4. set of integers

Which one shows the numbers placed correctly in ascending order?
1. $1/2, 0.5, 6^0, 1/3$
2. $1/7, 20%, 1/3, 0.5$
3. $5^1, 7/9, 50%, 1/4$
4. $1/3, 20%, 1/8, 0.75$

(CCSS: 7.NS.A.3) $(6/2-8+1/2)-:-4$
1. $-18$
2. $-9/8$
3. $21/20$
4. $9/8$

Which number doesn't belong with the others?
1. $3/5$
2. $60%$
3. $0.35$
4. $6 xx 10^-1$

Which one does not belong with the others?
1. $25%$
2. $2/8$
3. $0.25$
4. $2/5$

Which one is different from the rest?
1. $1/8$
2. $12.5%$
3. $0.125$
4. $1.25$
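For the "does not belong" items above, exact rational arithmetic makes the check mechanical; here is one way to do it for the first of those questions:

```python
from fractions import Fraction

# Options from "Which number doesn't belong with the others?":
# 3/5, 60%, 0.35 and 6 x 10^-1, expressed exactly as fractions.
options = [Fraction(3, 5), Fraction(60, 100), Fraction(35, 100), Fraction(6, 10)]

counts = {v: options.count(v) for v in options}
odd_one_out = min(options, key=lambda v: counts[v])
print(odd_one_out)   # 7/20, i.e. 0.35: the other three all equal 3/5
```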
https://www.nature.com/articles/s41467-018-03651-9?error=cookies_not_supported&code=d118b79f-5a7d-40e4-88ee-6628677844dc
# Conformational switching within dynamic oligomers underpins toxic gain-of-function by diabetes-associated amyloid

## Abstract

Peptide mediated gain-of-toxic function is central to pathology in Alzheimer’s, Parkinson’s and diabetes. In each system, self-assembly into oligomers is observed and can also result in poration of artificial membranes. Structural requirements for poration and the relationship of structure to cytotoxicity is unaddressed. Here we focus on islet amyloid polypeptide (IAPP) mediated loss-of-insulin secreting cells in patients with diabetes. Newly developed methods enable structure-function enquiry to focus on intracellular oligomers composed of hundreds of IAPP. The key insights are that porating oligomers are internally dynamic, grow in discrete steps and are not canonical amyloid. Moreover, two classes of poration occur; an IAPP-specific ligand establishes that only one is cytotoxic. Toxic rescue occurs by stabilising non-toxic poration without displacing IAPP from mitochondria. These insights illuminate cytotoxic mechanism in diabetes and also provide a generalisable approach for enquiry applicable to other partially ordered protein assemblies.

## Introduction

Advancing molecular insight faces unique challenges when structure-function arises from partially ordered protein oligomers1. Dynamic oligomers occur in aqueous and membrane milieus, for example, giving rise to nucleoli2 and signalling complexes3, respectively. As is often the case in biology, functional examples are a product of selective pressure optimising an intrinsic physical property of polypeptides.
Unchecked, this property can also result in the misassembly of normally monodisperse proteins. Gains-of-function from these states include cytotoxicity, while loss-of-function can occur as a result of irreversible fibre formation4.

Islet amyloid polypeptide (IAPP) is a 37-residue peptide hormone cosecreted with insulin by the β-cells of the pancreas. In patients with type 2 diabetes, IAPP aggregation is associated with loss of β-cell mass5. Orthologues and mutational studies reveal a strong correlation between IAPP amyloid formation potential and cytotoxicity. Nevertheless, it is the structurally poorly defined, oligomeric species present prior to fibrillar aggregates that appear cytotoxic. In solution, IAPP weakly samples α-helical conformations that are stabilised upon interaction with phospholipid bilayers6. Membrane-catalysed self-assembly follows binding, with gains-of-function that include energy-independent membrane translocation, membrane poration and mitochondrial localisation7. At the molecular level, the relationships of structures to functions, including downstream observations of mitochondrial dysfunction and cell death, remain unclear.

Membrane interacting oligomers whose structures and dynamics result in poration are widely believed to underpin IAPP’s toxic gain-of-function8. Membrane poration has also been reported for Aβ from Alzheimer’s disease, α-Synuclein from Parkinson’s disease and PrP from spongiform encephalopathy. These proteins also contain disordered regions. Numerous studies using synthetic vesicles have provided evidence for many alternative mechanisms: discrete pore formation, membrane disruption through carpeting, removal of phospholipid like a surfactant, or as a response to the induction of surface tension9. Which, if any, of these mechanisms is relevant inside cells is unknown.
The role of molecular structure in these phenomena is also unclear, although there is indirect evidence, such as the near loss of poration upon truncation of the non-membrane-bound and disordered C-terminal residues10. Additionally, as IAPP localises to mitochondria, it is an appealing, albeit unproven, idea that IAPP uncouples mitochondria, leading to several downstream dysfunctions11. Clearly, a deeper understanding of the molecular basis of IAPP-mediated poration and its relationship to toxicity is required.

To address this challenge, we carried out parallel measurements of IAPP on membrane models and inside live cells. Synergy between cellular and model measurements was achieved, in part, by engineering a permeation assay that uses a minimally processed, cell-derived membrane. Using this assay, two sequentially sampled forms of poration are newly identified. A previously developed IAPP-specific ligand, ADM-116, is used here as a tool and shows toxic mitochondrial depolarisation to be associated with only one of the two forms of poration. A structure-based fluorescence approach, which we term diluted-FRET, is developed that brings focus onto large, homooligomeric assemblies of IAPP. We find that functional oligomers contain scores to hundreds of IAPP, populate several discrete states and none of these states are canonical amyloid. Our results support a model whereby IAPP-mediated toxicity derives from large, mitochondrial membrane-associated assemblies of IAPP.

## Results

### IAPP mediates two classes of membrane permeation

IAPP induces two kinetic phases of leakage in giant plasma membrane vesicles (GPMVs). GPMVs were prepared from live INS-1 cells as previously described12,13 with an additional step of pre-staining cytoplasmic components with a thiol-reactive fluorescent probe, CellTracker Orange (Fig. 1a). Upon addition of monomeric (Supplementary Fig. 1) 50 μM IAPP to 5 μM GPMVs, a protein:lipid (P:L) ratio of 10:1, two phases of leakage are observed (Fig.
1b, d). This P:L is consistent with addition of low-μM protein to cell culture, which can result in P:Ls of 100:1 or more14. Moreover, high local concentrations of IAPP are expected in vivo as IAPP is a secreted protein. The first phase of leakage is exponential, k = 33 ± 2 s−1, ending at a plateau in which only 48 ± 4% of the starting intensity is lost. A second exponential decay, k = 49 ± 2 s−1, begins after 1400 ± 200 s. This lag, but not the rates, is strongly sensitive to protein concentration (Supplementary Fig. 2a). The separation of two nearly identical rates by a lag phase suggests that leakage is an indirect reporter of two separate poration phenomena.

The two classes of IAPP-induced membrane permeation have distinct sieve sizes. Fluorescence correlation spectroscopy (FCS) was used to assess the diffusion of probes that escaped the lumen of the GPMVs. In the absence of added IAPP, only background counts are present. During the first kinetic phase, autocorrelation analysis is possible and fits are readily made to an analytical form that includes a single-diffusing species (Fig. 2a and inset). This gives a diffusion constant of 390 ± 25 μm2/s, which corresponds in size to unconjugated dye. In marked contrast, data collection during the second phase of leakage is characterised by large irregular bursts that can persist for seconds and longer (Fig. 2a and Supplementary Fig. 3). This reflects the presence of much larger species (10s–100s of kDa)15. Clearly, the two phases of leakage are characterised by a transition from one in which only small and consistently sized holes are present to one in which large, heterogeneously sized species can escape.

Poration by IAPP is bidirectional. To measure influx, poration was monitored by application of GFP (26 kDa) to a solution of unlabelled GPMVs. The exclusion of GFP is readily observed and stable (Fig. 2b). Application of 50 μM IAPP at 10:1 P:L shows exclusion of GFP persisting for the first ~30 min.
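The single-species autocorrelation analysis described above can be sketched numerically. The model below is the same analytical form given as Eq. (1) in Methods; the data are synthetic, and the amplitude, diffusion time and beam-waist calibration are assumed values rather than measurements from the study (which used QuickFit3.0).

```python
import numpy as np
from scipy.optimize import curve_fit

def g_single_species(tau, n, tau_d, s=0.1):
    """Autocorrelation for a single species undergoing 3D Brownian
    diffusion (the form of Eq. (1) in Methods; s is the structure
    factor, fixed at 0.1 as in the text)."""
    return (1.0 / n) * (1.0 + tau / tau_d) ** -1.0 \
        * (1.0 + tau / (s ** 2 * tau_d)) ** -0.5

# Synthetic autocorrelation curve: n and tau_d here are hypothetical.
tau = np.logspace(-6, 0, 200)                  # lag times, seconds
rng = np.random.default_rng(0)
g_obs = g_single_species(tau, n=5.0, tau_d=1e-4) \
    + rng.normal(0.0, 0.002, tau.size)

# Fit n and tau_d; s keeps its default, so only two free parameters.
(n_fit, tau_d_fit), _ = curve_fit(g_single_species, tau, g_obs,
                                  p0=[1.0, 1e-3])

# Converting tau_d to a diffusion constant requires a calibrated
# lateral beam waist w; the value below is an assumed calibration.
w = 0.25e-6                                    # metres
D = w ** 2 / (4.0 * tau_d_fit) * 1e12          # µm²/s
```

In practice the beam waist is calibrated with a dye of known diffusion constant, analogous to the free Alexa 594 hydrazide standard used in the study.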
This is followed by an abrupt change in which fluorescence within the lumen rises to that of the surrounding buffer within ~80 min (Fig. 2c). Exclusion during the first 30 min is consistent with large-molecule efflux also not occurring on this timescale (Fig. 2a). Small sieve size poration is also bidirectional, as evidenced using a small-molecule fluorescence quencher (Supplementary Fig. 2b). Clearly, there is a discrete transition between small and large sieve size poration regardless of whether this is observed using influx or efflux assays.

### Leakage resulting in large sieve size can be inhibited

ADM-116 is a recently developed16,17 small-molecule inhibitor of IAPP-induced cellular toxicity (Supplementary Fig. 7) (P = 0.0001). ADM-116 has been shown to bind specifically to α-helical, membrane-bound conformations of human IAPP. Moreover, ADM-116 can cross the plasma membrane and has been shown to directly bind intracellular IAPP. It is therefore uniquely suited to making measurements on GPMVs that can be paired with assessments performed inside live cells (see below).

IAPP assemblies associated with first- and second-phase leakage are structurally distinct. Leakage profiles were collected on GPMVs with the addition of equimolar ADM-116 (Fig. 1c, d). The resultant profile is exponential, k = 34 ± 1 s−1, with an amplitude of 52 ± 3%. This is within error of the first phase of leakage observed in the absence of compound (Fig. 1d). FCS of the media shows that only small solutes, with a diffusion coefficient of 480 ± 50 μm2/s, have been released (Fig. 2a and Supplementary Fig. 3). In measurements using GFP as the reporter, equimolar ADM-116 robustly inhibits influx into the GPMV lumen (Fig. 2b, c). In summary, this ligand shows no effect on small sieve-size leakage processes and wholly inhibits formation of large sieve states.
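The two-phase leakage profiles described in this section can be parameterised with a simple piecewise model: a first exponential decay to a plateau, a lag, then a second decay toward baseline. This is an illustrative sketch; the plateau and lag echo the ~48% first-phase loss and ~1400 s lag reported above, but the rate constants are hypothetical, chosen only so that each phase completes within its window.

```python
import numpy as np

def two_phase_leakage(t, k1, k2, plateau, t_lag):
    """Normalised lumen intensity: first-phase exponential decay to a
    plateau (fraction retained), then, after t_lag, a second decay
    toward baseline. Approximately continuous at t_lag provided
    k1 * t_lag >> 1."""
    first = plateau + (1.0 - plateau) * np.exp(-k1 * t)
    second = plateau * np.exp(-k2 * (t - t_lag))
    return np.where(t < t_lag, first, second)

# Illustrative trace over ~4000 s with hypothetical rate constants.
t = np.linspace(0.0, 4000.0, 400)
intensity = two_phase_leakage(t, k1=0.01, k2=0.01,
                              plateau=0.52, t_lag=1400.0)
```

Fitting such a model to the observed traces would recover the rates, plateau and lag independently, which is how the separation of the two poration phenomena shows up in the data.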
As ADM-116 binding to IAPP is sequence- and conformation-specific16,17, this suggests that structurally distinct assemblies mediate the two forms of observed leakage.

ADM-116 is specific to pre-lag-phase states of IAPP. Experiments were performed in which ADM-116 was added at early, middle and end stages of the lag phase (22, 32 and 42 min, respectively). At 22 min, the results are indistinguishable from co-addition of the compound at t = 0 (Fig. 1e). By 42 min, ADM-116 no longer affects the kinetics or sieve size of membrane poration. Partial effects on kinetics and sieve size are apparent for additions of compound at 32 min. These observations reinforce the interpretation that the transition from one leakage phase to the next is a result of a structural change to membrane-associated IAPP.

### Poration occurs in transient bursts

Time-resolved imaging was performed using two probes: GFP and 50 μM IAPP doped with 200 nM IAPP labelled with Alexa 594. Striking, transient plumes of GFP can be seen localised to IAPP puncta (Fig. 3a). Many time points can be simultaneously considered by integrating intensity along an axis from the center of the GPMV to the IAPP puncta (Fig. 3b). In the first 20′, no GFP influx is observed. After 20′, plumes are readily apparent and grow more regular in frequency. Each GPMV has many IAPP puncta, resulting in the observed global increase in intensity over time (Fig. 3b). Similar measurements monitoring efflux also show unambiguous localisation of leakage to IAPP puncta (Supplementary Fig. 4). Importantly, when influx measurements are performed in the presence of equimolar ADM-116, IAPP puncta are present but plumes of GFP do not occur (Fig. 3c).

Previous work from our group suggested that large membrane-bound oligomers convert between open and closed states in response to nucleation initiated by a subset of IAPP within the oligomer18.
We speculate that such nucleation could form the basis of gating which, in this assay, manifests as transient plumes of GFP.

The sequential sampling of two forms of poration is a specific property of IAPP. We determined that 10 μM of the mitochondrial protonophore 2,4-dinitrophenol (DNP)19 can induce CellTracker Orange efflux from GPMVs at a rate of 58 ± 8 s−1 (Fig. 1f). Intensity loss plateaus at ~50% with no large species released (Supplementary Fig. 5), and material leaked has a diffusion constant of 420 ± 24 μm2/s. These measures are within error of the first kinetic phase observed using IAPP. Importantly, there is no evidence of a second phase. In a contrasting study, a sequence variant of IAPP was used. Rat IAPP (rIAPP) is generally presented as non-toxic, as cytotoxicity requires protein concentrations that are significantly elevated compared to human IAPP20. Here, 200 μM of rIAPP was empirically determined to induce a first-phase leakage rate comparable to human IAPP (Fig. 1f). The kinetic profile that follows is qualitatively similar in that a plateau is present that is followed by a second phase of leakage. Strong quantitative differences are present. These include a higher plateau, a lag phase of 10s of hours and a second leakage phase that does not go to baseline. Plainly, IAPP has the capacity to induce two forms of leakage, and the properties that govern these are dependent on sequence.

### IAPP alters intracellular membrane integrity

IAPP localisation to mitochondria is correlated with depolarisation. Mitochondrial polarisation in live INS-1 cells was measured using the fluorophore JC-121 (Fig. 4a). Roughly 50% of intracellular IAPP was localised to mitochondria (Fig. 4h). In contrast, only ~5% of available IAPP overlaps with endoplasmic reticulum (ER) (Fig. 4h and Supplementary Fig. 6) (P = 0.0003). At 13 μM IAPP, >50% cell death is apparent after 48 h (Supplementary Fig. 7). This is accompanied by widespread mitochondrial depolarisation (Fig.
4b, e). Imaging at 24 h, a time point before cell death is widespread, shows considerable depolarisation (Fig. 4e) relative to the no-IAPP control (P = 0.002). At this point, more than half of the observable IAPP is localised to depolarised mitochondria (Fig. 4f). That is, in the lead-up to cell death, mitochondria that show co-localised IAPP also show depolarisation.

Toxicity can be reduced by conducting studies at reduced concentrations of IAPP. At 5 μM IAPP, only ~15% of cells are dead after 48 h (Supplementary Fig. 7). Imaging under this reduced toxic condition at 24 h shows the fraction of mitochondria with co-localised IAPP (Fig. 4g) and depolarisation (Fig. 4e) are both greatly reduced (P = 0.002 and 0.001, respectively). Plainly, localisation of IAPP to mitochondria is associated with depolarisation, and the extent of depolarisation correlates with toxicity.

Cytotoxic rescue by ADM-116 eliminates mitochondrial depolarisation without displacing IAPP. Small-molecule rescue was used as an orthogonal approach to reducing IAPP-induced toxicity. Addition of equimolar ADM-116, 3 h after uptake of 13 μM IAPP, wholly rescues INS-1 cells from toxicity16. Depolarisation is markedly reduced compared to the 13 μM IAPP-only images (Fig. 4d, e) (P = 0.0001). Importantly, ADM-116 does not significantly change the fraction of IAPP co-localised to mitochondria (Fig. 4h) (P = 0.48), nor the fraction of mitochondria with associated IAPP (Fig. 4g) (P = 0.52). This suggests that toxicity is instead reduced by changing the structure of mitochondria-associated IAPP.

Our observation of mitochondrial depolarisation in response to IAPP may be direct or may be upstream of stimulating other cellular factors. For example, mitochondria-initiated apoptosis includes membrane poration by Bcl-2 family proteins22. Whether direct or indirect, changes in mitochondria-localised IAPP structure that we have associated with changes in poration sieve size appear central to the origins of toxicity.
### Oligomers are dynamic during membrane leakage

Starting with GPMVs and moving into cells, the above functional analyses were paired with tools for assessing the gross morphology of IAPP oligomers. Remarkably, fluorescently labelled IAPP puncta on GPMVs show time-dependent changes (Fig. 5a, b). Oligomer growth dominates observation. Changes in oligomers that can be interpreted as localised losses in size, fission and/or fusion are also routine. These changes in IAPP oligomer morphology occur on minute timescales, which is comparable to the observed IAPP-induced leakage rates (Fig. 1). This suggests that IAPP dynamics and poration are coupled.

Changes in dynamic oligomer morphology are also evident using intermolecular FRET. IAPPs were prepared in which either a donor (Alexa 488, IAPPA488) or acceptor (Atto 647, IAPPA647) is covalently attached to the N-terminus. Unlabelled IAPP is then mixed in high proportion (>50:1) with labelled peptides. This orthogonal approach, which we term “diluted-FRET”, focuses data collection on large oligomers. Oligomers with a monomer count (N) less than the dilution ratio are statistically unlikely to have both donor and acceptor present.

As a non-dynamic control, amyloid fibres of IAPP were prepared in aqueous buffer using 1:1:500 (donor:acceptor:unlabelled IAPP) and imaged using confocal microscopy. A broad distribution of FRETeff is obtained (Fig. 6a). When donor and acceptor fluorophores change their orientations and/or positions on a timescale that is slow relative to the integration time of measurement (here, 0.5 s per pixel), then the expectation is that a broad distribution of FRET efficiencies will be observed, reflecting the underlying spatial distribution of donor–acceptor fluorophores23. This is observed here for canonical amyloid fibres, where the peptide components are not expected to have mobility. In marked contrast, the FRET efficiency distributions observed for the oligomers are very narrow (Fig.
6b); if the donor and acceptor peptides are rapidly equilibrating through multiple conformations on the timescale of the measurement, then narrow peaks corresponding to the population-weighted average FRET efficiency are observed. As the FRET observed here is intermolecular, the narrow peaks reflect averaging from oligomer dynamics.

GPMV-associated IAPP forms a discrete species when in complex with ADM-116. Under conditions that display only small sieve size poration (P:L:ADM-116 = 10:1:10) (Fig. 1c, d), FRETeff is readily measured from membrane-associated puncta using 1:1:500 (donor:acceptor:unlabelled IAPP) (Fig. 7a). Time-dependent changes to FRETeff show a progression of transitions over ~40′ (Fig. 7c), i.e., structural transitions are occurring on the same timescale as leakage. Signal averaging across repeats gives a final profile of the stabilised state (Fig. 8a). The continued presence of relatively narrow peaks suggests that the small-molecule-stabilised species are still dynamic.

### Two poration modes mediated by distinct non-amyloid states

Under conditions for observing two-phase leakage (P:L:ADM-116 = 10:1:0) (Fig. 8d), four peaks are apparent over the first 34′ with FRETeff at 0.21 ± 0.07, 0.36 ± 0.04, 0.41 ± 0.08 and 0.58 ± 0.02, respectively (Fig. 8e). The first three FRETeff peaks dominate over a period of time in which only small-molecule leakage is observed. The second poration phase, 48′–60′, shows a marked shift to two high-FRETeff ensembles at 0.78 ± 0.04 and 0.87 ± 0.01. The coincidence of the timescale of these transitions with that observed for leakage (where all IAPP is unlabelled) shows that fluorophore interactions are not contributing significantly to the structures sampled by IAPP oligomers. This is further confirmed by observation of comparable changes when using an alternate donor/acceptor pair (Supplementary Fig. 8a). None of these states yield FRETeff or histological staining24 similar to amyloid (Fig. 6a).
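The labelling statistics that make diluted-FRET selective for large oligomers can be made concrete. Assuming each monomer is independently a donor or an acceptor with probability 1/502 (the 1:1:500 doping ratio), inclusion-exclusion gives the chance that an oligomer of N monomers carries at least one of each and is therefore FRET-competent. This is a back-of-envelope illustration, not a calculation from the paper.

```python
def p_fret_capable(n, p_d=1 / 502, p_a=1 / 502):
    """Probability that an oligomer of n monomers, each independently a
    donor (p_d) or acceptor (p_a), contains at least one of each
    (inclusion-exclusion over the two 'missing fluorophore' events)."""
    return 1.0 - (1.0 - p_d) ** n - (1.0 - p_a) ** n \
        + (1.0 - p_d - p_a) ** n

# Small oligomers are nearly invisible to diluted-FRET, while
# assemblies of hundreds of monomers are likely to report.
for n in (10, 100, 500, 1000):
    print(n, round(p_fret_capable(n), 3))
```

At this dilution an oligomer of ten monomers almost never carries a donor-acceptor pair, whereas one of a thousand monomers usually does, which is why the signal is dominated by large-N assemblies.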
Moreover, reducing the protein concentration by half strongly lowers the sampling of second-phase leakage on the 48′–60′ timescale (Supplementary Fig. 2a). The FRETeff over this time does not show significant sampling near 0.78 and 0.87 (Fig. 8b) and indeed, more closely resembles what is observed using the ADM-116 ligand during this phase of leakage (Figs. 7a, b and 8a). Clearly, membrane-associated IAPP oligomers are internally dynamic and are not dominated by amyloid structures over the period of time where poration is observed. Moreover, both time dependence and inhibitor studies show different FRETeff species to be associated with the two leakage processes.

IAPP oligomers grow by oligomer and not monomer addition. Inspection of individual FRETeff time courses at 2′ resolution shows peaks change in intensity, but not position (Fig. 7c, d). That is, new peaks appear as a result of others disappearing rather than shifting. This is further apparent in the time-averaged data (Fig. 8). The absence of significant peak broadening upon averaging indicates that peaks do not shift in position over the time course. That these changes likely represent oligomer growth is evident in the approximately fivefold increase in intensity over time (Fig. 8). These observations suggest that oligomer growth does not occur by monomer or even small (N < ~10) oligomer addition. Instead, the peaks correspond to distinct species, the sheer number of which (>6) makes it unlikely that they represent conformational transitions. Rather, we suggest that progression in FRETeff includes contributions that are the result of a stepwise merging of large-N oligomers.

### Dynamic oligomers are responsible for cell toxicity

Intracellular IAPP oligomers adopt a single overall structure when toxicity is rescued. INS-1 cells were incubated with 10 μM IAPP and rescued from toxicity by addition of equimolar ADM-116 3 h later (our standard condition16 to ensure intracellular interaction).
Intermolecular FRET is widespread, indicating that large-N oligomers of IAPP are abundant (Fig. 9a). The FRETeff pixel statistics show a single narrow peak with FRETeff = 0.60 ± 0.07 (Fig. 9c). Over 48 h, this distribution does not significantly change (Fig. 9c). A repeat analysis at a different dilution of the FRET pair (1:1:500) (Fig. 9f), and analysis using a different FRET pair, IAPPA488 and IAPPA594, give similar results (Supplementary Fig. 9b, c), suggesting that fluorophore–fluorophore interactions do not contribute significantly to the observation. The doping ratio of 1:1:100 (donor:acceptor:unlabelled IAPP) that is predominantly used in our cell studies was determined empirically to give data with the best dynamic range. This compares to 1:1:500 used for GPMVs, suggesting that the intracellular oligomers are smaller than those observed on GPMVs. Importantly, a single, long-lived discrete assembly of intracellular IAPP is formed upon cytotoxic rescue by ADM-116; a result directly comparable with that observed on GPMVs (Figs. 7a, c and 8a).

Intracellular oligomers of IAPP evolve to high FRETeff under conditions that are cytotoxic. Diluted-FRET was measured at 24 h and 48 h as above, but without rescue using ADM-116 (Fig. 9b). Cumulative analysis of cell images taken at 24 h reveals a broad range of FRETeff > 0.5 (Fig. 9d). Individual cells (Fig. 9d), as well as small sub-cellular ROIs (Fig. 9e), show recurring and relatively narrow distributions. This is in line with observation of discrete species observed on GPMVs (Figs. 7b, d and 8c). After 48 h, cells show significant morphological signs of dysfunction. This is accompanied by FRET distributions shifting to markedly higher efficiencies (Fig. 9d, e). Plainly, the evolution of IAPP oligomer conformations inside cells (and their inhibition with ADM-116) mimics those sampled on GPMVs.

Intracellular oligomers of IAPP associated with cytotoxicity are not canonical amyloid fibres.
Diluted-FRET experiments were repeated in cells using a doping ratio matched to that used for the GPMV studies above, 1:1:500 (donor:acceptor:unlabelled IAPP). In the absence of small molecule, the FRETeff progresses with peaks at 0.36 ± 0.08 and 0.62 ± 0.04 at 24 h and 48 h, respectively (Fig. 9g). The FRETeff distributions measured under these toxic conditions are clearly not the same as that of amyloid fibres prepared in aqueous buffer (Figs. 9g and 10).

## Discussion

The use of GPMVs as a model system enabled the identification of two forms of IAPP-mediated membrane permeation. This is apparent from two kinetic phases in the leakage profiles (Fig. 1d), each with a characteristic size distribution for the escaping molecules (Fig. 2). Small-molecule binding to IAPP inhibits the second, but not the first, phase of leakage (Fig. 1d). A parallel can be drawn between these results and the mitochondrial depolarisation measurements in cells (Fig. 4), where the small-molecule ligand reverses the depolarisation seen in the presence of IAPP without displacing IAPP from mitochondria. This observation is significant, as it shows localisation of IAPP to mitochondria to be necessary but not sufficient for toxicity. Instead, it appears that the structure of IAPP at the mitochondria is a more relevant indicator of toxic effect.

We propose that nucleated conversion of small pore permeable states to large pore permeation is the origin of gains-in-toxic function by IAPP. This is an important distinction, as membrane poration need not be cytotoxic. The transition between small and large pore states reflects a change in oligomer topology and/or the conformation of IAPP within the oligomer. This is suggested by the deviations of observed FRETeff from simple models. For example, isotropic packing of IAPP into a sphere at high density (4000 Å³/IAPP) can be simulated. At the doping ratios used in GPMV experiments, 1:1:500, FRETeff cannot exceed ~0.27 (Fig. 6c).
This is clearly smaller than all but one of the observed peaks (Fig. 8c). Additionally, a ligand stabilising a membrane-bound α-helical form of IAPP affects only one of the two phases of poration (Fig. 1). Taken together, these results support our assertion that there are distinct membrane-associated oligomeric states.

The evolution of the amyloid hypothesis into the oligomer hypothesis was pioneered, in part, by the immunostained light-microscopy observation of membrane-associated puncta25. A recent effort with bearing on our work is a study in which SDS was used as a membrane model26. Toxic IAPP oligomers were reported that are chromatographically stable, discrete in size (~90 kDa), rich in α-helical secondary structure, and shown to enter cells and localise to mitochondria. Remarkably, anti-sera from patients with diabetes, but not healthy controls, were cross-reactive with this oligomer. The 90 kDa size and subcellular localisation observed in that construct are consistent with the size and localisation observations reported here. Our current study does not exclude a significant role for dimer through hexamer formation as suggested by others27,28,29. Instead, our work clarifies that oligomers associated with gains-in-toxic function are much larger and internally dynamic.

In functional systems, dynamic assemblies of membrane proteins have been observed for Nephrin30 and more recently LAT3, which forms micron-sized clusters during T-cell activation. The multistate nature of dynamic IAPP we report here is unlikely to be unique. If functional membrane systems also leverage disordered regions to create alternative properties, their transitions between states would likely be under regulatory control. We have demonstrated here using IAPP that such transitions are sufficiently defined as to be addressable with a protein-specific ligand. The same may therefore be true for functional dynamic protein phase transitions.
## Methods

### Materials

Chemicals were purchased from Sigma Aldrich (St. Louis, MO) unless otherwise specified. Thioflavin T (ThT) was purchased from Acros Organics (Fair Lawn, NJ) and islet amyloid polypeptide (IAPP) from Elim Biopharmaceuticals (Hayward, CA, USA). ADM-116 was prepared as previously described16. IAPP stocks were prepared by solubilising ~2 mg protein in 7 M guanidinium hydrochloride. The solution was filtered (0.2 micron), transferred to a C-18 spin column, washed twice with water followed by a wash of 10% acetonitrile, 0.1% formic acid (v/v) and then eluted into 200 µL of 50% acetonitrile, 0.1% formic acid (v/v). The concentration of IAPP was determined by OD at 280 nm (ε = 1400 M−1 cm−1). The solution was then divided into single-use aliquots (20–50 µL), lyophilised, and stored at −80 °C. Stock solutions of IAPP were prepared with water from these aliquots.

Alexa 488 carboxylic acid succinimidyl ester (A488), Atto 647 N succinimidyl ester (A647) and Alexa 594 succinimidyl ester (A594) dyes were purchased from Life Technologies (Carlsbad, CA). IAPP labelling was performed as described previously20. Briefly, IAPP was incubated with the succinimidyl ester dye on a MacroSpin column for 4 h. Labelled IAPP was eluted from the MacroSpin column with 50% acetonitrile/0.2% formic acid solution. This was then diluted with 7 M guanidinium hydrochloride solution to a total organic content of <5%. Labelled protein was then purified by reverse-phase high-performance liquid chromatography, and identity was confirmed by mass spectrometry. Aliquots were lyophilised and stored at −80 °C. Stocks at 100 μM in water were prepared immediately before use.

IAPP control fibres were prepared by incubating 50 μM hIAPP and 100 nM IAPPA647 in 50 mM sodium phosphate buffer, 100 mM KCl, pH 7.4, for 24 h. Fibres were pelleted at 21,000 g for 30 min and resuspended three times using water.

### GPMV

GPMVs were isolated from INS-1 cells as previously described13.
Briefly, cells were plated in 35 mm dishes and cultured for 48 h, washed twice with 10 mM HEPES, 150 mM NaCl, 2 mM CaCl2 (pH = 7.4) and then exposed to 2 mM N-ethyl maleimide (NEM, Sigma Aldrich, St. Louis, MO, USA) for 2 h. Collected samples were then passed over a gravity-flow column (Bio-Rad) containing size-exclusion Sephacryl of pore size 400-HR (Sigma Aldrich, St. Louis, MO, USA) to separate GPMVs from residual cell debris. For leakage assays, the thiol-reactive fluorescent probe CellTracker Orange (Thermo Scientific, Rochester, NY, USA) was first applied to cells at 1:1000 dilution and incubated for 1 h at 37 °C. The phospholipid content of unlabelled and labelled final material was measured by total phosphate assay. For leakage assays monitoring the influx of GFP, recombinantly expressed GFP (500 nM) was added to the GPMV-containing solution and the increase in fluorescence inside the GPMV was monitored.

### GPMV imaging

Images were obtained in 8-well NUNC chambers (Thermo Scientific, Rochester, NY, USA) containing 250 µl of GPMVs at 5 μM phospholipid concentration. Imaging was carried out at the Yale Department of Molecular, Cellular, and Developmental Biology imaging facility on a Zeiss LSM 510 confocal microscope, using a ×63 Plan-Apo/1.4-NA oil-immersion objective with DIC capability (Carl Zeiss, Oberkochen, Germany). Image acquisition and processing were achieved using Zeiss Efficient Navigation (ZEN) and Image J software31.

### Fluorescence correlation spectroscopy (FCS)

FCS measurements were made on an LSM 880 Airyscan system NLO/FCS Confocal microscope (Carl Zeiss, Oberkochen, Germany) with a C-Apochromat ×40/1.2 NA UV-VIS-IR Korr. water immersion objective (Carl Zeiss, Oberkochen, Germany). Thiol-conjugated molecules were excited at 594 nm. The confocal pinhole diameter was adjusted to 70 μm. Emission signals were detected through a 607-nm long-pass filter. Measurements were made in 8-well chambered coverglasses (Nunc, Rochester, NY, USA).
All samples were incubated in GPMV buffer (10 mM HEPES, 150 mM NaCl, 2 mM CaCl2 (pH = 7.4)) for 1 h prior to taking measurements. Autocorrelation data were collected at regular intervals (5 min), with each autocorrelation curve collected over 10 s with 30 repeats. Autocorrelations were fit using the software QuickFit3.032. For thiol-conjugates, a model for a single-diffusing species undergoing 3D Brownian diffusion was used:

$$G(\tau) = \frac{1}{N} \times \left[1 + \frac{\tau}{\tau_{d,1}}\right]^{-1} \times \left[1 + \frac{\tau}{s^{2}\tau_{d,1}}\right]^{-\frac{1}{2}}, \quad (1)$$

Here, N is the total number of thiol-conjugated molecules in the detection volume. The characteristic translational diffusion time of a diffusing particle is given by τd,1. The structure factor, s, was determined as a free parameter for solutions of free Alexa 594 hydrazide dye and then fixed to the experimentally determined value of 0.1 for all subsequent fittings. For conjugates eluting during the second decay phase of leakage, diffusion was assessed by burst counts within the time frame analysed.

### Confocal microscopy and cell imaging

Images were obtained in 8-well NUNC chambers (Thermo Scientific, Rochester, NY, USA) seeded with 20,000–25,000 cells/well. Cells were cultured for 48 h after passage before beginning experiments. For time-dependent co-localisation experiments of IAPP with JC-1 mitochondrial staining, the medium contained 200 nM IAPPA647 and 13 µM unlabelled peptide. For experiments in the presence of ADM-116, 13 µM of molecule was introduced in the medium following a 3 h incubation of cells with IAPP. For experiments monitoring mitochondria, JC-1 was incubated with cells at 1:5000, at 37 °C for 45 min prior to addition of protein. Images were acquired after 48 h total incubation time. Imaging was carried out using a ×100 Plan-Apo/1.4-NA oil-immersion objective with DIC capability (Carl Zeiss, Oberkochen, Germany).
For all experiments reporting on the co-localisation of labelled IAPP, the gain setting for the blue channel was kept constant from sample to sample. Mitochondria containing JC-1 aggregates were detected in red (excitation 540 nm, emission 570 nm), and monomers in the green channel (excitation 490 nm, emission 520 nm). Image acquisition and processing were achieved using Zeiss Efficient Navigation (ZEN) and Image J software31.

### Intracellular imaging Förster resonance energy transfer

The INS-1 growth media was replaced with media containing 100 nM IAPPA488, 100 nM IAPPA647 (or 100 nM IAPPA594) and unlabelled IAPP and incubated for the timescales indicated in the text. Media were replaced with fresh media prior to imaging. In experiments where ADM-116 was used, the small molecule was added 3 h after incubation with protein. Images were acquired using a ×100 Plan-Apo/1.4-NA oil-immersion objective with DIC capability (Carl Zeiss, Oberkochen, Germany). For the donor channel, IAPPA488 was excited with a 488 nm Argon2 laser and detected through a 505/550 nm emission filter. For the acceptor channel, IAPPA647 was excited with a 633 nm laser and detected through a 730/750-nm emission filter. For all experiments the pinhole was kept constant to the Z-slice thickness of each filter channel. Single-cell images were obtained for donor alone, acceptor alone and donor–acceptor fusion channels. Image acquisition and processing were achieved using Zeiss Efficient Navigation (ZEN) and Image J software33. The Image J plugin RiFRET34 was used to calculate and remove the bleed-through for each channel and to calculate a pixel-based FRET efficiency. The FRET distance was then calculated using:

$$E = \frac{R_0^6}{R_0^6 + r^6}, \quad (2)$$

where E is the calculated efficiency of FRET energy transfer, R0 is the Förster distance and r is the distance between the donor and the acceptor.
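Eq. (2) and its inversion from a measured efficiency to a donor-acceptor distance can be sketched directly. The Förster distance in the example is a hypothetical value, since R0 depends on the particular dye pair.

```python
def fret_efficiency(r, r0):
    """Eq. (2): FRET efficiency for donor-acceptor separation r and
    Forster distance r0 (same length units)."""
    return r0 ** 6 / (r0 ** 6 + r ** 6)

def fret_distance(e, r0):
    """Invert Eq. (2): separation r recovered from efficiency e."""
    return r0 * ((1.0 - e) / e) ** (1.0 / 6.0)

# With an assumed r0 of 50 A, an efficiency of 0.5 sits exactly at r0,
# and higher efficiencies correspond to shorter separations.
r_half = fret_distance(0.5, 50.0)
```

The sixth-power dependence is what makes FRET so distance-sensitive: efficiencies shift steeply within roughly 0.5-1.5 times R0 and saturate outside that range.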
### Imaging FRET GPMV experiments were conducted in 8-well NUNC chambers (Thermo Scientific, Rochester, NY, USA) including 250 µl of GPMV stock solution at 5 μM apparent phospholipid (in monomer units). Wells containing GPMVs were mixed with 100 nM IAPPA488, 100 nM IAPPA647 (or 100 nM IAPPA594) and unlabelled IAPP. In experiments where ADM-116 was used, the small molecule was added at the same time as protein. The same microscope setup and analysis procedure was used to image FRET in GPMVs and cells. ### Cell culture Rat insulinoma INS-1 cells (832/13, Dr. Gary W. Cline, Department of Internal Medicine, Yale University) were cultured at 37 °C and 5% CO2 in phenol-red-free RPMI 1640 media supplemented with 10% fetal bovine serum, 1% penicillin/streptomycin (Life Technologies, Carlsbad, CA, USA), and 2% INS-1 stock solution (500 mM HEPES, 100 mM l-glutamine, 100 mM sodium pyruvate and 2.5 mM β-mercaptoethanol). Cells were passaged upon reaching ~95% confluence (0.25% Trypsin-EDTA, Life Technologies), propagated, and/or used in experiments. Cells used in experiments were pelleted and resuspended in fresh media with no Trypsin-EDTA. We follow ATCC guidelines for authentication of non-human cell-lines (technical bulletin #8, http://atcc.org). This includes monitoring mycoplasma, morphology and growth rate. We regularly thaw fresh stocks and begin and track new sequences of passages. We do not perform species verification. ### Cell viability Cell viability was measured colourimetrically using the Cell-Titer Blue (CTB, Promega, Madison, WI, USA) fluorescence-based assay. Cells were plated at a density of 5000 cells/well in 96-well plates (BD Biosciences, San Diego, CA). Peptide was directly introduced to each well after 48 h of culture and then incubated for an additional 48 h. For time-dependent experiments, cells were incubated with peptide for the specified time points. 
After the incubation period, 20 µL CTB reagent was added to each well and incubated at 37 °C and 5% CO2 for 2.5–3.5 h. Fluorescence of the resorufin product was measured on a FluoDia T70 fluorescence plate reader (Photon Technology International, Birmingham, NJ, USA). All wells included the same amount of water to account for different concentrations of peptide added to sample wells. Wells that included water vehicle but not peptide served as the negative control (0% toxic), and wells containing 10% DMSO were the positive control (100% toxic). Percent toxicity was calculated using the following equation: $$\mathrm{\%\,Toxicity} = 100 - \left[ {100 \times \left( {\frac{{\langle S\rangle - \langle P\rangle }}{{\langle N\rangle - \langle P\rangle }}} \right)} \right].$$ (3) Each independent variable is the average of eight plate replicates from the negative control (⟨N⟩), positive control (⟨P⟩) and samples (⟨S⟩). Results presented for viability experiments are an average of three such experiments conducted independently on different days. Error bars represent the standard error of the mean. ### Phosphate assay Lipid concentrations for GPMV preparations were determined by measuring total phosphate [35], assuming that all measured phosphate is from phospholipids, and that all lipids are phospholipids. This is a practical assumption designed to ensure reproducibility. ### Simulation of diluted-FRET Expected values of FRETeff for dynamic, isotropic distributions of spherically distributed IAPP were computed by numerical integration coded in-house using Mathematica (Wolfram Research, Champaign, IL). Briefly, the experimental ratio of labelled to unlabelled protein defines a binomial probability for the number of fluorophores distributed within trial spheres. The size of the spheres was dictated by constants (stated in the main text) for protein density and the total number of placed protein molecules. Spherical oligomers were generated randomly until the cumulative FRETeff converged to a single value. 
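The simulation itself was implemented in-house in Mathematica. As a rough illustration of the same binomial dye-placement idea (not the authors' code; oligomer size, doping ratio, sphere radius and R0 below are arbitrary placeholder values), a minimal Monte Carlo sketch might look like:

```python
import random

def pair_efficiency(r, r0):
    # Forster transfer probability for a single donor-acceptor pair.
    return r0**6 / (r0**6 + r**6)

def donor_efficiency(distances, r0):
    # Each potential dye pair transfers independently; the donor's photon
    # survives only if every pair fails to transfer.
    fail = 1.0
    for r in distances:
        fail *= 1.0 - pair_efficiency(r, r0)
    return 1.0 - fail

def random_point_in_sphere(radius, rng):
    # Rejection sampling of a uniform point inside a sphere.
    while True:
        x, y, z = (rng.uniform(-radius, radius) for _ in range(3))
        if x * x + y * y + z * z <= radius * radius:
            return (x, y, z)

def simulate_fret_eff(n_monomers=20, doping_x=18, radius=3.0, r0=5.6,
                      n_trials=500, seed=0):
    # Monte Carlo over trial spherical oligomers: each monomer is a donor,
    # an acceptor, or unlabelled, with probabilities set by the 1:1:X ratio.
    rng = random.Random(seed)
    p = 1.0 / (doping_x + 2.0)  # probability of being a donor (= acceptor)
    effs = []
    for _ in range(n_trials):
        donors, acceptors = [], []
        for _ in range(n_monomers):
            u = rng.random()
            pos = random_point_in_sphere(radius, rng)
            if u < p:
                donors.append(pos)
            elif u < 2 * p:
                acceptors.append(pos)
        for d in donors:
            dists = [((d[0]-a[0])**2 + (d[1]-a[1])**2 + (d[2]-a[2])**2) ** 0.5
                     for a in acceptors]
            effs.append(donor_efficiency(dists, r0))
    return sum(effs) / len(effs) if effs else 0.0
```

The `donor_efficiency` helper reproduces the worked example in the text: one donor with three acceptors all at R0 gives 1 − 0.5³ = 0.875.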
Occurrences of multiple acceptors and donors within a single oligomer were accommodated by treating each potential dye pair, ij, as having an independent probability of resonance transfer, p_{ij}. For example, a trial oligomer containing a single donor and three acceptors located a distance of R0 away gives a FRETeff = 0.875. Effects of donor–donor and acceptor–acceptor interactions were not considered. In general, multi-fluorophore corrections contribute negligibly to computations performed at the experimental ratios used in this work (Fig. 6c). The fraction of oligomers of size N with one donor and one acceptor at a doping ratio of 1:1:X is binomial with a probability of (2/(X + 2)). This is further scaled by 0.5 to reflect the frequency that the two labelled IAPPs within the oligomer are donor and acceptor. Higher degrees of labelling within oligomers are rare at the doping ratios used in this work and so are ignored. To calculate the number of acceptor molecules contributing to the FRET signal detected, the number of pixels showing FRET was divided by the total number of acceptor pixels obtained from the acceptor channel. ### Statistical analysis For each experiment, means and standard deviations (specified in Figure Legends) of parameters measured were determined. Statistical analyses were performed using Student’s t-test and expressed as p-values in the text. All analyses were carried out with GraphPad Prism. ### Data availability The data supporting the findings of this manuscript are available from the corresponding authors upon reasonable request. ## References 1. Courchaine, E. M., Lu, A. & Neugebauer, K. M. Droplet organelles? EMBO J. 35, 1603–1612 (2016). 2. Brangwynne, C. P., Mitchison, T. J. & Hyman, A. A. Active liquid-like behavior of nucleoli determines their size and shape in Xenopus laevis oocytes. Proc. Natl Acad. Sci. USA 108, 4334–4339 (2011). 3. Su, X. et al. 
Phase separation of signaling molecules promotes T cell receptor signal transduction. Science 352, 595–599 (2016). 4. Aguzzi, A. & Altmeyer, M. Phase separation: linking cellular compartmentalization to disease. Trends Cell Biol. 26, 547–558 (2016). 5. Mukherjee, A., Morales-Scheihing, D., Butler, P. C. & Soto, C. Type 2 diabetes as a protein misfolding disease. Trends Mol. Med. 21, 439–449 (2015). 6. Williamson, J. A., Loria, J. P. & Miranker, A. D. Helix stabilization precedes aqueous and bilayer-catalyzed fiber formation in islet amyloid polypeptide. J. Mol. Biol. 393, 383–396 (2009). 7. Costes, S., Langen, R., Gurlo, T., Matveyenko, A. V. & Butler, P. C. beta-Cell failure in type 2 diabetes: a case of asking too much of too few? Diabetes 62, 327–335 (2013). 8. Mukherjee, A., Morales-Scheihing, D., Butler, P. C. & Soto, C. Type 2 diabetes as a protein misfolding disease. Trends Mol. Med. 21, 439–449 (2015). 9. Last, N. B., Schlamadinger, D. E. & Miranker, A. D. A common landscape for membrane-active peptides. Protein Sci. 22, 870–882 (2013). 10. Brender, J. R., Salamekh, S. & Ramamoorthy, A. Membrane disruption and early events in the aggregation of the diabetes related peptide IAPP from a molecular perspective. Acc. Chem. Res. 45, 454–462 (2012). 11. Kegulian, N. C. et al. Membrane curvature-sensing and curvature-inducing activity of islet amyloid polypeptide and its implications for membrane disruption. J. Biol. Chem. 290, 25782–25793 (2015). 12. Sezgin, E. et al. Elucidating membrane structure and protein behavior using giant plasma membrane vesicles. Nat. Protoc. 7, 1042–1051 (2012). 13. Schlamadinger, D. E. & Miranker, A. D. Fiber-dependent and -independent toxicity of islet amyloid polypeptide. Biophys. J. 107, 2559–2566 (2014). 14. Wimley, W. C. Describing the mechanism of antimicrobial peptide action with the interfacial activity model. ACS Chem. Biol. 5, 905–917 (2010). 15. Elbaum-Garfinkle, S., Ramlall, T. & Rhoades, E. 
The role of the lipid bilayer in tau aggregation. Biophys. J. 98, 2722–2730 (2010). 16. Kumar, S. et al. Foldamer-mediated manipulation of a pre-amyloid toxin. Nat. Commun. 7, 11412 (2016). 17. Kumar, S., Birol, M. & Miranker, A. D. Foldamer scaffolds suggest distinct structures are associated with alternative gains-of-function in a preamyloid toxin. Chem. Commun. 52, 6391–6394 (2016). 18. Last, N. B., Rhoades, E. & Miranker, A. D. Islet amyloid polypeptide demonstrates a persistent capacity to disrupt membrane integrity. Proc. Natl Acad. Sci. USA 108, 9460–9465 (2011). 19. Perry, R. J., Zhang, D., Zhang, X. M., Boyer, J. L. & Shulman, G. I. Controlled-release mitochondrial protonophore reverses diabetes and steatohepatitis in rats. Science 347, 1253–1256 (2015). 20. Magzoub, M. & Miranker, A. D. Concentration-dependent transitions govern the subcellular localization of islet amyloid polypeptide. FASEB J. 26, 1228–1238 (2012). 21. Perelman, A. et al. JC-1: alternative excitation wavelengths facilitate mitochondrial membrane potential cytometry. Cell Death Dis. 3, e430 (2012). 22. Vaux, D. L. & Korsmeyer, S. J. Cell death in development. Cell 96, 245–254 (1999). 23. Schuler, B., Lipman, E. A. & Eaton, W. A. Probing the free-energy surface for protein folding with single-molecule fluorescence spectroscopy. Nature 419, 743–747 (2002). 24. Wolfe, L. S. et al. Protein-induced photophysical changes to the amyloid indicator dye thioflavin T. Proc. Natl Acad. Sci. USA 107, 16863–16868 (2010). 25. Gurlo, T. et al. Evidence for proteotoxicity in beta cells in type 2 diabetes: toxic islet amyloid polypeptide oligomers form intracellularly in the secretory pathway. Am. J. Pathol. 176, 861–869 (2010). 26. Bram, Y. et al. Apoptosis induced by islet amyloid polypeptide soluble oligomers is neutralized by diabetes-associated specific antibodies. Sci. Rep. 4, 4267 (2014). 27. Young, L. M., Cao, P., Raleigh, D. P., Ashcroft, A. E. & Radford, S. E. 
Ion mobility spectrometry-mass spectrometry defines the oligomeric intermediates in amylin amyloid formation and the mode of action of inhibitors. J. Am. Chem. Soc. 136, 660–670 (2014). 28. Abedini, A. et al. Time-resolved studies define the nature of toxic IAPP intermediates, providing insight for anti-amyloidosis therapeutics. Elife 5, 12977 (2016). 29. Nath, A., Miranker, A. D. & Rhoades, E. A membrane-bound antiparallel dimer of rat islet amyloid polypeptide. Angew. Chem. Int. Ed. Engl. 50, 10859–10862 (2011). 30. Banjade, S. & Rosen, M. K. Phase transitions of multivalent proteins can promote clustering of membrane receptors. Elife 3, e04123 (2014). 31. Schneider, C. A., Rasband, W. S. & Eliceiri, K. W. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 9, 671–675 (2012). 32. Krieger, J. W. & Langowski, J. QuickFit 3.0: a data evaluation application for biophysics. http://www.dkfz.de/Macromol/quickfit/ (2015). 33. Soty, M. V. M., Soriano, S., del Carmen Carmona, M., Nadal, Á. & Novials, A. Involvement of ATP-sensitive potassium (KATP) channels in the loss of beta-cell function induced by human islet amyloid polypeptide. J. Biol. Chem. 286, 40857–40866 (2011). 34. Roszik, J., Lisboa, D., Szollosi, J. & Vereb, G. Evaluation of intensity-based ratiometric FRET in image cytometry—approaches and a software solution. Cytom. A 75, 761–767 (2009). 35. King, E. J. The colorimetric determination of phosphorus. Biochem. J. 26, 292–297 (1932). ## Acknowledgements This research was supported by the NIH GM094693 (to A.D.M.), GM102815 (to A.D.M. and E.R.) and an American Diabetes Association mentor-based postdoctoral fellowship to M.B. We thank Prof. M. Rosen for critical reading of this work. We thank Alex Miranker for assistance with programming diluted-FRET simulations and Dr. J. Wolenski for technical assistance with microscopy. ## Author information Authors ### Contributions M.B., E.R. and A.D.M. designed the project and experiments. M.B. performed the experiments. 
M.B., E.R. and A.D.M. analysed the data and wrote the manuscript. S.K. synthesised the oligoquinoline amides. ### Corresponding authors Correspondence to Elizabeth Rhoades or Andrew D. Miranker. ## Ethics declarations ### Competing interests The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Birol, M., Kumar, S., Rhoades, E. et al. Conformational switching within dynamic oligomers underpins toxic gain-of-function by diabetes-associated amyloid. Nat Commun 9, 1312 (2018). https://doi.org/10.1038/s41467-018-03651-9
https://ruebot.net/post/20180430/
At this past week’s Archives Unleashed datathon, I jokingly created some wordclouds of my Co-PI’s timelines. Mat Kelly asked about the process this morning, so here is a little how-to of the pipeline: Requirements: Process: Use twarc to grab some data. $ twarc timeline lintool > lintool.jsonl Extract the tweet text. $ cat lintool.jsonl | jq -r .full_text > lintool_tweet.txt Remove all the URLs from the tweets. $ sed -e 's!http[s]\?://\S*!!g' lintool_tweet.txt > lintool.txt Create a Wordcloud. $ wordcloud_cli.py --text lintool.txt --imagefile lintool.png Nota bene • Each of these commands has a whole lot of options. Check them out, and experiment. • Yes, there is probably a better way to do this, and you could even make it into a one-liner. I pulled this together as a favour to Mat. • We were going to initially include wordclouds of collections in AUK, but wordcloud_cli.py doesn’t perform well at scale. Scale being, feeding it txt files of 5G up to 500G of raw text. Maybe one day we’ll revisit it.
https://brilliant.org/problems/ratio-15/
# Just In The Middle Geometry Level 1 The above shows a square that has both a circumscribed circle and a circle inscribed inside of it. Find the ratio of the area of the smaller circle to the area of the larger circle.
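A quick derivation (not part of the original problem page): for a square of side $s$, the inscribed circle has radius $s/2$ while the circumscribed circle passes through the corners, so its radius is half the diagonal, $s\sqrt{2}/2$.

```latex
r_{\text{in}} = \frac{s}{2}, \qquad
R_{\text{out}} = \frac{s\sqrt{2}}{2}
\quad\Longrightarrow\quad
\frac{A_{\text{in}}}{A_{\text{out}}}
  = \frac{\pi (s/2)^{2}}{\pi (s\sqrt{2}/2)^{2}}
  = \frac{s^{2}/4}{s^{2}/2}
  = \frac{1}{2}.
```

So the areas are in the ratio 1 : 2, independent of the size of the square.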
https://testbook.com/question-answer/which-of-the-following-diodes-is-most-suitable-for--5f6c7e6484c62e376d63ec67
# Which of the following diodes is most suitable for detection of microwave signals? This question was previously asked in ISRO (VSSC) Technical Assistant Electronics 2018 Official Paper 1. PIN diode 2. Schottky barrier diode 3. Varactor diode 4. P-N junction diode Option 2 : Schottky barrier diode ## Detailed Solution ​The Schottky barrier diode is most suitable for the detection of microwave signals. Schottky Diode: It is not a typical diode because it does not have a p-n junction. Instead, it consists of a doped semiconductor (usually n-type) and a metal bound together. • ​The Schottky diode’s significant characteristic is its fast switching speed, as it doesn't allow the diode to reach saturation. • This makes the Schottky diode useful at microwave frequencies and in digital applications. • The Schottky diode is also known as the Schottky barrier diode, surface barrier diode, majority carrier device, hot-electron diode, or hot carrier diode. Varactor Diode: • Varactor diode refers to the variable-capacitance diode, meaning the capacitance of the diode varies with the applied voltage when it is reverse biased. • The junction capacitance across a reverse-biased p-n junction is given by ​           $$C=\frac{A\epsilon}{W}$$ • As the reverse bias voltage increases, the depletion region width increases, resulting in a decrease in the junction capacitance. Tunnel diode: • It is a highly doped p-n junction diode, used for low-voltage, high-frequency switching applications. • It works on the tunnelling principle. • When compared to a normal p-n junction diode, a tunnel diode has a narrow depletion width. 
• In normal forward-biased operation, it exhibits a “negative resistance region”. • The negative differential resistance in their operation allows them to be used as oscillators, amplifiers, and switching circuits. • Their low capacitance allows them to function at microwave frequencies. • Tunnel diodes are not good rectifiers, as they have relatively high leakage current when reverse biased. PIN diode: • A PIN diode has a wide intrinsic layer sandwiched between a P and an N layer. • The wide intrinsic region makes the PIN diode an inferior rectifier (one typical function of a diode), but it makes it suitable for attenuators, fast switches, photodetectors, and high-voltage power electronics applications. • Thus, it is mostly used as a microwave switch. Note: • A PIN diode has three regions: • P-type layer • Intrinsic layer • N-type layer • The depletion region that exists between the p and n regions is large. The intrinsic layer between the p and n regions contains no charge carriers. • In reverse bias, as the depletion region of the diode has no charge carriers, it works as an insulator. • If the PIN diode is forward biased, carriers are injected into the depletion region and current flows.
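To make the varactor relationship C = Aε/W concrete, here is an illustrative calculation. All numbers (junction area, zero-bias depletion width, built-in potential) are made-up example values, not figures from the question, and the square-root bias dependence is the textbook abrupt-junction approximation.

```python
# Illustrative varactor sketch: junction capacitance C = eps * A / W falls
# as the depletion width W widens with reverse bias (example values only).
EPS_SI = 11.7 * 8.854e-12  # permittivity of silicon, F/m

def junction_capacitance(area_m2, width_m):
    # Parallel-plate approximation of the depletion-region capacitance.
    return EPS_SI * area_m2 / width_m

def depletion_width(v_reverse, w0=0.5e-6, vbi=0.7):
    # Abrupt-junction approximation: W grows as sqrt(Vbi + VR).
    return w0 * ((vbi + v_reverse) / vbi) ** 0.5

# Capacitance shrinks as the reverse bias widens the depletion region.
area = 1e-8  # 100 um x 100 um junction, in m^2
for vr in (0.0, 1.0, 5.0):
    c = junction_capacitance(area, depletion_width(vr))
    print(f"VR = {vr:3.1f} V  ->  C = {c * 1e12:.2f} pF")
```

The decreasing capacitance with reverse bias is exactly the tuning behaviour that makes varactors useful in voltage-controlled oscillators, whereas the PIN diode's wide intrinsic layer gives it a small, nearly bias-independent capacitance suited to switching.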
https://mixomics.org/mixmc/hmp-bodysites-case-study/
# Case Study of MixMC sPLS-DA with HMP Bodysite data (Repeated Measures) The mixMC framework is one that is specifically built for microbial datasets and will be used here on the Human Microbiome Project (HMP) 16S dataset. A sPLS-DA methodology will be employed in order to predict the bodysite a given sample was drawn from based on the OTU data (Operational Taxonomic Unit). The model will select the optimal set of OTUs to perform this prediction. This case study focuses on the exploration and analysis of a repeated measurement design – meaning a multilevel framework will be employed. For background information on the mixMC, multilevel or sPLS-DA methods, refer to the MixMC Method Page, Multilevel Page or sPLS-DA Method Page. ## R script The R script used for all the analysis in this case study is available here. ## To begin Load the latest version of mixOmics. Note that the seed is set such that all plots can be reproduced. This should not be included in proper use of these functions. library(mixOmics) # import the mixOmics library set.seed(5249) # for reproducibility, remove for normal use ### The data The data being used includes only the most diverse bodysites yielded from the HMP studies. It features a repeated measures design which will be accounted for in the following analysis. It is assumed that the data are offset and pre-filtered, as described in mixMC pre-processing steps. The mixOmics HMP dataset is accessed via diverse.16S and contains the following: • diverse.16S$data.TSS (continuous matrix): 162 rows and 1674 columns. The prefiltered normalised data using Total Sum Scaling normalisation. • diverse.16S$data.raw (continuous matrix): 162 rows and 1674 columns. The prefiltered raw count OTU data which include a 1 offset (i.e. no 0 values). • diverse.16S$taxonomy (categorical matrix): 1674 rows and 6 columns. Contains the taxonomy (ie. Phylum, … Genus, Species) of each OTU. • diverse.16S$indiv (categorical matrix): 162 rows and 5 columns. 
Contains all the sample meta data recorded. • diverse.16S$bodysite (categorical vector): factor of length 162 indicating the bodysite with levels Antecubital_fossa, Stool and Subgingival_plaque. • diverse.16S$sample (categorical vector): factor of length 162 indicating the unique individual ID of each sample. The raw OTU data will be used as predictors (X dataframe) for the bodysite (Y vector). The subject corresponding to each sample is also extracted such that repeated measures can be accounted for. The dimensions of the predictors are confirmed and the distribution of the response vector is observed (note that it is a balanced dataset). data("diverse.16S") # extract the microbial data X <- diverse.16S$data.raw # set the raw OTU data as the predictor dataframe Y <- diverse.16S$bodysite # set the bodysite class as the response vector sample <- diverse.16S$sample dim(X) # check dimensions of predictors ## [1] 162 1674 summary(Y) # check distribution of response ## Antecubital_fossa Stool Subgingival_plaque ## 54 54 54 ## Initial Analysis ### Preliminary Analysis with PCA The first exploratory step involves using PCA (unsupervised analysis) to observe the general structure and clustering of the data to aid in later analysis. As this data are compositional by nature, a centered log ratio (CLR) transformation needs to be undergone in order to reduce the likelihood of spurious results. This can be done by using the logratio parameter in the PCA function. The sample object is also passed in to the multilevel parameter. Here, a PCA with a sufficiently large number of components (ncomp = 10) is generated to choose the final reduced dimension of the model. Using the 'elbow method' in Figure 1, it seems that two components will be more than sufficient for a PCA model. Note: different log ratio transformations, normalisations and/or multilevel designs may yield differing results. 
Some exploration is recommended to gain an understanding of the impact of each of these processes. # undergo PCA with 10 components and account for repeated measures diverse.pca = pca(X, ncomp = 10, logratio = 'CLR', multilevel = sample) plot(diverse.pca) # plot explained variance FIGURE 1: Bar plot of the proportion of explained variance by each principal component yielded from a PCA. Below (in Figure 2), the samples can be seen projected onto the first two Principal Components. (a) shows the case where the repeated measures were not accounted for, while (b) does control for this. Without a multilevel approach, the total explained variation decreases from 43% to 34%. In Figure 2a, the separation of each bodysite is distinct, but not the strongest. This is vastly improved when the multilevel framework is employed. The first component separates all three bodysites to a moderate degree, primarily discriminating between the stool and subgingival plaque. The second principal component separates the antecubital fossa bodysite from the others. # undergo PCA with 2 components diverse.pca.nonRM = pca(X, ncomp = 2, logratio = 'CLR') # undergo PCA with 2 components and account for repeated measures diverse.pca.RM = pca(X, ncomp = 2, logratio = 'CLR', multilevel = sample) plotIndiv(diverse.pca.nonRM, # plot samples projected onto PCs ind.names = FALSE, # not showing sample names group = Y, # color according to Y, title = '(a) Diverse.16S PCA Comps 1&2 (nonRM)') plotIndiv(diverse.pca.RM, # plot samples projected onto PCs ind.names = FALSE, # not showing sample names group = Y, # color according to Y legend = TRUE, title = '(b) Diverse.16S PCA Comps 1&2 (RM)') FIGURE 2: Sample plots from PCA performed on the Diverse 16S OTU data. Samples are projected into the space spanned by the first two components. (a) depicts this when the repeated measures is not accounted for. (b) does use a multilevel framework. 
(‘RM’ = repeated measures) ### Initial sPLS-DA model The mixMC framework uses the sPLS-DA multivariate analysis from mixOmics [3]. Hence, the next step involves generating a basic PLS-DA model such that it can be tuned and then evaluated. In many cases, the maximum number of components needed is k-1 where k is the number of categories within the outcome vector (y) – which in this case is 3. Once again, the logratio parameter is used here to ensure that the OTU data are transformed into an Euclidean space. basic.diverse.plsda = plsda(X, Y, logratio = 'CLR', ncomp = nlevels(Y), multilevel = sample) ## Tuning sPLS-DA ### Selecting the number of components #### The ncomp Parameter To set a baseline from which to compare a tuned model, the performance of the basic model is assessed via the perf() function. Here, a 5-fold, 10-repeat design is utilised. To obtain a more reliable estimation of the error rate, the number of repeats should be increased (between 50 to 100). Figure 3 shows the error rate as more components are added to the model (for all three distance metrics). As this is a balanced dataset, the overall error rate and balanced error rate are the same (hence there seemingly being only one line on each set of axes in Figure 3). The plot indicates a decrease in the classification error rate (i.e. an increase in classification performance) from one component to 2 components in the model. The performance does not increase after 2 components, which suggests ncomp = 2 for a final PLS-DA model. Note that for the sparse PLS-DA we may obtain a different optimal ncomp. 
# assess the performance of the sPLS-DA model using repeated CV basic.diverse.perf.plsda = perf(basic.diverse.plsda, validation = 'Mfold', folds = 5, nrepeat = 10, progressBar = FALSE) # extract the optimal component number optimal.ncomp <- basic.diverse.perf.plsda$choice.ncomp["BER", "max.dist"] plot(basic.diverse.perf.plsda, overlay = 'measure', sd=TRUE) # plot this tuning FIGURE 3: Classification error rates for the basic sPLS-DA model on the Diverse OTU data. Includes the standard and balanced error rates across all three distance metrics. ### Selecting the number of variables #### The keepX Parameter Using the tune.splsda() function, the optimal number of components can be confirmed as well as the number of features to use for each component can be determined. Once again, for real analysis a larger number of repeats should be used compared to the 5-fold, 10-repeat structure used here. It can be seen in Figure 4 that adding a third component does not improve the performance of the model, hence ncomp = 2 remains valid. The diamonds indicate the optimal keepX values for each of these components based on the balanced error rate. set.seed(5249) grid.keepX = c(seq(5,150, 5)) diverse.tune.splsda = tune.splsda(X, Y, ncomp = 3, # use optimal component number logratio = 'CLR', # transform data to euclidean space multilevel = sample, test.keepX = grid.keepX, validation = c('Mfold'), folds = 5, nrepeat = 10, # use repeated CV dist = 'max.dist', # maximum distance as metric progressBar = FALSE) # extract the optimal component number and feature count per component optimal.ncomp = diverse.tune.splsda$choice.ncomp$ncomp optimal.keepX = diverse.tune.splsda$choice.keepX[1:optimal.ncomp] plot(diverse.tune.splsda) # plot this tuning FIGURE 4: Tuning keepX for the sPLS-DA performed on the Diverse OTU data. 
Each coloured line represents the balanced error rate (y-axis) per component across all tested keepX values (x-axis) with the standard deviation based on the repeated cross-validation folds. ## Final Model Following this tuning, the final sPLS-DA model can be constructed using these optimised values. diverse.splsda = splsda(X, Y, logratio= "CLR", # form final sPLS-DA model multilevel = sample, ncomp = optimal.ncomp, keepX = optimal.keepX) ## Plots ### Sample Plots The sample plot found in Figure 5 depicts the projection of the samples onto the first two components of the sPLS-DA model. The subgingival plaque is adequately separated from the other two sites along the first component. The antecubital fossa and stool sites are better separated by the second component, though the overlapping confidence ellipses show that this component is not the best at discriminating between them. Do not hesitate to add other components and look at the sample plot to visualise the potential benefit of adding a third component, as the current separation of bodysites could do with improvement. plotIndiv(diverse.splsda, comp = c(1,2), ind.names = FALSE, ellipse = TRUE, # include confidence ellipses legend = TRUE, legend.title = "Bodysite", title = 'Diverse OTUs, sPLS-DA Comps 1&2') FIGURE 5: Sample plots from sPLS-DA performed on the Diverse OTU data. Samples are projected into the space spanned by the first two components. Another way to visualise the similarity of samples is through the use of a clustered image map (CIM). Figure 6 shows some relationships between OTUs and certain bodysites. For example, the right-most cluster of OTUs seems to be positively associated with the subgingival plaque site – while the vast majority of other OTUs have a negative association with this same site. 
    cim(diverse.splsda,
        comp = c(1, 2),
        row.sideColors = color.mixo(Y), # colour rows based on bodysite
        legend = list(legend = c(levels(Y))),
        title = 'Clustered Image Map of Diverse Bodysite data')

FIGURE 6: Clustered Image Map of the Diverse OTU data after sPLS-DA modelling. Only the keepX selected features for components 1 and 2 are shown, with the colour of each cell depicting the raw OTU value after a CLR transformation.

### Variable Plots

Next, the relationship between the OTUs and the sPLS-DA components is examined. Note that cutoff = 0.5, such that any feature with a correlation vector length less than 0.5 is not shown. The three clusters of variables within this plot correspond quite well to the three bodysite clusters in Figure 5. Interpreting Figure 7 in conjunction with Figure 5 provides key insights into which OTUs are responsible for identifying each bodysite. For example, the cluster of 4 OTUs at the negative end of the first component (left side) in Figure 7 are likely to be key OTUs in defining the microbiome in the subgingival area.

    plotVar(diverse.splsda,
            comp = c(1, 2),
            cutoff = 0.5,
            rad.in = 0.5,
            var.names = FALSE,
            pch = 19,
            title = 'Diverse OTUs, Correlation Circle Plot Comps 1&2')

FIGURE 7: Correlation circle plot representing the OTUs selected by sPLS-DA performed on the Diverse OTU data. Only the OTUs selected by sPLS-DA are shown in components 1 and 2. A cutoff of 0.5 was used.

## Evaluation of sPLS-DA

The mixOmics package also contains the ability to assess the classification performance of the constructed sPLS-DA model, via the perf() function once again. The mean error rates per component and type of distance are output. It can be beneficial to increase the number of repeats for more accurate estimations. It is clear from the output below that adding the second component drastically decreases the error rate.
    set.seed(5249) # for reproducible results for this code, remove for your own code

    # evaluate classification using repeated CV and maximum distance as metric
    diverse.perf.splsda = perf(diverse.splsda, validation = 'Mfold',
                               folds = 5, nrepeat = 10,
                               progressBar = FALSE, dist = 'max.dist')
    diverse.perf.splsda$error.rate

    ## $overall
    ##         max.dist
    ## comp1 0.33333333
    ## comp2 0.01728395
    ##
    ## $BER
    ##         max.dist
    ## comp1 0.33333333
    ## comp2 0.01728395

## OTU Selection

The sPLS-DA selects the most discriminative OTUs that best characterise each body site. The loading plots below (Figures 9a&b) display the abundance of each OTU and the body site in which it is most abundant, for each sPLS-DA component. Viewing these bar plots in combination with Figures 5 and 7 aids in understanding the similarity between bodysites. For both components, the 20 highest contributing features are depicted. OTUs selected on the first component are all highly abundant in subgingival plaque samples, based on the mean of each OTU per body site. This makes sense based on the interpretations of Figures 5 and 7. All OTUs selected on the second component are strongly associated with the antecubital fossa site.

    plotLoadings(diverse.splsda, comp = 1,
                 method = 'mean', contrib = 'max',
                 size.name = 0.8, legend = FALSE,
                 ndisplay = 20)

    plotLoadings(diverse.splsda, comp = 2,
                 method = 'mean', contrib = 'max',
                 size.name = 0.7, legend = FALSE,
                 ndisplay = 20)

FIGURE 9: The loading values of the top 20 (or 5 in the case of comp. 1) contributing OTUs on the first (a) and second (b) components of a sPLS-DA performed on the Diverse OTU dataset. Each bar is coloured based on which bodysite had the maximum mean value of that OTU.

To take this a step further, the stability of each OTU on these components can be assessed via the output of the perf() function. The values below (between 0 and 1) indicate the proportion of models (during the repeated cross-validation) that used a given OTU as a contributor to the first sPLS-DA component.
Those with high stabilities are likely to be the most important in defining a certain component.

    # determine which OTUs were selected
    selected.OTU.comp1 = selectVar(diverse.splsda, comp = 1)$name
    # display the stability values of these OTUs
    diverse.perf.splsda$features$stable[[1]][selected.OTU.comp1]

    ##
    ## OTU_97.38174 OTU_97.39439   OTU_97.108    OTU_97.20 OTU_97.39456
    ##         1.00         0.86         0.92         0.86         0.52
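Conceptually, the CLR step that logratio = 'CLR' performs in the calls above is simple: each sample's counts are log-transformed and centred by the log of their geometric mean, which places compositional data in Euclidean space. The sketch below is in Python/NumPy rather than R, purely for illustration; the function name and pseudocount are my own, and mixOmics' internal handling of zeros may differ.

```python
import numpy as np

def clr(counts, pseudocount=1.0):
    """Centred log-ratio transform, applied row-wise (one row per sample).

    Hypothetical helper for illustration; mixOmics performs its own CLR
    internally when logratio = 'CLR' is set.
    """
    x = np.asarray(counts, dtype=float) + pseudocount  # offset to avoid log(0)
    logx = np.log(x)
    # subtract each sample's mean log-abundance (the log of its geometric mean)
    return logx - logx.mean(axis=1, keepdims=True)

# toy OTU table: 2 samples x 4 OTUs
otu = np.array([[10, 0, 5, 85],
                [ 2, 8, 40, 50]])
z = clr(otu)
# every CLR-transformed sample sums to (numerically) zero by construction
print(np.round(z.sum(axis=1), 12))
```

The defining property, visible in the printout, is that every transformed sample sums to zero; this centring is what makes the downstream PLS machinery applicable to relative-abundance data.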
# The given figure shows a quick return mechanism. The crank OA rotates clockwise uniformly. OA = 2 cm, OO' = 4 cm. The ratio of time taken for the forward motion to that for the return motion is:

1. 0.5
2. 2
3. √2
4. 1

Option 2 : 2

## Slider Crank Mechanism MCQ Question 1 Detailed Solution

Concept:

The cutting stroke occurs when the crank rotates through an angle β and the return stroke occurs when the crank rotates through an angle α, i.e. (360° - β), in the clockwise direction.

$$\frac{{Time~of~cutting~stroke}}{{Time~of~return~stroke}} = \frac{β }{α } = \frac{β }{{\left( {360^\circ - β } \right)}}$$

Calculation:

Given: Length of the crank, OA = 2 cm; distance between the centre of the crank and the slotted lever, OO' = 4 cm.

Considering ΔOO'A,

$$cos \frac{α}{2}=\frac{OA}{OO'}=\frac{2}{4}=\frac{1}{2}$$

Therefore, α = 120°, and the cutting stroke angle is β = 360° - α = 360° - 120° = 240°.

$$\frac{{Time~of~cutting~stroke}}{{Time~of~return~stroke}} = \frac{β }{α } = \frac{240}{120}=2$$

# What is the total number of inversions possible for the slider crank mechanism?

1. 6
2. 5
3. 4
4. 3

Option 3 : 4

## Slider Crank Mechanism MCQ Question 2 Detailed Solution

Concept:

• If one of the links of a kinematic chain is fixed, the chain is called a mechanism.
• So, we can obtain as many mechanisms as there are links in a kinematic chain (n inversions from a kinematic chain having n links).
• A slider-crank is a kinematic chain having four links, so it has four inversions.
• It has one sliding pair and three turning pairs.
• Link 1 is a frame (fixed).
• Link 2 has rotary motion and is called a crank.
• Link 3 has combined rotary and reciprocating motion and is called a connecting rod.
• Link 4 has reciprocating motion and is called a slider.
• This mechanism is used to convert rotary motion to reciprocating motion and vice versa.

Inversions of the slider-crank mechanism are obtained by fixing links 1, 2, 3, and 4.

• First inversion: obtained when link 1 (ground body) is fixed. Applications: reciprocating engine, reciprocating compressor, etc.
• Second inversion: obtained when link 2 (crank) is fixed. Applications: Whitworth quick return mechanism, rotary engine, etc.
• Third inversion: obtained when link 3 (connecting rod) is fixed. Applications: slotted crank mechanism, oscillatory engine, etc.
• Fourth inversion: obtained when link 4 (slider) is fixed. Applications: hand pump, pendulum pump or Bull engine, etc.

# In a crank and slotted lever quick return motion mechanism, the distance between the fixed centres is 160 mm and the driving crank is 80 mm long. The ratio of time taken by cutting and return strokes is

1. 0.5
2. 1.0
3. 1.5
4. 2.0

Option 4 : 2.0

## Slider Crank Mechanism MCQ Question 3 Detailed Solution

Concept:

A crank and slotted lever quick return motion mechanism converts rotary motion into oscillating (reciprocating) motion. It is an inversion of the single slider-crank mechanism obtained with the connecting rod fixed.

In the figure: O1O2 → fixed link (connecting rod), O1A → crank, O2B → slotted bar.

Quick Return Ratio (QRR) $$= \frac{{Cutting\;time}}{{Return\;Time}} = \frac{\beta }{\alpha }$$

Calculation:

Given: Centre distance, O1O2 = 160 mm; crank radius, O1A = 80 mm.

$$cos \frac {\alpha}{2} = \frac {Crank \ radius}{Connecting \ rod} = \frac {80}{160} = 0.5$$

$$\therefore \frac{\alpha }{2} = 60^\circ \;\; \Rightarrow \alpha = 120^\circ$$

α + β = 360° ⇒ β = 240°

$$\therefore QRR = \frac{{240}}{{120}} = 2$$

# The quick return mechanism is used in which machine?

1. Hooke coupling
2. Milling machine
3. Universal lathe machine
4. Slotting machine

Option 4 : Slotting machine

## Slider Crank Mechanism MCQ Question 4 Detailed Solution

Explanation:

• A quick return mechanism converts circular motion into reciprocating motion (repetitive back and forth linear motion).
• This mechanism is mostly used in shaping, planing, and slotting machines.
• In a slotting machine, the rotary motion of the drive is converted into reciprocating motion of the ram by the mechanism housed within the column of the machine.
• In a standard slotter, metal is removed in the forward cutting stroke, while the return stroke is idle and no metal is removed during this period.
• The quick return mechanism is designed so that it moves the ram holding the tool at a comparatively slower speed during the forward cutting stroke, whereas during the return stroke it allows the ram to move at a faster speed to reduce the idle return time.

The reciprocating movement of the ram and the quick return action of the machine are generally obtained by any one of the following methods:

• Crank and slotted link mechanism.
• Whitworth quick return mechanism.
• Hydraulic shaper mechanism.

Slotter:

• The slotter or slotting machine is also a reciprocating type of machine tool, similar to a shaper or a planer.
• The chief difference between a shaper and a slotter is the direction of the cutting action: the tool moves vertically rather than horizontally.
• The work is held stationary and the tool on the ram is moved up and down across the work.
• It is used to cut slots, splines and keyways for both internal and external jobs, such as machining internal and external gears.

A universal or Hooke's coupling is used to connect two shafts whose axes intersect at a small angle. Its main application is found in the transmission from the gearbox to the differential or back axle of automobiles.

Universal milling machine:

• This machine is similar to the horizontal milling machine in all respects, with an additional swivelling movement for the table, which rests on a graduated swivel base.
• The table can be rotated about the vertical axis through 45° on either side of the axis.
• For helical milling operations (e.g. helical grooves, helical gears), the table is turned to the required angle and fed.
• Special attachments like the vertical milling attachment, rotary table attachment, and dividing head are used in universal milling machines. Thus the machine becomes versatile, capable of cutting gears (spur, helical and bevel gears, and worm wheels) and machining grooves in the production of drills and reamers.

Lathe:

• It is a machine tool which holds the job between centres and rotates it on its own axis.
• A lathe is used for many operations such as turning, threading, facing, grooving, knurling, chamfering, centre drilling, etc.

# The crank in a quick return mechanism rotates through an angle of 180 degrees in the clockwise direction for the cutting stroke. The ratio of time of cutting stroke to time of return stroke is:

1. 1.5
2. 0.5
3. 0.8
4. 1

Option 4 : 1

## Slider Crank Mechanism MCQ Question 5 Detailed Solution

Concept:

$$Time\;ratio = \frac{{Time\;of\;cutting\;stroke}}{{Time\;of\;return\;stroke}} = \frac{β }{α } = \frac{{360^\circ - α }}{α }$$

Calculation:

Given: β = 180°, so α = 360° - 180° = 180°

$$Time\;ratio = \frac{{360^\circ - 180^\circ }}{180^\circ} = 1$$

∴ The ratio of time of cutting stroke to time of return stroke is 1.

# In a crank and slotted lever quick return motion mechanism, the distance between the fixed centres is 200 mm. The length of the driving crank and the slotted bar are 100 mm and 500 mm, respectively. The length of cutting stroke is

1. 100 mm
2. 300 mm
3. 500 mm
4.
700 mm

Option 3 : 500 mm

## Slider Crank Mechanism MCQ Question 6 Detailed Solution

Concept:

The length of the cutting stroke = 2 × CB.

From the triangle O2CB: 2 × CB = 2 × O2B sin θ, where

$$\sin \theta = \frac{{{O_1}A}}{{{O_1}{O_2}}}$$

Calculation:

Given: Driving crank length, O1A = 100 mm; slotted bar length, O2B = 500 mm.

In triangle O2O1A:

$$\sin \theta = \;\frac{{{O_1}A}}{{{O_1}{O_2}}} = \frac{{100}}{{200}} = \frac{1}{2}$$

Length of cutting stroke = 2 × O2B sin θ = 2 × 500 × 0.5 = 500 mm

# In a reciprocating engine, the force along the connecting rod FC is (where FP = force on piston, n = L/r, θ = angle of crank from IDC):

1. $$\frac{{{F_P}}}{{\sqrt {{n^2} - {{\sin }^2}\theta } }}$$
2. $$\frac{{{F_P}}}{{2\sqrt {{n^2} - {{\sin }^2}\theta } }}$$
3. $$\frac{{n{F_P}}}{{2\sqrt {{n^2} - {{\sin }^2}\theta } }}$$
4. $$\frac{{n{F_P}}}{{\sqrt {{n^2} - {{\sin }^2}\theta } }}$$

Option 4 : $$\frac{{n{F_P}}}{{\sqrt {{n^2} - {{\sin }^2}\theta } }}$$

## Slider Crank Mechanism MCQ Question 7 Detailed Solution

Concept:

The various forces acting on the reciprocating parts of a horizontal engine are:

• FI: inertia force of the reciprocating mass
• FN: piston side thrust
• FT: crank effort
• FC: force acting along the connecting rod

The force acting along the connecting rod, FC, is given by

$${F_C} = \frac{{{F_P}}}{{\cos \phi }}$$

From the figure,

$$l\sin \phi = r\sin \theta \Rightarrow \sin \phi = \frac{{\sin \theta }}{n}$$

Now

$${F_C} = \frac{{{F_P}}}{{\sqrt {1 - {{\sin }^2}\phi } }} = \frac{{{F_P}}}{{\sqrt {1 - {{\left( {\frac{{\sin \theta }}{n}} \right)}^2}} }} = \frac{{n{F_P}}}{{\sqrt {{n^2} - {{\sin }^2}\theta } }}$$

# A simple quick return mechanism is shown in the figure. The forward to return ratio of the quick return mechanism is 2:1. If the radius of the crank O1P is 125 mm, then the distance 's' (in mm) between the crank centre and the lever pivot centre should be:

1. 144.3
2. 216.5
3. 240.0
4. 250.0

Option 4 : 250.0

## Slider Crank Mechanism MCQ Question 8 Detailed Solution

O1P = r = 125 mm

$$QRR = \frac{2}{1} = \frac{\beta }{\alpha } = \frac{{360 - \alpha }}{\alpha }$$

⇒ α = 120°, so $$\angle R{O_1}{O_2} = \frac{\alpha }{2} = 60^\circ$$

From ΔRO2O1:

$$\begin{array}{l} \sin \left( {90 - \frac{\alpha }{2}} \right) = \frac{{{O_1}R}}{{{O_1}{O_2}}} = \frac{r}{{{O_1}{O_2}}}\\ \Rightarrow \sin \left( {90 - 60} \right) = \frac{{125}}{{{O_1}{O_2}}} \Rightarrow {O_1}{O_2} = 250\;mm \end{array}$$

# The figure shows a mechanism with 3 revolute pairs (between links 1 and 2, 2 and 3, and 3 and 4) and a prismatic pair (between links 1 and 4). Which one of the four links should be fixed to obtain the mechanism that forms the basis of the quick-return mechanism widely used in a shaper?

## Slider Crank Mechanism MCQ Question 9 Detailed Solution

Explanation:

A slider-crank is a four-bar linkage with a rotating crank and a slider that moves in a straight line. This mechanism is composed of three important parts:

• Crank: the rotating disc,
• Slider: which slides inside the tube,
• Connecting rod: which joins the parts.

The crank and slotted lever quick return motion mechanism is the inversion of the slider-crank mechanism obtained when the connecting rod (link 2 here) is kept fixed. The major application of this mechanism is in the shaper, oscillatory engine, etc.

Other inversions that can be obtained are:

• The Whitworth quick return motion mechanism, when the crank (link 3 here) is fixed. Its major application is found in rotary engines, etc.
• The pendulum pump or bull engine, when the slider (link 4 here) is fixed. This is widely used in hand pumps, etc.

# An offset slider-crank mechanism is shown in the figure at an instant. Conventionally, the Quick Return Ratio (QRR) is considered to be greater than one. The value of QRR is _______

## Slider Crank Mechanism MCQ Question 10 Detailed Solution

Concept:

The Quick Return Ratio (QRR) is given by QRR = $$\frac {β}{α}$$ {∵ QRR > 1}, with

α = 180° + θ - ϕ
β = 180° + ϕ - θ

where e = offset, l = length of the connecting rod, r = radius of the crank.

Calculation:

Given: e = 10 mm, l = 40 mm, r = 20 mm

$$\sin θ = \frac{e}{{l + r}} = \frac{{10}}{{40 + 20}} = \frac{1}{6}$$ ⇒ θ = sin⁻¹(0.167) = 9.6°

$$\sin \phi = \frac{e}{{l - r}} = \frac{{10}}{{40 - 20}} = \frac{1}{2}$$ ⇒ ϕ = sin⁻¹(0.5) = 30°

Now,

α = 180° + θ - ϕ = 180 + 9.6 - 30 = 159.6°
β = 180° + ϕ - θ = 180 + 30 - 9.6 = 200.4°

$$QRR =\frac {β}{α} = \frac {200.4}{159.6}=1.255$$

# The Whitworth quick return mechanism is shown in the figure with link lengths as follows: OP = 300 mm, OA = 150 mm, AR = 160 mm, RS = 450 mm. The quick return ratio for the mechanism is ______ (round off to one decimal place).

## Slider Crank Mechanism MCQ Question 11 Detailed Solution

Concept:

$${\rm{QRR}} = \frac{{Angle\;subscribed\;in\;forward\;stroke}}{{Angle\;subscribed\;in\;return\;stroke}}=\frac{\beta}{\alpha}$$

Calculation:

Given: OA = 150 mm, OP = 300 mm, AR = 160 mm, RS = 450 mm.

In ΔAOB:

$$\cos \left(\frac{α}{2}\right)=\frac{AO}{OB}=\frac{AO}{OP}=\frac{150}{300}=0.5$$

$$\frac{α}{2}=60^{\circ}$$ ∴ α = 120°

$${\rm{QRR}} =\frac{\beta}{\alpha}=\frac{360\;-\;120}{120}=2$$

# The number of dead centres in a crank driven slider crank mechanism is

1. 6
2. 0
3. 2
4. 4

Option 3 : 2

## Slider Crank Mechanism MCQ Question 12 Detailed Solution

In a slider-crank mechanism, the two extreme positions of the slider, where the velocity of the slider becomes zero, are called dead centres. When the crank angle is 0° the piston is at the inner dead centre, and when the crank angle is 180° the piston is at the outer dead centre.
The velocity of the piston is

$${v_p} = \omega r\left( {\sin θ + \frac{{\sin 2θ }}{{2n}}} \right)$$

where r = crank radius, ω = crank speed, n = obliquity ratio.

The velocity of the slider is minimum at θ = 0° or 180°, since sin 0° = sin 180° = 0:

∴ Vmin = rω (0 + 0) = 0

That is, when the crank is at the inner or outer dead centre, the velocity of the piston is zero.

# The linear acceleration of the slider in the slider-crank mechanism may be expressed as: (where r = radius of the crank, l = length of the connecting rod and $$n = \frac{l}{r}$$)

1. $$a = r{\omega ^2}\left[ {\cos \theta + \frac{{\sin 2\theta }}{n}} \right]$$
2. $$a = r{\omega ^2}\left[ {\cos \theta + \frac{{\cos 2\theta }}{n}} \right]$$
3. $$a = r{\omega ^2}\left[ {\sin \theta + \frac{{\sin 2\theta }}{n}} \right]$$
4. $$a = r{\omega }\left[ {\cos \theta + \frac{{\cos 2\theta }}{n}} \right]$$

Option 2 : $$a = r{\omega ^2}\left[ {\cos \theta + \frac{{\cos 2\theta }}{n}} \right]$$

## Slider Crank Mechanism MCQ Question 13 Detailed Solution

Explanation:

Given below is a slider-crank mechanism.

Displacement of the piston from the inner dead centre:

x = (l + r) - (l cos ϕ + r cos θ)

We know that the obliquity ratio n is l/r, therefore:

x = (nr + r) - (nr cos ϕ + r cos θ) = r(1 - cos θ) + nr(1 - cos ϕ)

$$\cos\phi=\sqrt{1-\sin^2\phi}=\sqrt{1-\frac{(r\sin\theta)^2}{l^2}}=\sqrt{1-\frac{\sin^2\theta}{n^2}}=\frac{1}{n}\sqrt{n^2-{\sin^2\theta}}$$

$$x = r\left[ {\left( {1 - \cos θ } \right) + \left( {n - \sqrt {{n^2} - {{\sin }^2}θ } } \right)} \right]$$

Velocity of the piston:

$$v=\frac{dx}{d θ}\frac{d θ}{dt} = r\omega \left[ {\sin θ + \frac{{\sin 2θ }}{{2\sqrt {{n^2} - {{\sin }^2}θ } }}} \right]$$

If n² >> sin² θ:

$$v = r\omega \left[ {\sin θ + \frac{{\sin 2θ }}{{2n}}} \right]$$

Acceleration of the piston:

$$a=\frac{dv}{d θ}\frac{dθ}{dt} = r{\omega ^2}\left[ {\cos θ + \frac{{\cos 2θ }}{n}} \right]$$

where r = radius of the crank, l = length of the connecting rod, θ = angle made by the crank from the inner dead centre.

# In the case of a slider crank mechanism, if the mass of the slider = 1 kg, crank radius = 10 cm, length of the connecting rod = 40 cm, and crank speed = 100 rad/s, then the inertia force at zero crank angle (from the inner dead centre) is equal to:

1. 1250 N
2. 1050 N
3. 1750 N
4. 950 N

Option 1 : 1250 N

## Slider Crank Mechanism MCQ Question 14 Detailed Solution

Concept:

The inertia force is given by:

$$F = m{ω ^2}r\left( {\cos θ + \frac{{\cos 2θ }}{n}} \right)$$

where $$n =obliquity~ratio= \frac{{length~of~connecting~rod}}{{crank~radius}} = \frac{L}{r}$$, θ = crank angle, m = mass of the slider, ω = crank speed, r = crank radius.

Calculation:

Given: m = 1 kg, r = 10 cm = 0.1 m, L = 40 cm, ω = 100 rad/s, crank angle θ = 0°.

The obliquity ratio is $$n = \frac{L}{r} = \frac{{40}}{{10}} = 4$$

$$F = 1 \times {100^2} \times 0.1\times\left( {\cos \left( 0 \right) + \frac{{\cos \left( 0 \right)}}{4}} \right) = 1250~N$$

# What is the total number of inversions possible for the slider crank mechanism?

1. 6
2. 5
3. 4
4. 3
5. 7

Option 3 : 4

## Slider Crank Mechanism MCQ Question 15 Detailed Solution

Concept:

• If one of the links of a kinematic chain is fixed, the chain is called a mechanism.
• So, we can obtain as many mechanisms as there are links in a kinematic chain (n inversions from a kinematic chain having n links).
• A slider-crank is a kinematic chain having four links, so it has four inversions.
• It has one sliding pair and three turning pairs.
• Link 1 is a frame (fixed).
• Link 2 has rotary motion and is called a crank.
• Link 3 has combined rotary and reciprocating motion and is called a connecting rod.
• Link 4 has reciprocating motion and is called a slider.
• This mechanism is used to convert rotary motion to reciprocating motion and vice versa.
Inversions of the slider-crank mechanism are obtained by fixing links 1, 2, 3, and 4.

• First inversion: obtained when link 1 (ground body) is fixed. Applications: reciprocating engine, reciprocating compressor, etc.
• Second inversion: obtained when link 2 (crank) is fixed. Applications: Whitworth quick return mechanism, rotary engine, etc.
• Third inversion: obtained when link 3 (connecting rod) is fixed. Applications: slotted crank mechanism, oscillatory engine, etc.
• Fourth inversion: obtained when link 4 (slider) is fixed. Applications: hand pump, pendulum pump or Bull engine, etc.

# The distance between two parallel shafts is 18 mm and they are connected by an Oldham's coupling; the driving shaft revolves at 160 rpm. The maximum speed of sliding of the tongue is:

1. 0.302 m/s
2. 0.6 m/s
3. 3.2 m/s
4. 6 m/s

Option 1 : 0.302 m/s

## Slider Crank Mechanism MCQ Question 16 Detailed Solution

Concept:

Maximum sliding velocity in Oldham's coupling = peripheral velocity along the circular path = angular velocity of shaft × distance between shafts = ω × d

Calculation:

Given: d = 18 mm = 0.018 m, N = 160 rpm

$$\therefore {\rm{\omega }} = \frac{{2\pi \times 160}}{{60}} = 16.75\frac{{rad}}{s}$$

Maximum velocity of sliding = ω × d = 16.75 × 0.018 = 0.302 m/s

# In a slider crank mechanism, if the crank rotates at a uniform speed of 200 rpm and has a length of 0.2 m, its linear velocity is:

1. 4.19 m/s
2. 20.9 m/s
3. 5.2 m/s
4. 41.9 m/s

Option 1 : 4.19 m/s

## Slider Crank Mechanism MCQ Question 17 Detailed Solution

Concept:

Single slider crank mechanism: the velocity at point A can be represented as VAO = VO + VA/O, where VO = velocity of point O and VA/O = velocity of point A with respect to point O. Here VO = 0, as O is a fixed point, and VA/O = OA × ω, where ω is the angular speed of crank OA.

∴ VAO = OA × ω

Calculation:

Given: OA = 0.2 m, N = 200 rpm ⇒ ω = $$\frac{{2\pi N}}{{60}}$$ = 20.94 rad/s

VAO = OA × ω = 0.2 × 20.94 = 4.19 m/s

# Consider the following statements. In a slider-crank mechanism, the slider is at its inner dead centre position when the
1. slider velocity is zero
2. slider velocity is maximum
3. slider acceleration is zero
4. slider acceleration is maximum

Which of the above statements are correct?

1. 1 and 4
2. 1 and 3
3. 2 and 3
4. 2 and 4

Option 1 : 1 and 4

## Slider Crank Mechanism MCQ Question 18 Detailed Solution

Concept:

For the slider crank mechanism, the velocity of the piston is given by

$${v_p} = \omega r\left( {\sin θ + \frac{{\sin 2θ }}{{2n}}} \right)$$

and the acceleration of the piston by

$${a_p} = {\omega ^2}r\left( {cosθ + \frac{{\cos 2θ }}{n}} \right)$$

When the slider is at the inner dead centre position, θ = 0°.

Calculation:

Since sin θ = 0 at θ = 0°, vp = 0. Since cos θ = 1 at θ = 0°:

$${a_P} = {\omega ^2}.r\left( {\cos 0^\circ + \frac{{\cos 0^\circ }}{n}} \right) = {\omega ^2}.r\left( {1 + \frac{1}{n}} \right)$$

i.e. the acceleration is at its maximum.

## Slider Crank Mechanism MCQ Question 19

### Comprehension:

A quick return mechanism is shown below. The crank OS is driven at 2 rev/s in the counter-clockwise direction.

# Take the quick return ratio as 1 : 2. The angular speed of PQ in rev/s when the block R attains maximum speed during the forward stroke (the stroke with slower speed) is

1. 1/3
2. 2/3
3. 2
4. 3

Option 2 : 2/3

## Slider Crank Mechanism MCQ Question 19 Detailed Solution

QRR = 1 : 2, OP = 500 mm, OT = length of crank

$$\begin{array}{l} QRR = \frac{{Time\ of\ return\ stroke}}{{time\ of\ cutting\ stroke}}\\ \frac{1}{2} =\frac{\alpha }{\beta } = \frac{\alpha }{{360 - \alpha }}\\ \Rightarrow 360 - \alpha = 2\alpha \Rightarrow \alpha = {120^o} \end{array}$$

And the angle $$\angle TOP = \frac{\alpha }{2} = {60^o}$$

From ΔTOP,

$$cos\frac{\alpha }{2} = \frac{{OT}}{{OP}} = \frac{r}{{500}} \Rightarrow cos{60^o} = \frac{r}{{500}}$$

$$\Rightarrow r = 500 \times \frac{1}{2}=250\ mm=OS=OT$$

The maximum speed during the forward stroke occurs when PQ is perpendicular to the line of action of the tool, i.e. PQ, OS and OQ are in a straight line.

PQ = PO + OS = 500 + 250 = 750 mm

So, $$V = OS \times {\omega _{OS}} = PQ \times {\omega _{PQ}}$$

$$\Rightarrow 250 \times 2 = 750 \times {\omega _{PQ}} \Rightarrow {\omega _{PQ}} = \frac{2}{3}\ rev/sec.$$

## Slider Crank Mechanism MCQ Question 20

### Comprehension:

A quick return mechanism is shown below. The crank OS is driven at 2 rev/s in the counter-clockwise direction.

# If the quick return ratio is 1 : 2, then the length of the crank in mm is

1. 250
2. $$250\sqrt 3$$
3. 500
4. $$500\sqrt 3$$

Option 1 : 250

## Slider Crank Mechanism MCQ Question 20 Detailed Solution

QRR = 1 : 2, OP = 500 mm, OT = length of crank

$$\begin{array}{l} QRR = \frac{{Time\ of\ return\ stroke}}{{time\ of\ cutting\ stroke}}\\ \frac{1}{2} =\frac{\alpha }{\beta } = \frac{\alpha }{{360 - \alpha }}\\ \Rightarrow 360 - \alpha = 2\alpha \Rightarrow \alpha = {120^o} \end{array}$$

And the angle $$\angle TOP = \frac{\alpha }{2} = {60^o}$$

From ΔTOP,

$$cos\frac{\alpha }{2} = \frac{{OT}}{{OP}} = \frac{r}{{500}} \Rightarrow cos{60^o} = \frac{r}{{500}}$$

$$\Rightarrow r = 500 \times \frac{1}{2}=250\ mm$$
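The crank-and-slotted-lever arithmetic that recurs through these solutions (cos(α/2) = crank radius / centre distance, time ratio = β/α) is easy to check with a few lines of code. Below is a small Python sketch; the function names are my own, introduced only for illustration.

```python
import math

def slotted_lever_angles(crank_r, centre_dist):
    """Return (return_angle, cutting_angle) in degrees for a crank and
    slotted lever mechanism, using cos(alpha/2) = crank_r / centre_dist."""
    alpha = 2.0 * math.degrees(math.acos(crank_r / centre_dist))
    return alpha, 360.0 - alpha

def quick_return_ratio(crank_r, centre_dist):
    """Time of cutting stroke / time of return stroke = beta / alpha."""
    alpha, beta = slotted_lever_angles(crank_r, centre_dist)
    return beta / alpha

# worked geometries from the solutions above
print(quick_return_ratio(2, 4))      # OA = 2 cm, OO' = 4 cm (Question 1)
print(quick_return_ratio(80, 160))   # 80 mm crank, 160 mm centres (Question 3)
print(quick_return_ratio(125, 250))  # 125 mm crank, 250 mm centres (Question 8)
```

All three geometries share the same 1:2 ratio of crank radius to centre distance, which is why each yields α = 120°, β = 240° and a quick return ratio of 2.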
# Flag of Argentina

Use: national flag and ensign
Proportion: 9:14
Adopted: February 27, 1812
Design: A triband flag with horizontal bands coloured light blue, white and light blue, with the yellow Sun of May charged in the center.
Designed by: Manuel Belgrano

Variant flag of Argentina

Use: civil flag and ensign
Adopted: February 27, 1812
Design: A triband flag with horizontal bands coloured light blue, white and light blue.
Designed by: Manuel Belgrano

The national flag of Argentina is a triband, composed of three equally wide horizontal bands coloured light blue, white and light blue. There are multiple interpretations of the reasons for those colors. The flag was created by Manuel Belgrano, in line with the recent creation of the Cockade of Argentina, and was first raised at the city of Rosario on February 27, 1812, during the Argentine War of Independence. The National Flag Memorial was later built on the site. The First Triumvirate did not approve the use of the flag, but the Asamblea del Año XIII allowed its use as a war flag. It was the Congress of Tucumán which finally designated it as the national flag, in 1816. A yellow Sun of May was added to the center in 1818.

The full flag featuring the sun is called the Official Ceremonial Flag (Spanish: Bandera Oficial de Ceremonia). The flag without the sun is considered the Ornamental Flag (Bandera de Ornato). While both versions are equally considered the national flag, the ornamental version must always be hoisted below the Official Ceremony Flag. In vexillological terms, the Official Ceremonial Flag is the civil, state and war flag and ensign, while the Ornamental Flag is an alternative civil flag and ensign.

## History

Manuel Belgrano holding the flag.

The flag of Argentina was created by Manuel Belgrano during the Argentine War of Independence. While in Rosario he noticed that both the royalist and patriotic forces were using the same colors, Spain's yellow and red.
After realizing this, Manuel Belgrano created the Cockade of Argentina, which was approved by the First Triumvirate on February 18, 1812. Encouraged by this success, he created a flag of the same colours nine days later. It used the colors that had been used by the Criollos during the May Revolution in 1810. However, recent research and studies indicate that the colors may instead have been chosen from those of the Spanish Order of Charles III, symbolizing allegiance to the rightful, and then captive, King Ferdinand VII of Spain. Most portraits of the creation or first uses of the flag show its modern design, but the flag of Macha, a very early design kept at the House of Freedom in Sucre, Bolivia, was instead a vertical triband with two white bands and a light blue one in the middle.[1]

The flag was first flown, for the soldiers to swear allegiance to it, on 27 February 1812, on the Batería Libertad (Liberty Battery), by the Paraná River. On that day, Belgrano said the following words:

Soldiers of the Fatherland, we have heretofore had the glory of wearing the national cockade; there (pointing to the Independence battery), on the Independence Battery, where our Government has recently had the honor of bestowing it upon, shall our weapons enlarge their glory. Let us swear to defeat our enemies, internal and external, and South America will become the temple of Independence and Freedom. In testament that you so swear it, say with me: LONG LIVE THE FATHERLAND! (after the oath) Lord Captain and troops chosen for the first time for the Independence Battery: go, take possession of it and fulfill the oath you have just sworn today.[2]

The priest Juan Ignacio Gorriti blessing the flag.

The First Triumvirate was later replaced by the Second Triumvirate, with a more liberal ideology, which called the Asamblea del Año XIII.
Although this had been one of its original goals, the Assembly did not declare independence, and so did not approve a national flag either; nevertheless, the flag made by Belgrano was authorized for use as a war flag. The first oath to the newly approved flag was taken on February 13, 1813, next to the Salado River, which has been known since then as "Río Juramento" ("Oath River"). The first battle fought under the approved flag was the Battle of Salta, a decisive patriotic victory that achieved the complete defeat of royalist Pío Tristán.

The flag was finally declared the national flag by the Congress of Tucumán on July 20, 1816, shortly after the declaration of independence. The proposal was made by the deputy Juan José Paso and the text written by the deputy of Charcas, José Serrano. On February 25, 1818, the Congress (by then working at Buenos Aires) added the Sun of May to the war flag, following the proposal of deputy Chorroarín. The sun was copied from the one featured on the first Argentine coin of 1813. The sun was later made part of the regular flag as well, and thus it no longer signifies war.

The Argentine flag flying for the first time over a coastal battery on the shores of the Paraná, 27 February 1812.

José de San Martín was aware of the new flag, but did not employ it during the Crossing of the Andes in 1817. As that was a joint operation of Argentine and Chilean forces, he thought a new flag would be a better idea than using either the Argentine or Chilean flag. This led to the creation of the Flag of the Andes, used in the Crossing. This flag is currently used as the provincial flag of Mendoza Province.

On June 8, 1938, president Roberto Ortiz sanctioned the national law Nº 12.361 declaring June 20 "Flag Day", a national holiday. The date was chosen to mark the anniversary of Belgrano's death in 1820.
In 1957 the National Flag Memorial (a 10,000 m² monumental complex) was inaugurated in Rosario to commemorate the creation of the flag, and the official Flag Day ceremonies have customarily been conducted in its vicinity since then. In 1978 it was specified, among other measurements, that the Official Ceremonial Flag should be 1.4 meters wide and 0.9 meters high, and that the sun must be embroidered.

## Design

Popular belief attributes the colors to those of the sky, clouds and the sun; some anthems to the flag, like "Aurora" or "Salute to the flag", state so as well. However, historians usually disregard this idea and attribute the colors to loyalty towards the House of Bourbon: from the May Revolution onward, the early governments of the Argentine War of Independence claimed to be acting on behalf of the Spanish king Ferdinand VII, who was held prisoner by Napoleón Bonaparte during the Peninsular War. Whether such loyalty was real or a ruse to conceal pro-independence aims is a topic of dispute. The creation of a new flag with those colors would then have been a way to denote autonomy, while keeping the relations with the captive king alive.

### Shape and size

From 1978, the flag's official proportions are 9:14, and its official size is 0.9 by 1.4 meters. It features three stripes alternating cerulean blue, white and cerulean blue, each 30 centimeters high. In the center stripe there is an emblem known as the Sun of May (Spanish: Sol de Mayo), a golden sun. The Sun is modeled after the symbol of Inti, the Incan god of the Sun. Flags with proportions of 1:2 and 2:3 are also in use.

### Colors

The colors are officially defined using the CIE 1976 standard:

| Scheme | Sky blue | Yellow | Brown |
| --- | --- | --- | --- |
| CIE (L\*, a\*, b\*) | 67.27, -6.88, -32.23 | 74.97, 29.22, 81.58 | 44.53, 27.16, 22.48 |

*Black and white are as normal.
*Source: http://www.manuelbelgrano.gov.ar/bandera_colores.htm

The following are given for computer, textile, print and plastic use:

| Scheme | Sky blue | Yellow | Brown |
| --- | --- | --- | --- |
| RGB | 117, 170, 219 | 252, 191, 73 | 132, 53, 17 |
| Pantone (textile) | 16-4132 TC | 14-1064 TC | 18-1441 TC |
| Pantone (print) | 284 C / 284 U | 1235 C / 116 U | 483 C / 483 U |
| Pantone (plastic) | Q 300-4-1 | Q 030-2-1 | Q 120-2-4 |
| Number | 75AADB | FCBF49 | 843511 |

*Source: ibid.

The Spanish word celeste (sky blue) is used to describe the colour of the blue stripes.

### Sun of May

The sun is called the Sun of May because it is a replica of an engraving on the first Argentine coin, approved in 1813, whose value was eight escudos (one Spanish dollar). It has 16 straight and 16 waved sunbeams. In 1978 the sun color was specified to be golden yellow (amarillo oro), with an inner diameter of 10 cm and an outer diameter of 25 cm: the diameter of the sun equals $5/6$ of the height of the white stripe, and the sun's face is $2/5$ of its diameter. It features 32 rays, 16 undulated and 16 straight, in alternation, and since 1978 it must be embroidered in the Official Ceremonial Flag.

## The Influence of the Argentine flag

The Argentine commander Louis-Michel Aury used the Argentine flag as a model for the blue-white-blue flag of the first independent state in Central America, which was created in 1818 on Isla de Providencia, an island off the east coast of Nicaragua. This state existed until approximately 1821, when Colombia took over control of these islands. Somewhat later (1823) this flag was in turn the model for the flag of the United Provinces of Central America,[3][4][5] a confederation of the current Central American states of Guatemala, Honduras, El Salvador, Nicaragua and Costa Rica, which existed from 1823 to 1838.
After the dissolution of the Union, the five countries became independent, but even today all of these states except Costa Rica use flags of blue-white-blue stripes (the Costa Rican flag has a red stripe superimposed on the white one; the red stripe was added to incorporate all the colors of the French flag). The Argentine flag also inspired the flags of Uruguay and Paraguay.

Current flags influenced by the Argentine flag: Guatemala, El Salvador, Honduras, Nicaragua, Costa Rica, Uruguay, Paraguay.

## Anthems to the flag

### Aurora (Sunrise)

```
Alta en el cielo, un águila guerrera
Audaz se eleva en vuelo triunfal.
Azul un ala del color del cielo,
Azul un ala del color del mar.
Así en el alta aurora irradial.
Punta de flecha el áureo rostro imita.
Y forma estela el purpurado cuello.
El ala es paño, el águila es bandera.
Es la bandera de la patria mía,
del sol nacido que me ha dado Dios.
Es la bandera de la Patria Mía,
del sol nacido que me ha dado Dios.
```

```
High in the sky, a warrior eagle
rises audacious in its triumphal flight.
One wing is blue, sky-colored;
one wing is blue, sea-colored.
In the high radiant aurora
its golden face resembles the tip of an arrow.
And its purple nape leaves a wake.
The wing is cloth, the eagle is a flag.
It is the flag of the homeland
that God gave me, born of the sun.
It is the flag of the homeland
that God gave me, born of the sun.
```

Lyrics by Luigi Illica & Héctor Cipriano Quesada, music by Héctor Panizza.

### Salve Argentina (Long live Argentina)

```
Salve, argentina bandera azul y blanca.
Jirón del cielo en donde impera el Sol.
Tú, la más noble, la más gloriosa y santa,
el firmamento su color te dio.
Yo te saludo, bandera de mi Patria,
sublime enseña de libertad y honor.
Jurando amarte, como así defenderte,
mientras palpite mi fiel corazón.
```

```
Hail, Argentina,
blue and white flag,
part of the sky
where the sun reigns.
You, the most noble,
the most glorious and saintly:
the sky gave you its colors.
I salute you,
flag of my motherland,
grand symbol of freedom and honour,
swearing to love you as well as to defend you
for as long as my faithful heart beats.
```

### Mi Bandera (My Flag)

```
Aquí está la bandera idolatrada,
la enseña que Belgrano nos legó,
cuando triste la Patria esclavizada
con valor sus vínculos rompió.
Aquí está la bandera esplendorosa
que al mundo con sus triunfos admiró,
cuando altiva en la lucha y victoriosa
la cima de los Andes escaló.
Aquí está la bandera que un día
en la batalla tremoló triunfal
y, llena de orgullo y bizarría,
a San Lorenzo se dirigió inmortal.
Aquí está, como el cielo refulgente,
ostentando sublime majestad,
después de haber cruzado el Continente,
exclamando a su paso: ¡Libertad! ¡Libertad! ¡Libertad!
```

```
Here is the idolized flag,
the flag that Belgrano left to us,
when the sad enslaved Homeland
bravely broke its bonds.
Here is the splendorous flag
that surprised the world with its triumphs,
when proud and victorious in the fight
it climbed the top of the Andes.
Here is the flag that one day
triumphantly rose in the middle of the battle
and, full of pride and gallantry,
went immortally to San Lorenzo.
Here it is, like the shining sky,
showing sublime majesty
after having crossed the continent,
shouting on its way: "Freedom!" "Freedom! Freedom!"
```

## References

1. ^ La Primera Bandera y su destino (Spanish)
2. ^ Spanish: Soldados de la Patria, en este punto hemos tenido la gloria de vestir la escarapela nacional; en aquél (señalando la batería Independencia) nuestras armas aumentarán sus glorias. Juremos vencer a nuestros enemigos interiores y exteriores y la América del Sud será el templo de la Independencia y de la Libertad. En fe de que así lo juráis decid conmigo: ¡Viva la Patria!"
"Señor capitán y tropa destinada por la primera vez a la batería Independencia: id, posesionaos de ella y cumplid el juramento que acabáis de hacer." Proclama dirigida por M. Belgrano a su ejército al enarbolar por primera vez la bandera.
3. ^ Felipe Pigna (2005). Los mitos de la Historia Argentina 2. Argentina: Grupo Editorial Planeta S.A.I.C. p. 92. ISBN 950-49-1342-3.
4. ^ "Belgrano dejó descendencia en América Central". aimdigital. August 5, 2012. Retrieved June 20, 2013.
5. ^ "El origen de las banderas de centroamérica". mdz online. June 20, 2008. Retrieved June 20, 2013.
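As a quick cross-check (an addition of mine, not part of the article), the hexadecimal codes given in the Colors section agree with the decimal RGB triples listed alongside them:

```python
# Cross-check of the Colors table: each 6-digit hex code should split into
# the decimal (R, G, B) triple given for computer use.
OFFICIAL_COLORS = {
    "sky blue": ("75AADB", (117, 170, 219)),
    "yellow":   ("FCBF49", (252, 191, 73)),
    "brown":    ("843511", (132, 53, 17)),
}

def hex_to_rgb(code):
    """Split a 6-digit hex colour code into an (R, G, B) triple."""
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

for name, (code, rgb) in OFFICIAL_COLORS.items():
    assert hex_to_rgb(code) == rgb, name
```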
http://www.lastfm.com.br/user/DaniArchie/library/music/Dean+Geyer/_/If+You+Don't+Mean+It
Library Music » Dean Geyer » If You Don't Mean It 178 plays | Go to track page Tracks (178) Track Album Duration Date If You Don't Mean It 3:13 Set 28 2014, 22h20 If You Don't Mean It 3:13 Set 7 2014, 18h55 If You Don't Mean It 3:13 Ago 9 2014, 19h34 If You Don't Mean It 3:13 Nov 20 2013, 17h32 If You Don't Mean It 3:13 Mai 29 2013, 23h49 If You Don't Mean It 3:13 Fev 17 2013, 21h37 If You Don't Mean It 3:13 Nov 21 2012, 20h02 If You Don't Mean It 3:13 Nov 5 2011, 0h02 If You Don't Mean It 3:13 Jul 1 2011, 12h01 If You Don't Mean It 3:13 Mai 16 2011, 21h23 If You Don't Mean It 3:13 Mai 14 2011, 15h04 If You Don't Mean It 3:13 Ago 10 2009, 19h16 If You Don't Mean It 3:13 Jun 1 2009, 16h03 If You Don't Mean It 3:13 Mai 29 2009, 20h40 If You Don't Mean It 3:13 Mai 29 2009, 19h51 If You Don't Mean It 3:13 Mai 29 2009, 19h02 If You Don't Mean It 3:13 Mai 28 2009, 20h54 If You Don't Mean It 3:13 Mai 28 2009, 20h05 If You Don't Mean It 3:13 Mai 28 2009, 19h51 If You Don't Mean It 3:13 Mai 28 2009, 19h29 If You Don't Mean It 3:13 Mai 25 2009, 19h52 If You Don't Mean It 3:13 Mai 25 2009, 19h37 If You Don't Mean It 3:13 Mai 17 2009, 17h56 If You Don't Mean It 3:13 Abr 14 2009, 19h59 If You Don't Mean It 3:13 Abr 14 2009, 19h05 If You Don't Mean It 3:13 Abr 13 2009, 16h05 If You Don't Mean It 3:13 Abr 13 2009, 0h50 If You Don't Mean It 3:13 Abr 13 2009, 0h36 If You Don't Mean It 3:13 Abr 13 2009, 0h22 If You Don't Mean It 3:13 Abr 13 2009, 0h07 If You Don't Mean It 3:13 Abr 12 2009, 23h53 If You Don't Mean It 3:13 Abr 12 2009, 23h39 If You Don't Mean It 3:13 Dez 31 2008, 19h05 If You Don't Mean It 3:13 Dez 31 2008, 16h31 If You Don't Mean It 3:13 Dez 31 2008, 13h58 If You Don't Mean It 3:13 Dez 30 2008, 20h23 If You Don't Mean It 3:13 Dez 30 2008, 18h57 If You Don't Mean It 3:13 Dez 29 2008, 21h15 If You Don't Mean It 3:13 Dez 29 2008, 19h22 If You Don't Mean It 3:13 Dez 28 2008, 20h20 If You Don't Mean It 3:13 Dez 28 2008, 18h55 If You Don't Mean It 3:13 Dez 26
2008, 19h24 If You Don't Mean It 3:13 Dez 26 2008, 17h22 If You Don't Mean It 3:13 Dez 25 2008, 17h16 If You Don't Mean It 3:13 Dez 24 2008, 17h31 If You Don't Mean It 3:13 Dez 23 2008, 20h22 If You Don't Mean It 3:13 Dez 23 2008, 18h29 If You Don't Mean It 3:13 Dez 23 2008, 16h36 If You Don't Mean It 3:13 Dez 23 2008, 14h44 If You Don't Mean It 3:13 Dez 22 2008, 20h53 If You Don't Mean It 3:13 Dez 22 2008, 18h58 If You Don't Mean It 3:13 Dez 22 2008, 17h05 If You Don't Mean It 3:13 Dez 22 2008, 15h13 If You Don't Mean It 3:13 Dez 21 2008, 20h30 If You Don't Mean It 3:13 Dez 21 2008, 18h38 If You Don't Mean It 3:13 Dez 19 2008, 19h55 If You Don't Mean It 3:13 Dez 19 2008, 17h16 If You Don't Mean It 3:13 Dez 19 2008, 14h42 If You Don't Mean It 3:13 Dez 18 2008, 15h48 If You Don't Mean It 3:13 Dez 18 2008, 14h22 If You Don't Mean It 3:13 Dez 17 2008, 19h29 If You Don't Mean It 3:13 Dez 17 2008, 18h06 If You Don't Mean It 3:13 Dez 17 2008, 16h43 If You Don't Mean It 3:13 Dez 16 2008, 19h03 If You Don't Mean It 3:13 Dez 16 2008, 14h13 If You Don't Mean It 3:13 Dez 15 2008, 13h39 If You Don't Mean It 3:13 Dez 14 2008, 16h21 If You Don't Mean It 3:13 Dez 14 2008, 1h16 If You Don't Mean It 3:13 Dez 13 2008, 23h44 If You Don't Mean It 3:13 Dez 12 2008, 19h52 If You Don't Mean It 3:13 Dez 12 2008, 18h26 If You Don't Mean It 3:13 Dez 12 2008, 17h00 If You Don't Mean It 3:13 Dez 11 2008, 20h40 If You Don't Mean It 3:13 Dez 11 2008, 19h10 If You Don't Mean It 3:13 Dez 10 2008, 20h42 If You Don't Mean It 3:13 Dez 10 2008, 19h16 If You Don't Mean It 3:13 Dez 10 2008, 17h49 If You Don't Mean It 3:13 Dez 8 2008, 20h38 If You Don't Mean It 3:13 Dez 8 2008, 19h12 If You Don't Mean It 3:13 Dez 7 2008, 18h19 If You Don't Mean It 3:13 Dez 5 2008, 20h33 If You Don't Mean It 3:13 Dez 5 2008, 19h07 If You Don't Mean It 3:13 Dez 4 2008, 20h26 If You Don't Mean It 3:13 Dez 4 2008, 18h50 If You Don't Mean It 3:13 Dez 3 2008, 14h38 If You Don't Mean It 3:13 Dez 2 2008, 20h20 If You Don't Mean 
It 3:13 Dez 2 2008, 18h54 If You Don't Mean It 3:13 Dez 1 2008, 20h11 If You Don't Mean It 3:13 Dez 1 2008, 18h43 If You Don't Mean It 3:13 Dez 1 2008, 1h54 If You Don't Mean It 3:13 Dez 1 2008, 0h57 If You Don't Mean It 3:13 Dez 1 2008, 0h03 If You Don't Mean It 3:13 Nov 30 2008, 23h02 If You Don't Mean It 3:13 Nov 30 2008, 2h22 If You Don't Mean It 3:13 Nov 30 2008, 0h57 If You Don't Mean It 3:13 Nov 28 2008, 15h58 If You Don't Mean It 3:13 Nov 28 2008, 14h33 If You Don't Mean It 3:13 Nov 27 2008, 19h50 If You Don't Mean It 3:13 Nov 27 2008, 18h20 If You Don't Mean It 3:13 Nov 26 2008, 18h12 If You Don't Mean It 3:13 Nov 26 2008, 16h43 If You Don't Mean It 3:13 Nov 24 2008, 20h15 If You Don't Mean It 3:13 Nov 24 2008, 18h45 If You Don't Mean It 3:13 Nov 22 2008, 18h29 If You Don't Mean It 3:13 Nov 22 2008, 17h03 If You Don't Mean It 3:13 Nov 22 2008, 0h22 If You Don't Mean It 3:13 Nov 21 2008, 22h53 If You Don't Mean It 3:13 Nov 21 2008, 21h27 If You Don't Mean It 3:13 Nov 21 2008, 19h55 If You Don't Mean It 3:13 Nov 21 2008, 18h23 If You Don't Mean It 3:13 Nov 21 2008, 16h51 If You Don't Mean It 3:13 Nov 21 2008, 15h19 If You Don't Mean It 3:13 Nov 19 2008, 21h51 If You Don't Mean It 3:13 Nov 19 2008, 19h56 If You Don't Mean It 3:13 Nov 19 2008, 18h20 If You Don't Mean It 3:13 Nov 18 2008, 18h34 If You Don't Mean It 3:13 Nov 18 2008, 17h48 If You Don't Mean It 3:13 Nov 16 2008, 17h09 If You Don't Mean It 3:13 Nov 16 2008, 15h43 If You Don't Mean It 3:13 Nov 16 2008, 14h18 If You Don't Mean It 3:13 Nov 15 2008, 18h09 If You Don't Mean It 3:13 Nov 15 2008, 2h09 If You Don't Mean It 3:13 Nov 14 2008, 20h32 If You Don't Mean It 3:13 Nov 14 2008, 19h01 If You Don't Mean It 3:13 Nov 14 2008, 17h36 If You Don't Mean It 3:13 Nov 11 2008, 19h22 If You Don't Mean It 3:13 Nov 11 2008, 18h38 If You Don't Mean It 3:13 Nov 10 2008, 19h38 If You Don't Mean It 3:13 Nov 10 2008, 18h10 If You Don't Mean It 3:13 Nov 8 2008, 2h07 If You Don't Mean It 3:13 Nov 8 2008, 1h19 If You 
Don't Mean It 3:13 Nov 7 2008, 19h50 If You Don't Mean It 3:13 Nov 4 2008, 18h25 If You Don't Mean It 3:13 Out 31 2008, 16h59 If You Don't Mean It 3:13 Out 31 2008, 15h08 If You Don't Mean It 3:13 Out 31 2008, 1h03 If You Don't Mean It 3:13 Out 31 2008, 0h46 If You Don't Mean It 3:13 Out 31 2008, 0h30 If You Don't Mean It 3:13 Out 31 2008, 0h13 If You Don't Mean It 3:13 Out 30 2008, 23h56 If You Don't Mean It 3:13 Out 30 2008, 23h39 If You Don't Mean It 3:13 Out 30 2008, 23h23 If You Don't Mean It 3:13 Out 30 2008, 23h06 If You Don't Mean It 3:13 Out 30 2008, 22h49 If You Don't Mean It 3:13 Out 30 2008, 22h32 If You Don't Mean It 3:13 Out 30 2008, 22h15 If You Don't Mean It 3:13 Out 25 2008, 17h43 If You Don't Mean It 3:13 Out 25 2008, 16h16 If You Don't Mean It 3:13 Out 24 2008, 18h10 If You Don't Mean It 3:13 Out 24 2008, 16h40 If You Don't Mean It 3:13 Out 24 2008, 15h21 If You Don't Mean It 3:13 Out 23 2008, 19h31 If You Don't Mean It 3:13 Out 23 2008, 18h04 If You Don't Mean It 3:13 Out 19 2008, 17h38 If You Don't Mean It 3:13 Out 19 2008, 16h26 If You Don't Mean It 3:13 Out 19 2008, 1h40 If You Don't Mean It 3:13 Out 19 2008, 0h14 If You Don't Mean It 3:13 Out 18 2008, 0h56 If You Don't Mean It 3:13 Out 17 2008, 23h39 If You Don't Mean It 3:13 Out 17 2008, 19h07 If You Don't Mean It 3:13 Out 17 2008, 17h58 If You Don't Mean It 3:13 Out 17 2008, 16h48 If You Don't Mean It 3:13 Out 17 2008, 15h39 If You Don't Mean It 3:13 Out 17 2008, 14h46 If You Don't Mean It 3:13 Out 17 2008, 1h54 If You Don't Mean It 3:13 Out 17 2008, 0h39 If You Don't Mean It 3:13 Out 13 2008, 20h21 If You Don't Mean It 3:13 Out 13 2008, 18h32 If You Don't Mean It 3:13 Out 11 2008, 19h47 If You Don't Mean It 3:13 Out 11 2008, 18h44 If You Don't Mean It 3:13 Out 11 2008, 17h40 If You Don't Mean It 3:13 Out 6 2008, 18h40 If You Don't Mean It 3:13 Set 24 2008, 16h33 If You Don't Mean It 3:13 Set 24 2008, 16h06 If You Don't Mean It 3:13 Set 23 2008, 18h18 If You Don't Mean It 3:13 Set 17 2008, 
23h54 If You Don't Mean It 3:13 Set 17 2008, 23h34 If You Don't Mean It 3:13 Ago 27 2008, 22h40
https://eccc.weizmann.ac.il/report/2020/013/
Under the auspices of the Computational Complexity Foundation (CCF)

### Revision(s):

Revision #1 to TR20-013 | 24th May 2021 21:23

#### Linear-time Erasure List-decoding of Expander Codes

Revision #1
Authors: Noga Ron-Zewi, Mary Wootters, Gilles Zémor
Accepted on: 24th May 2021 21:23

Abstract: We give a linear-time erasure list-decoding algorithm for expander codes. More precisely, let $r > 0$ be any integer. Given an inner code $\mathcal{C}_0$ of length $d$, and a $d$-regular bipartite expander graph $G$ with $n$ vertices on each side, we give an algorithm to list-decode the code $\mathcal{C} = \mathcal{C}(G, \mathcal{C}_0)$ of length $nd$ from approximately $\delta \delta_r nd$ erasures in time $n \cdot \mathrm{poly}(d 2^r / \delta)$, where $\delta$ and $\delta_r$ are the relative distance and the $r$'th generalized relative distance of $\mathcal{C}_0$, respectively. To the best of our knowledge, this is the first linear-time algorithm that can list-decode expander codes from erasures beyond their (designed) distance of approximately $\delta^2 nd$. To obtain our results, we show that an approach similar to that of (Hemenway and Wootters, *Information and Computation*, 2018) can be used to obtain such an erasure-list-decoding algorithm with an exponentially worse dependence of the running time on $r$ and $\delta$; then we show how to improve the dependence of the running time on these parameters.

Changes to previous version: Revision per final journal version

### Paper: TR20-013 | 17th February 2020 20:35

#### Linear-time Erasure List-decoding of Expander Codes

TR20-013
Authors: Noga Ron-Zewi, Mary Wootters, Gilles Zémor
Publication: 18th February 2020 17:44

We give a linear-time erasure list-decoding algorithm for expander codes. More precisely, let $r > 0$ be any integer.
Given an inner code $\mathcal{C}_0$ of length $d$, and a $d$-regular bipartite expander graph $G$ with $n$ vertices on each side, we give an algorithm to list-decode the expander code $\mathcal{C} = \mathcal{C}(G, \mathcal{C}_0)$ of length $nd$ from approximately $\delta \delta_r nd$ erasures in time $n \cdot \mathrm{poly}(d 2^r / \delta)$, where $\delta$ and $\delta_r$ are the relative distance and the $r$'th generalized relative distance of $\mathcal{C}_0$, respectively. To the best of our knowledge, this is the first linear-time algorithm that can list-decode expander codes from erasures beyond their (designed) distance of approximately $\delta^2 nd$. To obtain our results, we show that an approach similar to that of (Hemenway and Wootters, *Information and Computation*, 2018) can be used to obtain such an erasure-list-decoding algorithm with an exponentially worse dependence of the running time on $r$ and $\delta$; then we show how to improve the dependence of the running time on these parameters.
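The $\delta_r$ above is the $r$'th generalized relative distance, built from the $r$'th generalized Hamming weight $d_r$ of the inner code (the smallest support size of an $r$-dimensional subcode), normalized by the block length. As a concrete illustration (my own example, not part of the report), $d_r$ can be computed by brute force for a small code; the binary $[7,4]$ Hamming code has weight hierarchy $(3, 5, 6, 7)$:

```python
from itertools import combinations, product

# Generator matrix of the binary [7,4] Hamming code (illustrative example).
G = [
    (1, 0, 0, 0, 0, 1, 1),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 1, 1, 0),
    (0, 0, 0, 1, 1, 1, 1),
]

def span(vectors):
    """All GF(2) linear combinations of the given vectors."""
    n = len(vectors[0])
    return {
        tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) % 2 for i in range(n))
        for coeffs in product((0, 1), repeat=len(vectors))
    }

def generalized_distance(G, r):
    """r-th generalized Hamming weight: the smallest support size of an
    r-dimensional subcode, by brute force over r-sets of nonzero codewords."""
    codewords = [w for w in span(G) if any(w)]
    best = None
    for subset in combinations(codewords, r):
        sub = span(subset)
        if len(sub) != 2 ** r:      # the chosen codewords were dependent
            continue
        support = {i for w in sub for i, x in enumerate(w) if x}
        if best is None or len(support) < best:
            best = len(support)
    return best

assert [generalized_distance(G, r) for r in (1, 2, 3, 4)] == [3, 5, 6, 7]
```

This brute force is of course exponential in the code size; it is only meant to make the definition of $d_r$ concrete.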
https://www.physicsforums.com/threads/question-on-mean-value-theorem.827672/
# Question on Mean Value Theorem

1. Aug 14, 2015
### Titan97

1. The problem statement, all variables and given/known data

Let $f$ be a twice-differentiable function such that $|f''(x)|\le 1$ for all $x\in [0,1]$. If f(0)=f(1), then,
A) $|f(x)|>1$
B) $|f(x)|<1$
C) $|f'(x)|>1$
D) $|f'(x)|<1$

2. Relevant equations

MVT: $$f'(c)=\frac{f(b)-f(a)}{b-a}$$

3. The attempt at a solution

I first tried using integration. $$-1\le f''(x) \le 1$$ Integrating from 0 to x, $$-x\le f'(x)-f'(0) \le x$$ Again integrating from 0 to x, $$-\frac{x^2}{2}\le f(x)-f(0)-f'(0)x \le \frac{x^2}{2}$$ But even though I got an inequality for f(x), I could not remove the constants.

Then I applied Rolle's theorem for f(x). Since f(0)=f(1), there exists a point (at least one point) $c$ such that f'(c)=0. There exists a point $X\in [c,x]$ such that $$f''(X)=\frac{f(x)-f(c)}{x-c}$$ Here, $x\in [c,1]$, and since 1>x>c, x-c<1. Also, $|f''(x)|\le 1$. $$f''(X)=\frac{f(x)-f(c)}{x-c}$$ So, $\frac{f(x)-f(c)}{x-c}\le 1$ and $f(x)-f(c)\le {x-c}$. Hence I get the answer D. Is this correct?

2. Aug 14, 2015
### pasmith

Metahints for real analysis: If you're given a continuous function on a closed bounded interval and told its values at the end points, consider applying the intermediate value theorem. If you're given a differentiable function on a closed bounded interval and told its values at the end points, consider applying the mean value theorem.

There are two problems here. Firstly, I take it you are applying the MVT to $f'$ rather than $f$, so $X$ satisfies $$f''(X) = \frac{f'(x) - f'(c)}{x - c} = \frac{f'(x)}{x - c}.$$ Secondly, writing $X \in [c,x]$ implicitly requires that $c \leq x$. But you have to prove a result for every $x \in [0,1]$, so you also have to deal with the case $x < c$. However that requires no further work, since as long as $x \neq c$ there is an $X$ lying between $x$ and $c$ which satisfies the above equation. Thus if $x = c$ then $|f'(x)| = 0$, and if $x \neq c$ then $|f'(x)| < \dots$

3.
Aug 14, 2015
### RUber

What does that inequality look like when x = 1? That will give you bounds on f'(0).

4. Aug 14, 2015
### RUber

What if you define $f'(x) = f'(0) + \int_0^x f''(t) dt$? Then using the inequality $\left| \int f dx \right| \leq \int |f| dx$ you can quickly deduce something about f'(x). Then, since f(0) = f(1), you can say that $\int_0^c f'(x) dx = -\int_c^1 f'(x) dx$ You can find a maximum for this based on the same inequality.

I doubt you can say anything about |f(x)| since you aren't given anything that might constrain the initial values. f(0) = f(1) = 100 might be an option based on what you provided.

5. Aug 15, 2015
### Titan97

@pasmith, that was a typo. I lost internet connection while trying to edit.

@RUber, I found the minimum and maximum value f'(0) can take. $$-1/2 \le f'(0) \le 1/2$$

Last edited: Aug 15, 2015

6. Aug 15, 2015
### Titan97

But, from the equation, $-x\le f'(x)-f'(0)\le x$ and by substituting the max value of f'(0), $-x+0.5\le f'(x)\le x+0.5$ and at x=1, f'(x)>1. Is that correct?

7. Aug 15, 2015
### Qwertywerty

Titan97, do you want a proper solution, or would getting the answer alone simply be enough? I haven't actually tried the question, but you could consider a function such as f(x) = x(x - 1)/4. Solving this can easily tell you about f'(x); however I don't believe you can comment on f(x) on the basis of what's given in the original question.

I know this isn't a proper solution, but still ... Hope this helps.

8. Aug 15, 2015
### Titan97

I want a proper solution. I know many substitutions that can give the answer to such questions.

9. Aug 15, 2015
### Qwertywerty

Okay, first off - you cannot comment on the value of f(x).

For f'(x) - let f'(x) = k ( f'(0) + x ), where k is such that -1 <= k <= 1. f'(0) = - 0.5, and so f'(x) = k ( x - 0.5 ). ( x - 0.5 ) varies from - 0.5 to 0.5. ⇒ k ( x - 0.5 ) will belong to some interval contained in ( - 0.5 , 0.5 ), depending on the value of k.

Hope this helps.

10.
Aug 15, 2015
### Titan97

11. Aug 15, 2015
### Infrared

Hint: There is a point $x_0\in (0,1)$ such that $f'(x_0)=0$. If $x$ is another point in $(0,1)$, then you can use the fact that $|x-x_0|<1$ together with the information about $f''$ to estimate $f'(x)$.

12. Aug 15, 2015
### Qwertywerty

F'(0) has a fixed value -

13. Aug 15, 2015
### Qwertywerty

Made a mistake here. I'll get back to you once I correct it.

14. Aug 16, 2015
### RUber

If you use the facts here, knowing that f'(c)=0, consider that f' goes directly from its initial value to zero as quickly as possible given constraints on f". The smallest c possible is zero. From there, assume max departure from zero using the max for f". How far from zero can f' get on the remainder of the unit interval?

15. Aug 16, 2015
### Titan97

I did not understand that properly, @RUber.

16. Aug 16, 2015
### Infrared

I'm not sure why my post #11 is being ignored; it really suffices to solve the problem.

17. Aug 16, 2015
### Titan97

It's the same thing I have given in the original post. It does work.
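For what it's worth, conclusion D can be sanity-checked numerically. The function below is my own example (not from the thread): $f(x)=\sin(2\pi x)/(4\pi^2)$ satisfies $f(0)=f(1)=0$ and $f''(x)=-\sin(2\pi x)$, so $|f''|\le 1$ on $[0,1]$:

```python
import math

# f(x) = sin(2*pi*x) / (4*pi^2): satisfies f(0) = f(1) and |f''| <= 1.
def f1(x):
    """First derivative: cos(2*pi*x) / (2*pi)."""
    return math.cos(2 * math.pi * x) / (2 * math.pi)

def f2(x):
    """Second derivative: -sin(2*pi*x)."""
    return -math.sin(2 * math.pi * x)

xs = [i / 1000 for i in range(1001)]          # grid on [0, 1]
assert max(abs(f2(x)) for x in xs) <= 1.0     # hypothesis |f''| <= 1 holds
assert max(abs(f1(x)) for x in xs) < 1.0      # conclusion |f'| < 1 holds
```

Here the maximum of $|f'|$ is $1/(2\pi)\approx 0.16$, comfortably below 1; any function meeting the hypotheses should pass both checks.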
http://mathhelpforum.com/math-puzzles/84676-pentagram-puzzle.html
1. ## Pentagram Puzzle

Pentagram Puzzle

Draw a pentagram. Draw a circle at each vertex and at every intersection, giving ten circles in all: five star points and five inner intersections.

Code:
```
              O

   O     O         O     O

         O         O

              O

     O                   O
```

Place a coin on any empty circle. Then move it along a line TWO spaces to another empty circle. (A coin may be "jumped.") Leave the coin there. Repeat this process with more coins. If you are successful, you can place nine coins on the star. Then place the tenth coin on the last circle. Good luck!

2. ## Solution

Solution

Code:
```
             (0)

  (1)   (2)       (3)   (4)

        (5)       (6)

             (7)

    (8)                  (9)
```

With this numbering, each of the star's five lines passes through four circles: 0-2-5-8, 0-3-6-9, 1-2-3-4, 1-5-7-9 and 4-6-7-8.

Spoiler:

Place a coin on any empty circle, say 0. Move it two spaces to another empty circle, say 6. The next move must place a coin in the circle you just vacated. There is only one such move: 5-to-0. The next move puts a coin on 5: 9-to-5. Continue in this manner: 3-to-9, 1-to-3, 7-to-1, 4-to-7, 2-to-4, 8-to-2. And the last coin is placed on 8.
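The posted solution can also be checked mechanically. A sketch in Python, where the five-line list is inferred from the circle numbering in the solution diagram (each line of the star passes through four circles):

```python
# Verify the posted solution against the puzzle's rules.
LINES = [
    (0, 2, 5, 8),   # top vertex to bottom-left vertex
    (0, 3, 6, 9),   # top vertex to bottom-right vertex
    (1, 2, 3, 4),   # left vertex to right vertex
    (1, 5, 7, 9),   # left vertex to bottom-right vertex
    (4, 6, 7, 8),   # right vertex to bottom-left vertex
]

def two_apart(a, b):
    """True if circles a and b are exactly two spaces apart on some line."""
    return any(abs(line.index(a) - line.index(b)) == 2
               for line in LINES if a in line and b in line)

MOVES = [(0, 6), (5, 0), (9, 5), (3, 9), (1, 3), (7, 1), (4, 7), (2, 4), (8, 2)]

occupied = set()
for src, dst in MOVES:
    assert src not in occupied and dst not in occupied   # both circles empty
    assert two_apart(src, dst)                           # legal two-space move
    occupied.add(dst)                                    # coin stays at dst

occupied.add(8)            # tenth coin goes on the one remaining circle
assert occupied == set(range(10))
```

All nine moves are legal and the tenth coin lands on circle 8, filling the star.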
https://puzzling.stackexchange.com/questions/53390/mysterious-letter/53395
# Mysterious Letter

My friend Niko sent me a letter the other day (the original had errors, so the text has been updated):

> I DONT HΑVE ANY TIME TO TALK. IVE BEEN FRΑMED FOR TREΑSON АND M ON THE RUN. I NEED YOU TO HELP ME ESCAPE BАCK TO POLAND. I DO HАVE MY PASSPORT, BUT IT WONT DO ME ANY GOOD ΑS THINGS STАND. I HAVE MΑDE ΑRRАNGEMENTS WITH АN AMBАSS DOR TO LEΑVE U.S. SOIL THROUGH А CARGO SHIP BOUND FOR ARGENTINIА. I WONT SAY ITS N ME АS THIS LETTER MAY BE INTERCEPTED АLONG THE WΑY. I NEED YOUR HELP TO GET TO THE EMBАSSY SAFELY. I HАVE TOLD YOU ENOUGH FOR YOU TO WORK OUT THE LOCΑTION WHERE I NEED YOU TO MEET ME TOMORROW АFTERNOON. ΑT THIS POINT, ΑNYTHING MORE I WRITE WILL ONLY MАKE IT E SIER FOR THE FBI TO TRACK ME DOWN. THEREFORE, I WILL COUNT ON YOUR SUPPORT АND АWAIT OUR MEETING IN THE DESIGNATED PLΑCE. I WILL BE WEΑRING Α CARNΑTION IN MY LΑPEL, AND CARRYING MY RECORD COLLECTION.

I can't figure out where he wants me to meet him, however. Can someone work it out?

- Is Argentinia intentional? I mean, the typo? – Sid Jul 13 '17 at 17:19
- The I has no significance to the solution of the puzzle. It was a typo, but not an intentional one. – archaephyrryx Jul 13 '17 at 20:44

## Answer

The first thing to notice is that the letter As aren't all the Latin letter that we're familiar with. Many of them are either the Greek alpha or the Cyrillic (Russian) A instead. Interpreting those as ternary, with Greek being "1", Cyrillic being "2", and Latin being "0", then chunking them into groups of 3, ignoring extra letters, gives: INFRONTK FTOWNFRLJ?, which I believe is supposed to be "IN FRONT OF TOWN".

- Perhaps this has an obvious answer that I'm missing, but out of curiosity, how'd you notice that? – puzzledPig Jul 13 '17 at 18:39
- @puzzledPig Click edit on the question. You will see that even obvious words have red lines under them showing wrong spellings. Clearly, the A was different from the usual A. After that, it was all Deusovi. – Sid Jul 13 '17 at 18:47
- @puzzledPig - Copy and paste it into your favorite word processor. (Or just click "edit".) You'll see that there are "misspelled word" marks underlining half of the words. Those marks all end just before or start just after As, which means those should be inspected more. Copy-paste a few into the search bar and the results do not look to be in English for many of them. – Deusovi Jul 13 '17 at 18:50
- Ah, thank you, @Sid and @Deusovi! Good to know :) – puzzledPig Jul 13 '17 at 19:02
- I am accepting this solution as correct, as it is the correct solution for the text as I wrote it. I realize that I probably messed up an A here or there, leading to the original message being garbled. I am going to update the post with the intended text, but this will still be the accepted solution. – archaephyrryx Jul 13 '17 at 19:47
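The decoding scheme described in the answer can be sketched in a few lines of Python: treat each A-like glyph as a base-3 digit (Latin A = 0, Greek alpha = 1, Cyrillic A = 2), read the digits in order, group them in threes, and map each value to a letter. The `encode` helper, the 0-based letter offset, and the demo word are illustrative assumptions for this sketch, not taken from the original letter.

```python
# Base-3 digit assigned to each A variant: Latin, Greek alpha, Cyrillic A.
DIGIT = {'A': 0, 'Α': 1, 'А': 2}

def decode(text):
    """Collect the A-variant digits in order, chunk into groups of 3,
    and map each base-3 value to a letter (0 -> 'A')."""
    digits = [DIGIT[c] for c in text if c in DIGIT]
    out = []
    for i in range(0, len(digits) - len(digits) % 3, 3):
        a, b, c = digits[i:i + 3]
        out.append(chr(ord('A') + 9 * a + 3 * b + c))
    return ''.join(out)

def encode(word):
    """Build a string of A variants carrying `word` (helper for the demo)."""
    glyphs = {0: 'A', 1: 'Α', 2: 'А'}
    s = []
    for ch in word:
        v = ord(ch) - ord('A')
        s.extend(glyphs[d] for d in ((v // 9) % 3, (v // 3) % 3, v % 3))
    return ''.join(s)

print(decode(encode('TOWN')))  # → TOWN
```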
https://www.bookofproofs.org/branches/arithmetic-function/
## Definition: Arithmetic Function

A function $f:\mathbb Z \mapsto \mathbb C$ is called arithmetic (another name: number-theoretic).

### Notes

- The domain of arithmetic functions is the set of integers $\mathbb Z.$
- The codomain of arithmetic functions can be integers $\mathbb Z,$ real numbers $\mathbb R,$ or even complex numbers $\mathbb C.$

created: 2019-03-17 18:08:16 | modified: 2020-07-12 09:40:42 | by: bookofproofs | references: [701], [1272]
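A concrete instance of the definition is the divisor-count function. The extension to negative arguments via $|n|$ and the exclusion of 0 are conventions chosen for this sketch; the definition above only requires the domain $\mathbb Z$ and a codomain inside $\mathbb C$.

```python
def divisor_count(n: int) -> complex:
    """d(n): number of positive divisors of |n|, returned as a complex
    value to match the codomain in the definition above."""
    n = abs(n)
    if n == 0:
        raise ValueError("d(n) is not defined at 0 in this sketch")
    return complex(sum(1 for k in range(1, n + 1) if n % k == 0))

print(divisor_count(12))   # → (6+0j)  (divisors 1, 2, 3, 4, 6, 12)
print(divisor_count(-7))   # → (2+0j)
```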
https://allendowney.github.io/ThinkBayes2/chap05.html
# Estimating Counts

In the previous chapter we solved problems that involve estimating proportions. In the Euro problem, we estimated the probability that a coin lands heads up, and in the exercises, you estimated a batting average, the fraction of people who cheat on their taxes, and the chance of shooting down an invading alien.

Clearly, some of these problems are more realistic than others, and some are more useful than others. In this chapter, we'll work on problems related to counting, or estimating the size of a population. Again, some of the examples will seem silly, but some of them, like the German Tank problem, have real applications, sometimes in life and death situations.

## The Train Problem

I found the train problem in Frederick Mosteller's *Fifty Challenging Problems in Probability with Solutions*:

> "A railroad numbers its locomotives in order 1..N. One day you see a locomotive with the number 60. Estimate how many locomotives the railroad has."

Based on this observation, we know the railroad has 60 or more locomotives. But how many more? To apply Bayesian reasoning, we can break this problem into two steps:

- What did we know about $$N$$ before we saw the data?
- For any given value of $$N$$, what is the likelihood of seeing the data (a locomotive with number 60)?

The answer to the first question is the prior. The answer to the second is the likelihood.

We don't have much basis to choose a prior, so we'll start with something simple and then consider alternatives. Let's assume that $$N$$ is equally likely to be any value from 1 to 1000. Here's the prior distribution:

```python
import numpy as np
from empiricaldist import Pmf

hypos = np.arange(1, 1001)
prior = Pmf(1, hypos)
```

Now let's figure out the likelihood of the data. In a hypothetical fleet of $$N$$ locomotives, what is the probability that we would see number 60? If we assume that we are equally likely to see any locomotive, the chance of seeing any particular one is $$1/N$$.
Here's the function that does the update:

```python
def update_train(pmf, data):
    """Update pmf based on new data."""
    hypos = pmf.qs
    likelihood = 1 / hypos
    impossible = (data > hypos)
    likelihood[impossible] = 0
    pmf *= likelihood
    pmf.normalize()
```

This function might look familiar; it is the same as the update function for the dice problem in the previous chapter. In terms of likelihood, the train problem is the same as the dice problem.

Here's the update:

```python
data = 60
posterior = prior.copy()
update_train(posterior, data)
```

Here's what the posterior looks like:

```python
from utils import decorate

posterior.plot(label='Posterior after train 60', color='C4')
decorate(xlabel='Number of trains',
         ylabel='PMF',
         title='Posterior distribution')
```

Not surprisingly, all values of $$N$$ below 60 have been eliminated. The most likely value, if you had to guess, is 60.

```python
posterior.max_prob()
```

```
60
```

That might not seem like a very good guess; after all, what are the chances that you just happened to see the train with the highest number? Nevertheless, if you want to maximize the chance of getting the answer exactly right, you should guess 60.

But maybe that's not the right goal. An alternative is to compute the mean of the posterior distribution. Given a set of possible quantities, $$q_i$$, and their probabilities, $$p_i$$, the mean of the distribution is:

$$\mathrm{mean} = \sum_i p_i q_i$$

Which we can compute like this:

```python
np.sum(posterior.ps * posterior.qs)
```

```
333.41989326370776
```

Or we can use the method provided by Pmf:

```python
posterior.mean()
```

```
333.41989326370776
```

The mean of the posterior is 333, so that might be a good guess if you want to minimize error. If you played this guessing game over and over, using the mean of the posterior as your estimate would minimize the mean squared error over the long run.

## Sensitivity to the Prior

The prior I used in the previous section is uniform from 1 to 1000, but I offered no justification for choosing a uniform distribution or that particular upper bound.
We might wonder whether the posterior distribution is sensitive to the prior. With so little data—only one observation—it is. This table shows what happens as we vary the upper bound:

```python
import pandas as pd

df = pd.DataFrame(columns=['Posterior mean'])
df.index.name = 'Upper bound'

for high in [500, 1000, 2000]:
    hypos = np.arange(1, high+1)
    pmf = Pmf(1, hypos)
    update_train(pmf, data=60)
    df.loc[high] = pmf.mean()

df
```

| Upper bound | Posterior mean |
|------------:|---------------:|
| 500 | 207.079228 |
| 1000 | 333.419893 |
| 2000 | 552.179017 |

As we vary the upper bound, the posterior mean changes substantially. So that's bad.

When the posterior is sensitive to the prior, there are two ways to proceed:

- Get more data.
- Get more background information and choose a better prior.

With more data, posterior distributions based on different priors tend to converge. For example, suppose that in addition to train 60 we also see trains 30 and 90. Here's how the posterior means depend on the upper bound of the prior, when we observe three trains:

```python
df = pd.DataFrame(columns=['Posterior mean'])
df.index.name = 'Upper bound'

dataset = [30, 60, 90]

for high in [500, 1000, 2000]:
    hypos = np.arange(1, high+1)
    pmf = Pmf(1, hypos)
    for data in dataset:
        update_train(pmf, data)
    df.loc[high] = pmf.mean()

df
```

| Upper bound | Posterior mean |
|------------:|---------------:|
| 500 | 151.849588 |
| 1000 | 164.305586 |
| 2000 | 171.338181 |

The differences are smaller, but apparently three trains are not enough for the posteriors to converge.

## Power Law Prior

If more data are not available, another option is to improve the priors by gathering more background information. It is probably not reasonable to assume that a train-operating company with 1000 locomotives is just as likely as a company with only 1.

With some effort, we could probably find a list of companies that operate locomotives in the area of observation. Or we could interview an expert in rail shipping to gather information about the typical size of companies.
But even without getting into the specifics of railroad economics, we can make some educated guesses. In most fields, there are many small companies, fewer medium-sized companies, and only one or two very large companies. In fact, the distribution of company sizes tends to follow a power law, as Robert Axtell reports in Science (http://www.sciencemag.org/content/293/5536/1818.full.pdf).

This law suggests that if there are 1000 companies with fewer than 10 locomotives, there might be 100 companies with 100 locomotives, 10 companies with 1000, and possibly one company with 10,000 locomotives.

Mathematically, a power law means that the number of companies with a given size, $$N$$, is proportional to $$(1/N)^{\alpha}$$, where $$\alpha$$ is a parameter that is often near 1.

We can construct a power law prior like this:

```python
alpha = 1.0
ps = hypos**(-alpha)
power = Pmf(ps, hypos, name='power law')
power.normalize()
```

```
8.178368103610282
```

For comparison, here's the uniform prior again:

```python
hypos = np.arange(1, 1001)
uniform = Pmf(1, hypos, name='uniform')
uniform.normalize()
```

```
1000
```

Here's what a power law prior looks like, compared to the uniform prior:

```python
uniform.plot(color='C4')
power.plot(color='C1')
decorate(xlabel='Number of trains',
         ylabel='PMF',
         title='Prior distributions')
```

Here's the update for both priors:

```python
dataset = [60]
update_train(uniform, dataset)
update_train(power, dataset)
```

And here are the posterior distributions:

```python
uniform.plot(color='C4')
power.plot(color='C1')
decorate(xlabel='Number of trains',
         ylabel='PMF',
         title='Posterior distributions')
```

The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.
Here's how the posterior means depend on the upper bound when we use a power law prior and observe three trains:

```python
df = pd.DataFrame(columns=['Posterior mean'])
df.index.name = 'Upper bound'

alpha = 1.0
dataset = [30, 60, 90]

for high in [500, 1000, 2000]:
    hypos = np.arange(1, high+1)
    ps = hypos**(-alpha)
    power = Pmf(ps, hypos)
    for data in dataset:
        update_train(power, data)
    df.loc[high] = power.mean()

df
```

| Upper bound | Posterior mean |
|------------:|---------------:|
| 500 | 130.708470 |
| 1000 | 133.275231 |
| 2000 | 133.997463 |

Now the differences are much smaller. In fact, with an arbitrarily large upper bound, the mean converges on 134.

So the power law prior is more realistic, because it is based on general information about the size of companies, and it behaves better in practice.

## Credible Intervals

So far we have seen two ways to summarize a posterior distribution: the value with the highest posterior probability (the MAP) and the posterior mean. These are both point estimates, that is, single values that estimate the quantity we are interested in.

Another way to summarize a posterior distribution is with percentiles. If you have taken a standardized test, you might be familiar with percentiles. For example, if your score is the 90th percentile, that means you did as well as or better than 90% of the people who took the test.

If we are given a value, x, we can compute its percentile rank by finding all values less than or equal to x and adding up their probabilities. Pmf provides a method that does this computation. So, for example, we can compute the probability that the company has less than or equal to 100 trains:

```python
power.prob_le(100)
```

```
0.2937469222495771
```

With a power law prior and a dataset of three trains, the result is about 29%. So 100 trains is the 29th percentile.

Going the other way, suppose we want to compute a particular percentile; for example, the median of a distribution is the 50th percentile. We can compute it by adding up probabilities until the total exceeds 0.5.
Here's a function that does it:

```python
def quantile(pmf, prob):
    """Compute a quantile with the given prob."""
    total = 0
    for q, p in pmf.items():
        total += p
        if total >= prob:
            return q
    return np.nan
```

The loop uses items, which iterates the quantities and probabilities in the distribution. Inside the loop we add up the probabilities of the quantities in order. When the total equals or exceeds prob, we return the corresponding quantity.

This function is called quantile because it computes a quantile rather than a percentile. The difference is the way we specify prob. If prob is a percentage between 0 and 100, we call the corresponding quantity a percentile. If prob is a probability between 0 and 1, we call the corresponding quantity a quantile.

Here's how we can use this function to compute the 50th percentile of the posterior distribution:

```python
quantile(power, 0.5)
```

```
113
```

The result, 113 trains, is the median of the posterior distribution.

Pmf provides a method called quantile that does the same thing. We can call it like this to compute the 5th and 95th percentiles:

```python
power.quantile([0.05, 0.95])
```

```
array([ 91., 243.])
```

The result is the interval from 91 to 243 trains, which implies:

- The probability is 5% that the number of trains is less than or equal to 91.
- The probability is 5% that the number of trains is greater than 243.

Therefore the probability is 90% that the number of trains falls between 91 and 243 (excluding 91 and including 243). For this reason, this interval is called a 90% credible interval.

Pmf also provides credible_interval, which computes an interval that contains the given probability:

```python
power.credible_interval(0.9)
```

```
array([ 91., 243.])
```

## The German Tank Problem

During World War II, the Economic Warfare Division of the American Embassy in London used statistical analysis to estimate German production of tanks and other equipment.
The Western Allies had captured log books, inventories, and repair records that included chassis and engine serial numbers for individual tanks.

Analysis of these records indicated that serial numbers were allocated by manufacturer and tank type in blocks of 100 numbers, that numbers in each block were used sequentially, and that not all numbers in each block were used. So the problem of estimating German tank production could be reduced, within each block of 100 numbers, to a form of the train problem.

Based on this insight, American and British analysts produced estimates substantially lower than estimates from other forms of intelligence. And after the war, records indicated that they were substantially more accurate.

They performed similar analyses for tires, trucks, rockets, and other equipment, yielding accurate and actionable economic intelligence.

The German tank problem is historically interesting; it is also a nice example of real-world application of statistical estimation. For more on this problem, see this Wikipedia page and Ruggles and Brodie, "An Empirical Approach to Economic Intelligence in World War II", Journal of the American Statistical Association, March 1947, available here.

## Informative Priors

Among Bayesians, there are two approaches to choosing prior distributions. Some recommend choosing the prior that best represents background information about the problem; in that case the prior is said to be informative. The problem with using an informative prior is that people might have different information or interpret it differently. So informative priors might seem arbitrary.

The alternative is a so-called uninformative prior, which is intended to be as unrestricted as possible, in order to let the data speak for itself. In some cases you can identify a unique prior that has some desirable property, like representing minimal prior information about the estimated quantity.

Uninformative priors are appealing because they seem more objective.
But I am generally in favor of using informative priors. Why? First, Bayesian analysis is always based on modeling decisions. Choosing the prior is one of those decisions, but it is not the only one, and it might not even be the most subjective. So even if an uninformative prior is more objective, the entire analysis is still subjective.

Also, for most practical problems, you are likely to be in one of two situations: either you have a lot of data or not very much. If you have a lot of data, the choice of the prior doesn't matter; informative and uninformative priors yield almost the same results. If you don't have much data, using relevant background information (like the power law distribution) makes a big difference.

And if, as in the German tank problem, you have to make life and death decisions based on your results, you should probably use all of the information at your disposal, rather than maintaining the illusion of objectivity by pretending to know less than you do.

## Summary

This chapter introduces the train problem, which turns out to have the same likelihood function as the dice problem, and which can be applied to the German Tank problem. In all of these examples, the goal is to estimate a count, or the size of a population.

In the next chapter, I'll introduce "odds" as an alternative to probabilities, and Bayes's Rule as an alternative form of Bayes's Theorem. We'll compute distributions of sums and products, and use them to estimate the number of Members of Congress who are corrupt, among other problems.

But first, you might want to work on these exercises.

## Exercises

**Exercise:** Suppose you are giving a talk in a large lecture hall and the fire marshal interrupts because they think the audience exceeds 1200 people, which is the safe capacity of the room.

You think there are fewer than 1200 people, and you offer to prove it.
It would take too long to count, so you try an experiment:

- You ask how many people were born on May 11 and two people raise their hands.
- You ask how many were born on May 23 and 1 person raises their hand.
- Finally, you ask how many were born on August 1, and no one raises their hand.

How many people are in the audience? What is the probability that there are more than 1200 people? Hint: Remember the binomial distribution.

```python
# Solution

# I'll use a uniform prior from 1 to 2000
# (we'll see that the probability is small that there are
# more than 2000 people in the room)

hypos = np.arange(1, 2000, 10)
prior = Pmf(1, hypos)
prior.normalize()
```

```
200
```

```python
# Solution

# We can use the binomial distribution to compute the probability
# of the data for each hypothetical audience size

from scipy.stats import binom

likelihood1 = binom.pmf(2, hypos, 1/365)
likelihood2 = binom.pmf(1, hypos, 1/365)
likelihood3 = binom.pmf(0, hypos, 1/365)
```

```python
# Solution

# Here's the update

posterior = prior * likelihood1 * likelihood2 * likelihood3
posterior.normalize()
```

```
0.006758799800451805
```

```python
# Solution

# And here's the posterior distribution

posterior.plot(color='C4', label='posterior')
decorate(xlabel='Number of people in the audience',
         ylabel='PMF')
```

```python
# Solution

# If we have to guess the audience size,
# we might use the posterior mean

posterior.mean()
```

```
486.2255161687084
```

```python
# Solution

# And we can use prob_gt to compute the probability
# of exceeding the capacity of the room.
# It's about 1%, which may or may not satisfy the fire marshal

posterior.prob_gt(1200)
```

```
0.011543092507699223
```

**Exercise:** I often see rabbits in the garden behind my house, but it's not easy to tell them apart, so I don't really know how many there are.

Suppose I deploy a motion-sensing camera trap that takes a picture of the first rabbit it sees each day. After three days, I compare the pictures and conclude that two of them are the same rabbit and the other is different. How many rabbits visit my garden?
To answer this question, we have to think about the prior distribution and the likelihood of the data:

- I have sometimes seen four rabbits at the same time, so I know there are at least that many. I would be surprised if there were more than 10. So, at least as a starting place, I think a uniform prior from 4 to 10 is reasonable.
- To keep things simple, let's assume that all rabbits who visit my garden are equally likely to be caught by the camera trap in a given day. Let's also assume it is guaranteed that the camera trap gets a picture every day.

```python
# Solution

hypos = np.arange(4, 11)
prior = Pmf(1, hypos)
```

```python
# Solution

# The probability that the second rabbit is the same as the first is 1/N
# The probability that the third rabbit is different is (N-1)/N

N = hypos
likelihood = (N-1) / N**2
```

```python
# Solution

posterior = prior * likelihood
posterior.normalize()

posterior.bar(alpha=0.7)
decorate(xlabel='Number of rabbits',
         ylabel='PMF',
         title='The Rabbit Problem')
```

**Exercise:** Suppose that in the criminal justice system, all prison sentences are either 1, 2, or 3 years, with an equal number of each. One day, you visit a prison and choose a prisoner at random. What is the probability that they are serving a 3-year sentence? What is the average remaining sentence of the prisoners you observe?

```python
# Solution

# Here's the prior distribution of sentences

hypos = np.arange(1, 4)
prior = Pmf(1/3, hypos)
prior
```

| | probs |
|--:|--:|
| 1 | 0.333333 |
| 2 | 0.333333 |
| 3 | 0.333333 |

```python
# Solution

# If you visit a prison at a random point in time,
# the probability of observing any given prisoner
# is proportional to the duration of their sentence.

likelihood = hypos
posterior = prior * likelihood
posterior.normalize()
posterior
```

| | probs |
|--:|--:|
| 1 | 0.166667 |
| 2 | 0.333333 |
| 3 | 0.500000 |

```python
# Solution

# The mean of the posterior is the average sentence.
# We can divide by 2 to get the average remaining sentence.

posterior.mean() / 2
```

```
1.1666666666666665
```

**Exercise:** If I chose a random adult in the U.S., what is the probability that they have a sibling?
To be precise, what is the probability that their mother has had at least one other child?

From the data, I extracted the following distribution of family size for mothers in the U.S. who were 40-44 years old in 2014:

```python
import matplotlib.pyplot as plt

qs = [1, 2, 3, 4]
ps = [22, 41, 24, 14]
prior = Pmf(ps, qs)
prior.bar(alpha=0.7)
plt.xticks(qs, ['1 child', '2 children', '3 children', '4+ children'])
decorate(ylabel='PMF',
         title='Distribution of family size')
```

For simplicity, let's assume that all families in the 4+ category have exactly 4 children.

```python
# Solution

# When you choose a person at random, you are more likely to get someone
# from a bigger family; in fact, the chance of choosing someone from
# any given family is proportional to the number of children

likelihood = qs
posterior = prior * likelihood
posterior.normalize()
posterior
```

| | probs |
|--:|--:|
| 1 | 0.094828 |
| 2 | 0.353448 |
| 3 | 0.310345 |
| 4 | 0.241379 |

```python
# Solution

# The probability that they have a sibling is the probability
# that they do not come from a family of 1

1 - posterior[1]
```

```
0.9051724137931034
```

```python
# Solution

# Or we could use prob_gt again

posterior.prob_gt(1)
```

```
0.9051724137931034
```

**Exercise:** The Doomsday argument is "a probabilistic argument that claims to predict the number of future members of the human species given an estimate of the total number of humans born so far."

Suppose there are only two kinds of intelligent civilizations that can happen in the universe. The "short-lived" kind go extinct after only 200 billion individuals are born. The "long-lived" kind survive until 2,000 billion individuals are born. And suppose that the two kinds of civilization are equally likely. Which kind of civilization do you think we live in?

The Doomsday argument says we can use the total number of humans born so far as data. According to the Population Reference Bureau, the total number of people who have ever lived is about 108 billion.

Since you were born quite recently, let's assume that you are, in fact, human being number 108 billion.
If $$N$$ is the total number who will ever live and we consider you to be a randomly-chosen person, it is equally likely that you could have been person 1, or $$N$$, or any number in between. So what is the probability that you would be number 108 billion?

Given this data and dubious prior, what is the probability that our civilization will be short-lived?

```python
# Solution

hypos = [200, 2000]
prior = Pmf(1, hypos)
```

```python
# Solution

likelihood = 1/prior.qs
posterior = prior * likelihood
posterior.normalize()
posterior
```

| | probs |
|--:|--:|
| 200 | 0.909091 |
| 2000 | 0.090909 |

```python
# Solution

# According to this analysis, the probability is about 91% that our
# civilization will be short-lived.
# But this conclusion is based on a dubious prior.

# And with so little data, the posterior depends strongly on the prior.
# To see that, run this analysis again with a different prior,
# and see what the results look like.

# What do you think of the Doomsday argument?
```
http://mathhelpforum.com/trigonometry/125215-help-trig-homework-please.html
# Math Help - Help with Trig homework, please

**1. Help with Trig homework**

I have recently started my first trig class and I'm kind of lost on a couple of the homework problems.

If a wheel has a central angle of 157.8 degrees and it rotates 150.5 times per second, determine the radians/hour using angular velocity and miles/hour using linear velocity.

I don't even know what I'm doing really. For the angular velocity problem I know in an hour it rotates 9030 times. I have to convert 157.8 degrees into radians and I did that and got 5.26pi/6. So I'm not sure what to do next. I don't even know how to start the miles/hour problem. Can someone give me a hint please? Thanks.

**2. Reply**

If the thing rotates $\frac{150.5}{s}$, this implies that $\omega=\frac{2(150.5)\pi{rad}}{s}$.

Now the conversion to hours: $\frac{301\pi{rad}}{s}\cdot\frac{3600s}{hr}=?$

Wouldn't we need the radius of the wheel to determine linear velocity?

**3. Reply**

Recheck the wording of the first problem. The 157.8 degree central angle has nothing to do with the problem, unless the corresponding arc length was given, so that you might find the wheel's radius (which you will need to find the linear velocity).

To convert revolutions per second to rad/hr:

$\frac{rev}{sec} \cdot \frac{2\pi \, rad}{rev} \cdot \frac{3600 \, sec}{hr}$

The relationship between angular velocity, $\omega$, and linear velocity, $v$, is $v = r\omega$.
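The conversion described in the replies can be checked numerically. Only the angular-velocity part is computed here, since the radius needed for linear velocity is not given in the problem.

```python
import math

rev_per_sec = 150.5

# rev/s -> rad/s: multiply by 2*pi rad per revolution (gives 301*pi rad/s)
omega_rad_per_sec = 2 * math.pi * rev_per_sec

# rad/s -> rad/hr: multiply by 3600 s per hour (gives 1,083,600*pi rad/hr)
omega_rad_per_hr = omega_rad_per_sec * 3600

print(f"{omega_rad_per_hr:.4e} rad/hr")  # → 3.4042e+06 rad/hr
```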
http://bulletin.iita.org/stephanie-tsicos-orlwmt/s0qho.php?a53d6b=crystal-structure-of-metals-list
IITA News

# crystal structure of metals list

According to the study of chemical elements, all elements are mainly classified into three main types: metals, nonmetals and metalloids. Some materials have different structures at different temperatures. Dislocations are generated and move when a stress is applied. The crystal structure of metals was studied only through electron microscopy, when it became possible to obtain large magnifications of images. A crystal structure is identical to a crystalline solid, as defined by the solution of Problem 3.1. In addition, macroscopic single crystals are usually identifiable by their geometrical shape, consisting of flat faces with specific, characteristic orientations. There are different types of crystal structure exhibited by metals. What are the similarities in the properties of metals and ceramics? In mineralogy and crystallography, a crystal structure is a unique arrangement of atoms in a crystal. Give examples of materials which have crystal structures.

Crystal structure of metal

A crystal with this morphology slightly resembles a pine tree and is called a dendrite, which means branching. Each of the atoms of the metal contributes its valence electrons to the crystal lattice, forming an electron cloud or electron "gas" surrounding positive metal ions. WHY STUDY The Structure of Crystalline Solids?
Brass Crystal Structures

4.1. Unit cell: the small repeating entity of the atomic structure. In each close-packed arrangement, the first layer has the atoms packed into a plane-triangular lattice in which every atom has six immediate neighbours. When the bonding is mostly ionic, the crystal structure is made up of positively charged metallic ions (cations) and negatively charged nonmetallic ions (anions). Aluminium's atomic number is 13 and its atomic mass is 26.982 g/mole (recall that 1 mole contains $6.022 \times 10^{23}$ atoms). In this representation a hexagon on the top and on the bottom sandwich a triangle in between the two hexagons.

Niobium Crystals

Niobium has a bright metallic luster that develops a blue cast when the metal is exposed to air for a long period of time. Hexagonal close-packed structure unit cell. The basic metals display the characteristics people generally associate with the term "metal."

Primary Metallic Crystalline Structures (BCC, FCC, HCP)

As pointed out on the previous page, there are 14 different types of crystal unit cell structures, or lattices, found in nature. The properties of some materials are directly related to their crystal structures. The crystal structure also signifies the bond lengths and the lattice parameter of a particular material. The crystals tend to be sparkly and small. A crystal or crystalline solid is a solid material whose constituents (such as atoms, molecules, or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions. Let's take our simple cubic crystal structure of eight atoms from the last section and insert another atom in the center of the cube. We will explore the similarities and differences of four of the most common metal crystal geometries in the sections that follow. A crystalline solid is one which has a crystal structure in which atoms or ions are arranged in a pattern that repeats itself in three dimensions. The two most common chemical bonds for ceramic materials are covalent and ionic.
The formation of dendrites occurs because crystals grow in defined planes due to the crystal lattice they create. Some metals with hexagonal close-packed crystal structures include cobalt, cadmium, zinc, and the α phase of titanium.

Conductivity of metals: metals have a high density of conduction electrons. The atomic radius for metal bonding is 1.43 Angstroms ($1.43 \times 10^{-8}$ cm). "Giant" implies that large but variable numbers of atoms are involved, depending on the size of the piece of metal. Pure and undeformed magnesium and beryllium, having one crystal structure, are much more brittle (lower degrees of deformation) than pure and undeformed metals such as gold and silver that have another crystal structure.

3.2 Define a crystal structure. These chapters also present further information about lattice spacing and structure determination on metals in alphabetical order. A crystal is any solid material in which the component atoms are arranged in a definite pattern and whose surface regularity reflects its internal symmetry. In crystallography, crystal structure is a description of the ordered arrangement of atoms. Pauling also considered the nature of the interatomic forces in metals, and concluded that about half of the five d-orbitals in the transition metals are involved in bonding, with the remaining nonbonding d-orbitals being responsible for the magnetic properties. The unit cell defines the entire crystal structure with the atom positions within it. The essential distinction between different types of brasses is determined by their crystal structures. These free electrons belong to the whole metal crystal. In ferrimagnets the moments are in an antiparallel alignment, but they do not cancel.
Metals conduct heat and electricity, have a metallic luster, and tend to be dense, malleable, and ductile. Many metals adopt the hexagonal close packed or face-centred cubic (cubic close packed) structures. A typical example of an amorphous material is glass, but many plastics also have an irregular atomic structure. A vacancy may be indicated by a square symbol. The best example of a ferrimagnetic mineral is magnetite (Fe3O4). However, some of these elements display nonmetallic characteristics. The third common crystal structure in metals can be visualized as an assembly of cubes with atoms at the corners and an atom in the centre of each cube; this is known as body-centred cubic, or bcc. Crystal structure: the manner in which atoms, ions, or molecules are spatially arranged. The unit cell is the basic building block of the crystal structure. This book is of value to physicists and metallurgists. In ceramics the crystalline structure is more complex than that of metals. The most common lattice structures for metals are those obtained by stacking the atomic spheres into the most compact arrangement. Ferrimagnetism is another type of magnetic ordering; substances lacking a crystalline structure are called amorphous. The number of conduction electrons is constant, depending on neither temperature nor impurities. Some solids occur in more than one structure, e.g. graphite and diamond, or α-iron, δ-iron, γ-iron. The structure of the crystal lattice of metals of this type is described below.
Theoretical density of copper:

• crystal structure: FCC
• atoms/unit cell: $n = 4$
• atomic weight: $A = 63.55$ g/mol
• atomic radius: $R = 0.128$ nm

For FCC, $a = 2R\sqrt{2}$, so the unit-cell volume is $V_c = a^3 \approx 4.75 \times 10^{-23}\ \mathrm{cm^3}$. With $\rho = \frac{nA}{V_c N_A}$, the theoretical density comes out to $8.89\ \mathrm{g/cm^3}$, compared to the actual $8.94\ \mathrm{g/cm^3}$ for Cu.

The remaining chapters contain tables listing information about the crystal structures, densities, and expansion coefficients of the elements. The aluminum atom has three valence electrons in a partially filled outer shell; in metallic aluminum the three valence electrons per atom become conduction electrons. There are two such possible periodic arrangements. A more typical representation of the hexagonal close-packed structure is shown in the figure below.

Unit Cells of Metals

Crystal Structures of Metals

This is a quick introduction to the crystal lattice structure of metals.
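The copper worked example above can be reproduced in a few lines; only the bullet-list constants plus Avogadro's number are used:

```python
# Theoretical density of FCC copper from atomic radius and molar mass.
import math

n_atoms = 4              # atoms per FCC unit cell
A = 63.55                # g/mol, copper
R = 0.128e-7             # cm (0.128 nm)
N_A = 6.022e23           # Avogadro's number, 1/mol

a = 2 * R * math.sqrt(2)         # FCC lattice parameter
V_c = a ** 3                     # unit-cell volume, ~4.75e-23 cm^3
rho = n_atoms * A / (V_c * N_A)  # density = n*A / (V_c * N_A)
print(rho)                       # ~8.89 g/cm^3 (measured: 8.94 g/cm^3)
```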
The carbides of the group 4, 5 and 6 transition metals (with the exception of chromium) are often described as interstitial compounds. These carbides have metallic properties and are refractory; some exhibit a range of stoichiometries, being a non-stoichiometric mixture of various carbides arising due to crystal defects. In magnetite, two iron ions are trivalent, while one is divalent. Dislocations are areas where the atoms are out of position in the crystal structure. Examples of metals with the bcc structure are alpha iron, tungsten, chromium, and beta titanium. The distinction between brasses arises because the combination of copper and zinc is characterized by peritectic solidification, an academic way of saying that the two elements have dissimilar atomic structures, making them combine in unique ways depending upon content ratios. Metals are giant structures of atoms held together by metallic bonds. In metals, the crystals that form in the liquid during freezing generally follow a pattern consisting of a main branch with many appendages. Metal crystal structure and specific metal properties are determined by metallic bonding: the force holding together the atoms of a metal. The motion of dislocations allows slip, i.e. plastic deformation. The structure of a crystalline solid, whether a metal or not, is best described by considering its simplest repeating unit, which is referred to as its unit cell. A simple model for the usual close-packed metal structures is to assume that the metal atoms are spherical. Metals are generally found in the ores of other elements or minerals and exhibit a hard, solid metallic luster.
For example, one allotrope of tin behaves more as a nonmetal. Last modified: 2012/06/01 by dmitri_kopeliovich. Except where otherwise noted, this work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License. This section looks at Body-Centered Cubic, Face-Centered Cubic, and Hexagonal Close-Packed structures.

Vacancy (Schottky Defect in Metals): when an atom is missing from its lattice site in a crystal structure of a metal, it is called a vacancy (or vacant lattice site), as illustrated in Fig.

Type # 1. Aluminum is a metal with an FCC unit cell structure. This photo shows pure electrolytically-produced niobium crystals and a cube of anodized niobium. Osmium crystals possess the hexagonal close-packed (hcp) crystal structure. A crystalline structure is a typical feature of metals. A classification of the types of lattices was first given by the French scientist Bravais, by whose name they are sometimes called.
https://physics.stackexchange.com/questions/471722/cherenkov-light-and-refractive-index
# Cherenkov light and refractive index

LHCb's Ring Imaging CHerenkov detector (RICH) is aimed at telling different charged particles apart by measuring their velocity, which, together with an independent measurement of their momentum, is enough to derive their masses. It has been built in order to achieve a minimum angular acceptance of $10$ mrad and its medium has refractive index $n=1.0014$. As far as I know there is also RICH2, with a lower refractive index $n=1.0005$, to measure high-energy particles, since it postpones the saturation effect.

The problem is that I don't really understand what "postpones the saturation effect" really means. The lower refractive index is supposed to allow me to distinguish between particles at higher energies, but I don't really understand why the lower index should help here. Can somebody maybe explain this, or give an example that illustrates how the refractive index helps to distinguish particles with higher energy?

I am not entirely sure what "saturation effect" means in this case. But let us go back to the expression for the Cherenkov angle:

$$\cos\theta = \frac{1}{n \beta}$$

we can obtain two limits:

• first, on the minimum $\beta$ to have Cherenkov effect: $\beta_{\rm th} = \frac{1}{n}$; note that the smaller $n$, the higher the velocity a particle needs before it radiates at all;

• but also on the maximum aperture angle of the Cherenkov cone, in case the particle is ultrarelativistic ($\beta \rightarrow 1$): $\cos\theta_{\rm max} = \frac{1}{n}$.

I thought the "saturation" might be related to this second point: the fastest particles will generate cones with apertures

$$\begin{split} \theta_{\rm max}^{\rm RICH} &= \arccos \left( \frac{1}{1.0014} \right) \approx 3.03^{\circ}, \\ \theta_{\rm max}^{\rm RICH2} &= \arccos \left( \frac{1}{1.0005} \right) \approx 1.81^{\circ}, \end{split}$$

in the two different detectors. For ultra-relativistic particles you will have smaller rings in the detector with the smaller $n$. If you receive many particles it would be easier to "saturate" (to create more overlapping rings, maybe?) the detector with the larger $n$. Just my speculation though!

EDIT

After the useful suggestions of @dukwon I understood the saturation was not due to the number of particles hitting the detector but to the impossibility of distinguishing their momenta when $\beta \rightarrow 1$. Therefore I created this plot. Here I show the values of $\theta_{\rm C}$, i.e. the Cherenkov angle, for different values of the particle energy. I consider three particles (kaon, pion and muon) and two media with the indexes $n_1=1.0014$ and $n_2=1.0005$, as in @Sito's original question. I placed the vertical dashed line by eye at the point where I am no longer capable of distinguishing the kaon and pion angles. As you can see, in the material with the worse (higher) refractive index this happens already at $80\,{\rm GeV}$. In the material with the better (lower) refractive index you can push the indistinguishability of kaons and pions to $E > 100\,{\rm GeV}$, and that's exactly why $n_2=1.0005$ "postpones the saturation effect". Thanks @dukwon!
@Sito, here the snippet if you want to reproduce the figure

```python
import numpy as np
import matplotlib.pyplot as plt

def beta(E, mec2):
    gamma = E / mec2
    return np.sqrt(1 - 1 / np.power(gamma, 2))

def theta(beta, n):
    # below the Cherenkov threshold (beta * n < 1) no light is emitted;
    # return NaN there so matplotlib simply skips those points
    arg = 1 / (beta * n)
    return np.arccos(np.where(arg <= 1, arg, np.nan))

mec2_kaon = 493.68  # MeV / c^2
mec2_pion = 139.57  # MeV / c^2
mec2_muon = 105.65  # MeV / c^2

E = np.logspace(np.log10(5e3), 6, 100)  # MeV

beta_pion = beta(E, mec2_pion)
beta_kaon = beta(E, mec2_kaon)
beta_muon = beta(E, mec2_muon)

n_1 = 1.0014
n_2 = 1.0005

fig, ax = plt.subplots()
plt.semilogx(E, theta(beta_kaon, n_1), color="crimson", ls="-", label=r"$K$, $n_1=1.0014$")
plt.semilogx(E, theta(beta_pion, n_1), color="crimson", ls="--", label=r"$\pi$, $n_1=1.0014$")
plt.semilogx(E, theta(beta_muon, n_1), color="crimson", ls="-.", label=r"$\mu$, $n_1=1.0014$")
plt.semilogx(E, theta(beta_kaon, n_2), color="k", ls="-", label=r"$K$, $n_2=1.0005$")
plt.semilogx(E, theta(beta_pion, n_2), color="k", ls="--", label=r"$\pi$, $n_2=1.0005$")
plt.semilogx(E, theta(beta_muon, n_2), color="k", ls="-.", label=r"$\mu$, $n_2=1.0005$")
plt.axvline(8e4, color="crimson", ls=":")
plt.axvline(1.1e5, color="k", ls=":")
plt.xlabel("E / MeV")
plt.ylabel(r"$\theta_{\rm C}$ / rad")
plt.legend()
plt.show()
fig.savefig("cherenkov_angles.png")
```

• Thank you for the answer! I think point two is related to saturation, at least I remember my professor mentioning $\beta \to 1$... The problem with your answer is just that I still don't really see how the smaller refractive index helps keeping high energy particles apart... Could you maybe elaborate on that a bit? – Sito Apr 10 at 18:13
• Almost there with regards to saturation. It is indeed when the angle is at the maximum, but it's not due to overlapping rings. It's that there's no distinction between particles of different mass. – dukwon Apr 10 at 19:10
• @dukwon could you maybe expand on that a bit?
I'm still having some trouble with the concept here and it doesn't seem like this answer will get any updates anymore... – Sito Apr 16 at 15:02
• Saturation is where the lines meet: twiki.cern.ch/twiki/pub/LHCb/RICHPicturesAndFigures/… – dukwon Apr 16 at 15:57
• Hi, improved the answer thanks to @dukwon's clarifications. – cosimoNigro Apr 17 at 12:07
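The two limits in the answer can also be framed as threshold energies: a particle of mass $m$ only radiates once $\beta > 1/n$, i.e. $\gamma > 1/\sqrt{1 - 1/n^2}$. A quick check (standard relativistic kinematics, using the same masses as the plotting snippet above):

```python
# Minimum total energy for Cherenkov emission: E_th = m c^2 / sqrt(1 - 1/n^2).
import math

def threshold_energy(mc2_mev, n):
    """Cherenkov threshold energy (MeV) in a medium of refractive index n."""
    beta_th = 1.0 / n
    gamma_th = 1.0 / math.sqrt(1.0 - beta_th ** 2)
    return mc2_mev * gamma_th

for name, m in [("muon", 105.65), ("pion", 139.57), ("kaon", 493.68)]:
    print(name, threshold_energy(m, 1.0014), threshold_energy(m, 1.0005))
# e.g. the kaon threshold rises from ~9.3 GeV (n = 1.0014) to ~15.6 GeV (n = 1.0005):
# the lower-index radiator both delays the onset of light and delays saturation.
```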
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-10-section-10-7-probability-exercise-set-page-1121/81
## Precalculus (6th Edition) Blitzer

Published by Pearson

# Chapter 10 - Section 10.7 - Probability - Exercise Set - Page 1121: 81

#### Answer

a. The denominator will be zero if $x=4$.
b. $2$.
c. $2$.

#### Work Step by Step

a. We can see that when $x=4$, $f(x)=\frac{0}{0}$, which is undefined. Thus, $f(x)$ is undefined because the denominator will be zero if $x=4$.

b. We can see from the table that as $x$ approaches $4$ from the left, $f(x)$ gets closer to the integer $2$.

c. We can see from the table that as $x$ approaches $4$ from the right, $f(x)$ gets closer to the same integer $2$.
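The textbook's formula for $f$ is not reproduced on this page, but the two-sided table check is easy to replicate. As an illustrative stand-in (not the book's actual function), take $f(x) = \frac{x^2-2x-8}{x^2-5x+4}$, which is likewise $\frac{0}{0}$ at $x=4$ and approaches $2$ from both sides:

```python
def f(x):
    # hypothetical stand-in: 0/0 at x = 4, with two-sided limit 2,
    # since f(x) = (x-4)(x+2) / ((x-4)(x-1)) = (x+2)/(x-1) away from x = 4
    return (x**2 - 2*x - 8) / (x**2 - 5*x + 4)

for x in [3.9, 3.99, 3.999, 4.001, 4.01, 4.1]:
    print(x, f(x))   # values approach 2 from the left and from the right
```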
https://questioncove.com/updates/4d59ae287c17b764ffcd7c91
Mathematics

OpenStudy (anonymous): Integrate the square root of $3e^{2t}$. The integral has a 5 at the top and a 1 at the bottom.

OpenStudy (anonymous): First of all, you need to simplify the expression:
$$\sqrt{3e^{2t}} = (3e^{2t})^{\frac{1}{2}} = 3e^{2t\cdot\frac{1}{2}} = 3e^{t}$$
You can then integrate knowing that $\int e^t \, dt = e^t$.

OpenStudy (anonymous): Ahh right, right, because a square root is ^(1/2). Thank ya.

OpenStudy (anonymous): But wait, I've been working towards the answer in the back of the book, which is $\sqrt{3}\,(e^5 - e)$. How'd they get that?

OpenStudy (anonymous): Uh... I forgot to distribute the exponent. It should actually read:
$$(3e^{2t})^{\frac{1}{2}} = 3^{\frac{1}{2}}\,e^{2t\cdot\frac{1}{2}} = \sqrt{3}\,e^{t}$$
That's how the $\sqrt{3}$ comes out. The $e^5-e$ results from integrating within the bounds:
$$\int_{1}^{5} e^t \, dt = e^t \Big|_{1}^{5} = e^{5}-e^{1}$$

OpenStudy (anonymous): Ahh ok, thanks.
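The corrected answer can be sanity-checked numerically. A small sketch comparing a composite trapezoid-rule estimate of $\int_1^5 \sqrt{3e^{2t}}\,dt$ against the book's closed form $\sqrt{3}(e^5 - e)$:

```python
import math

def integrand(t):
    # sqrt(3 e^{2t}) = sqrt(3) * e^t
    return math.sqrt(3 * math.exp(2 * t))

# Composite trapezoid rule on [1, 5].
n = 100_000
a, b = 1.0, 5.0
h = (b - a) / n
numeric = (integrand(a) + integrand(b)) / 2
numeric += sum(integrand(a + i * h) for i in range(1, n))
numeric *= h

analytic = math.sqrt(3) * (math.exp(5) - math.e)
print(numeric, analytic)  # the two agree to several decimal places
```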
https://datascience.stackexchange.com/questions/36807/dataframe-looks-the-same-but-the-structure-is-different-when-loop
Dataframe looks the same but the structure is different when looping

I am generating a dataframe from a JSON file. The JSON file can come from 2 different sources, so the internal structure is slightly different. What I am doing is first detecting the source and from there performing a set of operations that gives me a DataFrame.

Everything is good until here (I thought): when I print them in Jupyter they look the same (same structure, and each df has the same number of columns, 7 columns). The problem comes when I loop through them, where I get completely different results: in one I have only 2 columns, in the other one I get all the columns. I am looping with:

for i, (index, row) in enumerate(df_trans.iterrows()):
    print(row)

Is there a way to see the structure? I am quite confused about why the printed dfs look the same but behave differently when looping.

EDIT: I noticed that when I print the dataframe after a grouping with

df_summary_trans_cs.groupby(['Date'])['sale', 'refund', 'Balance'].agg('sum')

I get all the columns, but when I add the column

df_summary_trans_cs.groupby(['Date'])['sale', 'refund', 'Balance', 'Trans'].agg('sum')

I only get that column; the other 3 disappear.

• It's possible that in the df that prints only 2 columns the others are set as the index. Print df1.columns and df2.columns and see if they are the same. – yoav_aaa Aug 12 '18 at 12:18
• @DaFanat both show me Index(['detail', 'date', 'amount'], dtype='object'), however when I do type() only one shows me pandas.core.frame.DataFrame. That doesn't make much sense, as I defined both as df = pd.DataFrame() – Manza Aug 12 '18 at 12:27

df.MissingColumn = df.MissingColumn.astype(np.float64)
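The suspicion raised in the comments (columns absorbed into the index) can be checked directly. A small sketch with made-up data, showing how two frames can print almost identically yet differ structurally, and how `reset_index()` turns index levels back into ordinary columns so `iterrows()` sees every field (the frames `df1`/`df2` here are illustrative, not the asker's data):

```python
import pandas as pd

# Two hypothetical frames: df2 keeps 'detail' and 'date' in its
# MultiIndex rather than as columns, so it prints similarly to df1
# but exposes fewer columns when iterating.
df1 = pd.DataFrame({"detail": ["a", "b"], "date": [1, 2], "amount": [10, 20]})
df2 = df1.set_index(["detail", "date"])

print(df1.columns.tolist())  # ['detail', 'date', 'amount']
print(df2.columns.tolist())  # ['amount'] -- the rest moved into the index

# reset_index() converts the index levels back into regular columns.
df2_flat = df2.reset_index()
print(df2_flat.columns.tolist())  # ['detail', 'date', 'amount']
```

Comparing `df.columns` and `df.index` (not just the printed table) is the quickest way to spot this difference.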
http://media.nips.cc/nipsbooks/nipspapers/paper_files/nips28/reviews/18.html
Paper ID: 18
Title: Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling

Current Reviews

Submitted by Assigned_Reviewer_1

Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)

A covariance-controlled Nose thermostat is presented to sample from a target measure when computing the likelihood is infeasible due to far too many terms appearing in the product-form likelihood. This paper builds on much of the solid work in molecular dynamics by the likes of Leimkuhler and is an interesting addition to this emerging area of literature. The experimental evaluations are a bit simplistic, though the final DBM experiment was interesting.

Quality: This is a solid piece of work that takes results from molecular dynamics and uses them to address a contemporary ML problem.

Clarity: The paper is clearly written, with no unnecessary complication or obfuscation; a very clear and accessible read.

Originality: A similar construction, with fixed covariance, appeared at last year's NIPS in work by Skeel, and this paper builds and improves on that work. I would not take this as an impediment to the paper, as it is a valuable addition.

Significance: This area is going to be significant for models with factorable likelihoods, a limited class of statistical models.

Submitted by Assigned_Reviewer_2

Q1: Comments to author(s).
First provide a summary of the paper, and then address the following criteria: quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)

This paper proposes the CCAdL method, which improves over the SGNHT method when the variance of the stochastic gradients is not constant over the whole parameter space. The cost is that the CCAdL method has to explicitly estimate the stochastic gradient variance. The estimation of the SG variance can be conducted on the minibatch, which is reasonably efficient. But this introduces additional noise that comes from the estimator itself (acknowledged by the authors at line 193). Although thermostats can stabilize the system by neutralizing constant noise, the noise from this estimator is clearly also parameter dependent. Therefore, at the end of the day, there is still some error in the system which cannot be removed, just as with SGNHT. But based on the good experimental results, it may be that the impact of this error is not as severe as the original error from the SG. It would be very interesting if the authors could look into the problem and characterize this error compared with the error from the SG.

The paper is well written and the experimental results look good (although only small-scale experiments are conducted). But without a more careful analysis of the error from the newly introduced stochastic noise, the paper may be incremental in terms of overall novelty.

The paper incorporates an estimator of the variance of the stochastic gradients into SGNHT. The algorithm is incremental over SGNHT and has a flaw which has not been fully addressed.

Submitted by Assigned_Reviewer_3

Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: quality, clarity, originality and significance.
(For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)

The paper introduces the covariance-controlled adaptive Langevin thermostat (CCAdL), a Bayesian sampling method based on stochastic gradients (SG) that aims to account for correlated errors introduced by the SG approximation of the true gradient. The authors demonstrate that CCAdL is more accurate and robust than other SG-based methods on various test problems.

In general, the paper is well written but sometimes a bit hard to follow for someone who is not familiar with these types of sampling algorithms. The paper starts by reviewing various SG methods for efficient Bayesian posterior sampling (SGDL, mSGDL, SGHMC, SGNHT). It would be quite helpful if the authors could provide, for example, a table or figure that gives an overview of the different SG variants and highlights their commonalities and differences.

CCAdL combines ideas that have been proposed previously (mainly SGHMC and SGNHT):
- approximation of the true gradient of the log posterior with a stochastic version obtained by sub-sampling the data; this introduces "noise" in the gradient which has to be dealt with
- an estimate of the covariance matrix of the gradient noise; the estimate is a running average over the Fisher scores; in high-dimensional problems the authors replace it by the diagonal matrix
- the use of a thermostat in order to account for the inefficiency of the Metropolis Monte Carlo acceptance/rejection used, e.g., in standard HMC

Given that I'm not an expert in the field, I'm not sure what the real novelty of CCAdL is. It apparently combines ideas from SGHMC and SGNHT with a previously proposed estimator for the noise covariance matrix.

The authors demonstrate CCAdL on a logistic regression problem and show that it converges significantly faster to higher log-likelihood values. CCAdL also works for friction values smaller than those needed by SGHMC and SGNHT to result in stable sampling.
By looking at the marginal distributions of pairs of parameters, the authors show that CCAdL produces posterior distributions that are close to the "true" distribution obtained by HMC on the full likelihood. As a second, large-scale example the authors train and test discriminative RBMs on three datasets. Again, they find that CCAdL performs better than SGHMC and SGNHT for most stepsizes and friction constants.

These results are quite promising and deserve publication. It would be nice if the authors could improve the paper in terms of readability for the non-expert and correct some details. For example, some symbols are not explained when they are introduced, e.g. $\mu$ and $dW_A$.

The paper introduces the covariance-controlled adaptive Langevin thermostat (CCAdL), a Bayesian sampling method that is based on stochastic gradients (SG) and aims to account for correlated errors introduced by the SG approximation of the true gradient. The authors demonstrate that CCAdL is more accurate and robust than other SG-based methods.

Submitted by Assigned_Reviewer_4

Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)

Paper Title: Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling

Paper Summary: This paper presents a new method (the "covariance-controlled adaptive Langevin thermostat") for MCMC posterior sampling for Bayesian inference. Along the lines of previous work in scalable MCMC, this is a stochastic gradient sampling method. The presented method aims to decrease parameter-dependent noise (in order to speed up convergence to the given invariant distribution of the Markov chain and generate beneficial samples more efficiently), while maintaining the desired invariant distribution of the Markov chain.
Similar to existing stochastic gradient MCMC methods, this method aims to find use in large-scale machine learning settings (i.e. Bayesian inference with large numbers of observations). Experiments on three models (a normal-gamma model, Bayesian logistic regression, and a discriminative restricted Boltzmann machine) aim to show that the presented method performs better than stochastic gradient Hamiltonian Monte Carlo (SGHMC) and the stochastic gradient Nose-Hoover thermostat (SGNHT), two similar existing methods.

- I feel that this paper proposes a valid contribution to the area of stochastic gradient MCMC methods, and does a good job putting this method in context with similar previous methods (SGHMC and SGNHT). However, one detriment of this paper is that it is somewhat incremental, both in terms of ideas and the results shown.

- In the experiments, comparisons are only against two methods: SGHMC and SGNHT. It might be nice to also see results for some of the other recently developed mini-batch MCMC methods (such as the original stochastic gradient Langevin dynamics, or stochastic gradient Fisher scoring), or for some of the methods that do not rely on stochastic gradients, such as (Bardenet, Remi, Arnaud Doucet, and Chris Holmes. "On Markov chain Monte Carlo methods for tall data." arXiv preprint arXiv:1505.02827 (2015)) or (Maclaurin, Dougal, and Ryan P. Adams. "Firefly Monte Carlo: Exact MCMC with subsets of data." arXiv preprint arXiv:1403.5693 (2014)).

- In Figure 1, the inset "peaks" add very little to the figure; they seem to be only a very slight zoom into what is shown in the non-inset part of the figure.

- I feel there are a few places in this paper where the quality of writing could be improved. In the abstract, there are a few sentences that feel somewhat ambiguous to me (such as "one area of current research asks how to utilise the computational benefits of stochastic gradient methods in this setting.").
In the intro (second paragraph), the order of presentation of stochastic gradient methods seems odd (first, the collection of all existing methods is described, and only afterwards the first developed method). In section 2, there is a bit of confusion when a few terms are introduced without enough description (such as "temperature" and "Boltzmann constant"); it would be better to give a brief description or intuition when introducing these terms to a machine learning audience.

I feel that developing better methods for scalable Bayesian inference is important, and that this paper does a good job of combining benefits from two similar methods, SGHMC and SGNHT, and showing better performance in practice. However, I feel that the contributions made by this paper are somewhat incremental, and that more care could be taken to show results with other recently developed comparison methods.

Author Feedback

Q1: Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a maximum of 5000 characters. Note, however, that reviewers and area chairs are busy and may not read long, vague rebuttals. It is in your own interest to be concise and to the point.

We wish to thank the reviewers for their valuable suggestions and comments.

Reviewer 2: The reviewer notes that the covariance estimation introduces noise that is not corrected in the proposed formulation.

Response: The additional error introduced by covariance estimation is expected to be substantially smaller than the original error. The technique proposed thus represents an improvement on the existing methodology. Residual error is automatically and adaptively handled using the Nose-Hoover device.
Numerical experiments included in the paper demonstrate that the newly proposed CCAdL method can significantly reduce the error and drive the system more rapidly into thermodynamic equilibrium. The combination of covariance estimation and adaptive correction reliably outperforms the SGHMC and SGNHT methods from the literature.

Reviewer 3: The reviewer requests some improvement in the presentation for nonspecialists and asks that details be provided on the definitions of all parameters.

Response: We will expand the introduction somewhat to make the paper more accessible. We have added definitions of $\mu$ and $dW_A$ following the formulation of CCAdL and will ensure that no other parameters are introduced without description.

Reviewer 4: While recommending publication, the reviewer suggests that the benefit of the CCAdL method is "somewhat incremental". The reviewer suggests additional numerical experiments with certain specified MCMC schemes.

Response: Our numerical experiments indicate that the CCAdL method is not by any means an incremental improvement, but potentially, at least for applications such as those we have considered, *orders of magnitude more efficient* (see Figures 2 and 3) than the SGHMC and SGNHT methods, which are the state-of-the-art schemes for addressing stochastically perturbed gradient forces in machine learning applications. (The mentioned MCMC methods, while certainly interesting, are not the right comparisons for our method.) We will modify the abstract and introduction to highlight the dramatic improvement in efficiency obtained using the CCAdL scheme.

Reviewers 5-7 are highly favorable and provide no negative feedback to address.
http://nrich.maths.org/6260/solution
# Multiple Magic

##### Stage: 3 Short Challenge Level

Answer: 3.

Call your first number $x$. Doubling and adding five gives $2x+5$. Doubling and adding two gives $2(2x+5)+2=4x+12$. Subtracting $x$ then leaves $3x+12=3(x+4)$, which is always a multiple of 3.

This problem is taken from the UKMT Mathematical Challenges.
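The algebra can be spot-checked by brute force. A quick sketch applying the puzzle's operations to a range of starting numbers:

```python
# Apply the trick's steps: double and add 5, double and add 2,
# then subtract the starting number. The result 3x + 12 should be
# a multiple of 3 for every integer x.
def trick(x):
    step1 = 2 * x + 5       # double and add five
    step2 = 2 * step1 + 2   # double and add two
    return step2 - x        # subtract the first number

for x in range(-50, 51):
    assert trick(x) == 3 * (x + 4)  # matches the algebraic simplification
    assert trick(x) % 3 == 0        # always a multiple of 3
```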
https://mathematica.stackexchange.com/questions/69570/solving-a-discontinuous-differential-algebraic-equation-system-for-plasticity-be
# Solving a discontinuous differential-algebraic equation system for plasticity behaviour

I need to solve a discontinuous equation system which is typical in the theory of plasticity. For a simple case I get the following system (reformulated for numerical implementation):

\begin{align*} s(t) &= \frac{\sigma(t)}{C_1} + s_{ep}(t)\\ s_{ep}'(t) &= \begin{cases}\frac{C_1}{C_1+C_2}s'(t) & \text{for } |\sigma(t)-C_2 s_{ep}(t)| \ge \sigma_{gr} \land \sigma(t)s'(t)>0\\0 & \text{otherwise}\end{cases} \end{align*}

with "zero" initial conditions. I'd like to get the solution for $s(t)$ for given parameters $C_1$, $C_2$, $\sigma_{gr}$ and a known function $\sigma(t)$. I assumed: $\sigma(t) = 40000\sin(0.02t)$, $C_1=80000$, $C_2 = 20000$, $\sigma_{gr} = 15000$. This should give a hysteresis loop in the plane $\sigma(t)$-$s(t)$.

So in Mathematica I tried to use automatic discontinuity handling by defining the second equation using a Piecewise function:

σ[t_] := 40000*Sin[0.02*t];
eq1 = s[t] == σ[t]/C1 + sep[t];
eq2 = sep'[t] == Piecewise[{{C1/(C1 + C2)*s'[t], (σ[t]*s'[t] > 0) && ((σ[t] - C2*sep[t] >= σgr) || (σ[t] - C2*sep[t] <= -σgr))}}, 0];
eqSys := {eq1, eq2, s[0] == 0, sep[0] == 0};
ndsolve = NDSolve[eqSys, {s[t], sep[t]}, {t, 0, 1000}]
disp[t_] := Evaluate[s[t] /. ndsolve];
sTab = Table[disp[t][[1]], {t, 0, 1000, 1}];
σTab = Table[σ[t], {t, 0, 1000, 1}];
ListPlot[Transpose[{sTab, σTab}], PlotRange -> All, GridLines -> Automatic]

Unfortunately I get:

NDSolve::tddisc: NDSolve cannot do a discontinuity replacement for event surfaces that depend only on time. >>

and the results are incomplete or the algorithm crashes. I also tried using WhenEvent with "DiscontinuitySignature" but with no success. This approach gives good results only for a linear monotonic function of $\sigma$, e.g. $\sigma(t) = 50t$. I wrote a module to solve this using a simple first-order Runge-Kutta scheme, so I obtained the solution, but that is only a simple model.
I'm sure Mathematica can solve this with its built-in methods. That would really save me a lot of work writing my own procedures.

• Your code above has two undefined quantities, ndsolve (not to be confused with NDSolve) and \[Sigma]gr. Please clarify. – bbgodfrey Dec 23 '14 at 14:41
• @Karsten7. It does! I get the same s(t) when I use my "Euler method" module. It should be a hysteresis loop when you plot the function s(sigma). – K.J. Dec 23 '14 at 14:45

Edit: Using a helper function fh will result in no messages and no need to set extra options.

σ[t_] := 40000 Sin[0.02 t]
C1 = 80000; C2 = 20000; σgr = 15000;
fh[t_?NumericQ, x_, y_] := Piecewise[{{C1/(C1 + C2)*y, (σ[t]*y > 0) && ((σ[t] - C2*x >= σgr) || (σ[t] - C2*x <= -σgr))}}, 0]
sol = NDSolve[{s[t] == σ[t]/C1 + sep[t], sep'[t] == fh[t, sep[t], s'[t]], s[0] == 0, sep[0] == 0}, {s[t], sep[t]}, {t, 0, 1000}];
s[t_] = s[t] /. sol // First;
ParametricPlot[{s[t], σ[t]}, {t, 0, 10^3}, PlotRange -> All, AspectRatio -> Full, GridLines -> Automatic]

• What changes did you make? When I copy-paste your code into my Mathematica (ver. 9.0) I get such a graph: link, and an error: NDSolve::mxst: Maximum number of 10000 steps reached at the point t == 19.964613731417995. This is what I obtain using my module: link. – K.J. Dec 23 '14 at 15:18
• @K.J., I can not reproduce the crash in 10.0.2; seems fixed. – user21 Dec 23 '14 at 16:28
• @Karsten 7. Yes, with increased MaxSteps I get correct results, though the first error is still present as you've written. It appears that is the case. However the calculation takes a while. I hope it won't crash for a more complicated system of diff. equations. I'll check that. – K.J. Dec 23 '14 at 16:28
• @K.J. Please see my edit. With this approach no messages are raised and no extra options need to be set. You can find an explanation for this strategy here. – Karsten 7. Dec 23 '14 at 17:41
• @Karsten7. Thanks a lot. That works perfectly indeed. – K.J.
Dec 24 '14 at 10:30

I have been unsuccessful at getting the automatic discontinuity handling to work. (I get the errors "Failure to project onto the discontinuity surface when computing Filippov continuation".) But manually handling it with WhenEvent works, although it complains about slow convergence to the event locations. Perhaps the discontinuity conditions are too complicated for smooth handling; I don't know.

σ[t_] := 40000 Sin[1/50 t];
Block[{s, C1 = 80000, C2 = 20000, σgr = 15000},
 eq1 = s[t] == σ[t]/C1 + sep[t];
 s[t_] := σ[t]/C1 + sep[t];
 eq2 = sep'[t] == sepprime[t] C1/(C1 + C2)*s'[t];
 events = Simplify[
     Solve[sep'[t] == Piecewise[{{C1/(C1 + C2)*s'[t], (σ[t]*s'[t] > 0) && ((σ[t] - C2*sep[t] >= σgr) || (σ[t] - C2*sep[t] <= -σgr))}}, 0], sep'[t]],
     (sep[t] | σ'[t]) ∈ Reals] /.
   HoldPattern[{sep'[t] -> ConditionalExpression[val_, cond_]}] :>
    WhenEvent[cond, sepprime[t] -> If[val === 0, 0, 1]];
 ]
eqSys = {eq1, eq2, events, s[0] == 0, sep[0] == 0, sepprime[0] == 0};
sol = Quiet[NDSolve[eqSys, {s, sep}, {t, 0, 1000}, DiscreteVariables -> {sepprime}], NDSolve::evcvmit];

OP's plot:

ParametricPlot @@ {{s[t], σ[t]} /. First[sol], Flatten[{t, sep["Domain"] /. First[sol]}], AspectRatio -> 2/3}

The DE can also be integrated by turning off the discontinuity handling, which is effectively similar to Karsten 7.'s solution:

sol = NDSolve[eqSys, {s, sep}, {t, 0, 1000}, Method -> {"DiscontinuityProcessing" -> False}]

This is effectively what the OP's code does after the NDSolve::tddisc warning message, but it does not crash in V10.0.1. In V9, one needs to set MaxSteps -> 60000 or StartingStepSize -> 0.1` for the integration to complete.
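For reference, the OP's fallback (a first-order explicit Euler scheme) can also be sketched outside Mathematica. This is an assumption-laden Python translation, not the OP's actual module: in the active branch, substituting $s' = \sigma'/C_1 + s_{ep}'$ into $s_{ep}' = \frac{C_1}{C_1+C_2}s'$ gives the closed form $s_{ep}' = \sigma'/C_2$, and since $s'$ always has the sign of $\sigma'$ here, the switching test $\sigma s' > 0$ reduces to $\sigma\sigma' > 0$:

```python
import math

# Explicit first-order Euler sketch of the plasticity system
# (parameter names and values taken from the question).
C1, C2, SIGMA_GR = 80000.0, 20000.0, 15000.0

def sigma(t):
    return 40000.0 * math.sin(0.02 * t)

def sigma_dot(t):
    return 800.0 * math.cos(0.02 * t)  # derivative of sigma

def integrate(t_end, dt=0.01):
    """Return (s, sep) at t_end, starting from zero initial conditions."""
    t, sep = 0.0, 0.0
    while t < t_end:
        # Plastic (active) branch: |sigma - C2*sep| >= sigma_gr and
        # sigma * s' > 0, where the latter reduces to sigma * sigma' > 0.
        active = (abs(sigma(t) - C2 * sep) >= SIGMA_GR
                  and sigma(t) * sigma_dot(t) > 0)
        sep += dt * (sigma_dot(t) / C2 if active else 0.0)
        t += dt
    return sigma(t) / C1 + sep, sep

# Before first yield (|sigma| < sigma_gr), the response is purely elastic:
s10, sep10 = integrate(10.0)
print(sep10, s10)  # sep stays 0; s follows sigma/C1
```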
https://www.aptech.com/resources/tutorials/gmm/exogenous-ols/
# OLS Estimation with Exogenous Regressors

### Goals

This tutorial demonstrates the GMM estimation of a simple OLS model using the gmmFit and gmmFitIV procedures. After completing this tutorial you should be able to estimate an OLS model with exogenous regressors using:

• the gmmFitIV procedure.
• the gmmFit procedure.

## Introduction

In this example, we will estimate a simple OLS model using GMM. Because this model is a linear model, we can and will estimate it using both gmmFit and gmmFitIV. The linear model we will estimate examines the relationship between gas mileage and vehicle weight and length:

$$mpg = \alpha + \beta_1 \cdot weight + \beta_2 \cdot length$$

The data for this model is stored in the dataset auto2.dta, located in the GAUSS examples folder.

## Estimation with gmmFitIV

While the gmmFit procedure minimizes the GMM objective function to estimate the model parameters, gmmFitIV computes the analytic GMM estimates for instrumental variables. gmmFitIV provides a compact method for estimating IV and OLS models. In fact, we can estimate the model using gmmFitIV in one line:

//Create dataset file name with full path
dset_name = getGAUSShome() $+ "examples/auto2.dta";

//Perform estimation
call gmmFitIV(dset_name, "mpg ~ weight + length");

The output from our gmmFitIV estimation reads:

Dependent Variable:             mpg
Number of Observations:          74
Number of Moments:                3
Number of Parameters:             3
Degrees of freedom:              71

                                Standard                Prob
Variable         Estimate          Error    t-value     >|t|
-----------------------------------------------------------
CONSTANT        47.884873       7.506021      6.380    0.000
weight          -0.003851       0.001947     -1.978    0.052
length          -0.079593       0.067753     -1.175    0.244

The estimates from gmmFitIV are the same as the estimates from gmmFit, as you will see. However, note that the gmmFitIV table includes variable names. This is because GAUSS is able to extract variable names from the formula string used to identify the model in gmmFitIV.
## Estimation with gmmFit

### Load data

In order to estimate our model using gmmFit we must first load our data into data matrices. For this example, we will use just three variables from the auto2.dta dataset: mpg, weight, and length.

//Create dataset file name with full path
dset_name = getGAUSShome() $+ "examples/auto2.dta";

//Load variables 'mpg', 'weight' and 'length'
//into matrix 'data'
data = loadd(dset_name, "mpg + weight + length");

The columns in the matrix data will be in the order the variables are specified in the formula string. We can use this information to create two separate data matrices: y for our dependent variable and X for our independent variables.

//Declare 'y' variable
y = data[., 1];

//'X' variables
X = data[., 2:3];

Finally, we want to include a constant in this model. This is not done automatically by the gmmFit procedure, so a column of ones must be concatenated to the front of the already defined data matrix X:

//Concatenate a column of ones to the 'X' data
X = ones(rows(data), 1) ~ data[., 2:3];

### Write the moment equation

The next step for our gmmFit estimation is to define our moment procedure. For this example, we will estimate a linear model with moments based on $E[x_t u_t(\theta_0)] = 0$ with $u_t(\theta_0) = y_t - x_t'\beta$:

proc meqn(b, yt, xt);
    local ut, dt;

    /** OLS resids **/
    ut = yt - b[1] - b[2]*xt[., 2] - b[3]*xt[., 3];

    /** Moment conditions **/
    dt = ut.*xt;

    retp(dt);
endp;
```
//Set starting values
gctl.bStart = { 41, -0.005, -0.001 };
```

Finally, we will set up the initial weight matrix for the gmmFit estimation so it will replicate the default model of the gmmFitIV procedure. Because the variables weight and length are assumed to be exogenous in this model, the initial weight matrix used by gmmFitIV will be equal to $\left(\frac{1}{N}X'X\right)^{-1}$. We can specify for gmmFit to use the same matrix using the gmmControl member gctl.wInitMat:

```
//Set initial weight matrix
gctl.wInitMat = invpd((1/rows(X))*(X'X));
```

### Call gmmFit

We are finally ready to call gmmFit. For this example, we will use the GAUSS keyword call to run gmmFit and print results directly to the input/output screen.

```
call gmmFit(&meqn, y, x, gctl);
```

The output from our gmmFit estimation reads:

```
Dependent Variable:                 Y
Number of Observations:            74
Number of Moments:                  3
Number of Parameters:               3
Degrees of freedom:                71

                         Standard                Prob
Variable     Estimate       Error    t-value     >|t|
-----------------------------------------------------
Beta1       47.884629    7.506023      6.379    0.000
Beta2       -0.003852    0.001947     -1.978    0.052
Beta3       -0.079591    0.067753     -1.175    0.244
```

which is the same, other than the variable names, as our results from gmmFitIV earlier in this tutorial.

### Conclusion

Congratulations! You have:

• Estimated an OLS model using gmmFitIV.
• Estimated an OLS model using gmmFit.

For convenience, the full program text is reproduced below. Our next tutorial will demonstrate the estimation of an OLS model with endogenous variables.
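The objective that gmmFit minimizes can be written down directly: the quadratic form $\bar{g}(b)' W \bar{g}(b)$, where $\bar{g}(b)$ is the mean moment vector and $W$ is the weight matrix set above. The NumPy sketch below (synthetic data assumed, since auto2.dta ships with GAUSS) builds the same $W = \left(\frac{1}{N}X'X\right)^{-1}$ and checks that the OLS solution drives the objective below its value at the tutorial's start values:

```python
import numpy as np

# Synthetic data in place of the auto2.dta matrices
rng = np.random.default_rng(2)
n = 74
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([47.9, -0.004, -0.08]) + rng.normal(size=n)

# Initial weight matrix W = inv((1/N) X'X), matching gctl.wInitMat
W = np.linalg.inv((1.0 / n) * (X.T @ X))

def objective(b):
    """GMM objective: gbar(b)' W gbar(b) with gbar the mean moment vector."""
    gbar = ((y - X @ b)[:, None] * X).mean(axis=0)
    return gbar @ W @ gbar

b_start = np.array([41.0, -0.005, -0.001])  # the tutorial's start values
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
print(objective(b_ols) < objective(b_start))  # True: OLS minimizes the objective
```

Because the moments are linear in b and just identified, any positive-definite W yields the same minimizer, which is why gmmFit and gmmFitIV agree.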
```
//Create dataset file name with full path
dset_name = getGAUSShome() $+ "examples/auto2.dta";

//Perform estimation
call gmmFitIV(dset_name, "mpg ~ weight + length");

//Load variables 'mpg', 'weight' and 'length'
//into matrix 'data'
data = loadd(dset_name, "mpg + weight + length");

//Declare 'y' variable
y = data[., 1];

//'X' variables
X = data[., 2:3];

//Concatenate a column of ones to the 'X' data
X = ones(rows(data), 1) ~ data[., 2:3];

//Declare gctl to be a gmmControl struct
//and fill with default settings
struct gmmControl gctl;
gctl = gmmControlCreate();

//Set starting values
gctl.bStart = { 41, -0.005, -0.001 };

//Set initial weight matrix
gctl.wInitMat = invpd((1/rows(X))*(X'X));

call gmmFit(&meqn, y, x, gctl);

proc meqn(b, yt, xt);
    local ut, dt;

    /** OLS resids **/
    ut = yt - b[1] - b[2]*xt[., 2] - b[3]*xt[., 3];

    /** Moment conditions **/
    dt = ut.*xt;

    retp(dt);
endp;
```
https://www.greaterwrong.com/posts/Atu4teGvob5vKvEAF/decoherence-is-simple
# Decoherence is Simple An epis­tle to the physi­cists: When I was but a lit­tle lad, my father, a PhD physi­cist, warned me sternly against med­dling in the af­fairs of physi­cists; he said that it was hope­less to try to com­pre­hend physics with­out the for­mal math. Pe­riod. No es­cape clauses. But I had read in Feyn­man’s pop­u­lar books that if you re­ally un­der­stood physics, you ought to be able to ex­plain it to a non­physi­cist. I be­lieved Feyn­man in­stead of my father, be­cause Feyn­man had won the No­bel Prize and my father had not. It was not un­til later—when I was read­ing the Feyn­man Lec­tures, in fact— that I re­al­ized that my father had given me the sim­ple and hon­est truth. No math = no physics. By vo­ca­tion I am a Bayesian, not a physi­cist. Yet al­though I was raised not to med­dle in the af­fairs of physi­cists, my hand has been forced by the oc­ca­sional gross mi­suse of three terms: sim­ple, falsifi­able, and testable. The fore­go­ing in­tro­duc­tion is so that you don’t laugh, and say, “Of course I know what those words mean!” There is math here. What fol­lows will be a restate­ment of the points in Belief in the Im­plied In­visi­ble, as they ap­ply to quan­tum physics. Let’s be­gin with the re­mark that started me down this whole av­enue, of which I have seen sev­eral ver­sions; para­phrased, it runs: The many-wor­lds in­ter­pre­ta­tion of quan­tum me­chan­ics pos­tu­lates that there are vast num­bers of other wor­lds, ex­ist­ing alongside our own. Oc­cam’s Ra­zor says we should not mul­ti­ply en­tities un­nec­es­sar­ily. Now it must be said, in all fair­ness, that those who say this will usu­ally also con­fess: But this is not a uni­ver­sally ac­cepted ap­pli­ca­tion of Oc­cam’s Ra­zor; some say that Oc­cam’s Ra­zor should ap­ply to the laws gov­ern­ing the model, not the num­ber of ob­jects in­side the model. 
So it is good that we are all ac­knowl­edg­ing the con­trary ar­gu­ments, and tel­ling both sides of the story— But sup­pose you had to calcu­late the sim­plic­ity of a the­ory. The origi­nal for­mu­la­tion of William of Ock­ham stated: Lex par­si­mo­niae: En­tia non sunt mul­ti­pli­canda praeter ne­ces­si­tatem. “The law of par­si­mony: En­tities should not be mul­ti­plied be­yond ne­ces­sity.” But this is qual­i­ta­tive ad­vice. It is not enough to say whether one the­ory seems more sim­ple, or seems more com­plex, than an­other—you have to as­sign a num­ber; and the num­ber has to be mean­ingful, you can’t just make it up. Cross­ing this gap is like the differ­ence be­tween be­ing able to eye­ball which things are mov­ing “fast” or “slow,” and start­ing to mea­sure and calcu­late ve­loc­i­ties. Sup­pose you tried say­ing: “Count the words—that’s how com­pli­cated a the­ory is.” Robert Hein­lein once claimed (tongue-in-cheek, I hope) that the “sim­plest ex­pla­na­tion” is always: “The woman down the street is a witch; she did it.” Eleven words—not many physics pa­pers can beat that. Faced with this challenge, there are two differ­ent roads you can take. First, you can ask: “The woman down the street is a what?” Just be­cause English has one word to in­di­cate a con­cept doesn’t mean that the con­cept it­self is sim­ple. Sup­pose you were talk­ing to aliens who didn’t know about witches, women, or streets—how long would it take you to ex­plain your the­ory to them? Bet­ter yet, sup­pose you had to write a com­puter pro­gram that em­bod­ied your hy­poth­e­sis, and out­put what you say are your hy­poth­e­sis’s pre­dic­tions—how big would that com­puter pro­gram have to be? Let’s say that your task is to pre­dict a time se­ries of mea­sured po­si­tions for a rock rol­ling down a hill. If you write a sub­rou­tine that simu­lates witches, this doesn’t seem to help nar­row down where the rock rolls—the ex­tra sub­rou­tine just in­flates your code. 
You might find, how­ever, that your code nec­es­sar­ily in­cludes a sub­rou­tine that squares num­bers. Se­cond, you can ask: “The woman down the street is a witch; she did what?” Sup­pose you want to de­scribe some event, as pre­cisely as you pos­si­bly can given the ev­i­dence available to you—again, say, the dis­tance/​time se­ries of a rock rol­ling down a hill. You can pref­ace your ex­pla­na­tion by say­ing, “The woman down the street is a witch,” but your friend then says, “What did she do?,” and you re­ply, “She made the rock roll one me­ter af­ter the first sec­ond, nine me­ters af­ter the third sec­ond…” Pre­fac­ing your mes­sage with “The woman down the street is a witch,” doesn’t help to com­press the rest of your de­scrip­tion. On the whole, you just end up send­ing a longer mes­sage than nec­es­sary—it makes more sense to just leave off the “witch” pre­fix. On the other hand, if you take a mo­ment to talk about Gal­ileo, you may be able to greatly com­press the next five thou­sand de­tailed time se­ries for rocks rol­ling down hills. If you fol­low the first road, you end up with what’s known as Kol­mogorov com­plex­ity and Solomonoff in­duc­tion. If you fol­low the sec­ond road, you end up with what’s known as Min­i­mum Mes­sage Length. Ah, so I can pick and choose among defi­ni­tions of sim­plic­ity? No, ac­tu­ally the two for­mal­isms in their most highly de­vel­oped forms were proven equiv­a­lent. And I sup­pose now you’re go­ing to tell me that both for­mal­isms come down on the side of “Oc­cam means count­ing laws, not count­ing ob­jects.” More or less. In Min­i­mum Mes­sage Length, so long as you can tell your friend an ex­act recipe they can men­tally fol­low to get the rol­ling rock’s time se­ries, we don’t care how much men­tal work it takes to fol­low the recipe. In Solomonoff in­duc­tion, we count bits in the pro­gram code, not bits of RAM used by the pro­gram as it runs. “En­tities” are lines of code, not simu­lated ob­jects. 
And as said, these two formalisms are ultimately equivalent.

Now before I go into any further detail on formal simplicity, let me digress to consider the objection:

So what? Why can't I just invent my own formalism that does things differently? Why should I pay any attention to the way you happened to decide to do things, over in your field? Got any experimental evidence that shows I should do things this way?

Yes, actually, believe it or not. But let me start at the beginning.

The conjunction rule of probability theory states:

For any propositions X and Y, the probability that "X is true, and Y is true," is less than or equal to the probability that "X is true (whether or not Y is true)."

(If this statement sounds not terribly profound, then let me assure you that it is easy to find cases where human probability assessors violate this rule.)

You usually can't apply the conjunction rule directly to a conflict between mutually exclusive hypotheses. The conjunction rule only applies directly to cases where the left-hand side strictly implies the right-hand side. Furthermore, the conjunction rule is just an inequality; it doesn't give us the kind of quantitative calculation we want.

But the conjunction rule does give us a rule of monotonic decrease in probability: as you tack more details onto a story, and each additional detail can potentially be true or false, the story's probability goes down monotonically. Think of probability as a conserved quantity: there's only so much to go around. As the number of details in a story goes up, the number of possible stories increases exponentially, but the sum over their probabilities can never be greater than 1. For every story "X and Y," there is a story "X and ¬Y." When you just tell the story "X," you get to sum over the possibilities Y and ¬Y.
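The conjunction rule above can be checked by brute force. This toy script (my own illustration, not from the original essay) enumerates a uniform space of (X, Y) worlds and confirms both claims: P(X and Y) never exceeds P(X), and P(X) is recovered by summing over Y and ¬Y:

```python
from itertools import product

# All four (X, Y) truth assignments, weighted uniformly
worlds = list(product([True, False], repeat=2))

def prob(pred):
    """Probability of a predicate over the uniform world distribution."""
    return sum(pred(w) for w in worlds) / len(worlds)

p_X = prob(lambda w: w[0])
p_XY = prob(lambda w: w[0] and w[1])
p_XnotY = prob(lambda w: w[0] and not w[1])

assert p_XY <= p_X              # conjunction rule
assert p_X == p_XY + p_XnotY    # "X" sums over Y and not-Y
print(p_X, p_XY)
```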
If you add ten de­tails to X, each of which could po­ten­tially be true or false, then that story must com­pete with other equally de­tailed sto­ries for pre­cious prob­a­bil­ity. If on the other hand it suffices to just say X, you can sum your prob­a­bil­ity over stories ((X and Y and Z and …) or (X and ¬Y and Z and …) or …) . The “en­tities” counted by Oc­cam’s Ra­zor should be in­di­vi­d­u­ally costly in prob­a­bil­ity; this is why we pre­fer the­o­ries with fewer of them. Imag­ine a lot­tery which sells up to a mil­lion tick­ets, where each pos­si­ble ticket is sold only once, and the lot­tery has sold ev­ery ticket at the time of the draw­ing. A friend of yours has bought one ticket for $1—which seems to you like a poor in­vest­ment, be­cause the pay­off is only$500,000. Yet your friend says, “Ah, but con­sider the al­ter­na­tive hy­pothe­ses, ‘To­mor­row, some­one will win the lot­tery’ and ‘To­mor­row, I will win the lot­tery.’ Clearly, the lat­ter hy­poth­e­sis is sim­pler by Oc­cam’s Ra­zor; it only makes men­tion of one per­son and one ticket, while the former hy­poth­e­sis is more com­pli­cated: it men­tions a mil­lion peo­ple and a mil­lion tick­ets!” To say that Oc­cam’s Ra­zor only counts laws, and not ob­jects, is not quite cor­rect: what counts against a the­ory are the en­tities it must men­tion ex­plic­itly, be­cause these are the en­tities that can­not be summed over. Sup­pose that you and a friend are puz­zling over an amaz­ing billiards shot, in which you are told the start­ing state of a billiards table, and which balls were sunk, but not how the shot was made. You pro­pose a the­ory which in­volves ten spe­cific col­li­sions be­tween ten spe­cific balls; your friend coun­ters with a the­ory that in­volves five spe­cific col­li­sions be­tween five spe­cific balls. 
What counts against your the­o­ries is not just the laws that you claim to gov­ern billiard balls, but any spe­cific billiard balls that had to be in some par­tic­u­lar state for your model’s pre­dic­tion to be suc­cess­ful. If you mea­sure the tem­per­a­ture of your liv­ing room as 22 de­grees Cel­sius, it does not make sense to say: “Your ther­mome­ter is prob­a­bly in er­ror; the room is much more likely to be 20 °C. Be­cause, when you con­sider all the par­ti­cles in the room, there are ex­po­nen­tially vastly more states they can oc­cupy if the tem­per­a­ture is re­ally 22 °C—which makes any par­tic­u­lar state all the more im­prob­a­ble.” But no mat­ter which ex­act 22 °C state your room oc­cu­pies, you can make the same pre­dic­tion (for the su­per­vast ma­jor­ity of these states) that your ther­mome­ter will end up show­ing 22 °C, and so you are not sen­si­tive to the ex­act ini­tial con­di­tions. You do not need to spec­ify an ex­act po­si­tion of all the air molecules in the room, so that is not counted against the prob­a­bil­ity of your ex­pla­na­tion. On the other hand—re­turn­ing to the case of the lot­tery—sup­pose your friend won ten lot­ter­ies in a row. At this point you should sus­pect the fix is in. The hy­poth­e­sis “My friend wins the lot­tery ev­ery time” is more com­pli­cated than the hy­poth­e­sis “Some­one wins the lot­tery ev­ery time.” But the former hy­poth­e­sis is pre­dict­ing the data much more pre­cisely. In the Min­i­mum Mes­sage Length for­mal­ism, say­ing “There is a sin­gle per­son who wins the lot­tery ev­ery time” at the be­gin­ning of your mes­sage com­presses your de­scrip­tion of who won the next ten lot­ter­ies; you can just say “And that per­son is Fred Smith” to finish your mes­sage. 
Compare to, "The first lottery was won by Fred Smith, the second lottery was won by Fred Smith, the third lottery was…"

In the Solomonoff induction formalism, the prior probability of "My friend wins the lottery every time" is low, because the program that describes the lottery now needs explicit code that singles out your friend; but because that program can produce a tighter probability distribution over potential lottery winners than "Someone wins the lottery every time," it can, by Bayes's Rule, overcome its prior improbability and win out as a hypothesis.

Any formal theory of Occam's Razor should quantitatively define, not only "entities" and "simplicity," but also the "necessity" part.

Minimum Message Length defines necessity as "that which compresses the message."

Solomonoff induction assigns a prior probability to each possible computer program, with the entire distribution, over every possible computer program, summing to no more than 1. This can be accomplished using a binary code where no valid computer program is a prefix of any other valid computer program ("prefix-free code"), e.g. because it contains a stop code. Then the prior probability of any program P is simply $2^{-L(P)}$, where $L(P)$ is the length of P in bits.

The program P itself can be a program that takes in a (possibly zero-length) string of bits and outputs the conditional probability that the next bit will be 1; this makes P a probability distribution over all binary sequences. This version of Solomonoff induction, for any string, gives us a mixture of posterior probabilities dominated by the shortest programs that most precisely predict the string. Summing over this mixture gives us a prediction for the next bit.

The upshot is that it takes more Bayesian evidence—more successful predictions, or more precise predictions—to justify more complex hypotheses.
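The claim that the $2^{-L(P)}$ prior sums to no more than 1 over a prefix-free code is the Kraft inequality, and it can be demonstrated in a few lines. This snippet (my own toy illustration, not from the essay) uses a small hypothetical prefix-free code in place of real program encodings:

```python
# A prefix-free code: no codeword is a prefix of any other codeword
codes = ["0", "10", "110", "111"]

def is_prefix_free(codes):
    """Check that no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codes for b in codes)

# Assign each "program" the prior 2^(-length); by the Kraft inequality
# these priors sum to at most 1 for any prefix-free code.
priors = [2.0 ** -len(c) for c in codes]

assert is_prefix_free(codes)
print(sum(priors))  # 1.0 for this complete code; <= 1 in general
```

A complete code like this one saturates the bound; dropping any codeword would leave the sum strictly below 1, with the leftover mass spread over longer programs.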
But it can be done; the bur­den of prior im­prob­a­bil­ity is not in­finite. If you flip a coin four times, and it comes up heads ev­ery time, you don’t con­clude right away that the coin pro­duces only heads; but if the coin comes up heads twenty times in a row, you should be con­sid­er­ing it very se­ri­ously. What about the hy­poth­e­sis that a coin is fixed to pro­duce HTTHTT… in a re­peat­ing cy­cle? That’s more bizarre—but af­ter a hun­dred coin­flips you’d be a fool to deny it. Stan­dard chem­istry says that in a gram of hy­dro­gen gas there are six hun­dred billion trillion hy­dro­gen atoms. This is a startling state­ment, but there was some amount of ev­i­dence that sufficed to con­vince physi­cists in gen­eral, and you par­tic­u­larly, that this state­ment was true. Now ask your­self how much ev­i­dence it would take to con­vince you of a the­ory with six hun­dred billion trillion sep­a­rately speci­fied phys­i­cal laws. Why doesn’t the prior prob­a­bil­ity of a pro­gram, in the Solomonoff for­mal­ism, in­clude a mea­sure of how much RAM the pro­gram uses, or the to­tal run­ning time? The sim­ple an­swer is, “Be­cause space and time re­sources used by a pro­gram aren’t mu­tu­ally ex­clu­sive pos­si­bil­ities.” It’s not like the pro­gram speci­fi­ca­tion, that can only have a 1 or a 0 in any par­tic­u­lar place. But the even sim­pler an­swer is, “Be­cause, his­tor­i­cally speak­ing, that heuris­tic doesn’t work.” Oc­cam’s Ra­zor was raised as an ob­jec­tion to the sug­ges­tion that neb­u­lae were ac­tu­ally dis­tant galax­ies—it seemed to vastly mul­ti­ply the num­ber of en­tities in the uni­verse. All those stars! Over and over, in hu­man his­tory, the uni­verse has got­ten big­ger. A var­i­ant of Oc­cam’s Ra­zor which, on each such oc­ca­sion, would la­bel the vaster uni­verse as more un­likely, would fare less well un­der hu­man­ity’s his­tor­i­cal ex­pe­rience. This is part of the “ex­per­i­men­tal ev­i­dence” I was al­lud­ing to ear­lier. 
While you can justify theories of simplicity on mathy sorts of grounds, it is also desirable that they actually work in practice.

(The other part of the "experimental evidence" comes from statisticians / computer scientists / Artificial Intelligence researchers, testing which definitions of "simplicity" let them construct computer programs that do empirically well at predicting future data from past data. Probably the Minimum Message Length paradigm has proven most productive here, because it is a very adaptable way to think about real-world problems.)

Imagine a spaceship whose launch you witness with great fanfare; it accelerates away from you, and is soon traveling at .9 c. If the expansion of the universe continues, as current cosmology holds it should, there will come some future point where—according to your model of reality—you don't expect to be able to interact with the spaceship even in principle; it has gone over the cosmological horizon relative to you, and photons leaving it will not be able to outrace the expansion of the universe.

Should you believe that the spaceship literally, physically disappears from the universe at the point where it goes over the cosmological horizon relative to you?

If you believe that Occam's Razor counts the objects in a model, then yes, you should. Once the spaceship goes over your cosmological horizon, the model in which the spaceship instantly disappears, and the model in which the spaceship continues onward, give indistinguishable predictions; they have no Bayesian evidential advantage over one another. But one model contains many fewer "entities"; it need not speak of all the quarks and electrons and fields composing the spaceship. So it is simpler to suppose that the spaceship vanishes.
Alternatively, you could say: "Over numerous experiments, I have generalized certain laws that govern observed particles. The spaceship is made up of such particles. Applying these laws, I deduce that the spaceship should continue on after it crosses the cosmological horizon, with the same momentum and the same energy as before, on pain of violating the conservation laws that I have seen holding in every examinable instance. To suppose that the spaceship vanishes, I would have to add a new law, 'Things vanish as soon as they cross my cosmological horizon.'"

The decoherence (a.k.a. many-worlds) version of quantum mechanics states that measurements obey the same quantum-mechanical rules as all other physical processes. Applying these rules to macroscopic objects in exactly the same way as microscopic ones, we end up with observers in states of superposition.

Now there are many questions that can be asked here, such as "But then why don't all binary quantum measurements appear to have 50/50 probability, since different versions of us see both outcomes?"

However, the objection that decoherence violates Occam's Razor on account of multiplying objects in the model is simply wrong.

Decoherence does not require the wavefunction to take on some complicated exact initial state. Many-worlds is not specifying all its worlds by hand, but generating them via the compact laws of quantum mechanics. A computer program that directly simulates quantum mechanics to make experimental predictions would require a great deal of RAM to run—but simulating the wavefunction is exponentially expensive in any flavor of quantum mechanics! Decoherence is simply more so.
Many phys­i­cal dis­cov­er­ies in hu­man his­tory, from stars to galax­ies, from atoms to quan­tum me­chan­ics, have vastly in­creased the ap­par­ent CPU load of what we be­lieve to be the uni­verse. Many-wor­lds is not a zillion wor­lds worth of com­pli­cated, any more than the atomic hy­poth­e­sis is a zillion atoms worth of com­pli­cated. For any­one with a quan­ti­ta­tive grasp of Oc­cam’s Ra­zor that is sim­ply not what the term “com­pli­cated” means. As with the his­tor­i­cal case of galax­ies, it may be that peo­ple have mis­taken their shock at the no­tion of a uni­verse that large, for a prob­a­bil­ity penalty, and in­voked Oc­cam’s Ra­zor in jus­tifi­ca­tion. But if there are prob­a­bil­ity penalties for de­co­her­ence, the lar­ge­ness of the im­plied uni­verse, per se, is definitely not their source! The no­tion that de­co­her­ent wor­lds are ad­di­tional en­tities pe­nal­ized by Oc­cam’s Ra­zor is just plain mis­taken. It is not sort-of-right. It is not an ar­gu­ment that is weak but still valid. It is not a defen­si­ble po­si­tion that could be shored up with fur­ther ar­gu­ments. It is en­tirely defec­tive as prob­a­bil­ity the­ory. It is not fix­able. It is bad math. • To­mor­row I will ad­dress my­self to ac­cu­sa­tions I have en­coun­tered that de­co­her­ence is “un­falsifi­able” or “untestable”, as the words “falsifi­able” and “testable” have (even sim­pler) prob­a­bil­ity-the­o­retic mean­ings which would seem to be vi­o­lated by this us­age. Doesn’t this fol­low triv­ially from the above? No ex­per­i­ment can de­ter­mine whether or not we have souls, but that counts against the idea of souls, not against the idea of their ab­sence. If de­co­her­ence is the sim­pler the­ory, then lack of falsifi­a­bil­ity counts against the other guys, not against it. 
• If the ex­pan­sion of the uni­verse con­tinues, as cur­rent cos­mol­ogy holds it should, there will come some fu­ture point where—ac­cord­ing to your model of re­al­ity—you don’t ex­pect to be able to in­ter­act with the space­ship even in prin­ci­ple; it has gone over the cos­molog­i­cal hori­zon rel­a­tive to you, and pho­tons leav­ing it will not be able to out­race the ex­pan­sion of the uni­verse. IIRC for this to be true the uni­verse’s ex­pan­sion has to ac­cel­er­ate, and the ac­cel­er­a­tion has to stay bounded above zero for­ever. (IIRC this is still con­sid­ered the most prob­a­ble case.) • No ex­per­i­ment can de­ter­mine whether or not we have souls Really? Not at­tempted up­load­ing? Micro­phys­i­cal ex­am­i­na­tion of a liv­ing brain? Tests for re­li­able mem­o­ries of past lives, or re­li­able mediums? • Math­e­mat­ics is just a lan­guage, and any suffi­ciently pow­er­ful lan­guage can say any­thing that can be said in any other lan­guage. Feyn­man was right—if you can’t ex­plain it in or­di­nary lan­guage, you don’t un­der­stand it at all. • Or­di­nary lan­guage is not suffi­ciently pow­er­ful. • Or­di­nary lan­guage in­cludes math­e­mat­ics. “One, two, three, four” is or­di­nary lan­guage. “The thing turned right” is or­di­nary lan­guage (it’s also mul­ti­pli­ca­tion by -i). Feyn­man was right, he just ne­glected to spec­ify that the or­di­nary lan­guage needed to ex­plain physics would nec­es­sar­ily in­clude the math sub­set of it. • If you’re cov­er­ing this later I’ll wait, but I ask now in case my con­fu­sion means I’m mi­s­un­der­stand­ing some­thing. Why isn’t nearly ev­ery­thing en­tanged with nearly ev­ery­thing else around it by now? Why is there a sig­nifi­cant amount of much quan­tum in­de­pen­dance still around? Or does it just look that way be­cause en­tanged sub­con­figu­ra­tions tend to get split off by decore­hence so branches re­tain a rea­son­able amount of non-en­tan­gled­ness within their branch? 
Sorry if this is a daft or daftly phrased ques­tion. • It IS. That was some­thing Eliezer said waay back when, and he was right. En­tan­gle­ment is a very or­di­nary state of af­fairs. • In­deed, a thor­oughly en­tan­gled world looks clas­si­cal, how­ever para­dox­i­cal it might sound. Re­gard­less of the adopted in­ter­pre­ta­tion. • To be fair, one could shy away from say­ing all those branches are real due to the difficulty of squar­ing the Born rule with the equal prob­a­bil­ity calcu­la­tions that seem to fol­low from that view. Without some­thing like man­gled wor­lds, one can be tempted by an ob­jec­tive col­lapse view, as that at least gives a co­her­ent ac­count of the Born rule. • (The other part of the “ex­per­i­men­tal ev­i­dence” comes from statis­ti­ci­ans /​ com­puter sci­en­tists /​ Ar­tifi­cial In­tel­li­gence re­searchers, test­ing which defi­ni­tions of “sim­plic­ity” let them con­struct com­puter pro­grams that do em­piri­cally well at pre­dict­ing fu­ture data from past data. Prob­a­bly the Min­i­mum Mes­sage Length paradigm has proven most pro­duc­tive here, be­cause it is a very adapt­able way to think about real-world prob­lems.) I once be­lieved that sim­plic­ity is the key to in­duc­tion (it was the topic of my PhD the­sis), but I no longer be­lieve this. I think most re­searchers in ma­chine learn­ing have come to the same con­clu­sion. Here are some prob­lems with the idea that sim­plic­ity is a guide to truth: (1) Solomonoff/​Gold/​Chaitin com­plex­ity is not com­putable in any rea­son­able amount of time. (2) The Min­i­mum Mes­sage Length de­pends en­tirely on how a situ­a­tion is rep­re­sented. Differ­ent rep­re­sen­ta­tions lead to rad­i­cally differ­ent MML com­plex­ity mea­sures. This is a gen­eral prob­lem with any at­tempt to mea­sure sim­plic­ity. How do you jus­tify your choice of rep­re­sen­ta­tion? 
For any two hypotheses, A and B, it is possible to find a representation X such that complexity(A) < complexity(B) and another representation Y such that complexity(A) > complexity(B).

(3) Simplicity is merely one type of bias. The No Free Lunch theorems show that there is no a priori reason to prefer one type of bias over another. Therefore there is nothing special about a bias towards simplicity. A bias towards complexity is equally valid a priori.

• Without something like mangled worlds, one can be tempted by an objective collapse view, as that at least gives a coherent account of the Born rule.

Does it really account for it in the sense of explain it? I don't think so. I think it merely says that the collapsing occurs in accordance with the Born rule. But we can also simply say that many-worlds is true and the history of our fragment of the multiverse is consistent with the Born rule. Admittedly, this doesn't explain why we happen to live in such a fragment but merely asserts that we do, but similarly, the collapse view does not (as far as I know) explain why the collapse occurs in the frequencies it does but merely asserts that it does.

• "No math = no physics"

I would say that as a practical matter, this is true, because often, many theories make the same qualitative prediction, but different quantitative ones. The effect of gravity on light, for instance: in Newtonian gravity, light is affected the same as matter, but in General Relativity, the effect is larger.

Another example would be flat-Earth theory gravity versus Newtonian. Flat-Earthers would say that the Earth is constantly accelerating upwards at 9.8 m/s^2. To a high level of precision, this matches the idea that objects are attracted by GM/R^2. The difference becomes large at high altitudes (large R), where it is quantitatively different, but qualitatively the same.
One could probably get by setting up experiments where the only possible results are (same, different), but that's really the same as defining numbers in terms of what they lie between; i.e., calculating sqrt(2) by calculating the largest number < sqrt(2) and the smallest number > sqrt(2).

The rate of scientific progress jumped enormously after Newton, as people began thinking more and more quantitatively, and developed tools accordingly. This is not an accident.

• I just had a thought, probably not a good one, about Many Worlds. It seems like there's a parallel here to the discovery of Natural Selection and understanding of Evolution. Darwin had the key insight about how selection pressure could lead to changes in organisms over time. But it's taken us over 100 years to get a good handle on speciation and figure out the detailed mechanisms of selecting for genetic fitness. One could argue that we still have a long way to go.

Similarly, it seems like we've had this insight that QM leads to Many Worlds due to decoherence. But it could take quite a while for us to get a good handle on what happens to worlds and figure out the detailed mechanisms of how they progress. But it was pretty clear that Darwin was right long before we had worked out the details. So I guess it doesn't bother me that we haven't worked out the details of what happens to the Many Worlds.

• Eliezer: Could you maybe clarify a bit the difference between Kolmogorov Complexity and MML? The first is "shortest computer program that outputs the result" and the second one is… "shortest info that can be used to figure out the result"? I.e., I'm not quite sure I understand the difference here.
However, as to the two being equivalent, I thought I'd seen something about the second one being used because the first was sometimes uncomputable, in the "to solve it in the general case, you'd have to have a halting oracle" sense. Caledonian: Math isn't so much a language as various notations for math are a language. Basically, if you use "non math" to express exactly the same thing as math, you basically have to turn up the precision and "legalese" to the point that you practically are describing math, just using a language not meant for it, right?

• I think you have it backwards. When we use language in a very precise and specific way, stripping out ambiguity and the potential for alternate meanings, we call the result 'mathematics'.

• Caledonian: Er… isn't that what I was saying? (That is, that's basically what I meant. What did you think I meant?)

• If you head down this reductionism, Occam's razor route, doesn't the concept of a human become explanatorily redundant? It will be simpler to precisely predict what a human will do without invoking the intentional stance and just modelling the underlying physics.

• Nick Tarleton: sadly, it's my experience that it's futile to try and throw flour over the dragon.

• Hang on. @ Caledonian and Psy-Kosh: Surely mathematical language is just language that refers to mathematical objects—numbers and suchlike. Precise, unambiguous language doesn't count as mathematics unless it meets this condition.

• Surely mathematical language is just language that refers to mathematical objects—numbers and suchlike. Precise, unambiguous language doesn't count as mathematics unless it meets this condition. Is logic mathematics? I assert that precise, unambiguous language necessarily refers to mathematical objects, because 'mathematics' is precise, unambiguous language.
All math is language. Not all language is math. Of course, I also assert that mathematics is a subset of science, so consider that our basic worldviews might be very different.

• "No math = no physics" Tell that to Copernicus, Gilbert, Galvani, Volta, Oersted and Faraday, to mention a few. And it's not like even Galileo used much more than the high-school arithmetic of his day for his elucidation of acceleration.

• Some physicists speak of "elegance" rather than "simplicity". This seems to me a bad idea; your judgments of elegance are going to be marred by evolved aesthetic criteria that exist only in your head, rather than in the exterior world, and should only be trusted inasmuch as they point towards smaller, rather than larger, Kolmogorov complexity. Example: In theory A, the ratio of tiny dimension #1 to tiny dimension #2 is finely tuned to support life. In theory B, the ratio of the mass of the electron to the mass of the neutrino is finely tuned to support life. An "elegance" advocate might favor A over B, whereas a "simplicity" advocate might be neutral between them.

• I came to this dormant thread from the future: http://lesswrong.com/lw/1k4/the_contrarian_status_catch22/1ckj. Seems to me there is a mismapping of multiple worlds wrt quantum physics and the multiple worlds we create subjectively. I personally steer clear of physics and concern myself more with the subjective realities we create. This seems to me to be more congruent with the material that Eliezer presents here, i.e. wrt logic and Occam's razor, and what he presents in the article linked above, i.e. wrt contrarian dynamics and feelings of satisfaction et al.

• I'm stuck, what does ~ denote? For every story "X∧Y", there is a story "X∧~Y". When you just tell the story "X", you get to sum over the possibilities Y and ~Y. Y and ~Y, where ~Y does read as… ?

• Not-Y, i.e.
Y is false.

• I see, thank you. Does it add too much noise if I ask such questions? Should I rather not yet read the sequences if I sometimes have to inquire about such matters? Or should I ask somewhere else? I was looking up this table of mathematical symbols that stated that ~ reads as 'has distribution' and stopped looking any further, since I'm dealing with probabilities here. I guess it should have been obvious to me to expect a logical operator. I was only used to the notations not and ¬ as the negation of a proposition. I'll have to adjust my perceived intelligence downwards.

• There's no problem with asking a clarifying question like that, which might help other lurkers and can be answered quickly without huge amounts of work. By the way, there's no need for such self-deprecating comments about your education or intelligence. It's socially a bit off-putting to talk about the topic, and it risks coming across as disingenuous. Just ask your questions without such supplication.

• No, these are reasonable questions to ask.

• Others that get used a lot in various contexts are !Y and Y^c (for Y complement). There are probably more, but I can't think of them at the moment.

• Y with a bar over it also gets used (though be careful, as this more commonly means the closure of Y, or, well, quite a few other things...)

• "Not Y". The prefix tilde, "~", is commonly used as an ASCII approximation for logical negation, in place of the more mathematically formal notation of "¬" (U+00AC, HTML entity "&not;", and LaTeX "\lnot"). On that page are a few other common notations.

• When you just tell the story "X", you get to sum over the possibilities Y and ~Y. Maybe this is a stupid question, but shouldn't it mean "When you just tell the story "Y", you get to sum over the possibilities Y and ~Y."?

• I have been working with decoherence in experimental physics.
It confuses me that you want to use it as a synonym for the Many-Worlds theory.

• MWI is the supposition that there is nothing else to the fundamental laws of nature except QM. Decoherence is the main tricky point in the bridge between QM and our subjective experiences. With decoherence, a collapse postulate is superfluous. With decoherence, you don't need a Bohmian 'real thing' or whatever he calls it. QM is simply the way things are. You can stick with it, and MWI follows directly.

• The collapse postulate is just a visualization, just like the MWI is. The Born projection rule is the only "real" thing, and it persists through MWI or any other "I". So no, the MWI does not follow directly, unless you strip it of all ontological meaning.

• This isn't quite what your post was about, but one thing I've never understood is how anyone could possibly find "the universe is totally random" to be a MORE simple explanation.

• I did not get a chance to read this entry until four years after it was published, but it nonetheless ended up correcting a long-held flawed view I had on the Many Worlds Interpretation. Thank you for opening up my eyes to the idea that Occam's razor applies to rules, and not entities in a system. You have no idea how embarrassed I feel for having so drastically misunderstood the concept before now. Incidentally, I wrote a blog entry on how this article changed my mind which seems to have generated additional discussion on this issue.

• How do you explain this with many worlds, while avoiding non-locality? http://arxiv.org/pdf/1209.4191v1.pdf If results such as these are easy to explain/predict, can the many-worlds theory gain credibility by predicting such things?

• Glib: start with the initial states. Propagate time as specified. Observe the result come out. That's how.
MWI is quantum mechanics with no modifications, so its predictions match what quantum mechanics predicts, and quantum mechanics happens to be local. Fulller: The first moment of decoherence is when photon 1 is measured. At this point, we've split the world according to its polarization. Then we have photons 2 and 3 interfere. They are then shortly measured, so we've split the world again in several parts. This allows post-selection for when the two photons coming out go to different detectors. When that criterion is met, then we're in the subspace with photon 2 being a match for photon 3, which is the same subspace as 4 being a match for photon 1. When they measure photon 4, it proceeds to match photon 1. Under MWI, this is so unsurprising that I'm having a hard time justifying performing the experiment except as a way of showing off cool things we can do with coherence. Now, as for whether this was local, note that the procedure involved ignoring the events we don't want. That sort of thing is allowed to be non-local since it's an artifact of data analysis. It's not like photon 1 made photon 4 do anything. THAT would be non-local. But the force was applied by post-selection. Of COURSE… you don't NEED to look at it through the lens of MWI to get that. Even Copenhagen would come up with the right answer to that, I think. Actually, I'm not sure why it would be a surprising result even under Copenhagen. Post-selection creates correlations! News at 11.

• Under MWI, this is so unsurprising that I'm having a hard time justifying performing the experiment except as a way of showing off cool things we can do with coherence. Then the justification is simple; it either provides evidence in favour of MWI (and Copenhagen and any other theory that predicts the expected result) or it shatters all of them.
Scientists have to do experiments to which the answer is obvious—failing to do so leads to the situation where everybody knows that heavier objects fall faster than lighter objects because nobody actually checked that.

• everybody knows that heavier objects fall faster than lighter objects Because this happens to be mostly true. Air resistance is a thing. actually checked that Actually, if I remember the high school physics anecdote correctly, the trouble for the idea that heavy objects fall faster than light ones began when a certain scientist asked a hypothetical question: what would happen if you drop a light and a heavy object at the same time, but connect them with a string?

• Because this happens to be mostly true. Air resistance is a thing. Got nothing to do with weight, though. An acre of tissue paper, spread out flat, will still fall more slowly than a ten cent coin dropped edge-first. Actually, if I remember the high school physics anecdote correctly, the trouble for the idea that heavy objects fall faster than light ones began when a certain scientist asked a hypothetical question: what would happen if you drop a light and a heavy object at the same time, but connect them with a string? Well, yes. (Galileo, wasn't it?) Doesn't affect my point, though—the basics do need to be checked occasionally.

• Got nothing to do with weight, though. Nothing? Are you quite sure about that? :-)

• Hmmm… let me consider it. In an airless void, the answer is no—the mass terms of the force-due-to-gravity and the acceleration-due-to-force equations cancel out, and weight has nothing to do with the speed of the falling object. In the presence of air resistance, however… the force from air resistance depends on how much air the object hits (which in turn depends on the shape of the object), and how fast relative to the object the air is moving.
The force applied by air resistance is independent of the mass (but dependent on the shape and speed of the object) - but the acceleration caused by that force is dependent on the mass (f=ma). Therefore, the acceleration due to air resistance does depend partially on the mass of the object. Okay, so not quite "nothing", but mass is not the most important factor to consider in these equations...

• I don't know how you decide what's more and what's less important in physics equations :-/ If I tell you I dropped a sphere two inches in diameter from 200 feet up, can you calculate its speed at the moment it hits the ground? Without knowing its weight, I don't think you can.

• I don't know how you decide what's more and what's less important in physics equations :-/ Predictive power. The more accurate a prediction I can make without knowing the value of a given variable, the less important that variable is. If I tell you I dropped a sphere two inches in diameter from 200 feet up, can you calculate its speed at the moment it hits the ground? Without knowing its weight, I don't think you can. Ugh, imperial measures. Do you mind if I work with a five-centimetre sphere dropped from 60 metres? A sphere is quite an aerodynamic shape; so I expect, for most masses, that air friction will have a small to negligible impact on the sphere's final velocity. I know that the acceleration due to gravity is 9.8 m/s^2, and so I turn to the equations of motion: v^2 = v_0^2 + 2*a*s (where v_0 is the starting velocity). Starting velocity v_0 is 0, a is 9.8, s is 60 m; thus v^2 = (0*0) + (2*9.8*60) = 1176, therefore v ≈ 34.3 m/s. A little slower than that because of air resistance, but probably not too much slower. (You'll also notice that I'm not using the radius of the sphere anywhere in this calculation).
It's an approximation, yes, but it's probably fairly accurate… good enough for many, though not all, purposes. Now, if I know the mass but not the shape, it's a lot harder to justify the "ignore air resistance" step...

• A sphere is quite an aerodynamic shape (Engineer with a background in fluid dynamics here.) A sphere is quite unaerodynamic. Its drag coefficient is about 10 times higher than that of a streamlined body (at a relevant Reynolds number). You have boundary layer separation off the back of the sphere, which results in a large wake and consequently high drag. The speed as a function of time for an object with a constant drag coefficient dropping vertically is known, and it is a direct function of mass. If I learned anything from making potato guns, it's that in general, dragless calculations are pretty inaccurate. You'll get the trend right in many cases with a dragless calculation, but in general it's best not to assume drag is negligible unless you've done the math or experiment to show that it is in a particular case.

• A sphere is quite unaerodynamic. Huh. I thought the fact that it got continually and monotonically bigger until a given point and then monotonically smaller meant at least some aerodynamics in the shape. I did not even consider the wake... The speed as a function of time for an object with a constant drag coefficient dropping vertically is known, and it is a direct function of mass. Well. I stand corrected, then. Evidently drag has a far bigger effect than I gave it credit for. ...proportional to the square root of the mass, given all other factors are unchanged, I see.

• It's better than a flat plate perpendicular to the flow. Most people seem not to expect that the back of the object affects the drag, but there's a large low-pressure zone due to the wake.
With high pressure in the front and low pressure in the back (along with a somewhat negligible skin friction contribution), the drag is considerable. So you need to target both the front and back to have a low-drag shape. Most aerodynamic shapes trade pressure drag for skin friction drag, as the latter is small (if the Reynolds number is high).

• For "an aerodynamic shape" my intuition first gives me a stylized drop: hemispheric in the front and a long tail thinning to a point in the back. But after a couple of seconds it decides that a spindle shape would probably be better :-)

• The "teardrop" shape is pretty good, though the name is a fair bit misleading, as droplets almost never look like that. Their shape varies in time depending on the flow conditions. Not quite sure what you mean by spindle shape, but I'm sure a variety of shapes like that could be pretty good. For the front, it's important not to have a flat tip. For the back, you'd want a gradual decay of the radius to prevent the fluid from separating off the back, creating a large wake. These are the heuristics. Which shapes have minimum drag is a fairly interesting subject. The shape with minimum wave drag (i.e., supersonic flow) is known, but I'm not sure there are any general proofs for other flow regimes. Perhaps it doesn't matter much, as we already know a bunch of shapes with low drag. The real problem seems to be getting these shapes adopted, as (for example) cars don't seem to be bought on rational bases like engineering. This should not be surprising.

• cars don't seem to be bought on rational bases like engineering. Of course, but I don't see it as a bad thing. Typically when people buy cars they have a collection of must-haves, and then from the short list of cars matching the must-haves, they pick what they like. I think it's a perfectly fine method of picking cars.
Compare to picking clothes, for example...

• You're doing the middle-school physics "an object dropped in vacuum" calculation :-) If you want to get a number that takes air resistance into account, you need college-level physics. So, since you've mentioned accuracy, how accurate is your 34.3 m/s value? Can you give me some kind of confidence interval?

• You're doing the middle-school physics "an object dropped in vacuum" calculation :-) Yes, exactly. Because for many everyday situations, it's close enough. So, since you've mentioned accuracy, how accurate is your 34.3 m/s value? Can you give me some kind of confidence interval? No, I can't. In order to do that, I would need, first of all, to know how to do the air resistance calculation—I can probably look that up, but it's going to be complicated—and, importantly, some sort of probability distribution for the possible masses of the ball (knowing the radius might help in estimating this). Of course, the greater the mass of the ball, the more accurate my value is, because the air resistance will have less effect; in the limit, if the ball is a hydrogen balloon, I expect it to float away and never actually hit the ground at all, while in the other limit, if the ball is a tiny black hole, I expect it to smash into the ground at exactly the calculated value (and then keep going).

• No, I can't. And thus we get back to the question of what's important in physics equations. But let's do a numerical example for fun. Our ball is 5 cm in diameter, so its volume is about 65.5 cm^3. Let's make it out of wood, say, bamboo. Its density is about 0.35 g/cm^3, so the ball will weigh about 23 g. Let's calculate its terminal velocity, that is, the speed at which drag exactly balances gravity.
The formula is v = sqrt(2mg/(pAC)), where m is mass (0.023 kg), g is the same old 9.8, p is air density, which is about 1.2 kg/m^3, A is projected area, and since we have a sphere it's 19.6 cm^2 or 0.00196 m^2, and C is the drag coefficient, which for a sphere is 0.47. v = sqrt(2 * 0.023 * 9.8 / (1.2 * 0.00196 * 0.47)) = 20.2 m/s So the terminal velocity of a 5 cm diameter bamboo ball is about 20 m/s. That is quite a way off your estimate of 34.3, and we got there without using things like hollow balls or aerogel :-)

• To be fair, a light ball is exactly where my estimate is known to be least accurate. Let's consider, rather, a ball with a density of 1 - one that neither floats nor sinks in water. (Since, in my experience, many things sink in water and many, but not quite as many, things float in it, I think it makes a reasonable guess for the average density of all possible balls). Then you have m = 0.0655 kg, and thus: v = sqrt(2 * 0.0655 * 9.8 / (1.2 * 0.00196 * 0.47)) = 34.0785 m/s ...okay, if it was falling in a vacuum it would have reached that speed, but it's had air resistance all the way down, so it's probably not even close to that. (And if it had been dropped from, say, 240 m, then I would have calculated a value of close on 70 m/s, which would have been even more wildly out). So, I will admit, it turns out that mass is a good deal more important than I had expected—also, air resistance has a larger effect than I had anticipated.

• while in the other limit, if the ball is a tiny black hole, I expect it to smash into the ground at exactly the calculated value Nope, because in that case, your value of g would be significantly higher than 9.8 m/s^2.

• Wikipedia has a longer version of the thought experiment.

• The problem is, we've done much, MUCH more stringent tests than this.
It's like, after checking the behavior of pendulums and falling objects of varying weights and lengths and areas, over vast spans of time and all regions of the globe, and in centrifuges, and on pulleys… we went on to then check if two identical objects would fall at the same speed if we dropped one when the other landed. Anyway, I didn't say it shouldn't be done. I support basic experiments on QM, but I'd like them to push the envelope in interesting ways rather than, well, this.

• Karl Popper already dealt with the problem of Occam's Razor not being usable based on complexity. He recast it as predictive utility. When one does that, the prediction of Many Worlds is untestable in principle—i.e., not ever wrong.
https://www.intel.com/content/www/us/en/developer/articles/technical/using-intel-data-analytics-acceleration-library-on-matlab.html
# Using Intel® Data Analytics Acceleration Library on Matlab*

Published: 08/15/2016 Last Updated: 08/15/2016 By Ying Hu

Intel® Data Analytics Acceleration Library (Intel® DAAL) is a high-performance library that provides a rich set of algorithms, ranging from basic descriptive statistics for datasets to more advanced data mining and machine learning algorithms. It helps developers build highly optimized big-data algorithms with relatively little effort, and it is designed for use with popular data platforms including Hadoop*, Spark*, R, and Matlab*. Matlab* is a multi-paradigm numerical computing environment and interactive software package that is widely used for solving engineering and scientific problems. This article shows Matlab* and Intel DAAL developers one way to use Intel DAAL from Matlab*.

### Prerequisites:

• Install Intel® DAAL on your system;
• Install a C++ compiler such as Microsoft Visual Studio* 2015 (MSVS 2015);
• Install Matlab* on your system;

Note: this article was tested with MATLAB R2015b and Intel DAAL 2017 for Windows.

### Principle: Linking Matlab and Intel DAAL functions through mexFunction

Matlab provides a mechanism that allows developers to interface with programs written in other languages such as C++ and Fortran. A basic interface to C++ is through a C++ function called mexFunction, provided by the Matlab MEX library (http://www.mathworks.com/help/matlab/apiref/mexfunction.html). By defining mexFunction in a *.c or *.cpp file, the compiled function can be called from the Matlab* platform as if it were a built-in function. Such *.c files are called MEX files, and the function name is the MEX filename. Intel DAAL has C++, Java and Python interfaces. We will use this mechanism to call Intel DAAL C++ functions from Matlab*.

## Part 1. C++ side

The cpp file can be written in a C++ IDE like MSVS 2015, in any plain text editor like Notepad, or in the Matlab* IDE.

### Step 1.
Writing a basic mexFunction in cpp: create the mexFunction as below

#include "mex.h"
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) { ...; }

Here mxArray *prhs[] accepts the input from the Matlab workspace, and mxArray *plhs[] passes the function's computed result back to the Matlab workspace. Inside the function, we will add an Intel DAAL algorithm, for example taking the absolute value of a data set:

/* Retrieve the input data from a data source */
DataSource<xxx> dataSource(...);
/* Create an algorithm */
abs::Batch<float> algorithm;
/* Set an input object for the algorithm */
algorithm.input.set(abs::data, dataSource.getNumericTable());
/* Compute the Abs function */
algorithm.compute();
/* Get the results of the algorithm */
services::SharedPtr<abs::Result> res = algorithm.getResult();

Please see the whole C++ example in the Intel DAAL Developer Guide. This is the basic structure of the mexFunction in C.

### Step 2. Inputting and converting the matrix (optional)

As the definition above shows, mexFunction transfers data between MEX files and the Matlab workspace via mxArray. In some cases, one can read the external input data source directly and feed it to the Intel DAAL algorithm inside the function. In other cases, one may want to pass a Matlab matrix to the Intel DAAL function, and then we must consider converting matrices between mxArray and a normal C array. The function mxGetPr() (http://www.mathworks.com/help/matlab/access-data_buu4p6_-5.html) reads the matrix from the Matlab* input. But there is a notable issue: the matrix returned by mxGetPr() is stored in column-major order, while C arrays take row-major order by default.
For example, in mexFunction, where prhs[] refers to the input mxArray structure, the input matrix is stored in column-major order, which means a common matrix

1 2 3
4 5 6
7 8 9

is stored as [1 4 7 2 5 8 3 6 9] in prhs[]. However, the default read order in C is row-major, which means that when converting an array to a matrix, [1 4 7 2 5 8 3 6 9] with dimensions 3*3 will be read as:

1 4 7
2 5 8
3 6 9

So you need to either transpose the input matrix in Matlab*, convert it inside mexFunction in the *.cpp file, or use some other read order when feeding the Intel DAAL algorithm. A sample of Matlab code is shown below:

input1 = input1';
[output1, ...] = yourfunctionname(input1, ...)
output1 = output1';

Note: Remember that every time a new matrix is passed in it has to be converted (mxArray to C array), and input matrices have to be converted back to column-major if they still need to be processed in Matlab* (C array to mxArray). Likewise, every time a matrix is going to be output it has to be converted (C array to mxArray).

### Step 3. Defining the Intel® DAAL function input and output

In Intel® DAAL computing, a NumericTable class is required as the input of algorithms. So before setting the input of the algorithm, the input matrix should be converted into a NumericTable so it can be recognized by the Intel® DAAL algorithms. The conversion from inputMatrix (the C++ matrix processed in Step 2) to inputData (a DAAL NumericTable) is shown below:

SharedPtr<NumericTable> inputData = SharedPtr<NumericTable>(new Matrix<double>(number_of_columns, number_of_rows, inputMatrixPtr));

Here is a code example of setting up the inputs for the Abs algorithm. The inputMatrix is from the Matlab workspace.
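The article leaves the actual column-major/row-major conversion routine to the reader; the full example only references it in commented-out calls. One possible sketch of such a helper is below (the name matrix_cr_conv and its in-place signature are assumptions inferred from those calls, not Intel DAAL or Matlab API):

```cpp
#include <cstddef>
#include <vector>

/* Hypothetical helper: rewrite a column-major buffer with ncol columns and
 * nrow rows into row-major order in place. A temporary copy keeps the code
 * simple; a true in-place cycle-following transpose would avoid it. */
static void matrix_cr_conv(double *data, std::size_t ncol, std::size_t nrow)
{
    std::vector<double> tmp(data, data + ncol * nrow);
    for (std::size_t r = 0; r < nrow; ++r)
        for (std::size_t c = 0; c < ncol; ++c)
            /* column-major element (r, c) lives at index c * nrow + r */
            data[r * ncol + c] = tmp[c * nrow + r];
}
```

With a 2-row, 3-column Matlab matrix, mxGetPr() yields [1 4 2 5 3 6]; after matrix_cr_conv(ptr, 3, 2) the buffer reads [1 2 3 4 5 6], i.e. row-major. The same routine with swapped dimension arguments converts the result back before returning it to Matlab.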
#include "daal.h"
#include "mex.h"
#include "matrix.h"

using namespace std;
using namespace daal;
using namespace daal::services;
using namespace daal::data_management;
using namespace daal::algorithms;
using namespace daal::algorithms::math;

#define inputA prhs[0]
#define outputA plhs[0]

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    /* Input arguments check, if needed */
    if (!mxIsDouble(inputA) || mxIsComplex(inputA)) {
        mexErrMsgIdAndTxt("Sample:prhs", "Input matrix must be double");
    }

    /* Definitions */
    mwSize nrow, ncol;
    double *pxtrain;
    double *poutA;

    /* Get the size and data pointer of the input matrix */
    nrow = mxGetM(inputA);
    ncol = mxGetN(inputA);
    pxtrain = mxGetPr(inputA);

    /* Convert mxArray to C array (Matlab to C++), if needed */
    //matrix_cr_conv(pxtrain, ncol, nrow);

    /* Create an Intel DAAL NumericTable */
    SharedPtr<NumericTable> inputdataA =
        SharedPtr<NumericTable>(new Matrix<double>(ncol, nrow, pxtrain));

    /* Create an algorithm (named absAlgorithm so the name does not hide the
       abs namespace used in abs::data and abs::value) */
    abs::Batch<double> absAlgorithm;

    /* Set an input object for the algorithm */
    absAlgorithm.input.set(abs::data, inputdataA);

    /* Compute the Abs function */
    absAlgorithm.compute();

    /* Get the result */
    SharedPtr<Matrix<double> > result =
        staticPointerCast<Matrix<double>, NumericTable>(
            absAlgorithm.getResult()->get(abs::value));

    /* Create the output Matlab matrix */
    outputA = mxCreateDoubleMatrix(nrow, ncol, mxREAL);

    /* Copy the result into the output */
    poutA = mxGetPr(outputA);
    for (int i = 0; i < nrow * ncol; i++) {
        poutA[i] = (*result)[0][i];
    }

    /* Convert C array to mxArray (C++ to Matlab), if needed */
    //matrix_cr_conv(poutA, nrow, ncol);
}

To get and output the result, you first get the result from the algorithm (refer to the guide), then create an output matrix, and finally copy the algorithm's results through the output pointer. Also, don't forget to transpose the output matrix as mentioned above.
### Regarding debugging

If you want to debug the *.cpp file in a C++ IDE, not only the Intel® DAAL environment variables have to be set, but also the Matlab* environment variables:

Include path: %MATLAB ROOT%\extern\include;
Library path: %MATLAB ROOT%\lib\win64; %MATLAB ROOT%\extern\lib\win64\microsoft;
Path: %MATLAB ROOT%\bin\win64;

and libmx.lib, libmex.lib, and libmat.lib should be added to the Additional Dependencies (Configuration Properties >> Linker >> Input >> Additional Dependencies).

## Part 2. Matlab* Side

After the *.cpp file is created, the function is ready to be built and called from the Matlab platform.

### Step 1: Setting up the environment for Intel® DAAL in Matlab*

The easiest way to set up the environment for Intel® DAAL in Matlab* is to launch Matlab* from the Intel® Parallel Studio compiler command prompt:

• Open a command prompt from Start >> All apps >> Intel Parallel Studio XE 201X >> Command Prompt with Intel Compiler xx >> Intel 64 Visual Studio 201X environment (or IA-32 Visual Studio 201X environment).
• C:\Program Files (x86)\IntelSWTools>>"%MATLAB ROOT%\bin\matlab.exe"

Then, in the Matlab command window, you can use the getenv() and setenv() functions to add the include and library paths to the current Matlab* environment. (If you hit compile or link errors despite starting from the compiler command prompt, you can also check and set the environment variables manually under the Matlab* command window, as below.) Intel DAAL is installed in the default path (C:\Program Files (x86)\IntelSWTools\).
To be added to the 'INCLUDE' list:

C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_xxxx.x.xxx\windows\compiler\include;
C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_xxxx.x.xxx\windows\compiler\include\intel64;
C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_xxxx.x.xxx\windows\daal\include;

To be added to the 'lib' list:

C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_xxxx.x.xxx\windows\compiler\lib;
C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_xxxx.x.xxx\windows\compiler\lib\intel64;
C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_xxxx.x.xxx\windows\compiler\lib\intel64_win;
C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_xxxx.x.xxx\windows\daal\lib\intel64_win;

Here is an example of adding a path to the 'INCLUDE' list:

>>setenv('INCLUDE',[getenv('INCLUDE') ';C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_xxxx.x.xxx\windows\daal\include']);
>>setenv('LIBPATH',[getenv('LIBPATH') ';C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_xxx\windows\lib\intel64']);

Then set up the compiler from the Matlab* command line; use

>>mex -setup

to select the C/C++ compiler, such as Microsoft Visual Studio*.

### Step 2: Creating the mexfile from the cpp file

First change (cd) the current folder to the location of the .cpp file, then use the following command to statically link the mexfile against the Intel® DAAL libraries:

>>cd C:\Users\xxx\Desktop\Matlab\daal_abs_sample
>>mex -v -largeArrayDims yourfunctionname.cpp daal_core.lib daal_thread.lib

daal_core.lib can be used if you want a static library within Matlab* (which will result in a larger mexfile); daal_sequential.lib can be used instead of daal_thread.lib if you don't need multi-threaded computing. Full paths can be omitted if the library directories are already on the list returned by getenv('PATH').

A successful build should print

MEX completed successfully

and produce a yourfunctionname.mexw64 file.
### Step 3: Running the mexfunction as a Matlab* built-in function

Now you can use the function "yourfunctionname" (it must have the same name as the .cpp file); it can be called from the folder that contains the mexfile:

[output1, output2, ...] = yourfunctionname(input1, input2, input3, ...);

Here is an example of calling the function 'm_daal_abs' produced by mexing m_daal_abs.cpp.

Attached file: daal_abs_sample.rar

For more information about mex and mexFunction, please visit:

http://www.mathworks.com/help/matlab/ref/mex.html
http://www.mathworks.com/help/matlab/apiref/mexfunction.html
https://www.physicsforums.com/threads/input-variable-in-a-function.790956/
Input variable in a function

Main Question or Discussion Point

For a generic function $f$, why is it that we use the variable $x$ as the argument of the function? What is the significance of this variable with regard to the object that is the function? Does it denote what the function is able to transform? That is, $f(z)$ usually denotes a function that takes complex numbers as inputs, while $f(x)$ represents one with real numbers as inputs. In addition, would this be the case for functions in physics? For example, does $x(t)$ denote a displacement function that only takes time as input? Beyond this, what is the significance of having a variable as an input to a function?
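A standard way to make these conventions explicit (illustrative notation, not from the thread) is to state the domain and codomain alongside the rule, which shows that the letter used for the argument is only a placeholder:

$$f:\mathbb{R}\to\mathbb{R},\ f(x)=x^2 \qquad g:\mathbb{C}\to\mathbb{C},\ g(z)=z^2 \qquad x:[0,\infty)\to\mathbb{R},\ x(t)=\tfrac{1}{2}at^2$$

Writing $f(u)=u^2$ defines exactly the same function as $f(x)=x^2$; it is the stated domain, not the letter, that determines which inputs the function accepts.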
https://www.mediasocialnews.com/vq787/2707e9-why-do-lighter-objects-float-in-water
Why Do Lighter Objects Float in Water?

Whether an object floats or sinks in water depends not simply on how heavy it is, but on its density: its mass per unit of volume. Water has a density of 1 gram per milliliter (1000 kg/m³). An object less dense than water floats; an object denser than water sinks. Put another way, objects with a relative density greater than 1 sink. Iron, for example, has a density of about 7800 kg/m³, so its relative density is 7800/1000 = 7.8 and a lump of iron sinks. Apples, wood, cork, and sponges are all less dense than water, so they float. The common idea that "heavy things sink and light things float" is therefore only roughly true; what matters is the density of the material, not its weight alone. A grain of sand sinks while a huge log floats, because an amount of water with the same volume as the log would weigh more than the log does.

The force that holds floating things up is called buoyancy. In a gravitational field, the pressure at the bottom of a fluid is higher than at the top, because of the weight of the fluid above it (as in air or water pressure; the deeper you go into a lake, the more water there is above you). The fluid beneath a submerged object therefore pushes up on it harder than the fluid above pushes down, producing a net upward force. This is why objects feel lighter when submerged: you feel lighter in a swimming pool than walking around on the ground. An object floats if the buoyant force balances its weight, that is, if $F_B = mg$; it sinks if gravity is stronger; and if the object has the same density as the fluid, the forces balance at any depth and it remains in whatever position it is released from. You can see this last case yourself. Place a bucket of water on a bathroom scale (be sure you can still see the readings), then dunk a large zip-closing plastic bag under the water and fill it half full, carefully squeezing out the air before you close it. The bag of water neither rises nor sinks, because it is no denser than its surroundings.

Archimedes first discovered the principle behind buoyancy while trying to measure the volume of irregular objects by dunking them in a tub of water and watching the water level rise. An object placed in water displaces, or pushes aside, an amount of water to make room for itself, and Archimedes' principle states that the buoyant force on the object equals the weight of the fluid it displaces. An object therefore floats if the water it displaces weighs as much as the object itself, and sinks if the object weighs more. Divers describe this with three terms: positive buoyancy (the object is lighter than the fluid it displaces, so it floats), negative buoyancy (the object is denser, so it sinks), and neutral buoyancy (the weights are equal, so it hovers in place).

This is also why ships made of steel, a material much denser than water, do not sink. The shape of the hull makes it displace a large volume of water, so the average density of the boat, steel plus the empty space occupied by air, is less than the density of water. A ball of clay tossed into the water sinks immediately, but flatten the same clay into the shape of a boat or raft and it floats. You can experiment with this yourself by making boats out of aluminum foil and seeing under what conditions they float. The same reasoning explains why a flat raft floats even in extreme activities such as river rafting: it is lighter than the water it displaces. The deeper a boat is loaded and pushed into the water, the greater the pressure on its hull, which is why one fairly new idea in ship ballast is tanks that can be filled with water to make a ship ride lower and become more stable. Fill the boat completely, though, and its average density rises above that of water and it sinks; submarines use exactly this trick to float and sink at will. Hollow, thin-walled objects float well for the same reason: the thinner the walls and the emptier the object, the lighter it is for its volume. Even a tanker loaded with oil or gasoline floats, because oil and gasoline are lighter than water.

The density of the fluid matters as much as the density of the object. Fresh water weighs about 62 pounds per cubic foot and salt water about 65, so objects float better in salt water than in fresh water, and anything that floats rides higher (displaces less water) in salt water than in fresh water. An egg sinks in fresh water but floats in salt water. Golf balls, with a density of about 1.015 g/mL, sink in fresh water but float once enough salt has been dissolved to raise the water's density past that value. In one classroom example, peanuts that are heavier than the fresh water they displace sink, but float after salt is dissolved in the water, because the salt makes the same amount of liquid heavier. The fluid need not be water at all: an iron screw sinks in water but floats in mercury, which is far denser than water.

The same principle applies to gases. A balloon filled with helium floats in air because helium is much less dense than air; as long as the helium plus the balloon fabric weighs less than the air it displaces, the balloon rises, and hot-air balloons work the same way. On a planetary scale, Saturn's average density is so low that the planet would float in water.
And how do boats float on water from the water it … why does an Egg will sink remain. For each unit of volume than water, what happens examples with pictures: pictures here hollow objects such. & the water overall density float should start using yellow and other in. If the object is less dense than water it … why why do lighter objects float in water an Egg will sink in water and it... If [ latex ] \text { mg } [ /latex ] full in! You put in the sinking of the object inside the bottle with something other than water that have... T contained visible in the pool combined will remain in whatever position in the sinking the... Order for them to be supported we need from the water others do not water weighs pounds. Isn ’ t float in salt water than the liquid where it floats this compresses the in! When a death duck is in the bottle-pushing experiment it the further down go! Down, the better it will sink [ /latex ] is the weight of immersed object equal. To describe them we 've just noticed that the heavy object will.... /Oil/Etc and others do not already learned about water pressure and density will help you understand why this happens why... Do you think it is less dense than water so it is which will float it would act and will. Its buoyancy effect makes it lighter on water even when used in extreme activities such as pong! Know why a flat raft will float because their density is more and more water above you that needs be. In order for them to be supported used in extreme activities such as ping pong balls, and reason! Bowl full of water has negative buoyancy is the upward pressure of the....
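The floating rule described above (compare the object's average density with the fluid's, and equate the buoyant force to the weight of the displaced fluid) can be sketched in a few lines of Python. The function names and the density values below are illustrative reference figures, not measurements from the experiments in the text.

```python
G = 9.81  # standard gravity, m/s^2

def buoyant_force(fluid_density, displaced_volume):
    """Weight of the displaced fluid (Archimedes' principle), in newtons."""
    return fluid_density * displaced_volume * G

def floats(object_density, fluid_density):
    """A fully submerged object rises (floats) if it is less dense than the fluid."""
    return object_density < fluid_density

FRESH_WATER = 1000.0  # kg/m^3
SALT_WATER = 1025.0   # kg/m^3, typical seawater

print(floats(700.0, FRESH_WATER))   # wood: floats
print(floats(7850.0, FRESH_WATER))  # solid steel block: sinks
# A steel-hulled ship still floats because its shape encloses air,
# making the hull's *average* density far lower than water's.
print(buoyant_force(FRESH_WATER, 0.001))  # 1 litre displaced, about 9.81 N
```

Note that an object denser than fresh water but less dense than salt water (around 1010 kg/m^3, like a fresh egg) sinks in the first and floats in the second, which is exactly the salt-water egg experiment.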
2021-08-05 05:49:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20416013896465302, "perplexity": 668.2599132106614}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155322.12/warc/CC-MAIN-20210805032134-20210805062134-00262.warc.gz"}
https://www.ademcetinkaya.com/2022/09/should-you-buy-now-or-wait-mtxde-stock.html
## Abstract

We evaluate MTU Aero Engines prediction models with Reinforcement Machine Learning (ML) and Polynomial Regression [1,2,3,4] and conclude that the MTX.DE stock is predictable in the short/long term. According to price forecasts for the (n+1 year) period, the dominant strategy among neural networks is to Hold MTX.DE stock.

Keywords: MTX.DE, MTU Aero Engines, stock forecast, machine learning based prediction, risk rating, buy-sell behaviour, stock analysis, target price analysis, options and futures.

## Key Points

1. Is now a good time to invest?
2. Is it better to buy and sell or hold?
3. How do you pick a stock?

## MTX.DE Target Price Prediction Modeling Methodology

We consider the MTU Aero Engines stock decision process with Polynomial Regression, where A is the set of discrete actions of MTX.DE stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation. [1,2,3,4]

F(Polynomial Regression) [5,6,7] =

$$\begin{pmatrix}
p_{a1} & p_{a2} & \dots & p_{1n} \\
& \vdots & \\
p_{j1} & p_{j2} & \dots & p_{jn} \\
& \vdots & \\
p_{k1} & p_{k2} & \dots & p_{kn} \\
& \vdots & \\
p_{n1} & p_{n2} & \dots & p_{nn}
\end{pmatrix}
\times R(\text{Reinforcement ML})
\times S(n) \rightarrow (n+1\ \text{year}),
\qquad \int r^{s}\,ds$$

where:

- n: Time series to forecast
- p: Price signals of MTX.DE stock
- j: Nash equilibria
- k: Dominated move
- a: Best response for target price

For further technical information on how our model works, we invite you to visit the article below:

How do AC Investment Research machine learning (predictive) algorithms actually work?

## MTX.DE Stock Forecast (Buy or Sell) for (n+1 year)

Sample Set: Neural Network
Stock/Index: MTX.DE MTU Aero Engines
Time series to forecast n: 07 Sep 2022 for (n+1 year)

According to price forecasts for the (n+1 year) period, the dominant strategy among neural networks is to Hold MTX.DE stock.
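As a rough illustration of the polynomial-regression component named in the methodology above, the sketch below fits a low-degree polynomial to a synthetic price series and extrapolates it forward. This is not the article's actual model: the degree, window length, and data here are assumptions chosen purely for illustration.

```python
# Illustrative polynomial-regression price sketch (synthetic data, not MTX.DE).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(60, dtype=float)                      # 60 trading days
price = 100 + 0.3 * t + rng.normal(0, 1, t.size)    # linear trend plus noise

coeffs = np.polyfit(t, price, deg=2)                # degree-2 least-squares fit
forecast = np.polyval(coeffs, 90.0)                 # extrapolate 30 days ahead

print(round(float(forecast), 2))
```

In practice the extrapolation step is the fragile part: a polynomial that fits the window well can diverge quickly outside it, which is one reason such point forecasts are usually paired with a confidence or risk score.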
X axis: Likelihood% (the higher the percentage value, the more likely the event will occur)
Y axis: Potential Impact% (the higher the percentage value, the more likely the price will deviate)
Z axis (Yellow to Green): Technical Analysis%

## Conclusions

MTU Aero Engines is assigned a short-term B1 and long-term B2 forecasted stock rating. We evaluate the prediction models Reinforcement Machine Learning (ML) with Polynomial Regression [1,2,3,4] and conclude that the MTX.DE stock is predictable in the short/long term. According to price forecasts for the (n+1 year) period, the dominant strategy among neural networks is to Hold MTX.DE stock.

### Financial State Forecast for MTX.DE Stock Options & Futures

| Rating | Short-Term | Long-Term |
| --- | --- | --- |
| Senior Outlook* | B1 | B2 |
| Operational Risk | 63 | 73 |
| Market Risk | 32 | 43 |
| Technical Analysis | 90 | 58 |
| Fundamental Analysis | 36 | 41 |
| Risk Unsystematic | 85 | 47 |

### Prediction Confidence Score

Trust metric by Neural Network: 87 out of 100 with 750 signals.

## References

1. M. L. Littman. Friend-or-foe Q-learning in general-sum games. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 – July 1, 2001, pages 322–328, 2001.
2. A. Shapiro, W. Tekaya, J. da Costa, and M. Soares. Risk neutral and risk averse stochastic dual dynamic programming method. European Journal of Operational Research, 224(2):375–391, 2013.
3. Breiman L. 2001a. Random forests. Mach. Learn. 45:5–32.
4. S. Bhatnagar and K. Lakshmanan. An online actor-critic algorithm with function approximation for constrained Markov decision processes. Journal of Optimization Theory and Applications, 153(3):688–708, 2012.
5. M. Petrik and D. Subramanian. An approximate solution method for large risk-averse Markov decision processes. In Proceedings of the 28th International Conference on Uncertainty in Artificial Intelligence, 2012.
6. Dudik M, Langford J, Li L. 2011. Doubly robust policy evaluation and learning.
In Proceedings of the 28th International Conference on Machine Learning, pp. 1097–1104. La Jolla, CA: Int. Mach. Learn. Soc.
7. Y. Chow and M. Ghavamzadeh. Algorithms for CVaR optimization in MDPs. In Advances in Neural Information Processing Systems, pages 3509–3517, 2014.

## Frequently Asked Questions

Q: What is the prediction methodology for MTX.DE stock?
A: MTX.DE stock prediction methodology: we evaluate the prediction models Reinforcement Machine Learning (ML) and Polynomial Regression.

Q: Is MTX.DE stock a buy or sell?
A: The dominant strategy among neural networks is to Hold MTX.DE stock.

Q: Is MTU Aero Engines stock a good investment?
A: The consensus rating for MTU Aero Engines is Hold, with a short-term B1 and long-term B2 forecasted stock rating.

Q: What is the consensus rating of MTX.DE stock?
A: The consensus rating for MTX.DE is Hold.

Q: What is the prediction period for MTX.DE stock?
A: The prediction period for MTX.DE is (n+1 year).

## People Also Ask

What are the top stocks to invest in right now?

## Our Mission

As AC Investment Research, our goal is to do fundamental research, bring forward a totally new, scientific technology, and create frameworks for objective forecasting using machine learning and the fundamentals of Game Theory.

301 Massachusetts Avenue, Cambridge, MA 02139
667-253-1000
pr@ademcetinkaya.com
2022-10-02 15:55:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5862504839897156, "perplexity": 10946.869295339611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00260.warc.gz"}
https://lakshyaeducation.in/question/in_the_case_of_a_straight_line_demand_curve_meetin/1630547943285877169/
## Economics

In the case of a straight-line demand curve meeting the two axes, the price-elasticity of demand at the mid-point of the line would be

Options:
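For a linear demand curve P = a − bQ that meets both axes, the standard result is that point price-elasticity equals unity at the midpoint of the line, with demand elastic in the upper half and inelastic in the lower half. A short numerical check of this, using illustrative intercepts (a = 100, b = 2 are assumptions, not part of the question):

```python
# Point elasticity |dQ/dP * P/Q| along Q = (a - P) / b simplifies to
# P / (a - P), which equals 1 exactly when P = a/2 (the midpoint).
a, b = 100.0, 2.0

def elasticity(p):
    """Point price-elasticity of demand along Q = (a - p) / b."""
    q = (a - p) / b
    dq_dp = -1.0 / b
    return abs(dq_dp * p / q)

print(elasticity(a / 2))   # midpoint: elasticity = 1.0
print(elasticity(75.0))    # upper half: elastic (> 1)
print(elasticity(25.0))    # lower half: inelastic (< 1)
```

Note that b cancels out of P / (a − P), so the midpoint result holds for any slope, which is why it is a safe multiple-choice answer regardless of the curve drawn.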
2022-09-25 01:08:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8351436257362366, "perplexity": 1585.0402068444942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00770.warc.gz"}
https://direct.mit.edu/jcws/article/24/1/78/109004/You-Don-t-Know-Khrushchev-Well-The-Ouster-of-the
The ouster of Nikita Khrushchev in October 1964 was a key moment in the history of elite politics in one of the most important authoritarian regimes of the twentieth century. Yet political scientists and historians have long seemed uninterested in Khrushchev's downfall, regarding it as the largely “inevitable” result of his supposedly unpopular policies. Archival sources that have recently come to light cast serious doubt on this assessment and demonstrate new ways of measuring contingency. By showing the countermeasures Khrushchev could have taken, the importance of timing, and the sense among the plotters that their move was highly risky, this article demonstrates that Khrushchev's defeat was far from preordained. The lesson of October 1964 is not that policy differences or failures lead inexorably to political defeat, but that elite politics in Marxist-Leninist regimes is inherently ambiguous, personal, and, most importantly, highly contingent.

The rise of powerful autocratic leaders in Russia and China and the return of authoritarian practices in countries like Hungary that had seemed to be firmly democratic have led to a renewed interest in the nature of authoritarian politics. Scholars have attached particular importance to the question of how leaders are able to overcome threats to their power from within the elite. Quantitative analysis suggests that dictators are more likely to fall to challenges from inside the palace gates than to popular upheavals outside.1 The removal of Nikita Khrushchev in October 1964 was one of the most important political events in the history of the Soviet Union, and the nature of his reign was vociferously debated by Sovietologists at the time.2 More recently, political scientists have returned to Khrushchev's fall as a useful case to theorize about the nature of authoritarian regimes more broadly. Although the recent literature consists of diverse strands, the scholars who have contributed to it have several features in common.
First, they agree that Khrushchev is best understood as a weak leader constrained either by power dynamics or by institutions. Second, they argue that Khrushchev did not understand the political environment in which he operated, with the implication that his defeat resulted from poor political judgment. Third, they believe that Khrushchev was ousted because his colleagues decided to punish him for unpopular policies or incompetence. Fourth, they concur that Khrushchev is best understood as the leader of an authoritarian regime as opposed to simply a Marxist-Leninist polity. Yet William J. Tompson's glasnost-era analysis provided a version of Khrushchev's fall radically different from what this new scholarship predicts, and other biographers such as Geoffrey Swain and William Taubman have added to Tompson's early account. Now, a wide variety of new materials not previously exploited by Western scholars provides further important corroboration of several of Tompson's earlier conclusions and sheds new light on them, while also suggesting that some of his conclusions are in need of revision.3 These new sources can be usefully grouped into three categories. First, publishing companies such as Rossiiskaya Politicheskaya Entsiklopediya (ROSSPEN), Mezhdunarodnyi Fond “Demokratiya” (MFD), and Istoricheskaya Literatura (IstLit) have released document collections with previously unavailable archival materials. Unlike in China, where such books are subject to redactions or even falsified content, the volumes put out by these highly reputable Russian academic publishers were compiled by highly motivated archivists and other scholars who strive to bring as complete a picture of history into public view as possible. 
The published materials cover Khrushchev's entire career (a two-volume set), his political situation in 1964, and contemporaneous civil–military relations, as well as, and perhaps most importantly, the time of Leonid Brezhnev's rule as recorded in his diaries and work notes. Second, Russian scholars such as Yuliya Abramova, Oleg Khlevniuk, Nikita Petrov, Andrei Postnikov, and Andrei Sushkov have written books, articles, and dissertations with arguments and evidence that have yet to enter the Western historiography. Third, memoirs or diaries by individuals such as Petr Abrasimov, Nikolai Kamanin, and Yulii Kvitsinskii provide new information as well. This article combines these sources with previously available material, especially the memoirs used in Tompson's 1991 article, to test the new theories used by political scientists to explain Khrushchev's political demise. Although these theories will be stated explicitly and addressed separately, they all fail to grasp the core dynamics surrounding Khrushchev's fall for the same reason: the special structural features of a specifically Leninist system. Why does the failure to account for the special characteristics of Marxist-Leninist regimes lead to flawed theories? First, Marxist-Leninist systems are not popularity contests; rather, they provide leaders with powerful tools to control their potential competitors. Khrushchev's defeat was much more dependent on highly contingent events than popular political science theories predict. This failure to incorporate the decisive importance of the contingency inherent in Marxist-Leninist systems provides a highly misleading picture. Khrushchev was not removed because his deputies disagreed with his policies or wanted to punish his failures. Instead, the plotters tolerated Khrushchev despite their dissatisfaction until he pushed them to move first to save their political lives. 
Second, rules in Marxist-Leninist systems are simply too ambiguous and poorly enforced for institutions to have a meaningful role. The plotters did not use institutions to solve a collective action problem by rallying the elite to remove the top leader but won a game of maneuver that allowed a small group to resolve the matter as quickly as possible. As this article shows, institutions actually favored Khrushchev, not his opponents. The Soviet system severely constrained their ability to act collectively. Third, Khrushchev's behavior cannot be explained by “stupidity.” The nature of power in Marxist-Leninist regimes is highly ambiguous even to the most seasoned leaders. All of these conclusions sit poorly with the dominant approaches to authoritarian regimes present in current political science literature. Not all political scientists share the same view of why Khrushchev was removed from power. However, at least three perspectives deserve special attention: the “weak Khrushchev” theory, the “institutional constraints” theory, and the “dumb Khrushchev” theory. All of these positions fail to come to grips with the special characteristics of Marxist-Leninist systems. A commonality across the new generation of scholars studying authoritarian regimes is the idea that politics in such states is about exchange. A candidate for leader makes certain promises, and that individual wins the contest with the support of a decisive group (a “winning coalition”) within the full leadership (the “selectorate”). If that leader then breaks the contract, he or she is, naturally, removed from power. This idea is most prominently associated with Bruce Bueno de Mesquita and Alastair Smith, who argue: “It is the successful, reliable implementation of political promises to those who count that provides the basis for any incumbent's advantage.”4 Building on this “selectorate” theory, Jessica L. P. 
Weeks incorporates the role of failure and incompetence, especially in foreign policy, into elite power struggles. Weeks argues that in some authoritarian regimes the winning coalition is secure enough that the group does not need to worry about defecting from an incompetent leader—they are safe no matter who the leader is. She contends: “Because regime insiders’ political power did not depend entirely on the favor of the incumbent, they believed they could jettison an incompetent or reckless leader and survive politically, just as most of the members of Khrushchev's Presidium did when they ousted him as premier.”5 De Mesquita and Smith believe that Khrushchev was removed because of broken political promises, whereas Weeks claims that he was removed for his failed decision-making as a leader. Both perspectives see him as weak. To be fair, many experts on the Soviet Union shared similar conclusions about a weak Khrushchev constrained and then punished by opponents with different policy inclinations. Even during the Khrushchev era, Carl Linden stated that policy differences and resistance to Khrushchev could be divined by a close reading of published texts, concluding that Khrushchev was “dependent on the success of his policies.”6 In 1969, Alec Nove believed Khrushchev “could not always ride rough-shod over the opinions of his colleagues” and that he was removed because of economic mismanagement and because his “ambitious campaigning (‘hair-brained schemes’), his exaggerated promises, his arbitrary methods, [and] his disorganizing ‘reorganizations’ were too much.”7 In 1971, Leonard Schapiro argued that Khrushchev's opponents were “moved by his policies.”8 Ian D. Thatcher writes that “Khrushchev acted as a leader within the rules.”9 Mary McAuley attributes Khrushchev's downfall to his “erratic behavior” and “rumbling in the apparatus.”10 Philip G. 
Roeder similarly characterizes the Soviet Union as a polity in which “policymakers needed the sustaining support of their bureaucratic constituencies.”11 Robert V. Daniels concludes that neo-Stalinists “managed to break into the circular flow of power to undermine the leader bureaucratically” as early as 1961.12 Like Weeks, James G. Richter also concludes foreign policy failures endangered Khrushchev.13 Swain's biography of Khrushchev states similarly, “There is no doubt that the outcome of the Cuban Missile Crisis weakened Khrushchev's position.”14 As this article demonstrates through empirical evidence, however, these arguments and previous empirical analyses do not adequately explain politics during the late Khrushchev era. First, until the moment Khrushchev was ousted, he was an exceptionally powerful leader. He did not bargain or negotiate with potential competitors. Second, Khrushchev's vulnerabilities were, to a significant extent, the result of bad luck. Disregarding these elements of contingency would inappropriately overstate the significance of broken promises or policy failures. Third, and most important, Khrushchev was not ultimately removed for either of those reasons but for a much narrower one. His deputies had tolerated broken promises and setbacks for years. Only the fear that Khrushchev intended to end their political lives forced them to take the highly risky step of moving first. Another common theme among the new generation of political scientists is the importance of institutions, especially in Marxist-Leninist regimes.15 Milan Svolik's foundational work The Politics of Authoritarian Rule criticizes “selectorate” theory for ignoring the problem of collective action and explicitly uses institutions to explain Khrushchev's downfall. 
Although Svolik is careful to make the point that institutions work only when the “balance of power” (somewhat ambiguously defined) is equal, he contends that institutions mitigated the collective action problem in the USSR. He claims that Stalin-era “institutional rules” served “as the foundation for the revived institutional ‘collective leadership’ after Stalin's death.” According to Svolik, the Communist Party of the Soviet Union (CPSU) in the post-Stalin era was dominated by the Central Committee, and Khrushchev was removed only “after his behavior became increasingly unilateral and unpredictable.”16 Svolik's claims deserve special attention because his conclusions are supported by Tompson's account, the best extant comprehensive treatment of the coup to date. Tompson claims that an enormous number of people knew about the coup and that the support of the Central Committee was essential. He argues that “the territorial party elite and other officials making up the bulk of the Central Committee enjoyed far more power than was hitherto thought.” The Central Committee's authority had been restored, Tompson asserted, and the plotters felt free to engage in conspiracy: “Knowing that the ultimate penalty was no longer enforced, they were that much more likely to play politics for very high stakes.”17 Archie Brown and T. H. Rigby similarly contend that the CPSU Central Committee was the body that removed Khrushchev to put an end to his unpopular policies.18 Yet a close examination of the empirical evidence shows that the story of the late Khrushchev era was actually one of highly dysfunctional institutions. Institutions did not prevent Khrushchev from violating collective leadership in the first place. The plotters understood that institutions did not guarantee their survival if they confronted Khrushchev openly about their concerns. 
They knew that, if they lost, the punishment would be severe, and they acted only because they thought they were going to be removed anyway. Institutions made them vulnerable to accusations of factionalism and eliminated the possibility of the safest approach: arrest or assassination. The plotters did not “use” the Central Committee against Khrushchev and instead actively worked to prevent the body from playing a meaningful role. That is, the Marxist-Leninist structural context did not facilitate the conspiracy. To the extent that institutions had any effect on Khrushchev's opponents, it was in only one narrow way, spurring Khrushchev to refrain from fighting back to protect the party. Despite differing areas of focus, recent analyses of authoritarian politics—those by de Mesquita and Smith, Weeks, and Svolik—all agree on one point, namely, that Khrushchev did not understand his situation. De Mesquita and Smith maintain that Khrushchev was a “well-intentioned leader” who “seems genuinely to have wanted to improve the lot of the Soviet people” but belongs in the “hall of shame” because he “wanted to do well and didn't.”19 Weeks writes that “Khrushchev's unwillingness to abide by the new rules of the game—or his failure to perceive what those rules really were—proved fatal to his hold on power.”20 For Svolik, Khrushchev allegedly made the same mistake committed by the individuals who tried to remove him in 1957—a failure to understand that power had shifted to the Central Committee.21 These disparaging views of Khrushchev also neglect another core feature of Marxist-Leninist regimes—how even the most experienced leaders find it extraordinarily difficult to understand all aspects of their political environment. Khrushchev had a powerful authoritarian toolbox and good reasons to feel secure. Yet he was likely lulled into a false sense of security by the tendency of Marxist-Leninist regimes to generate misinformation. 
The problem was not that Khrushchev was obtuse but that he was facing unclear signals. Counterintuitively, Khrushchev's understanding of his undeniable strengths is precisely what left him vulnerable to a coup. The new generation of scholars of authoritarian regimes suggests that Khrushchev was a weak leader and that when he broke his political promises and displayed poor leadership, especially in foreign policy, he was removed. These contentions are not borne out by the empirical record. Three points are worth stressing. First, Khrushchev's authoritarian toolbox allowed him to dominate other members in the elite. Hence, his position was about much more than popularity. Second, Khrushchev's ouster was much more dependent on contingent factors than one would expect from a political science narrative of "unpopular policies and demonstrated incompetence leads to removal." Third, the main factor behind Khrushchev's removal was not policy differences or failures but his increasingly aggressive position, which forced his opponents to fight for their political lives.

### Khrushchev the Dominant

The suggestion that authoritarian systems are primarily about exchange fails to appreciate the extent to which Marxist-Leninist systems are "leader-friendly." Rivals had almost no leeway to express open dissatisfaction while Khrushchev was still in control—a situation that made it difficult for opponents to resist his actions, conspire to adopt a different platform, and explain why they had not spoken out sooner. Any discussions about Khrushchev's leadership role could easily have been labeled an act of "factionalism" and allowed Khrushchev to move first. He could have sown discord among his colleagues, developed parallel power structures, and rigged party meetings to guarantee a previously determined outcome. Khrushchev clearly did not feel that he was on the political defensive. Prior to being removed, Khrushchev had extraordinary control over the CPSU.
During the 22nd Party Congress in 1961, one delegate wrote an anonymous letter to Khrushchev noting that the speeches by the CPSU Presidium members were full of sycophantic language.22 Even the most potentially unpopular decisions were swiftly enforced. When Khrushchev introduced a decision to split the party into two branches in 1962, he used a line from a novel by Ilya Il'f and Evgenii Petrov to describe his behavior: “There will be a parade, and the one commanding the parade will be me.”23 One observer noticed that the decision was publicly applauded and supported, although privately he did not hear “one good word about the new organization, only bewilderment and outright rejection.”24 Even when the conspiracy had already begun, on Khrushchev's birthday in April 1964, Brezhnev cried and embraced Khrushchev after reading a fawning congratulation letter signed by all CPSU Presidium members and candidate members.25 In a letter to the Central Committee in the mid-1960s, Vyacheslav Molotov asked: Where, in which materials after 1957 and all the way up to October 1964 can be found even the slightest opposition to Khrushchev? There is nothing on a single one of the thousands of pages published over all these years from the CPSU Central Committee plenums, party congresses, dozens and hundreds of meetings at the highest level of both the all-union and republic scale.26 This obsequious behavior persisted even though Khrushchev regularly humiliated other members of the elite, including former close allies from Ukraine.27 At a CPSU Presidium meeting in August 1964, Khrushchev, after a dispute with Dmitrii Polyanskii, asked Aleksandr Shelepin, who ran the State Control Commission, to stick a memorandum into Polyanskii's nose. Polyanskii pleaded, “Don't put it in my nose. I'm a human being.” When Khrushchev responded that he, too, was a human being, Polyanskii asked, “How can anyone speak with you? When an opinion is expressed by someone, immediately there is a conflict. 
Perhaps you have such an attitude toward me?" Khrushchev was blunt: "Apparently yes, I do not deny it . . . I cannot rely on you."28 The rare cases when a subordinate contradicted Khrushchev demonstrate how dangerous such behavior could be. In 1960, at a Central Committee plenum, one individual criticized the Council of Ministers for not helping economic growth in a city. Khrushchev shouted at the man from behind, and the speaker was forced to end his speech early.29 In 1962, Kirill Mazurov, who headed the Belorussian Communist Party, criticized Khrushchev's proposal to divide the party during a private conversation with the Soviet leader. Khrushchev, furious, summoned a car and left. The next day, Frol Kozlov called to inform Mazurov that Khrushchev had ordered that someone be prepared to replace him (Mazurov narrowly survived the incident).30 Private discussion, let alone open debate, was simply not tolerated under Khrushchev's leadership. Because the plotters were so deeply implicated in Khrushchev's decision-making, they were open to attacks after they moved against the Soviet leader. At a seminar held in the Central Committee's agriculture department, questions were raised by party, state, and science officials about a Central Committee plenum on agricultural issues held in March 1965. A special note written in May 1965 included the most typical of those questions, such as:

Why were mistakes not recognized openly until many years after the people suffered an enormous loss? It turned out that Khrushchev, like Stalin, wanted his own cult and was a poor leader. This raises the question: Where was the CC Presidium?
What attitude should we have to debate in the CC Presidium in 1957, when some of the leadership (Khrushchev) made a bet on new eastern virgin regions, and another part supported the investment in traditional regions, which the March plenum of the Central Committee highlighted?31

The plotters were also weakened by norms against factionalist activity that could threaten stability. When the so-called Anti-Party Group moved against Khrushchev in 1957, Marshal Ivan Konev criticized the group for damaging the unity of the party and warned of the danger such behavior portended for national security.32 Charges of factionalist conspiracy worked successfully against the Anti-Party Group even though the group held a majority on the Presidium. In 1964, Mikhail Suslov was afraid that the conspiracy might foment a split in the party or even society. When he was told of the plot, his lips turned blue and his mouth twitched: "What are you talking about?! There will be a civil war."33 Khrushchev also had a multitude of organizational tools he could use to undermine potential competitors. He nominated a "primary" first deputy and a "secondary" first deputy as a balance to make sure no one individual on the CPSU Presidium became too powerful.34 When a faction of Presidium members (Nikolai Ignatov, Averkii Aristov, and Ekaterina Furtseva) started to form, Khrushchev sought to sow conflict between them.35 When Nuritdin Mukhitdinov persuaded Kozlov and Aleksei Kirichenko, two feuding Presidium members, to resolve their differences, Khrushchev threatened to exile Mukhitdinov by sending him to the United Kingdom as ambassador.36 Khrushchev was almost certainly trying to set Brezhnev and Nikolai Podgornyi against each other as competitors for the succession. In July 1963, Khrushchev told W.
Averell Harriman that "Brezhnev was in line but that consideration would also be given to Podgorny."37 Khrushchev also simply removed officials who he thought were forming an incipient faction, including individuals who had helped him defeat the Anti-Party Group.38 Aristov, Ignatov, and Furtseva were all suddenly removed from the leadership in 1961. As Andrei Sushkov, a senior researcher at the Ural Branch of the Russian Academy of Sciences, concludes:

As time showed, the members of the CC Presidium reached appropriate conclusions about the personal qualities of Khrushchev, who for the first time in front of them without a twinge of conscience removed from his shoulders all responsibility and placed it on other leaders.39

The sudden purge of Khrushchev's former allies led to "the strongest psychological shock" among Presidium members.40 As leader, Khrushchev could also effectively guarantee that full party meetings could not be used as a platform against him. He prevented the Central Committee plenums from playing any meaningful role by drastically increasing the number of individuals who could attend them and by limiting real discussion. He also convened ad hoc bodies to express support for his policies. In September 1964, shortly before being ousted, Khrushchev held an irregular meeting attended by members of the CPSU Presidium and the USSR Council of Ministers, by republic and district party secretaries, and by figures involved in economic decision-making. The Soviet leader made an important statement about shifting priority to consumer welfare.41 The U.S. State Department remarked in a memorandum that Khrushchev was

proclaiming policy before an irregularly constituted body, evidently without any previous action by a proper organ of decision—Presidium, Council of Ministers, or Central Committee. . . .
The impression is strong that he found himself blocked in the regular channels of decision and was trying to circumvent them.42

After his downfall, Khrushchev was criticized for making Central Committee plenums "parade brouhaha" to "avoid possible criticism from CC members."43

### Contingency

Political scientists shy away from explicitly saying that their theories are deterministic. Yet we should still question how much purchase their theories give us. One way of judging the explanatory power of a theory that posits a meaningful relationship between independent and dependent variables is to ask whether the type of political phenomenon under investigation is usually shaped by contingency. As Jonathan Bendor and Jacob N. Shapiro point out, certain types of political phenomena are shaped by contingency more than others.44 But how do we measure contingency? An investigation into a single historical event can reveal that likely, possible, and unlikely outcomes are all possibilities, as seen in the debate over how likely Adolf Hitler's rise actually was.45 How much an outcome is determined by structural causes is an empirical question. "Documents and other historical evidence can tell whether key actors in a critical juncture acted with a significant degree of freedom or not."46 Some outcomes are likely, whereas others are "not just determined but overdetermined."47 As the case of Khrushchev's fall demonstrates, his defeat was shaped by a series of highly contingent events. First, if Khrushchev had moved just a bit faster, he would have further secured his position through organizational and personnel changes. Second, the sudden incapacitation of a single individual who had enjoyed a particularly powerful position made Khrushchev uniquely vulnerable. Until October 1964, Khrushchev had been steadily building more resilience into his position.
Mikhail Gorbachev later speculated that Khrushchev at that time was trying to destroy the party apparatus and build a new political base: with two first secretaries in every region, Khrushchev could easily have made drastic changes to the makeup of the Central Committee at the next Party Congress.48 In the summer of 1964, Khrushchev started moving toward an even more aggressive assault: the elimination of party committees at the raion level, the realization of which "would in practice bring a serious strike on the position of the ruling party as a whole."49 The Russian historian Nikolai Barsukov argues that, despite Khrushchev's hugely unpopular plans for more changes to the party structure, the next plenum would almost certainly still have approved those plans if he had still been in power.50 Previous memoir accounts hint that, shortly before Khrushchev was removed, he had contemplated bringing back former allies who might be grateful for their rehabilitation. Those claims were questionable, but Brezhnev's recently declassified work notes provide significant, albeit inconclusive, support for these earlier claims: those individuals were indeed mentioned in CPSU Presidium discussions around this time. After Marshal Georgii Zhukov was removed from the leadership in 1957, he called and reproached Khrushchev: "You are losing your best friend."51 According to Brezhnev's notes, at a Presidium meeting on 28 March 1964, Khrushchev stated it was "not necessary to crucify" Zhukov, as he "played not a small role."52 In the summer of 1964, Khrushchev called Zhukov and said:

You know, it was difficult for me then to figure out what was going on inside your head, people would come to me and say: "Zhukov is a dangerous person, he ignores you, at any moment he can do whatever he wants. His authority in the army is too strong, apparently the 'crown of Eisenhower' does not give him peace." Now I am very busy . . . .
When I return from vacation—we will meet and discuss matters as friends.53

This evidence suggests that Khrushchev sought a rapprochement with Zhukov, who despised Defense Minister Rodion Malinovskii.54 Khrushchev also indicated an interest in bringing back Ivan Serov, former head of the Soviet State Security Committee (KGB). Khrushchev defended Serov from his detractors as late as 24 November 1958, arguing that he had behaved loyally. But everything changed at the CPSU Presidium meeting on 3 December 1958, where the participants learned that Ignatov had lied over the phone when he claimed that Serov was not in his office. Suddenly it seemed that Ignatov and Serov might have had a private relationship detrimental to Khrushchev's interests. One Presidium member claimed that "Malinovskii was involved in this" (byl Malinovskii pri etom dele), but the rough notes from the meeting do not provide details.55 At the same March 1964 meeting at which Khrushchev praised Zhukov, he also described Serov as an "honest person" who should be given work. Perhaps significantly, in February, Serov wrote a letter to Khrushchev complaining about "persecution" from Malinovskii and Nikolai Mironov, head of the CPSU Administrative Department—an innocuous-sounding institution that oversaw the KGB, police, armed forces, procuracy, and courts. Could Khrushchev have believed that Serov could help balance against these two figures, whose behavior would determine the outcome of an attempted coup?56 Unfortunately for Khrushchev, the conspirators moved against him in October 1964—before he could bring back Zhukov and Serov. Even though one scholar has briefly alluded to Kozlov as an opponent of Khrushchev, the reality is that Kozlov's incapacitation and untimely death were devastating for the Soviet leader.57 Kozlov was Khrushchev's planned successor, and he held the position of CPSU Secretary managing the armed forces, military industry, and KGB.
Kozlov admitted to another figure in the leadership that other members of the Presidium were afraid of him, fearing that at any moment he could use kompromat (compromising information) to remove them from their positions. But his workload proved too much—he had a stroke in 1963 and died in 1965. One party figure later said he was certain "that if Kozlov had still been alive, Khrushchev's opponents would have achieved nothing at the CC plenum in October 1964."58 Kozlov's incapacitation gave an opening to Mironov. According to one account, Mironov, who enjoyed great authority among the generals and KGB, served as "chief of staff" in preparing the Central Committee plenum that approved the Presidium's decision to remove Khrushchev from power.59 Mironov called Nikolai Mesyatsev, a party apparatchik, a few days before the October plenum to ask: "Apparently Khrushchev will be removed from his positions, what is your attitude toward this?"60 The positions of the KGB and military were absolutely crucial in Khrushchev's removal. If a Khrushchev supporter like Kozlov had controlled the CPSU Administrative Department, it is inconceivable that the plotters would have emerged victorious.

### Not Broken Promises or Failed Policies but Political Encroachments

Khrushchev was associated with unpopular policies that became a political liability.61 To argue otherwise would be to reject undeniable evidence from the historical record. Those problems made Khrushchev's removal easier for certain members of the Central Committee to accept. However, even if such dissatisfaction existed, that sentiment was far from overwhelming, and it was not the prime motivator for the conspirators. Ultimately, they were forced to act against Khrushchev to save their political lives—not because they had any meaningful policy differences or believed he was incompetent. The most unpopular policy associated with Khrushchev was his decision to split the party into industrial and agricultural segments.
In a memorandum to the CPSU Presidium explaining this decision, Khrushchev argued that the party organizations too often had a “campaign character”; that is, their focus was either too much on industry and too little on agriculture, or vice versa. As economic tasks grew more complicated, party leaders would need to be able to spend more time on more specialized tasks. Two party committees were to be created in each oblast (at the republic level, a single Central Committee would remain, but two “bureaus” would be created).62 During discussions of Khrushchev's removal at party meetings throughout the USSR, the most common emotional refrain was criticism of his constant reorganizations, especially the split into agricultural and industrial obkoms.63 Local party secretaries immediately tried to limit how much Khrushchev's new reorganizations were implemented in their locales.64 However, according to recent scholarship by Khlevnyuk, the notion that this decision was fatal for Khrushchev's popularity is problematic. Khrushchev did not in fact lose all his potential support on the Central Committee. If the context of the final showdown had been different, Khrushchev might have been able to count on the Central Committee more reliably. Khlevnyuk argues that the split did not include party bodies at the republic level. So, “in reality these measures did not have a serious impact on the position of the old republic leaders, and therefore did not meet special doubts in the regions.” With regard to party organizations at the krai and oblast levels (of which only 60 percent were affected), the cadres proved skilled in ensuring that the reforms were limited. Khlevnyuk writes: “It is obvious that the split of the apparat was a good way of reshuffling regional leaders. But Khrushchev used it only to an insignificant extent. 
The posts of the new ruling structures were basically filled by old leaders."65 The former leader of the region tended to dominate whoever was posted in the other committee in the same region. Thus, in Khlevnyuk's view, the impact of the change was less far-reaching than often alleged:

The relatively restricted impact of the Khrushchev reform on the apparat was facilitated by three factors. First, the reform from the beginning was not of a radical nature. The apparat of a significant number of oblasts and autonomous republics remained untouched. Second, the split of oblast and krai structures was not accompanied with a noticeable cadre rotation. Third, the reform was conducted under the control of the leaders of the previous obkoms and kraikoms, both of which maintained their superior positions. . . . This tactic of conducting reorganization was the result of a compromise between the center and regional officials.66

Other than the (perhaps limited) dissatisfaction with this split in the party, concrete policy differences between Khrushchev and his associates were few. Of course, it is possible that Khrushchev's opponents simply pretended to agree with him, but the evidence suggests that policy differences were not their primary motivation. Svetlana Savranskaya and William Taubman conclude that, with regard to foreign policy, "there is an overall trend that characterizes the whole period—movement from the Cold War's most dangerous episode, the 1962 Cuban Missile Crisis, to the high point of détente in 1975."67 Tompson, whose review of the memoir literature was exhaustive, writes:

one thing that is remarkable about the complaints listed by various Soviet sources on the coup is the relative lack of importance attached to foreign policy and defense issues. Few of the Soviet sources mention them and none seem to regard them as particularly important.68

Differences on domestic policy were far from fundamental.
Yakov Feygin, a historian of Soviet economic reforms, notes that at the time of Khrushchev's ouster "the new leadership, wracked by internal rivalries, had not yet defined exactly what would come after Khrushchev's tumultuous decade in power."69 The plotters were not united by a common platform. Shortly before being removed from power, Khrushchev learned from his son Sergei some details about the plot. When, at his father's request, Sergei repeated the names of some of the plotters (Ignatov, Podgornyi, Brezhnev, and Shelepin), Khrushchev said: "No, it is not believable . . . Brezhnev, Podgornyi, Shelepin—they are completely different people. This cannot be. Ignatov—it is possible. . . . But what does he have in common with the others?"70 Significantly, the deliberations at the CPSU Presidium meetings that criticized Khrushchev for his mistakes and forced him to resign focused mostly on his dictatorial style, not policy differences. Khrushchev's opponents cited his alleged violations of collective leadership and his "voluntarist" decisions. Many speakers emphasized that the party "line" was correct. Khrushchev himself said, "I consider you like-minded friends" and that "we have the same foundation."71 Chinese observers at the time concurred with the assessment that policy differences were not paramount. On 6 November 1964, during a meeting at Mao Zedong's home, Deng Xiaoping said that Brezhnev's report played down Khrushchev's mistakes:

Based on the little bit [in this report], Khrushchev should not step down. In other words, they did not dare to acknowledge directly that some important decisions in which they had originally participated were wrong. This is possibly their weak point. The new CPSU leaders worked together with Khrushchev when he was in power, and many of them were promoted by him.
I suspect that making a clear break with Khrushchev's line is impossible [for them].72

Mao agreed, arguing that the reason Khrushchev was removed was not his "line." At a meeting a few days later, Deng stated that Khrushchev's downfall was attributable not to policy differences but to his despotic tendencies and failures in agriculture. When Zhou Enlai was on a trip to Moscow that month, Suslov told him there would be some differences with Khrushchev. When Zhou asked for examples, Suslov could not give a straight answer.73 Even though the evidence indicates that Khrushchev was not universally unpopular and that Khrushchev's detractors did not have fundamental policy differences with him, dissatisfaction was present. However, this unhappiness started long before Khrushchev was ousted. As early as November 1961, Nikolai Kamanin, then a Soviet Air Force officer in charge of training cosmonauts, wrote in his journal that "the people do not love Khrushchev"—a theme to which he returned several times. In February 1962, Kamanin complained about a lack of meat and other groceries in Moscow and referred to rumors in Minsk of an attempted coup against Khrushchev.74 Yet, even though Kamanin had long made note of rumblings in society, he was caught by surprise when Khrushchev was removed: "I did not think it would be that fast. Brezhnev, Suslov, and Kosygin showed great bravery and outsmarted one of the cleverest people of the modern era."75 Vladimir Semichastnyi, who was chairman of the KGB at the time of Khrushchev's ouster, maintains that an anti-Khrushchev group formed only around March 1964—many years after the onset of economic problems and also many years after such debacles as the split with China.76 Why the sudden change? In 1991, Tompson hypothesized that one key reason for the move against Khrushchev was that he had planned to promote a younger generation before retiring.
The new evidence strongly supports Tompson's conclusion and provides crucial new details.77 As Mikoyan later said: “Now I think that Khrushchev himself provoked them, having promised after vacation to introduce suggestions on making the Presidium younger.”78 According to Brezhnev's rough notes, Khrushchev told him in February 1964 to think about moving Lithuanian party boss Antanas Sniečkus in order to “push forward younger people.” Khrushchev said one possibility for Sniečkus was chairman of the Supreme Soviet—a position that Brezhnev, who was three years younger than Sniečkus, held at the time.79 The notes further mention that at a dinner lasting more than three hours on 7 July 1964, Khrushchev criticized “everyone”: “Shelest understands nothing. . . . Voronov does not understand animal husbandry. . . . Send Shelest on vacation—make him sit and not interfere. . . . Polyanskii—you are a dangerous person—your situation must be changed.” Brezhnev writes that Khrushchev, using foul language, remarked that “it is necessary to separate if we do not understand one another. I paid my dues—I am going to retire and fish.” The notes of the meeting conclude with the phrase, “The mood among us was heavy.”80 A few days later, at a CPSU Central Committee plenum on 11 July, Khrushchev announced that Brezhnev would be removed from his position as chairman of the Supreme Soviet (the Soviet head of state) and replaced with Mikoyan. When the applause ended, Khrushchev disdainfully said to Brezhnev, “They are glad that you have been removed. Those who are not removed cannot be given new posts. It gladdens people that you were removed.” To save face, Brezhnev said, “I don't think so. They are sending me off well.” Then, Khrushchev proceeded to explain that this personnel change was necessary because of a need to raise the prestige of the Supreme Soviet and because the leader of the body had to be a more democratic figure. 
The obvious implication was that Brezhnev was incapable of raising the prestige of the body or acting democratically.81 At a CPSU Presidium meeting in August 1964, Khrushchev, while attacking Polyanskii, threatened that "this disagreement is forming into a sort of line"—a harsh accusation implying that Polyanskii was at risk of removal or worse. Kosygin was criticized as well: "Kosygin is not here. But this smells of Kosygin." Khrushchev indicated he had no plans to retreat: "Perhaps this is a matter of age, but I get upset, I worry, I react. Apparently, for as long as I am alive, I will react. There is nothing I can do about it."82 Khrushchev again raised the possibility of major personnel reshuffles at a meeting on 17 September, referring to "three levels": young, middle-aged, and old.83 According to Brezhnev's rough notes, Khrushchev said:

It is necessary to make further quiet tweaks in the composition of the CC—(future). Some [members] have matured. Some have become old. I am referring to the Presidium, [where] we have many who are allowed to take two month-long vacations [members of the Presidium over sixty years of age were allowed to take both summer and winter vacations]. This is not embellishment [eto ne ukrashenie]. I myself become tired within two hours.84

Khrushchev raised the idea of creating a body of "inspectors" allowing senior Communists to continue to play a role, like the Group of Military Inspectors that existed in the Ministry of Defense.85 On 1 October, Khrushchev met Petro Shelest in Simferopol. Khrushchev complained to him that the CPSU Presidium was a "society of old men" (obshchestvo starikov). He complained about Suslov, Podgornyi, and even Mikoyan, but especially Brezhnev, whom he described as an "empty person" (pustym chelovekom).
Khrushchev said, “We will summon a plenum, and we will put each in his place, show them how everyone should work and where.” Shelest concludes that Brezhnev must have worried that if the plenum took place, he would be the first to be punished: “Therefore he was mortally afraid of the upcoming plenum, and he had only two options: to ‘force the matter’ with Khrushchev or to give up everything to him.”86 According to Khrushchev's son, his father spent his vacation thinking about the succession. Khrushchev allegedly wanted to avoid the struggles that ensued after Joseph Stalin's death—an outcome that could be prevented only by introducing laws on leadership change. If every member of the Presidium knew what kind of term limits they faced, they would be willing to act more boldly without worrying about their flanks. Younger Central Committee members would see a future to their careers. Khrushchev also wanted to enlarge the size of the Presidium by adding younger people with initiative.87 Brezhnev's notes say that during the Presidium meetings where Khrushchev was criticized before his resignation, Brezhnev complained: “You recently started to introduce an idea—that the Presidium has aged and needs to be expanded. The Secretariat was [assigned] to find new members of the Presidium.”88 Moreover, the timing of one particular act on Brezhnev's part suggests that the extant dissatisfaction with Khrushchev was a necessary condition for the coup, but not a precipitating one. According to Yurii Korolev, who worked in the party apparatus, in September 1964, shortly before the coup, Brezhnev summoned a group of party workers and gave them the task of visiting the regions. In an “unclear fashion, through hints,” Brezhnev told them to gauge how people were reacting to the split in the party into industrial and agricultural sectors. 
He was so cautious that he even assured them the administrative change “was a correct action.” The group later reported to Brezhnev in careful terms, but the information was enough to allow him to make a move to save his own political career.89 Svolik argues that institutions in 1964 facilitated collective action and helped the plotters defeat Khrushchev through activation of the power of the Central Committee. Yet the evidence tells a different story. Institutions did not prevent Khrushchev from violating collective leadership in the first place. Indeed, institutions worked against the plotters. Even if norms against factionalism had not been so powerful, institutions themselves would have prevented the plotters from simply arresting or assassinating Khrushchev. Perhaps most revealingly, despite the potential charges they could use against Khrushchev, the conspirators still did not use established party rules or institutions to engineer his removal. Moreover, although Brezhnev and his allies believed that Khrushchev was unpopular enough to make a coup possible, they did not feel pressure from below and were unsure about the strength of that sentiment. Even after Khrushchev was summoned to Moscow, their plot was not assured of victory, and they feared that Khrushchev would still use the Central Committee against them, as he had done in June 1957. Anyone could use institutions to win—the game was not about using the rulebook fairly but about using it against someone else. Ultimately, the goal of Khrushchev's opponents was not to “activate” the Central Committee but to use it as a rubber stamp once their desired outcome was a fait accompli. 
Unsurprisingly, Ukrainian party boss Shelest describes Khrushchev's removal as “heinous political villainy committed surreptitiously, through conspiracy and intrigue” that was just like the palace intrigues of old.90 Essentially, the only way that institutional norms helped the plotters is that they facilitated Khrushchev's decision not to fight back in a way that would threaten regime stability. If institutions were actually resilient (as Svolik argues), the plotters would not have been put into a position in which they would have to resort to extra-institutional and risky means to fight for their political lives. They clearly did not believe institutions protected them, and they operated outside the CPSU Presidium and Central Committee. By all accounts, Brezhnev was terrified of Khrushchev. When Shelest suggested that instead of a coup they simply meet and discuss the situation, Brezhnev almost screamed: “I already told you, I do not believe in open conspiracies, whoever speaks first will be the first to be hurled out of the leadership.”91 According to Moscow party boss Nikolai Egorychev, when Khrushchev told Mikoyan to investigate the evidence of a plot, Brezhnev started crying and said, “Kolya, Khrushchev knows everything. He will shoot all of us.” When Egorychev told him they were not violating any party rules, Brezhnev responded: “You don't know Khrushchev well.” Egorychev even had to take Brezhnev to a sink and tell him to clean himself up.92 Victor Louis, a Soviet journalist with long-standing ties to the KGB, claims that during preparation and implementation of the coup Brezhnev slept in his office in his clothes with a pistol under his pillow. Brezhnev's family allegedly waited with two packed cars at a dacha near Moscow in case they needed to flee to the Monino military airport.93 Semichastnyi later claimed that Brezhnev's repeated delays to move against Khrushchev made Semichastnyi fear for the fate of the coup and for his own fate as well. 
Semichastnyi even believed that if Ignatov's bodyguard had not leaked the plot to Khrushchev's son (and thereby spurred Brezhnev into action before it was too late), the October plenum might never have happened.94 Party norms, despite their clearly elastic nature, prevented the plotters from taking the easiest step to defeat Khrushchev: arresting or assassinating him. As Postnikov reveals, Shelest's written diaries, available at the Russian State Archive of Social-Political History (RGASPI), include a crucial detail omitted from his published memoirs. Shelest writes that one of the options considered by the conspirators was to arrest Khrushchev on the road. They chose not to do this because they feared that Khrushchev's bodyguards would open fire, and also because “in case of arrest how would it be justified, motivated?. . . This is already treason, a coup. And the consequences are a liability.” He also notes, however, that they came to this conclusion only after extended arguments and conversations. The fact that the conspirators even considered such an action demonstrates how much they were afraid of Khrushchev's power.95 Brezhnev allegedly even wanted to murder Khrushchev. According to Semichastnyi's memoirs, Brezhnev asked the KGB to arrest Khrushchev after returning from Leningrad and isolate him, but Semichastnyi refused. Semichastnyi then says Brezhnev “directed the conversation to the possibility of the physical liquidation of Khrushchev” (sklonil razgovor k vozmozhnosti fizicheskoi likvidatsii Khrushcheva).96 A former senior KGB operative claims that, in 1988, after reading Semichastnyi's account, Gorbachev called for an investigation into the matter. Reportedly, Semichastnyi was asked to write a formal analysis of the organization's experiments with poisons and Brezhnev's alleged instructions, but he declined.97 The struggle for power was intense and operated with strange rules that worked against the plotters. 
Khrushchev was unpopular but not universally disliked. Shelest writes: “It would be wrong to say that N. S. Khrushchev did not enjoy a certain authority and respect, popularity among the party and people. Saying otherwise would violate truth and history.”98 Therefore, the plotters simply could not rely on “institutions”—in fact, they broke the rules to rig the game. The danger was that Khrushchev's allies would use a Central Committee plenum to create an unpredictable situation. On 14 October, after the move against Khrushchev began, a Central Committee member wrote a memorandum: “The Presidium is meeting, something is going on. I agree that something must be said about deficiencies, but it is wrong to resort to extreme measures. . . Brezhnev is vain and power-hungry.” That writer was summoned to a “certain place” so her activities could be controlled.99 Another Khrushchev supporter, Leonid Efremov, was sent to remote Kyzyl to give an award, thus preventing him from participating in the plenum.100 According to Semichastnyi, when the CPSU Presidium was meeting on 13 October, he received phone calls from members of the CPSU Central Committee and members of the USSR Presidium of the Supreme Soviet who said to him: “Why are you sitting [doing nothing], they are removing Khrushchev, and you are inactive!” Semichastnyi called Brezhnev to warn him that the discussion in the CPSU Presidium must not be allowed to drag on: “there could be unpredictable actions: there is a lot of excitement around.”101 He also warned Brezhnev that “if a group of members of the CC come, I won't be able to stop them. I cannot use physical violence against them. Some will come to save you, others will come to save Khrushchev.” When Brezhnev exclaimed this must not be allowed to happen, Semichastnyi asked: “And you . . . will do what? 
Refuse to allow them to appear in the waiting room?” He emphasized that one side was asking him, as head of the KGB, to “call you [Brezhnev] to order” (prizval vas k poryadku). Semichastnyi later told an interviewer, “It's unlikely I would have been able to hold on through the second night as the demands to arrest Brezhnev and the other plotters against Khrushchev were getting more insistent.” When Polyanskii's speech at the Presidium meeting went on too long, Kosygin interrupted: “we should not talk so long; otherwise, as in 1957, the members of the CC will come and carry us all out of here.”102 According to Khrushchev's son, CPSU Presidium members wanted to prevent Khrushchev from persuading any of them to defect from the coup. Hence, they agreed not to answer their telephones.103 Far from activating institutions, the plotters did their best to ignore and neutralize them. If institutions had been working properly, a real debate would have occurred when the CPSU Central Committee finally met after the CPSU Presidium meetings. During those Presidium meetings, Khrushchev asked to be allowed to address the plenum, something he was entitled to do under party rules. However, Brezhnev interrupted him emphatically: “This will not happen.”104 When the Central Committee plenum began, Brezhnev stressed that the decision to remove Khrushchev should be approved immediately, without discussion. Polyanskii proposed that Brezhnev be named CPSU First Secretary, and the decision was confirmed not by a secret ballot but by a show of hands.105 The entire plenum ended quickly, serving essentially as a rubber stamp for the CPSU Presidium's decision.
Semichastnyi and Egorychev both later attested (in interviews with Yurii Aksyutin) that everything was rushed through because they were afraid that discussions would spiral out of control, that Brezhnev himself might be criticized, and that other CPSU Presidium members would be vulnerable if a free discussion took place about mistakes during the Khrushchev era.106 When a KGB operative later asked Andropov why no debate had occurred at the plenum after Suslov's presentation, Andropov answered: “And did you not count how many members of the plenum had been appointed during the Khrushchev era?”107 According to Tompson, Gennadii Voronov, who in 1964 was chairman of the Council of Ministers of the Russian Soviet Federative Socialist Republic, believed that, “if Khrushchev had defended himself at the plenum, his supporters would have rallied to his defense and things might well have got out of hand.”108 Polyanskii prepared a much harsher speech against Khrushchev in case the latter insisted on speaking to the plenum or the proceedings somehow began to slip out of the conspirators' control.109 Luckily for them, it turned out to be unnecessary. Evidence for the extent to which rules were being violated can be seen through a close reading of the different records of the October 1964 Central Committee plenum in the archives: one contains the verbatim transcript (nepravlennaya stenogramma) and the other is an edited version (stenograficheskii otchet) that was disseminated as the official version. First, according to the edited version, participants of the plenum spontaneously spoke out against having an open debate, whereas the verbatim transcript shows that in fact Brezhnev was the one who instructed the Central Committee to vote without any debate. Second, according to the edited version, Suslov told the plenum that Khrushchev had decided he did not want to speak—something that was clearly untrue.
Third, the edited version claims that Brezhnev's candidacy to be CPSU First Secretary was proposed spontaneously by the participants of the plenum, whereas the verbatim transcript makes clear that Brezhnev was nominated right away by Podgornyi and that no discussion about the nomination took place. Podgornyi simply asked, “Are there any other proposals?” and he then immediately said there were none.110 Memoir accounts provide further details about how the plenum was undermined. According to Mukhitdinov, when Marshal Semen Timoshenko heard Podgornyi's proposal in favor of Brezhnev, the marshal said: “Who? Lenya as first secretary? Geez . . . ” By the time Timoshenko raised his hand to request the right to speak, the plenum had already decided not to open the proposal up for discussion.111 When the decision not to allow discussion was announced, “there was an uproar in the hall, different shouts were heard,” according to an eyewitness.112 Semichastnyi was “deeply surprised” when Brezhnev announced there would be no debate. Just two hours earlier, the CPSU Presidium members had agreed to hold an open debate, and Semichastnyi did not realize that Brezhnev would annul that decision. He surmised that Brezhnev and the other plotters must have worried “it was still unclear what they would hear in their own direction” if a debate took place.113 The questionable nature of Brezhnev's behavior was visible even to those who did not attend the plenum. Kamanin wrote in his journal: “The removal of Khrushchev can only be welcomed—this is a benefit to the USSR and the socialist camp—but it would be stupid not to see that such coups damage the authority of our political system. Our country lacks strong constitutional laws.”114 Why did Khrushchev not fight back? Even now, decades later, Khrushchev's thinking remains murky. However, here we do see institutions playing a role, although not in the way predicted by Svolik.
Khrushchev apparently felt that his opponents were relatively unified and that a fight therefore would not be worthwhile. He also apparently sensed that fighting back would have required acts that threatened regime stability and raised questions about his legitimacy as leader. On the eve of the plot, Khrushchev evidently finally started to suspect something was up. In September, Vasilii Galyukov, former bodyguard of Ignatov (one of the plotters), told Khrushchev's son Sergei about a plot against his father. Galyukov provided specific details regarding conversations between Ignatov and Brezhnev about feeling out whether other individuals would support a move against Khrushchev. Khrushchev disclosed this revelation to Podgornyi, who laughed and denied it was true. Khrushchev decided he would leave Moscow two days later, on 30 September, to go on vacation in the south, but he did tell Mikoyan to speak with Galyukov.115 According to Ignatov, Khrushchev said to the members of the CPSU Presidium: “You are planning something against me, friends. Look here, if something is up I will smash you like puppies.” When they all vowed that such a thing was impossible, Khrushchev told Mikoyan to investigate.116 On 29 September, in a conversation with the Indonesian leader Sukarno, Khrushchev even joked that “my friends want to evacuate me from Moscow so I go on vacation, because they still have some shreds of conscience.” According to Khrushchev, he was given a short stay because of a demand to meet Sukarno, but then they said, “OK, but tomorrow you must get the hell out of Moscow.”117 On 3 October, Mikoyan arrived in the Georgian resort town of Pitsunda bringing rough notes of a conversation with Galyukov. Both Khrushchev and Mikoyan spoke with Evgenii Vorob'ev, party secretary of Krasnodar, whom Galyukov had named as one of the conspirators. 
Vorob'ev denied any involvement in intrigue.118 Khrushchev did not act immediately, perhaps because he knew that Brezhnev was in Berlin and Podgornyi was flying to Chișinău in Moldova on 9 October. But on 11 October, according to Polyanskii, Khrushchev called him (he had been left behind to manage daily affairs) to say that he knew about the conspiracy and promised to return in three or four days and “give them hell” (kuz'kinu mat’).119 Gaston Palewski, a personal envoy of Charles de Gaulle, met Khrushchev shortly before the Soviet leader left for Moscow and was removed from power. On 9 October, Kosygin told Palewski that he could meet Khrushchev at 11:00 a.m. on 13 October. On 12 October, Palewski was told he should stay for lunch. He arrived in Sochi the same day. At 11:00 p.m., he was woken up and told that Khrushchev would instead see him at 9:30 a.m. and lunch was cancelled. During the meeting, Khrushchev “made an allusion to the president of the French Republic and noted that ‘only death was likely to put an end to the activity of a statesman.’” According to Palewski, Khrushchev was “in excellent form and gave no sign of suffering from age or ill health.” Palewski told Harry Hohler, the British first minister in Paris, that his conclusion was “Mr. Khrushchev was fore-warned but was confident that he could deal with the opposition.”120 Khrushchev, however, accepted the CPSU Presidium's decision. Sergei Khrushchev later concluded that his father believed the rumors were probably true but decided not to act on them because he “was immensely tired in a moral and physical sense.” After Khrushchev's 70th birthday in April, he allegedly spoke seriously about retiring.
If Khrushchev had decided to battle against his associates on the Presidium, it would have meant fighting against those same individuals he had promoted to the top over the previous seven years.121 Moreover, Khrushchev would not have felt a need to protect a particular policy agenda—he was not facing a coherent opposition platform. At a Presidium meeting on 14 October, Khrushchev said, “I told comrade Mikoyan—I will not fight, we share the same foundation [osnova odna]. Why would I look for paints and smear you?”122 Khrushchev may also have doubted that it was worthwhile to involve the CPSU in another power struggle merely for one final reorganization of the leadership to affirm his legacy and set the stage for a prolonged (semi)-retirement. Brezhnev wanted to act in a way that guaranteed victory while minimizing the extent to which party norms were violated. If a coup went too far (e.g., if it involved the arrest of Khrushchev), that would damage the prestige of the new leadership. Actions that might destabilize the regime were especially taboo. For Khrushchev to defeat his opponents, he would likely have had to rely on the military or KGB, including senior officials below the top leadership (e.g., regional commanders). He would have had to explain why a potential majority of the CPSU Presidium was “anti-party.” The struggle would have damaged the very norms—on removing the power ministries from politics and an institutionalized succession—that Khrushchev wanted to introduce. Going quietly, moreover, would enhance Khrushchev's legacy. Khrushchev told the Presidium on 14 October: “I am glad that, finally, the party has matured and can control any individual.”123 If that consideration did indeed shape Khrushchev's thinking, it means that institutions played at least some role—albeit in a way not predicted by Svolik. However, such behavior contradicts another core assumption of much of the political science on authoritarian regimes: individuals as power maximizers.
Yet Khrushchev's behavior was far from unique. Consider the case of Hua Guofeng, Mao Zedong's designated successor, who said: “If the party had another internal struggle, the regular people would suffer. I stubbornly resigned from all positions. I told Marshal Ye [Jianying] before I did it. Some said I was a fool. Some said I was too honest. I do not regret it.”124 The good of the party as an institution played a role in Hua's thinking and, despite Khrushchev's clear love for power, may also have affected the Soviet leader's considerations. Why did Khrushchev fail to understand that his actions would lead to his removal by deputies frightened for their political lives? Characterizing his behavior as a stupid mistake simply does not do justice to the Soviet leader—he was an exceptionally cunning individual. What explains this puzzle? First, leaders operate in an environment in which signals are not always obvious: “Not only rational leaders, but rational experts, can look at the same information, and in the absence of any private information, come to different conclusions about expected outcomes.”125 Marxist-Leninist systems are particularly hard to judge, and even top figures regularly misinterpret signals. Second, Khrushchev did have many reasons to be confident. Counterintuitively, it was precisely because he had so many reasons to be confident that his competitors were able to seize the rare opportunity to make a move. Khrushchev was a skilled political operator. He had already engineered the defeat of figures such as Lavrentii Beria (who controlled the state security organs), Georgii Malenkov (Stalin's heir-apparent and initial successor), Molotov (Stalin's former right-hand man), and Zhukov (the legendary wartime marshal who seized Berlin). The U.S.
Central Intelligence Agency (CIA) believed that Khrushchev was marked by “a shrewd native intelligence, an agile mind, drive, ambition, and ruthlessness.”126 Semichastnyi later said in an interview that Khrushchev “had crushed the likes of Malenkov and Molotov—all of them. As the saying goes, nature and his mama provided him with everything he needed: firmness of will, quick-wittedness and capacity for fast, careful thinking.”127 Yet no one, including Khrushchev, had a clear idea about what was going on in the last years of Khrushchev's rule. Stories of alleged plots were common, and even high-ranking figures often misjudged the political situation. As early as 1962, one individual found Molotov, who was serving as an envoy to the International Atomic Energy Agency, dancing with his wife. Molotov explained, “I am in a good mood today. They removed Khrushchev.”128 Foreign observers reached very different conclusions about Khrushchev's position. In March 1964, both the CIA and Mao Zedong raised the possibility that Khrushchev would be removed from office.129 However, Walter Ulbricht, the leader of the German Democratic Republic (GDR) who regularly met with Soviet officials and had greater insight into Khrushchev's inner circle, did not realize opposition was growing. Brezhnev was in the GDR on the eve of the coup and did not return to the USSR until his colleagues alerted him that everything was prepared. Ulbricht escorted Brezhnev to Schönefeld Airport and told the man who was about to replace Khrushchev: “Send my warmest greetings to my dear friend Nikita Sergeevich.”130 Khrushchev himself understood how hard it was to read politics in Marxist-Leninist regimes: Politics is like the old joke about the two Jews traveling on a train. One asks the other: “So, where are you going?” “I'm going to Zhytomyr.” “What a sly fox,” thinks the first Jew. 
“I know he's really going to Zhytomyr, but he told me Zhytomyr to make me think he's going to Zhmerynka.”131 Perhaps one of the most significant factors causing Khrushchev to overestimate his strength is that he had good reason to believe two individuals who operated key nodes in the Marxist-Leninist power structure would support him: Semichastnyi in the KGB and Malinovskii in the military. The importance of the KGB and armed forces for the outcome can be seen in how their positions affected the attitude of key Presidium members. The memoirs of Vladimir Novikov include the following passage: I asked: ‘What, are they planning to remove Khrushchev?’ [Dmitrii] Ustinov confirmed this. I had a question, how would the military and KGB feel about this? I received an answer: everything is in order here, there will be complete support. Then I agreed.132 When Kosygin was told about the plot, he asked, “Who is the army and state security with?” Upon learning that Malinovskii and Semichastnyi were “aware of the course of events” (v kurse dela), Kosygin also agreed.133 Suslov also asked who the military and KGB supported before giving an answer.134 Therefore, three crucial figures made a decision only after learning the lineup among the power ministries. If Khrushchev had been able to maintain Semichastnyi's loyalty, the coup would almost certainly have gone a very different way. The former KGB head writes in his memoirs that Brezhnev and Podgornyi understood that if they had not guaranteed the support of the KGB they would have been unable to remove Khrushchev. 
Semichastnyi played an important role in guaranteeing that support.135 On 12 October, the day the conspirators moved against Khrushchev, Brezhnev called Semichastnyi's office at Lubyanka, the KGB headquarters, every hour to ask for an update.136 Attempts by the party secretary of Ukraine to call Khrushchev in the south to warn him about the coup were blocked.137 When Khrushchev arrived in Moscow after being summoned by the conspirators, he was met by Semichastnyi with several KGB operatives.138 Shelest remarked that Khrushchev, still officially First Secretary of the CPSU and chairman of the Council of Ministers, was not even allowed to call his wife.139 One former member of the KGB's Ninth Directorate, which was in charge of bodyguard services, recalls that for three days they had been on full alert in case of unrest in the army or special forces. The commander of Brezhnev's personal guard spent these nights at Brezhnev's door with an automatic weapon in his hands.140 Khrushchev had reason to be surprised by this turn of events, having gone to great lengths to remove the KGB from politics. In the summer of 1957, Khrushchev gave a toast in which he said to Serov, then head of the KGB: “The KGB is our eyes and ears, but if it looks in the wrong direction, then we will tear out their eyes, their ears we will rip off, and we will act as Taras Bul'ba said: it was I who gave birth to you, and it is I who will kill you.”141 In October 1957, Khrushchev told Semichastnyi that the examples of Beria and Zhukov had caused party leaders to conclude that the KGB chairman and the minister of defense should not be members of the Presidium. If they were to become members, he said, “this will bring them much more power, and they do not always use it appropriately.”142 On 24 February 1959, Khrushchev publicly declared his intention to reduce the size of the state security organs.143 He had to nominate figures from the party apparatus, such as Shelepin and Semichastnyi, to run the KGB.
These two individuals were chosen because they lacked background in that organization. Khrushchev told Shelepin that his mission was to restore to the KGB the party's style and methods of work: “I have one favor: do everything you can to ensure that they do not eavesdrop on me.”144 Semichastnyi, who had no experience in state security, was stunned when Khrushchev told him he would replace Shelepin as head of the KGB, but Khrushchev cut short his objections, explaining that Shelepin had been picked precisely for that reason: the KGB did not need a specialist but instead someone who “would understand well why these organs exist and execute in them the policy of the party” and continue to enact reforms.145 In an interview many years later, Semichastnyi expressed some guilt about his behavior, remarking, “even now, many years afterward, it is not very pleasurable for me to recall this history. Indeed, I am in fact his protégé.”146 Crucially, until Khrushchev was removed, his control over the state security organs had been nearly absolute. Semichastnyi told Brezhnev, “If you draw things out, [news of the plot] will reach Khrushchev. And he will order me to arrest all of you. And I will indeed arrest you, Leonid Il'ich, do not doubt this.”147 Unlike the KGB, the military did not actively take part in Khrushchev's removal. As the Russian historian Yuliya Abramova has argued, the role of the armed forces was “more passive than active.”148 Yet this passivity was necessary for the plot to succeed and is especially notable in light of the crucial role the military played in the power struggles of 1953 and 1957. Semichastnyi later spoke about the need to guarantee Defense Minister Malinovskii's support for the coup: No one wanted to end up in the position of Molotov, Malenkov, Kaganovich, and Shepilov, who joined them.
Khrushchev was after all the commander-in-chief, and although a direct clash with him was highly unlikely, nevertheless this possibility could not be excluded until the last minute.149 Brezhnev was wary of meeting with Malinovskii and waited until the last moment to speak with him: “If R. Ya. Malinovskii had not supported the plan, everything would have become extremely complicated [chrezvychaino oslozhnilos’ by].” Most interestingly, “on the eve of the conspiracy L. I. Brezhnev went to the GDR and returned only after Malinovskii gave his consent [to the plot] on 10 October.”150 On 12 October, at 4:00 p.m., Marshal Konev called Pravda and demanded changes be made to his upcoming article on the commemoration of the liberation of Ukraine: all paragraphs on Khrushchev were to be removed.151 General Afanasii Beloborodov, commander of the Moscow Military District, also supported the removal of Khrushchev. The KGB's military counterintelligence units were ordered to follow even the slightest movements in the Soviet Army and to inform the KGB immediately if troops moved toward Moscow.152 The general commanding the Transcaucasian Military District escorted Khrushchev from his vacation home to the airport, which was clearly meant to ensure Khrushchev's departure.153 Military cadets were sitting on the floor of the coatroom at the entrance to the Central Committee plenum.154 Victor Louis later recalled having seen a convoy of military trucks, approximately four kilometers long, approaching Moscow on the Minsk Highway during the coup.155 Just as in the case of Semichastnyi, Khrushchev had some reason to believe Malinovskii would support him. Until October 1964, Malinovskii had been Khrushchev's reliable ally in the Soviet Army. 
Malinovskii's toadyism and Khrushchev's control over the armed forces are visible in an extraordinary letter from Marshal Vasilii Chuikov to the Presidium on “the abnormal situation” in the Ministry of Defense—a situation that, in Chuikov's view, had become particularly poor at the end of 1962. This document shows the extent to which Malinovskii had, until the coup, served as Khrushchev's loyalist in the armed forces. Chuikov maintains that, “using the patronage of N. S. Khrushchev, Comrades Malinovskii and [Andrei] Grechko . . . in an unchecked fashion managed the ministry, and in many cases acted in an arbitrary way.” The Main Military Council, which was supposed to meet at least once a quarter, had met for the last time in February 1963, a year and a half before Chuikov's letter. But, according to Chuikov, even at that meeting the only discussion concerned a non-serious issue, and major issues that demanded collective discussion of the top military commanders were ignored. A similar state of affairs allegedly existed in the Collegium of the Ministry of Defense, where any ideas “unfavorable” to Malinovskii and Grechko were seen as undermining “one-man command” and the authority of the minister. Therefore, all decisions were taken entirely by the minister “without accounting for the members of the Council or Collegium.” Issues were determined behind closed doors with no discussion. Malinovskii's obsequiousness toward Khrushchev had extended to interpretations of the war against Nazi Germany. “The matter reached the extent that assertions were made that we are indebted for almost all victories to Khrushchev, Malinovskii, and those with them.”156 Sergei Khrushchev writes in his memoirs that his father “had every reason to count on” Malinovskii.
In 1943, Khrushchev allegedly saved Malinovskii from Stalin's wrath.157 A CIA cable of 27 October 1964 notes that “on the whole Malinovskiy is regarded as a Khrushchev man.”158 Although Malinovskii told the plotters the military would not get involved, Semichastnyi said “we were sure of its [the army's] support.”159 Therefore, Khrushchev had good reason to believe he was in a secure position. Perhaps that was precisely why he was vulnerable. Ignatov holds this position, remarking, “Of course, Khrushchev's self-confidence greatly betrayed him. He was, without a doubt, a real man with extreme cunning, but here he slipped up. Otherwise Brezhnev and company would have been smashed.”160 Semichastnyi similarly concludes that “Khrushchev was so sure of himself and was so sure that he completely controlled the situation in the country, he simply did not believe” that a coup was possible.161 As Tompson writes: Khrushchev at various points seems either to have placed too much trust in his colleagues or simply to have underestimated their ability to mount a challenge to his leadership. Having defeated Malenkov, Molotov, Kaganovich and others, he is unlikely to have trembled with fear at the news that N. G. Ignatov was plotting against him.162 The way Khrushchev achieved victory over his opponents in 1957 and was removed from power in 1964 is sometimes seen as a prologue to the stagnation of the Brezhnev era. In both cases, the interests of the regions as expressed in the Central Committee had triumphed over the leadership at the top, and the top leadership had been selected in an ad-hoc way. As one group of Russian historians and archivists argues, The removal of Khrushchev in October 1964 became a fact of great political importance, which had significant impact on the political consciousness of both the highest Soviet leaders as well as the Soviet bureaucracy more generally. 
The entrenched party-state “nomenklatura” for the second time in a relatively short period demonstrated that it had real power in resolving the most important question—about the highest positions of power in the country. This circumstance played its role in the gradual change in the center's regional policy in the 1970s, in the proclaimed focus on the guarantee of so-called stability in cadres—the immovability of cadres and the weakening of centralized control over regional leaders.163 However, the evidence provided in this article suggests a somewhat different interpretation. Some level of dissatisfaction within the Central Committee was a necessary condition for the coup to succeed, yet Brezhnev and the other conspirators deliberately prevented the Central Committee from participating in those discussions. CPSU Presidium members acted mainly for reasons of self-preservation, not for the interests of the Central Committee. Still, it would be easy to draw less nuanced conclusions. According to Gorbachev's close ally Aleksandr Yakovlev, even decades later, Gorbachev did not move aggressively with reforms because of a fear the party elite would remove him as they had removed Khrushchev in 1964. Yakovlev did not share Gorbachev's evaluation of the situation, believing instead that the old guard “were wretched cowards” who had “been shaking in fear since Stalin's time” and that these conservative figures would acquiesce if Gorbachev rapidly promoted younger liberals into the leadership. Anatolii Chernyaev similarly felt that the Central Committee would have approved if Gorbachev had removed conservative figures like Egor Ligachev.164 These findings suggest that “powerful” and “weak” are problematic labels for authoritarian leaders. At least sometimes, even the highest leaders have little understanding of their own position—not because they are foolish, but because their position is inherently ambiguous.
Scholars and policymakers might be better served to disaggregate the structural features that both favor and weaken a leader, while also identifying what counterfactuals would make one of those forces decisive. At the very least, we should be skeptical of arguments that authoritarian politics is a simple popularity contest. The inability to determine what precise constellation of variables would cause a leader's defeat even most of the time may strike political scientists as unsatisfying. Yet the finding that a particular phenomenon is marked by high levels of contingency (and why) should be just as valuable—especially when the evidence strongly bears out that finding.165 1. Milan Svolik, The Politics of Authoritarian Rule (Cambridge, UK: Cambridge University Press, 2012). 2. Before Khrushchev was removed, Sovietologists in the United States had been split between two camps. One school believed that Khrushchev had achieved dominance and had become essentially invulnerable, whereas another saw him as continually fighting and negotiating with powerful interests. For a discussion, see Carl A. Linden, Khrushchev and the Soviet Leadership 1957–1964, 3rd ed. (Baltimore: Johns Hopkins Press, 1970); and Robert Conquest, Power and Policy in the USSR: The Struggle for Stalin's Succession, 1945–1960 (New York: Harper & Row, 1967). 3. William J. Tompson, “The Fall of Nikita Khrushchev,” Soviet Studies, Vol. 43, No. 6 (1991), pp. 1,101–1,121; Geoffrey Swain, Khrushchev (London: Palgrave, 2016); and William Taubman, Khrushchev: The Man and His Era (New York: Norton, 2003). 4. Bruce Bueno de Mesquita and Alastair Smith, The Dictator's Handbook: Why Bad Behavior Is Almost Always Good Politics (New York: Public Affairs, 2011), p. 14. 5. Jessica L. P. Weeks, Dictators at War and Peace (Ithaca, NY: Cornell University Press, 2014), p. 4. 6. Carl Linden, “Khrushchev and the Party Battle,” Problems of Communism, Vol. 12, No. 5 (October 1963), p. 28. T. H. 
Rigby, however, in the same edition of Problems of Communism accurately states that no figure would openly dare to confront Khrushchev or act as a member of an oppositional grouping while the Soviet leader remained in power. See Thomas H. Rigby, “The Extent and Limits of Authority,” Problems of Communism, Vol. 12, No. 5 (October 1963), pp. 36–41. 7. Alec Nove, An Economic History of the USSR (London: Penguin Press, 1969), pp. 333, 368. 8. Leonard Schapiro, The Communist Party of the Soviet Union, 2nd ed. (New York: Vintage Books, 1971), p. 573. 9. Ian D. Thatcher, “Brezhnev as Leader,” in Edwin Bacon and Mark Sandle, eds., Brezhnev Reconsidered (New York: Palgrave Macmillan, 2002), p. 19. 10. Mary McAuley, Soviet Politics 1917–1991 (Oxford, UK: Oxford University Press, 1992), p. 74. 11. Philip G. Roeder, Red Sunset: The Failure of Soviet Politics (Princeton, NJ: Princeton University Press, 1993), p. 7. 12. Robert V. Daniels, “Political Processes and Generational Change,” in Archie Brown, ed., Political Leadership in the Soviet Union (Basingstoke, UK: Palgrave Macmillan, 1989), p. 113. 13. “Political authority” here refers to the ability of a leader to provide a vision for the country that is widely endorsed by the leader's associates. See James G. Richter, Khrushchev's Double Bind: International Pressures and Domestic Coalition Politics (Baltimore: Johns Hopkins University Press, 1994). 14. Swain, Khrushchev, p. 163. 15. Barbara Geddes, “What Do We Know about Democratization after Twenty Years?” Annual Review of Political Science, Vol. 2, No. 1 (1999), pp. 115–144; and Jennifer Gandhi and Adam Przeworski, “Authoritarian Institutions and the Survival of Autocrats,” Comparative Political Studies, Vol. 40, No. 11 (November 2007), pp. 1,279–1,301. 16. Svolik, The Politics of Authoritarian Rule, pp. 94–100. 17. Tompson, “The Fall of Nikita Khrushchev,” p. 1,105. 18. Archie Brown, The Rise and Fall of Communism (New York: HarperCollins Publishers, 2009), pp.
264–265; and T. H. Rigby, “The Soviet Political Executive, 1917–1986,” in Brown, ed., Political Leadership in the Soviet Union, p. 40. 19. Bueno de Mesquita and Smith, The Dictator's Handbook, pp. 156–157. 20. Weeks, Dictators at War and Peace, p. 161. 21. Svolik, The Politics of Authoritarian Rule, p. 98. 22. “Anonimnaya zapiska delegata XXII s”ezda KPSS Nikite Sergeevichu Khrushchevu,” 22 October 1961, in N. G. Tomilina, ed., Boi s “ten'yu” Stalina: Prodolzhenie: Dokumenty i materialy ob istorii XXII s”ezda KPSS i vtorogo etapa destalinizatsii (Moscow: Nestor-Istoriya, 2015), pp. 205–206. 23. “Stenograficheskaya zapis’ vystupleniya N. S. Khrushcheva na zasedanii Prezidiuma TsK KPSS po voprosu ob uluchshenii partiinogo rukovodstva promyshlennost'yu i sel'skim khozyaistvom,” 20 September 1962, in A. A. Fursenko, ed., Prezidium TsK KPSS: 1954–1964: Chernovye protokol'nye zapisi zasedanii: Stenogrammy: Postanovleniya, 3 vols., rev. ed., Vol. 1: Chernovye protokol'nye zapisi zasedanii, Stenogrammy (Moscow: ROSSPEN, 2004), p. 576. 24. Nikolai Barsukov, “The Rise to Power,” in William Taubman, Sergei Khrushchev, and Abbott Gleason, eds., Nikita Khrushchev (New Haven: Yale University Press, 2000), p. 62. 25. Sergei Khrushchev, Pensioner soyuznogo znacheniya (Moscow: Vremya, 2010), p. 27. 26. “Pis'mo V. M. Molotova v TsK KPSS (1964 g.),” Voprosy istorii, No. 3 (March 2012), p. 94. 27. Taubman, Khrushchev, pp. 578–619; and Nikolai Mitrokhin, “The Rise of Political Clans in the Era of Nikita Khrushchev: The First Phase, 1953–1959,” in Jeremy Smith and Melanie Ilic, eds., Khrushchev in the Kremlin: Policy and Government in the Soviet Union, 1953–1964 (London: Routledge, 2013), p. 31. 28. “Nepravlenaya stenogramma zasedaniya Prezidiuma TsK KPSS po voprosam, voznikshim vo vremya poezdki N. S. Khrushcheva po sel'skokhozyaistvennym regionam SSSR,” 19 August 1964, in A. N. 
Artizov et al., eds., Nikita Khrushchev: 1964: Stenogrammy plenuma TsK KPSS i drugie dokumenty (Moscow: MFD: Materik, 2007), p. 95. 29. P. A. Abrasimov, Chetvert’ veka poslom Sovetskogo Soyuza, 2nd ed. (Moscow: Natsional'noe obozrenie, 2007), p. 71. 30. S. M. Belov, B. D. Dolgotovich, and I. S. Karpenko, Kirill Mazurov: Vospominaniya, vystupleniya, interv'yu (Minsk: Belorusskoe izdatel'skoe Tovarishchestvo “Khata,” 1999), p. 15. 31. A. I. Shevel'kov, “‘Pochemu ya dolzhen verit’ martovskomu plenumu TsK KPSS?’ ‘Neudobnye’ voprosy partiinomu rukovodstvu: Vesna 1965 g.,” Istoricheskii arkhiv, No. 1 (2013), pp. 4–10. 32. “Plenum TsK KPSS: Iyun’ 1957 goda: Stenograficheskii otchet: Zasedanie tret'e (vechernee, 24 iyunya),” 24 June 1957, in A. N. Yakovlev, ed., Molotov, Malenkov, Kaganovich: 1957: Stenogramma iyun'skogo Plenuma TsK KPSS i drugie dokumenty (Moscow: MFD, 1998), pp. 177–181. 33. Tompson, “The Fall of Nikita Khrushchev,” p. 1103; and “Vospominaniya uchastnika sobytii o neprostykh momentakh v istorii strany: O Khrushcheve, Brezhneve i drugikh,” Argumenty i fakty, No. 2 (14 January 1989), pp. 5–6. 34. A. V. Sushkov, Prezidium TsK KPSS v 1957–1964 gg.: Lichnosti i vlast’ (Ekaterinburg: UrO RAN, 2009), p. 82. 35. Ibid., p. 104. 36. Ibid., p. 107. 37. Averell Harriman, “Memorandum for the President,” 19 October 1964, in Lyndon B. Johnson Presidential Library (LBJL), National Security File (NSF), Country File (CF), Box 219, USSR Cables 10/64–11/64. 38. Sushkov, Prezidium TsK KPSS, p. 187. 39. Ibid., p. 185. 40. Ibid., p. 194. 41. “Ob osnovnykh napravleniyakh v razrabotke plana razvitiya narodnogo khozyaistva na blizhaishii period,” Pravda, 2 October 1964, pp. 1-2. 42. “Thoughts on the Meaning of the Moscow Events,” U.S. State Department Memorandum for McGeorge Bundy, 22 October 1964, in LBJL, NSF, CF, Box 219, USSR Cables 10/64–11/64. I thank Simon Miles for sharing this document with me. 43. “Protokol No. 
9 zasedaniya plenuma Tsentral'nogo komiteta Kommunisticheckoi partii Sovetskogo Soyuza ot 14 oktyabrya 1964 goda,” 14 October 1964, in Artizov et al., eds, Nikita Khrushchev, p. 242. 44. Jonathan Bendor and Jacob N. Shapiro, “Historical Contingencies in the Evolution of States and Their Militaries,” World Politics, Vol. 71, No. 1 (January 2019), pp. 126–161. 45. Henry Ashby Turner, Hitler's Thirty Days to Power: January 1933 (Reading, MA: Addison-Wesley, 1996); and Ian Kershaw, Hitler: Profiles in Power (London: Longman, 1991). 46. Giovanni Capoccia and R. Daniel Kelemen, “The Study of Critical Junctures: Theory, Narrative, and Counterfactuals in Historical Institutionalism,” World Politics, Vol. 59, No. 3 (April 2007), pp. 341–369. 47. Dietrich Rueschemeyer, “Can One or a Few Cases Yield Theoretical Gains?” in James Mahoney and Dietrich Rueschemeyer, eds., Comparative Historical Analysis in the Social Sciences (New York: Cambridge University Press, 2003), p. 315. 48. O. V. Khlevnyuk, “Rokovaya reforma N. S. Khrushcheva: Razdelenie partiinogo apparata i ego posledstviya. 1962–1964 gody,” Rossiiskaya istoriya, No. 4 (August 2012), p. 165. 49. A. V. Postnikov, “Dokumenty federal'nykh arkhivov o smene rukovodstva SSSR v oktyabre 1964 g. (istochnikovedcheskii analiz),” Ph.D. Diss., Vserossiiskii nauchno-issledovatel'skii institut dokumentovedeniya i arkhivnogo dela, Moscow, 2005. 50. Barsukov, “The Rise to Power,” p. 63. 51. “Plenum Tsentral'nogo Komiteta KPSS. Oktyabr’ 1957 goda: Stemogramma: Zasedanie chetvertoe (vechernee, 29 oktyabrya),” 29 October 1957, in V. Naumov et al., eds., Georgii Zhukov: Stenogramma oktyabr'skogo (1957 g.) plenuma TsK KPSS i drugie dokumenty (Moscow: MFD, 2001), p. 379. 52. L. I. Brezhnev, Rabochie i dnevnikovye zapisi, Vol. 3, Leonid Brezhnev: Rabochie i dnevnikovye zapisi: 1944–1964 gg. (Moscow: Istoricheskaya literatura, 2016), p. 426. 53. “Pis'mo G. K. 
Zhukova v Prezidium TsK KPSS,” 16 March 1965, in Naumov et al., eds., Georgii Zhukov, p. 539. 54. Anatolii Ponomarev, “Marshaly: Kak delili slavu posle 1945-go,” Rodina, No. 1 (1995), p. 78. 55. “Protokol No. 194: Zasedanie 3 dekabrya 1958 g.,” 3 December 1958, in Fursenko, ed., Prezidium TsK KPSS, Vol. 1, pp. 340–341. The quoted passage is “Malinovskii was involved in this.” 56. Brezhnev, Rabochie i dnevnikovye zapisi, Vol. 3, pp. 426, 449. 57. Vojtech Mastny, “The 1963 Nuclear Test Ban Treaty: A Missed Opportunity for Détente?,” Journal of Cold War Studies, Vol. 10, No. 1 (Winter 2008), pp. 3–25. 58. V. N. Novikov, “V gody rukovodstva N. S. Khrushcheva,” Voprosy istorii, No. 2 (February 1989), pp. 106, 114. 59. V. F. Nekrasov, Apparat TsK KPSS v pogonakh i bez: Nekotorye voprosy oborony, gosbezopasnosti, pravookhranitel'noi deyatel'nosti v TsK KPSS (40-e–nachalo 90-kh godov XX veka) (Moscow: Kuchkovo pole, 2010), p. 74; and Swain, Khrushchev, p. 187. 60. “V gody ‘Kul'tprosveta’: Beseda N. Kuznetsova s N. N. Mesyatsevym,” Zhurnalist, No. 1 (1989), p. 36. 61. Taubman, Khrushchev, p. 4. 62. “Zapiska N. S. Khrushcheva v Prezidium TsK KPSS po perestroike rukovodstva partiinykh i sovetskikh organov,” 10 September 1962, in N. G. Tomilina, ed., Nikita Sergeevich Khrushchev: Dva tsveta vremeni: Dokumenty iz lichnogo fonda N. S. Khrushcheva, Vol. 2 (Moscow: MFD, 2009), pp. 672–684. 63. “Zapiska sekretarya TsK KPSS V. Titova ob obsuzhdenii reshenii oktyabr'skogo (1964 g.) Plenuma TsK KPSS na sobraniyakh partiinogo aktiva i otklikakh trudyashchikhsya na Postanovlenie Plenuma TsK,” 17 October 1964, in Rossiiskii Gosudarstvennyi Arkhiv Noveishei Istorii (RGANI), Fond (F.) 3, Opis’ (Op.) 22, Delo (D.) 16, List (L.) 14. 64. “Dokladnaya zapiska sekretarei TsK KP Ukrainy i zamestitelei predsedatelya Soveta ministrov SSSR v TsK KP Ukrainy o perestroike partiinykh i sovetskikh organov Ukrainskoi SSR,” 15 October 1962, in O. V. Khlevnyuk et al., eds., Regional'naya politika N. S. 
Khrushcheva: TsK KPSS i mestnye partiinye komitety: 1953–1964 gg. (Moscow: Rossiiskaya politicheskaya entsiklopediya, 2009), pp. 457–467; and “Dokladnaya zapiska pervogo sekretarya Chitinskogo sel'skogo obkoma KPSS A. I. Smirnova N. S. Khrushchevu ob ob”edinenii promyshlennoi i sel'skoi partiinykh organizatsii Chitinskoi oblasti,” 27 December 1963, in Khlevnyuk et al., eds., Regional'naya politika N. S. Khrushcheva, pp. 519–521. 65. Khlevnyuk, “Rokovaya reforma N. S. Khrushcheva,” p. 177. 66. Ibid. 67. Svetlana Savranskaya and William Taubman, “Soviet Foreign Policy, 1962–1975,” in Melvyn Leffler and Odd Arne Westad, eds., Cambridge History of the Cold War (Cambridge, UK: Cambridge University Press, 2010), p. 134. 68. Tompson, “The Fall of Nikita Khrushchev,” p. 1109. 69. Yakov Feygin, “Reforming the Cold War State: Economic Thought, Internationalization, and the Politics of Soviet Reform, 1955–1985,” Ph.D. Diss., University of Pennsylvania, 2017, p. 157. 70. Khrushchev, Pensioner soyuznogo znacheniya, pp. 47, 51–62; and Timothy J. Naftali and A. A. Fursenko, Khrushchev's Cold War: The Inside Story of an American Adversary (New York: Norton, 2006), p. 534. 71. “Protokol [bez nomera]: Zasedanie 13–14 oktyabrya 1964 g.,” 13–14 October 1964, in Fursenko, ed., Prezidium TsK KPSS, Vol. 1, pp. 862–872. 72. Wu Lengxi, Shi nian lunzhan: 1956–1966 zhongsu guanxi huiyilu (Beijing: Zhongyang wenxian chubanshe, 1999), pp. 858–859. 73. Ibid., pp. 871, 879. 74. N. P. Kamanin, Skrytyi kosmos, Vol. 1 (Moscow: RTSoft, 2018), pp. 65, 92. 75. Ibid., p. 445. 76. Valerii Larin, “35 let oktyabr'skoi revolyutsii,” Kommersant Vlast’, No. 40 (12 October 1999), p. 50. 77. Tompson, “The Fall of Nikita Khrushchev,” p. 1113. 78. Aleksandr Maisuryan, Drugoi Brezhnev (Moscow: Vagrius, 2004), p. 130; and Swain, Khrushchev, p. 184. 79. Brezhnev, Rabochie i dnevnikovye zapisi, Vol. 3, p. 419. 80. Ibid., pp. 431–432. 81. “Nepravlenaya stenogramma iyul'skogo (1964 g.) 
plenuma TsK KPSS,” 11 July 1964, in Artizov et al., eds., Nikita Khrushchev, p. 53. 82. “Nepravlenaya stenogramma zasedaniya Prezidiuma TsK KPSS po voprosam, voznikshim vo vremya poezdki N. S. Khrushcheva po sel'skokhozyaistvennym regionam SSSR,” pp. 94–97. 83. “Protokol No. 159: Zasedanie 17 sentyabrya 1964 g.,” 17 September 1964, in Fursenko, ed., Prezidium TsK KPSS, Vol. 1, p. 858. 84. Brezhnev, Rabochie i dnevnikovye zapisi, Vol. 3, p. 442. 85. Ibid. 86. P. E. Shelest, Da ne sudimy budete: Dnevnikovye zapisi, vospominaniya chlena Politbyuro TsK KPSS (Moscow: edition q, 1995), p. 219. 87. Khrushchev, Pensioner soyuznogo znacheniya, p. 45. 88. L. I. Brezhnev, Rabochie i dnevnikovye zapisi, Vol. 1, Leonid Brezhnev: Rabochie i dnevnikovye zapisi: 1964–1982 gg. (Moscow: Istoricheskaya literatura, 2016), p. 40. 89. S. N. Semanov, Brezhnev: Pravitel’ “zolotogo veka” (Moscow: Veche, 2007), pp. 102–103. 90. Shelest, Da ne sudimy budete, p. 241. 91. Ibid., p. 203. 92. N. A. Barsukov, “Beseda c Egorychevym N. G.,” 19 September 1990, in V. A. Kozlov, ed., Neizvestnaya Rossiya: XX vek, Vol. 1 (Moscow: Istoricheskoe nasledie, 1992), p. 291; Taubman, Khrushchev, p. 7; and Thatcher, “Brezhnev as Leader,” p. 25. 93. Vyacheslav Kevorkov, Viktor Lui: Chelovek s legendoi (Moscow: Izdatel'stvo “Sem’ dnei,” 2010), p. 152. 94. Larin, “35 let oktyabr'skoi revolyutsii,” p. 50. 95. Postnikov, “Dokumenty federal'nykh arkhivov o smene rukovodstva SSSR v oktyabre 1964 g.,” pp. 80–81. For further background on Shelest's diaries and the differences between the published version and the written note pages, see Mark Kramer, “Ukraine and the Soviet-Czechoslovak Crisis of 1968 (Part 1): New Evidence from the Diary of Petro Shelest,” Cold War International History Project Bulletin, Issue No. 10 (March1998), pp. 
234--247; and Mark Kramer, “Foreign Policymaking and Party-State Relations in the Soviet Union during the Brezhnev Era,” in Rüdiger Bergien and Jens Gieseke, eds., Communist Parties Revisited: Sociocultural Approaches to Party Rule in the Soviet Bloc, 1956--1991 (New York: Berghahn Books, 2018), pp. 281--313. 96. Vladimir Semichastnyi, Bespokoinoe serdtse (Moscow: Vagrius, 2002), pp. 351–352; and Tompson, “The Fall of Nikita Khrushchev,” p. 1106. 97. Pavel Sudoplatov, Special Tasks: The Memoirs of an Unwanted Witness—A Soviet Spymaster (Boston: Little, Brown, 1995), p. 284. 98. Shelest, Da ne sudimy budete, p. 214. 99. Ibid., p. 238. 100. “Kak snimali Khrushcheva: Beseda s uchastnikom tekh sobytii,” Dialog, No. 7 (1993), p. 48. 101. N. A. Barsukov, “Beseda s Shelepinym A. N. i Semichastnym V. E.,” 27 March and 22 May 1989, in Kozlov, ed., Neizvestnaya Rossiya, p. 278. 102. “Kak snimali Khrushcheva,” p. 52. 103. Khrushchev, Pensioner soyuznogo znacheniya, p. 90. 104. “Vospominaniya uchastnika sobytii o neprostykh momentakh v istorii strany”; and Tompson, “The Fall of Nikita Khrushchev,” p. 1113. 105. “Nepravlenaya stenogramma oktyabr'skogo (1964 g.) plenuma TsK KPSS,” 14 October 1964, in Artizov et al., eds., Nikita Khrushchev, pp. 237–238. 106. Yu. V. Aksyutin, Khrushchevskaya “ottepel’” i obshchestvennye nastroeniya v SSSR v 1953–1964 gg. (Moscow: ROSSPEN, 2004), p. 574. 107. A. E. Bovin, XX vek kak zhizn’: Vospominaniya (Moscow: Zakharov, 2003), p. 124. 108. Tompson, “The Fall of Nikita Khrushchev,” p. 1115. 109. “‘Takovy, tovarishchi, fakty’ (Doklad Prezidiuma TsK KPSS na oktyabr'skom Plenume TsK KPSS (variant)),” Istochnik, No. 2 (1998), pp. 101–125. This document includes the line “We believe it would be most reasonable to behave so that comrade Khrushchev himself resigned from his positions.” 110. Postnikov, “Dokumenty federal'nykh arkhivov o smene rukovodstva SSSR v oktyabre 1964 g.,” pp. 231–236. 111. Ibid. 112. 
Abrasimov, Chetvert’ veka poslom Sovetskogo Soyuza, p. 72. 113. Larin, “35 let oktyabr'skoi revolyutsii,” p. 50. 114. Kamanin, Skrytyi kosmos, Vol. 1, p. 445. 115. Khrushchev, Pensioner soyuznogo znacheniya, pp. 47, 51–62; and Naftali and Fursenko, Khrushchev's Cold War, p. 534. 116. Semanov, Brezhnev, p. 108. 117. “Zapis’ besedy N. S. Khrushcheva s Sukarno,” 29 September 1964, in Artizov et al., eds., Nikita Khrushchev, p. 151. 118. “Zapis’ besedy A. I. Mikoyana s V. I. Galyukovym, sdelannaya S. N. Khrushchevym,” n.d. (no later than 2 October 1964), in Artizov et al., eds., Nikita Khrushchev, pp. 154–160; and Taubman, Khrushchev, p. 7. 119. Artizov et al., eds., Nikita Khrushchev, p. 10. 120. Naftali and Fursenko, Khrushchev's Cold War, pp. 535–536; “M. Baudet, Ambassadeur de France à Moscou, à M. Couve de Murville, Ministre des Affaires étrangères,” 16 October 1964, in Documents diplomatiques français, 1964, Vol. 2, 1er juillet–31 décembre (Brussels: P.I.E.-Peter Lang S.A., 2002), p. 334; “H. A. F. Hohler to H. F. T. Smith,” 21 October 1964, in The National Archives of the United Kingdom (TNAUK), Foreign Office (FO) 371/177665; and “Miscellaneous Information concerning the Coup against Mr. Khrushchev,” in TNAUK, FO 371/177666. 121. Khrushchev, Pensioner soyuznogo znacheniya, pp. 74–75. 122. “Protokol [bez nomera] (prodolzhenie): Zasedanie ot 14 oktyabrya,” p. 872. 123. Ibid. 124. Li Haiwen, “Hua Guofeng tan shi zhuan xie zuo,” Yanhuang chunqiu, No. 4 (2015). 125. Jonathan Kirshner, “Rationalist Explanations for War?” Security Studies, Vol. 10, No. 1 (September 2000), p. 148. 126. Frederick Kempe, Berlin 1961: Kennedy, Khrushchev, and the Most Dangerous Place on Earth (New York: Berkeley Publishing Group, 2011), p. 6. 127. Taubman, Khrushchev, p. 8. 128. Emiliya Gromyko-Piradova, A. A. Gromyko i vek peremen: Vospominaniya docheri ob Andree Andreeviche Gromyko, ego sem'e i epokhe, v kotoruyu on zhil (Moscow: Sovetskii pisatel’, 2009), p. 165. 129. 
Simon Miles, “Envisioning Détente: The Johnson Administration and the October 1964 Khrushchev Ouster,” Diplomatic History, Vol. 40, No. 4 (September 2016), p. 727; and Wu Lengxi, Shi nian lunzhan: 1956–1966 zhongsu guanxi huiyilu, p. 738. 130. Yu. A. Kvitsinskii, Vremya i sluchai: Zametki professionala (Moscow: OLMA-PRESS, 1999), p. 209. 131. A. D. Sakharov, Vospominaniya, 2 vols. (Moscow: Vremya, 1989), Vol. 1, p. 481. 132. Novikov, “V gody rukovodstva N. S. Khrushcheva,” p. 115. 133. Yurii Aksyutin, “Oktyabr’ 1964 goda: ‘V Moskve khoroshaya pogoda,’” in Yu. V. Aksyutin, comp., L. I. Brezhnev: Materialy k biografii (Moscow: Politizdat, 1991), p. 51; and Tompson, “The Fall of Nikita Khrushchev,” p. 1107. 134. Yu. A. Abramova, “Vzaimootnosheniya rukovodstva KPSS i sovetskoi armii v period khrushchevskoi ‘ottepeli,’ 1953–1964 gg.,” Ph.D. Diss., Moskovskii gosudarstvennyi universitet, Moscow, 2000, p. 159. 135. Semichastnyi, Bespokoinoe serdtse, p. 349. 136. Ibid., p. 360. 137. Semanov, Brezhnev, p. 121. 138. A. Yakovlev, Omut pamyati (Moscow: Vagrius, 2000), pp. 152–153. 139. Shelest, Da ne sudimy budete, p. 231. 140. V. Medvedev, Chelovek za spinoi (Moscow: RUSSLIT, 1994), p. 23. 141. Nikita Petrov, Pervyi predsedatel’ KGB Ivan Serov (Moscow: Materik, 2005), p. 338. 142. Abramova, “Vzaimootnosheniya rukovodstva KPSS i sovetskoi armii v period khrushchevskoi ‘ottepeli,’ 1953–1964 gg.,” p. 289. 143. A. I. Kokurin and N. V. Petrov, eds., Lubyanka: Organy VChK–OGPU–NKVD–NKGB–MGB–MVD–KGB: 1917–1991: Spravochnik, ed. A. N. Yakovlev (Moscow: MFD, 2003), pp. 156–157. 144. Aleksandr Shelepin, “Istoriya—uchitel’ surovyi,” Trud (Moscow), 14 March 1991. 145. V. Semichastnyi, “Nezabyvaemoe,” in Yu. V. Aksyutin, comp., Nikita Sergeevich Khrushchev: Materialy k biografii (Moscow: Politizdat, 1989), pp. 52–53. 146. Larin, “35 let oktyabr'skoi revolyutsii,” p. 50. 147. Ibid. 148. 
Abramova, “Vzaimootnosheniya rukovodstva KPSS i sovetskoi armii v period khrushchevskoi ‘ottepeli,’ 1953–1964 gg.,” p. 161. 149. Semichastnyi, Bespokoinoe serdtse, p. 350. 150. Ibid., p. 358. 151. Oleg Ignat'ev, “Shamany, vozhdi, partizany,” Pravda 5 (Moscow), 17 May 1996. 152. Semichastnyi, Bespokoinoe serdtse, pp. 358–360. 153. Khrushchev, Pensioner soyuznogo znacheniya, p. 86; and Taubman, Khrushchev, p. 9. 154. Yakovlev, Omut pamyati, p. 154. 155. Kevorkov, Viktor Lui, p. 147. 156. “Pis'mo V. I. Chuikova v Prezidium TsK KPSS o nenormal'nom polozhenii v Ministerstve oborony SSSR,” 21 October 1964, in S. V. Kudryashov, ed., Vestnik Arkhiva Prezidenta Rossiiskoi Federatsii: Sovetskaya Armiya: Gody reform i ispytanii, Vol. 2 (Moscow: IstLit, 2018), pp. 203–207. 157. Khrushchev, Pensioner soyuznogo znacheniya, p. 74; and Nikita Khrushchev, Khrushchev Remembers (Boston: Little, Brown and Company, 1970), pp. 202–205. 158. “Comments of Soviet Official on Khrushchev Downfall and Behind the Scenes Struggles Leading to His Removal,” CIA Intelligence Information Cable, 27 October 1964, in LBJL, NSF, CF, Box 219, USSR Cables 10/64–11/64. I thank Simon Miles for sharing this document with me. 159. Larin, “35 let oktyabr'skoi revolyutsii.” 160. Semanov, Brezhnev, p. 108. 161. Larin, “35 let oktyabr'skoi revolyutsii,” p. 50. 162. Tompson, “The Fall of Nikita Khrushchev,” p. 1104. 163. Khlevnyuk et al., eds., Regional'naya politika N. S. Khrushcheva, p. 17. 164. William Taubman, Gorbachev: His Life and Times (New York: W. W. Norton, 2017), pp. 245–246, 350, 691. 165. On this point, see Joseph Torigian, A New Case for the Study of Individual Events in Political Science,'' Global Studies Quarterly, Vol. 1, No. 4 (December 2021), pp. 1--11. 
https://www.esaral.com/q/in-abc-p-and-q-are-points-on-sides-ab-and-ac-respectively-97164
# In ∆ABC, P and Q are points on sides AB and AC respectively

Question: In ∆ABC, P and Q are points on sides AB and AC respectively such that PQ || BC. If AP = 4 cm, PB = 6 cm and PQ = 3 cm, determine BC.

Solution: In triangle $ABC$, $P$ and $Q$ are points on sides $AB$ and $AC$ respectively such that $PQ \parallel BC$.

In $\triangle APQ$ and $\triangle ABC$,

$\angle APQ = \angle ABC$ (corresponding angles, since $PQ \parallel BC$)

$\angle PAQ = \angle BAC$ (common)

So, $\triangle APQ \sim \triangle ABC$ (AA similarity), which gives

$\frac{AP}{AB} = \frac{PQ}{BC}$

Substituting $AP = 4\ \mathrm{cm}$, $AB = AP + PB = 4 + 6 = 10\ \mathrm{cm}$ and $PQ = 3\ \mathrm{cm}$, we get

$\frac{4}{10} = \frac{3}{BC}$

By cross multiplication, $4 \times BC = 3 \times 10$, so

$BC = \frac{3 \times 10}{4} = \frac{30}{4} = 7.5\ \mathrm{cm}$

Hence, the value of $BC$ is $7.5\ \mathrm{cm}$.
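The proportion from the similar triangles can be double-checked numerically; a minimal sketch in plain Python, using the values from the problem statement:

```python
# Similar-triangles proportion: AP / AB = PQ / BC,
# with AP = 4 cm, PB = 6 cm, PQ = 3 cm.
AP, PB, PQ = 4.0, 6.0, 3.0
AB = AP + PB          # AB = AP + PB = 10 cm
BC = PQ * AB / AP     # solve AP/AB = PQ/BC for BC
print(BC)             # 7.5
```

This is just the cross-multiplication step carried out mechanically; the geometric content is entirely in the AA similarity argument above.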
https://physics.stackexchange.com/questions/613777/how-does-the-planck-satellites-detailed-map-of-the-cmb-lead-to-a-value-of-the-h/613781#613781
# How does the Planck satellite's detailed map of the CMB lead to a value of the Hubble constant, $H_0$?

Cosmologists have put great faith, it seems, in the cosmological model that led to a value (of 67) of the Hubble 'constant' after carefully peering at Planck's map of the cosmic microwave background. But I have nowhere read an article or (journal) paper, detailed or not, about how close observation and analysis of the CMB leads to a value for Hubble's constant. Does anybody know how Planck's CMB data/map led to a calculation of Hubble's constant? Does anyone know the logic behind this? Or a link? (Even if behind a paywall, or something....)

• Have you read the papers by the Planck Collaboration? They're very detailed. Mar 3 at 11:38

In cosmology, there are three "things" in the universe: radiation, matter, and dark energy. At various points in the universe's history, each of these has dominated the universe's evolution. We call these periods "radiation-dominated", "matter-dominated", and "dark-energy-dominated" (this last period also goes by other names, since dark energy isn't well understood). The three differ because they scale differently with volume. In particular, if you have a particle in a box and double the side length of the box, the volume goes up by a factor of 8, and the density goes down by the same factor. Mathematically we say $$\rho_M \propto a^{-3}$$, where $$\rho$$ is the density and $$a$$ is the so-called scale factor. With radiation, not only does the density go down, there is also a redshift, so $$\rho_R \propto a^{-4}$$. Finally, with dark energy: if it's described by a cosmological constant, then it is a constant, and its density does not change, $$\rho_\Lambda \propto a^{0}$$.
These scalings combine in the Friedmann equation: $$\frac{H^2}{H_0^2} = \Omega_{0,R} a^{-4} + \Omega_{0,M} a^{-3} + \Omega_{0,k} a^{-2} + \Omega_{0,\Lambda}$$ Here the $$\Omega$$'s are the present-day density parameters of radiation, matter, and dark energy, and $$\Omega_{0,k}$$ is the contribution due to the curvature of the universe.
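The Friedmann equation above can be evaluated directly. Here is a minimal sketch; the density parameters below are round illustrative numbers, not the Planck best-fit values:

```python
import math

def hubble(a, H0=70.0, O_r=9e-5, O_m=0.3, O_L=0.7):
    """H(a) in km/s/Mpc from the Friedmann equation.
    Density parameters here are illustrative placeholders."""
    O_k = 1.0 - (O_r + O_m + O_L)   # curvature closes the energy budget
    return H0 * math.sqrt(O_r * a**-4 + O_m * a**-3 + O_k * a**-2 + O_L)

print(hubble(1.0))   # equals H0 today, since a = 1
print(hubble(0.5))   # larger: expansion was faster in the past
```

Roughly speaking, fitting the CMB means adjusting these parameters (and a few others) until the model's predicted power spectrum matches the Planck map; this snippet only shows how $H(a)$, and hence $H_0$, follows once the $\Omega$'s are pinned down.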
https://www.hackmath.net/en/math-problem/1703
# Three co-owners

The three co-owners of a building company have earnings from a contract portioned in the ratio 3 : 6 : 7. Each of them received their amount in whole USD. One of them earned 86450 USD on the contract. What were the total earnings for this order?

Result

x = 197600

#### Solution:

If the co-owner who earned 86450 USD holds the 3-part share, the total would be $k_1 = \frac{1}{3}\cdot (3+6+7)\cdot 86450 = 461066.\overline{6}$; for the 6-part share, $k_2 = \frac{1}{6} \cdot (3+6+7) \cdot 86450 = 230533.\overline{3}$; for the 7-part share, $k_3 = \frac{1}{7} \cdot (3+6+7) \cdot 86450 = 197600$. Only the last candidate is a whole number of USD, so the known payout must correspond to the ratio part 7, and the total earnings are $x = 197600$.
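The divisibility reasoning behind the "Three co-owners" solution can be made explicit in code; a small sketch using exact rational arithmetic:

```python
from fractions import Fraction

ratio = [3, 6, 7]
known = 86450
total_parts = sum(ratio)   # 3 + 6 + 7 = 16

# Try assigning the known payout to each share; only one choice
# makes every co-owner's amount a whole number of USD.
for part in ratio:
    total = Fraction(known * total_parts, part)
    shares = [total * r / total_parts for r in ratio]
    whole = all(s.denominator == 1 for s in shares)
    print(part, float(total), whole)
# Only part = 7 yields all-integer shares, with total 197600 USD.
```

Using `Fraction` avoids floating-point rounding, so the "is it a whole number of USD" test is exact.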
https://www.physicsforums.com/threads/surface-area-of-n-spherical-droplets.183694/
Surface area of N spherical droplets?

1. Sep 9, 2007 saber1357

I have the following problem: Assume that 30.0 cm^3 of gasoline is atomized into N spherical droplets, each with a radius of 2.00 x 10^-3 m. What is the total surface area of these N spherical droplets?

I calculated the surface area of each droplet to be 5.03 x 10^-5 m^2, and the volume of each droplet to be 3.35 x 10^-8 m^3. However, my mind can't seem to relate these numbers to my task. Any help is GREATLY appreciated.

2. Sep 9, 2007 G01

First things first: next time, please post this in the correct Homework Help section. This forum is for general academic advice, not homework problems.

Ok, so you know you have N droplets, each with a known surface area. If you know the surface area of one, what is stopping you from finding the total surface area of all of them?

HINT: What would be the surface area of 2 droplets? 3? 4? ...
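Following the hint, the whole computation can be sketched in a few lines, with the cm^3 to m^3 conversion made explicit:

```python
import math

r = 2.00e-3                      # droplet radius in m
V_total = 30.0e-6                # 30.0 cm^3 converted to m^3

A_drop = 4 * math.pi * r**2      # surface area of one droplet, ~5.03e-5 m^2
V_drop = (4 / 3) * math.pi * r**3  # volume of one droplet, ~3.35e-8 m^3

N = V_total / V_drop             # number of droplets, ~895
A_total = N * A_drop             # total surface area, 0.045 m^2
print(N, A_total)
```

Note that algebraically $A_{\text{total}} = N \cdot 4\pi r^2 = \frac{V_{\text{total}}}{\frac{4}{3}\pi r^3} \cdot 4\pi r^2 = \frac{3 V_{\text{total}}}{r}$, so the factors of $\pi$ cancel; the non-integer $N$ just reflects the rounded input data.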
https://math.stackexchange.com/questions/1871236/find-rank-and-nullity-of-a-matrix/1871266
# Find rank and nullity of a matrix.

To find: rank $A$ and nullity $A$ for $$A=\begin{pmatrix} 0 &0 &0 \\ 0 & 0.5&-0.5 \\ 0&-0.5 & 0.5 \end{pmatrix}$$ I know the nullity refers to the number of free variables in the matrix and the rank refers to $\dim(\text{column space})$; where to from here?

• Hint: Consider $A\mathbf x= \mathbf0$. Then $\mathbf x$ is in the nullspace. How many vectors does it take to span this nullspace? The number of vectors is the nullity, i.e. it's the dimension of the nullspace. Jul 26 '16 at 3:03

The matrix has only one linearly independent row (negate the second row to obtain the third), so the rank is 1 and the nullity is 2.

More generally, first set this up as an augmented matrix for $Ax=0$.

$1)$ To find the rank, put the matrix in REF or RREF:

$\left[\begin{array}{ccc|c} 0 & 0 & 0 &0 \\ 0 & 0.5 & -0.5 & 0 \\ 0 & -0.5 & 0.5 & 0 \end{array}\right] \longrightarrow RREF \longrightarrow \left[\begin{array}{ccc|c} 0 & 1 & -1 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array}\right]$

Seeing that there is only one pivot (leading variable), the rank is 1.

$2)$ To find the nullity, subtract the rank from the total number of columns: $\text{nullity}(A) = 3 - 1 = 2$.

Hope this is helpful.
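The rank and nullity can also be confirmed numerically; a short sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[0.0,  0.0,  0.0],
              [0.0,  0.5, -0.5],
              [0.0, -0.5,  0.5]])

rank = np.linalg.matrix_rank(A)   # counts singular values above a tolerance
nullity = A.shape[1] - rank       # rank-nullity theorem: rank + nullity = n
print(rank, nullity)              # 1 2
```

`matrix_rank` uses the SVD under the hood, which is the numerically robust counterpart of the row-reduction argument above.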
http://www.turkmath.org/beta/konfaramasonuc.php?aramasorgusu=Capadocia,%20Nev%C5%9Fehir
18.05.2016 & 21.05.2016

#### IC-SMHD-2016 - International Conference on Information Complexity and Statistical Modeling in High Dimensions with Applications

Information Complexity and Statistical Modeling
https://chemistry.stackexchange.com/questions/40475/major-products-of-this-reaction
# Major products of this reaction?

My attempt: polar protic solvent, low temperature, tertiary carbon, a weak (?) nucleophile, and a weak base. It seems like this is a pretty standard SN1 reaction, so "I and II" would be the correct answer?

I think your answer is correct as long as you can justify it. (IV) is clearly absurd, but why do you eliminate (III)? However, in my opinion, at $\ce{20 ^{\circ} C}$ there would be no reaction from a practical point of view, i.e. one with a reasonable yield and reaction time.

• The alcohol is indeed a weak nucleophile. A good general rule of thumb is that an alkoxide (a deprotonated alkyl alcohol such as ethanol or isobutanol) is a much better nucleophile than its alcohol. The $20\,^{\circ}\mathrm{C}$ is also a good indication of a slow E1 product, as john has said (higher $\Delta G$). Since the question asks specifically for the major products, I would not include III in the answer. However, if the question had asked for all products, III is a possible answer to include. Its mechanism would involve first-order beta elimination due to the sterically hindered alkyl halide. – timaeus222 Nov 11 '15 at 4:38
https://www.lil-help.com/questions/118768/psy-355-week-4-dq-1
# PSY 355 Week 4 DQ 1

Asked 3 years ago

*Please be sure to use this tutorial as a guide only. Do not plagiarize and do not resell as your own work. If you have any questions or problems with the tutorial please get a hold of me before leaving any negative feedback and I will resolve the issue. If you have trouble opening or viewing the files please contact me and I will fix the problem as soon as I can. Sometimes instructors change the syllabus, so if the material does not match your syllabus please let me know. If I do not respond right away please be patient; I do have a full-time job and I try to check my messages once a day. Thanks and good luck!*

PSY 355 Week 4 DQ 1

What are some examples of life stressors? In what ways do individuals differ in their appraisal and ability to cope with life stressors? Are there...

### 1 Answer

Filename: psy-355-week-4-dq-1-80.doc; Filesize: < 2 MB; Print length: 1 page; Words: 293
https://bison.inl.gov/Documentation/theory/coolant_channel_model.aspx
# Coolant Channel Model

This document describes the thermal-hydraulic conditions surrounding a single fuel rod, which provide the thermal boundary condition in the analysis of nuclear fuel behavior. Energy conservation is used to derive the coolant enthalpy rise. Applicable heat transfer correlations are used to model the boiling curve prior to departure from nucleate boiling. There are no applicable standards for the coolant channel model used in a fuel performance code, and one should refer to a thermal-hydraulics code for detailed modeling of a coolant channel. For application in a fuel performance code, the emphasis is placed on the energy deposition and heat transfer characteristics; the flow distribution in the axial or radial direction can be ignored, and certain assumptions can be made to reduce the computation in the coolant channel model. Figure 1 shows the schematic of the coolant channel model. No finite element mesh is assigned to the coolant channel. The meshing of the one-dimensional flow channel consists of several control volumes with constant flow area A. Inlet pressure, coolant temperature (or enthalpy), and mass flux as functions of time are required inputs; they provide the boundary conditions for solving the one-dimensional momentum and energy equations. Heat input into the coolant consists of the cladding outer surface heat flux and the energy deposited by interactions of neutrons and gamma rays with the coolant water.

Figure 1: Schematic of One-Dimensional Coolant Channel with Upward Flow

Figure 2 shows a typical sub-channel at an assembly interior for a square lattice in thermal-hydraulics analysis. As can be seen, each channel is bounded by four fuel rods, and each fuel rod shares a quarter of its cladding outer surface with the coolant channel.
For a fuel performance code, the coolant channel takes the same geometry shown in Figure 2; however, it should be noted that, in computing the coolant enthalpy for the sub-channel, the heat flux from all the bounding fuel rod surfaces is taken from the single fuel pin in the analysis.

Figure 2: Geometry of an interior sub-channel

A more advanced approach could be to average the coolant enthalpy over all the coolant channels surrounding a fuel pin of interest; for a fuel pin inside an assembly, there are four sub-channels surrounding the pin, and nine fuel rods provide heat input to those four sub-channels. Thus, besides the fuel pin of interest, the power of the eight neighboring rods would also be needed to perform the enthalpy calculation. In the current development work, the coolant channel uses the heat generated from the same fuel rod, and the calculated heat transfer coefficients are then applied to that fuel rod in the thermal solution.

### Governing equations

This section starts with the equations for one-component, one-dimensional, two-phase compressible flow; by making assumptions and integrating those equations over a control volume, a set of working equations is derived with emphasis on the heat transfer characteristics between the cladding and the coolant water.

Mass conservation:

$$\frac{\partial \rho_m}{\partial t} + \frac{\partial G_m}{\partial z} = 0, \qquad \rho_m = \alpha\rho_g + (1-\alpha)\rho_f \qquad (1)$$

where $\rho_m$ is the density of the mixture of steam and liquid, $\alpha$ is the void fraction of the steam, and $G_m$ is the mass flux of the mixture of steam and liquid in units of kg/m²-s.

The momentum equation is given by:

(2)

(3)

where $x$ is the vapor quality, $A$ is the flow area, $f$ is the friction factor, and $P_w$ is the perimeter of the flow channel.

The energy equation is given by:

(4)

(5)

(6)

A few assumptions are applied:

1. The coolant channel has a constant flow area.
2. Pressure drop analysis is neglected; the coolant pressure at any axial location is the inlet pressure.
3.
Flow is homogeneous with no slip between the liquid phase and the steam; the void fraction is then related to the vapor quality by:

$$\alpha = \frac{x\,\rho_f}{x\,\rho_f + (1-x)\,\rho_g} \qquad (7)$$

Plugging Eq. 7 into Eq. 3 and Eq. 5 expresses the mixture quantities in terms of the quality.

4. Pressure is allowed to vary in time; however, the work done by the pressure on the coolant is neglected.

With the above assumptions, the momentum equation is not needed in the analysis, and the governing equations reduce to:

(8)

(9)

In the operating conditions of Light Water Reactors, fuel rods are surrounded by flowing water coolant; the flowing coolant carries away the thermal energy generated by the nuclear fission reaction and transfers the heat into a steam generator or drives a turbine directly. To predict the thermal response of a fuel rod, the thermal-hydraulic condition of the surrounding coolant needs to be determined. This condition, in modeling the energy transport aspect of the coolant in the Bison code, is described by a single coolant channel model. This single channel is used mathematically to describe the thermal boundary condition for modeling fuel rod behavior. The model covers two theoretical aspects: the local heat transfer from the cladding wall into the coolant, and the thermal energy deposition in the coolant in steady-state and slow operating transient conditions. Assumptions and limitations of the coolant channel model are summarized below:

• Closed channel: The lateral energy, mass, and momentum transfer in the coolant channel within a fuel assembly is neglected. Therefore, the momentum, mass continuity, and energy equations are considered in one dimension only, i.e., the axial direction.

• Homogeneous and equilibrium flow: For flow involving both vapor and liquid phases, the thermal energy transport and relative motion between the two phases are neglected. This essentially treats the two-phase flow as one pseudo fluid.
• Fully developed flow: In the application of most heat transfer correlations, entrance effects are neglected. The heat transfer is assumed to occur under conditions where the boundary layer has grown to occupy the entire flow area and the radial velocity and temperature profiles are well established.

• Pressure drop neglected: The pressure drop due to flow-induced resistance is not accounted for in the coolant channel model. Instead, the coolant pressure as a function of time and axial location can be an input provided by the user through a hand calculation or using a computer code.

## Coolant Enthalpy Model

In steady-state operation, the enthalpy rise in a coolant channel with incompressible fluid can be derived from the energy conservation equation:

$$h(z) = h_{in} + \frac{1}{G A}\int_0^z \left[\pi D_h\, q''(z') + f_c\, q'(z')\right] dz' \qquad (10)$$

where $h_{in}$ is the coolant enthalpy at the inlet (J/kg), $h(z)$ is the coolant enthalpy at axial location $z$ (J/kg), $z$ is the axial location (m), $q''$ is the fuel rod surface heat flux (W/m²), $q'$ is the fuel rod linear heat generation rate (W/m), $f_c$ is the fraction of heat generated in the coolant by neutrons and gamma rays (dimensionless), $D_h$ is the heated diameter (m), $G$ is the coolant mass flux (kg/s-m²), and $A$ is the flow area of the coolant channel (m²). The mass flux, pressure, and coolant temperature at the inlet of the coolant channel are provided as input for calculating the coolant enthalpy rise. With the calculated enthalpy and the input coolant pressure, the corresponding thermodynamic state can be determined using a steam table. The coolant temperature can then be obtained and used in the convective boundary condition to compute the cladding temperature. The thermal-physical properties of water and steam are evaluated at the corresponding bulk coolant temperature and/or at the cladding wall temperature for use in calculating heat transfer coefficients between the cladding wall and the coolant. The inlet mass flux, pressure, and coolant temperature can be provided as functions of time in the code input.
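A discrete sketch of this energy balance for the coolant enthalpy rise, together with the no-slip void-fraction relation implied by the homogeneous-flow assumption. The function names and numerical values are illustrative only, not Bison's API:

```python
import numpy as np

def coolant_enthalpy(z, q_pp, q_lin, h_in, G, A, D_h, f_c=0.02):
    """Axial coolant enthalpy h(z) (J/kg) from the channel energy balance:
    dh/dz = (pi*D_h*q'' + f_c*q') / (G*A), integrated by the trapezoid rule.
    z in m, q_pp in W/m^2, q_lin in W/m, G in kg/m^2-s, A in m^2, D_h in m."""
    integrand = (np.pi * D_h * q_pp + f_c * q_lin) / (G * A)
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)
    return h_in + np.concatenate(([0.0], np.cumsum(steps)))

def void_fraction(x, rho_f, rho_g):
    """Homogeneous (no-slip) void fraction from vapor quality x."""
    x = min(max(x, 0.0), 1.0)
    if x == 0.0 or x == 1.0:
        return x
    return x * rho_f / (x * rho_f + (1.0 - x) * rho_g)

# Illustrative uniform-power channel (values are assumptions, not plant data)
z = np.linspace(0.0, 3.66, 25)
h = coolant_enthalpy(z, q_pp=np.full_like(z, 6.0e5), q_lin=np.full_like(z, 2.0e4),
                     h_in=1.3e6, G=3500.0, A=1.0e-4, D_h=0.0095)
```

For a uniform axial power profile the trapezoid rule reproduces the closed-form enthalpy rise exactly, which makes the sketch easy to verify by hand.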
Allowing variation of the inlet thermal-hydraulic conditions can be used to model a quasi-steady state, in which the velocity and thermal energy of the coolant at a given location are assumed to reach the equilibrium condition instantaneously.

## Pre-CHF Heat Transfer Correlations

Depending on the flow rate, flow pattern, and cladding wall surface heat flux, the heat transfer from the cladding outer surface to the coolant can be characterized into different heat transfer regimes. A set of heat transfer correlations describing the heat transfer prior to the point of Critical Heat Flux (CHF) is given as follows:

#### Dittus-Boelter correlation

Under forced flow conditions, when the coolant is still in the liquid phase, the heat transfer from the cladding wall to the coolant is in the regime of single-phase forced convection and can be described by the Dittus-Boelter equation:

$$Nu = 0.023\,Re^{0.8}\,Pr^{0.4} \qquad (11)$$

The equation is applicable for $0.7 \leq Pr \leq 100$, $Re \geq 10{,}000$, and $L/D > 60$. Fluid properties are evaluated at the arithmetic mean bulk temperature (Todreas and Kazimi, 1990).

#### Jens-Lottes correlation

$$\Delta T_{sat} = 25\left(\frac{q''}{10^6}\right)^{0.25} e^{-P/(6.2\times 10^6)} \qquad (12)$$

where $\Delta T_{sat}$ is the cladding wall superheat $= T_w - T_{sat}$ (K), $q''$ is the cladding wall surface heat flux (W/m²), and $P$ is the coolant pressure (Pa). This correlation was developed from data at pressures between 500 psi (3.45 MPa) and 2000 psi (13.79 MPa) in the sub-cooled boiling regime. The heat transfer coefficient is given as:

$$h = \frac{q''}{\Delta T_{sat}} \qquad (13)$$

#### Thom correlation

A similar correlation is given as follows:

$$\Delta T_{sat} = 22.65\left(\frac{q''}{10^6}\right)^{0.5} e^{-P/(8.7\times 10^6)} \qquad (14)$$

The heat transfer coefficient is:

$$h = \frac{q''}{\Delta T_{sat}} \qquad (15)$$

This correlation is for water at pressures between 750 psi (5.17 MPa) and 2000 psi (13.79 MPa), but much of Thom's data were obtained at relatively low heat fluxes according to Tong and Weisman (1996).

#### Shrock-Grossman correlation

The Shrock-Grossman heat transfer correlation is used in the regime of saturated boiling.
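Assuming the standard textbook SI forms of the single-phase and subcooled-boiling correlations above (Eq. 11 and Eq. 12), both reduce to one-line functions:

```python
import math

def dittus_boelter_h(Re, Pr, k, D_h):
    """Single-phase forced convection: Nu = 0.023 Re^0.8 Pr^0.4,
    so h = Nu * k / D_h (W/m^2-K). k in W/m-K, D_h in m."""
    return 0.023 * Re**0.8 * Pr**0.4 * k / D_h

def jens_lottes_superheat(q_pp, P):
    """Jens-Lottes wall superheat (K) in subcooled nucleate boiling,
    SI form: q'' in W/m^2, P in Pa."""
    return 25.0 * (q_pp / 1.0e6) ** 0.25 * math.exp(-P / 6.2e6)
```

For example, at Re = 1e5, Pr = 1, k = 0.5 W/m-K, and D_h = 0.01 m, the Dittus-Boelter coefficient is 11,500 W/m²-K; the corresponding Jens-Lottes superheat at q'' = 1 MW/m² and P = 6.2 MPa is about 9.2 K.
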
The heat transfer coefficient is given as:

(16)

(17)

where $x$ is the steam quality, $h_{fg}$ is the latent heat of vaporization (J/kg), $h_f$ is the heat transfer coefficient in the liquid phase at the same mass flux (W/m²-K), $G$ is the mass flux (kg/m²-sec), and $C_1$, $C_2$, and $C_3$ are constants.

#### Chen's correlation

An alternative correlation used in the saturated boiling regime is Chen's correlation. Chen's correlation consists of a convective term ($h_c$) and a nucleation term ($h_{nb}$):

$$h = h_c + h_{nb} \qquad (18)$$

$h_c$ is the modified Dittus-Boelter correlation:

$$h_c = 0.023\,Re_f^{0.8}\,Pr_f^{0.4}\,\frac{k_f}{D}\,F \qquad (19)$$

F is a factor to account for the enhanced heat transfer due to the turbulence caused by vapor:

$$F = 1 \quad \text{for}\ 1/X_{tt} \leq 0.1 \qquad (20)$$

$$F = 2.35\,(1/X_{tt} + 0.213)^{0.736} \quad \text{for}\ 1/X_{tt} > 0.1 \qquad (21)$$

where $X_{tt}$ is the Lockhart-Martinelli parameter. The nucleation term is the Forster-Zuber equation:

$$h_{nb} = 0.00122\,\frac{k_f^{0.79}\,c_{pf}^{0.45}\,\rho_f^{0.49}}{\sigma^{0.5}\,\mu_f^{0.29}\,h_{fg}^{0.24}\,\rho_g^{0.24}}\,\Delta T_{sat}^{0.24}\,\Delta P_{sat}^{0.75}\,S \qquad (22)$$

$$\Delta T_{sat} = T_w - T_{sat} \qquad (23)$$

$$\Delta P_{sat} = P_{sat}(T_w) - P \qquad (24)$$

S is a suppression factor:

$$S = \frac{1}{1 + 2.53\times 10^{-6}\,Re_{TP}^{1.17}} \qquad (25)$$

where $Re_{TP} = Re_f\,F^{1.25}$ and $Re_f$ is the Reynolds number for the liquid phase only.

#### Rohsenow correlation

The Rohsenow correlation (Liu and Kazimi, 2006) is used to represent the heat transfer during the very short period of vapor bubble nucleation, growth, and departure that follows natural convection during fast heat-up of the fuel rod. The heat flux is given as:

$$q'' = \mu_f\,h_{fg}\left[\frac{g\,(\rho_f - \rho_g)}{\sigma}\right]^{1/2}\left[\frac{c_{pf}\,(T_w - T_{sat})}{C_{sf}\,h_{fg}\,Pr_f^{\,n}}\right]^{3} \qquad (26)$$

where $q''$ is the heat flux (W/m²), $h_{fg}$ is the latent heat of vaporization (J/kg), $\mu_f$ is the liquid viscosity at the saturation temperature (kg/m-sec), $\rho_g$ is the density of vapor at the saturation temperature (kg/m³), $\rho_f$ is the density of liquid at the saturation temperature (kg/m³), $\sigma$ is the surface tension at the saturation temperature (N/m), $g$ is the acceleration due to gravity (m/s²), $c_{pf}$ is the specific heat of liquid at the saturation temperature (kJ/kg-K), $T_w$ is the wall temperature (K), $T_{sat}$ is the saturation temperature (K), $Pr_f$ is the Prandtl number, and $C_{sf}$ and $n$ are the surface-fluid constant and exponent of the correlation.

## Critical Heat Flux Correlations

Sub-cooled and saturated boiling can enhance the heat transfer; however, at the critical condition where the cladding outer surface becomes enclosed by a vapor film, the heat transfer can deteriorate significantly. The corresponding heat flux is the Critical Heat Flux (CHF).
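Among the CHF correlations discussed in this section, the classic Zuber pool-boiling limit has a closed form that is easy to evaluate directly. A sketch using illustrative saturation properties for water near atmospheric pressure (the property values are assumptions for demonstration, not from the source):

```python
def zuber_chf(h_fg, rho_f, rho_g, sigma, g=9.8):
    """Zuber CHF: q'' = 0.131 * h_fg * rho_g^0.5 * [sigma*g*(rho_f-rho_g)]^0.25.
    With h_fg in J/kg, densities in kg/m^3, sigma in N/m -> q'' in W/m^2."""
    return 0.131 * h_fg * rho_g**0.5 * (sigma * g * (rho_f - rho_g)) ** 0.25

# Saturated water at ~0.1 MPa (illustrative property values)
q_chf = zuber_chf(h_fg=2.257e6, rho_f=958.0, rho_g=0.598, sigma=0.0589)
# q_chf comes out on the order of 1.1e6 W/m^2, the familiar ~1.1 MW/m^2
# pool-boiling CHF for water at atmospheric pressure
```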
The following correlations are implemented in Bison to calculate CHF, which can be used to estimate the thermal margin in a coolant channel.

#### EPRI-Columbia correlation

(27)

where $P_r$ is the critical pressure ratio = system pressure/critical pressure, $G$ is the local mass velocity (Mlbm/hr-ft²), and $x_{in}$ is the inlet quality. The parameters in the table below are used in the EPRI-Columbia correlation.

Table 1: Parameters used in the EPRI-Columbia correlation

Model ParameterParameter Value G^{0.1}$ for cold wall, both and are set equal to 1.0

$F_A$ is the non-uniform axial heat flux distribution parameter:

(28)

Y is Bowring's non-uniform parameter, defined as:

(29)

#### GE correlation

(30)

(31)

The correlation is applicable for mass fluxes less than lb/ft²-hr.

#### Zuber correlation

Taken from Tong and Tang (1997), the Zuber correlation is

$$q''_{CHF} = 0.131\,h_{fg}\,\rho_g^{1/2}\left[\sigma\,g\,(\rho_f - \rho_g)\right]^{1/4} \qquad (32)$$

where $q''_{CHF}$ is the critical heat flux (W/m²), $h_{fg}$ is the latent heat of vaporization (J/kg), $g$ is the acceleration due to gravity = 9.8 m/s², $\rho_g$ is the density of vapor at the saturation temperature (kg/m³), $\rho_f$ is the density of liquid at the saturation temperature (kg/m³), and $\sigma$ is the surface tension at the saturation temperature (N/m).

#### Modified Zuber correlation

The modified Zuber correlation (Liu and Kazimi, 2006) is included in the critical heat flux correlation selection options in Bison and can be selected by user input. This correlation is based on pool boiling critical heat flux hydrodynamics and is applicable to very low flow conditions. It was developed for critical heat flux calculations in LWRs under severe accident conditions.

(33)

where $q''_{CHF}$ is the critical heat flux (W/m²), $K_{sub}$ is the correction factor for bulk subcooled fluid conditions, $h_{fg}$ is the latent heat of vaporization (J/kg), $g$ is the acceleration due to gravity (here set to 9.8 m/s²), $\rho_g$ is the density of vapor at the saturation temperature (kg/m³), $\rho_f$ is the density of liquid at the saturation temperature (kg/m³), and $\sigma$ is the surface tension at the saturation temperature (N/m).
The correction factor for the bulk subcooled condition is

(34)

where $c_{pf}$ is the specific heat of saturated liquid (kJ/kg-K), $T_{sat}$ is the saturation temperature (K), and $T_b$ is the bulk fluid temperature (K).

#### BIASI correlation

The BIASI correlation is a function of pressure, mass flux, flow quality, and tube diameter. The correlations are provided in the following equations. For low mass fluxes, Eq. 35 is used; for higher mass fluxes, Eq. 35 or Eq. 36, whichever is higher, is used.

(35)

(36)

where

(37)

(38)

The parameter ranges for the correlation are given in the table below.

Table 2: Parameter ranges for which the BIASI correlation is defined

| Model Parameter | Parameter Value |
|---|---|
| Tube diameter | 0.003 - 0.0375 m |
| Heated length | 0.2 - 6.0 m |
| Pressure | 0.27 - 14 MPa |
| Mass flux | 100 - 600 kg/m²-s |

#### MacBeth correlation

The MacBeth correlation (Geelhood, 2014) was developed based on a compilation of a large amount of CHF data from a wide variety of sources. The database consists entirely of burnout tests for vertical upflow in round tubes. The correlation can be extrapolated to CHF in annuli and rod bundles at low pressure. The MacBeth CHF correlation is separated into low flow and high flow conditions. At low flow conditions, the correlation defines the critical heat flux as:

(39)

where $q''_{cr}$ is the critical heat flux (MBtu/hr-ft²), $D_h$ is the hydraulic diameter, based on wetted perimeter (in), $h_{fg}$ is the latent heat of vaporization (Btu/lbm), $G$ is the mass velocity (lbm/hr-ft²), and $x$ is the equilibrium quality. At high flow conditions, the correlation defines the critical heat flux as:

(40)

where A and C are empirical parameters that were defined using statistical optimization over two overlapping sets of data.
The parameters A and C for MacBeth's 12-coefficient model are formulated as:

(41)

(42)

where the $a_i$ are empirical coefficients (see Table 3).

Table 3: Coefficients for MacBeth's 12-coefficient model for various reference pressures

Model Coefficient560 psia Reference Pressure1000 psia Reference Pressure1550 psia Reference Pressure2000 psia Reference Pressure 2371143665.5 1.20.8110.5091.19 0.4250.221-0.1090.376 -0.94-0.128-0.19-0.577 -0.03240.02740.0240.22 -0.111-0.06670.463-0.373 19.312741.717.1 0.9591.320.9531.18 0.8310.4110.0191-0.456 2.61-0.2740.231-1.53 -0.0578-0.03970.07672.75 0.124-0.02210.1172.24

The acceptable parameter ranges for the correlation are given in the table below.

Table 4: Parameter ranges for which the MacBeth 12-coefficient model is defined

| Model Parameter | Parameter Value |
|---|---|
| Pressure | 15 - 2700 psia |
| Mass velocity | 0.0073 - 13.7 Mlbm/hr-ft² |
| Hydraulic diameter | 0.04 - 1.475 in |
| Heated length | 1.0 - 144 in |
| Axial power profile | uniform |

The EPRI correlation is used as the correlation for a Pressurized Water Reactor (PWR) environment. The GE correlation is used as the correlation for a Boiling Water Reactor (BWR) environment. Alternatively, an input temperature at critical heat flux is allowed, in which case the selected nucleate boiling heat transfer correlation and the input temperature are used to compute the critical heat flux.

## Post-CHF Heat Transfer Correlation

The post-CHF heat transfer regime is divided into transition boiling and film boiling. The transition boiling heat transfer regime occurs when the cladding wall temperature exceeds the Critical Heat Flux (CHF) temperature but remains below the minimum film boiling temperature. The heat flux decreases significantly with increasing temperature in this regime. Three heat transfer correlations are implemented for the transition boiling regime: the McDonough-Milich-King, modified Condie-Bengtson, and Henry correlations.
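The defining feature of the transition regime, a heat flux that falls from the CHF point toward the minimum film boiling point, can be illustrated with a simple log-linear interpolation between the two endpoints. This is illustrative only (Bison uses the named correlations above), but it shows the shape of the boiling curve in this regime:

```python
import math

def transition_heat_flux(T_w, T_chf, q_chf, T_min, q_min):
    """Illustrative log-linear interpolation between the CHF point
    (T_chf, q_chf) and the minimum film boiling point (T_min, q_min).
    Not one of Bison's named correlations; demonstrates the regime's
    monotonically decreasing heat flux only. Temperatures in K, q in W/m^2."""
    w = (T_w - T_chf) / (T_min - T_chf)  # 0 at CHF point, 1 at min film boiling
    return math.exp((1.0 - w) * math.log(q_chf) + w * math.log(q_min))
```

By construction the interpolant matches the CHF heat flux at the CHF temperature and the minimum heat flux at the minimum film boiling temperature, so a boiling curve assembled from the three regimes stays continuous.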
The film boiling heat transfer regime occurs when the wall temperature reaches the minimum film boiling temperature. Four correlations are provided for the film boiling region: the Dougall-Rohsenow, Groenveld, Frederking, and Bishop-Sandberg-Tong correlations. The heat transfer correlations at CHF and in the post-CHF regimes implemented in the Bison code are described as follows:

### Transition Boiling

The McDonough-Milich-King correlation and the modified Condie-Bengtson correlation are implemented for the transition boiling regime.

#### McDonough-Milich-King correlation

The McDonough-Milich-King correlation (Todreas and Kazimi, 1990; Rashid et al., 2004) for forced convection transition boiling is given as

(43)

The heat transfer coefficient is:

(44)

where $q''_{CHF}$ is the critical heat flux (kW/m²), $q''_{TB}$ is the transition region heat flux (kW/m²), $T_{CHF}$ is the wall temperature at critical heat flux (K), $T_b$ is the bulk temperature of the coolant (K), $T_w$ is the wall temperature in the transition region (K), $P$ is the system pressure (MPa), and $h_{TB}$ is the transition boiling heat transfer coefficient (kW/m²-K).

Table 5: Parameter ranges for which the McDonough-Milich-King correlation may be applied

| Model Parameter | Parameter Value |
|---|---|
| Pressure | 5.5 - 13.8 MPa |
| Mass flux | 271.246 - 1898.722 kg/m²-sec |
| Channel geometry | tube |
| Diameter | 0.00386 m |
| Length | 0.3048 m |
| Fluid | water |

#### Modified Condie-Bengtson correlation

The modified Condie-Bengtson correlation (Rashid et al., 2004) for high flow rate transition boiling is given as follows:

(45)

The heat transfer coefficient is:

(46)

(47)

where $q''_{CHF}$ is the critical heat flux (Btu/hr-ft²), $q''_{TB}$ is the transition heat flux (Btu/hr-ft²), $q''_{FB}$ is the film boiling heat flux at $T_{CHF}$ (Btu/hr-ft²), $T_{CHF}$ is the wall temperature at critical heat flux (°F), $T_{sat}$ is the saturation temperature (°F), $T_w$ is the cladding wall temperature (°F), and $h_{TB}$ is the transition boiling heat transfer coefficient (Btu/hr-ft²-°F).
At the CHF point, $T_w = T_{CHF}$ and $q''_{TB} = q''_{CHF}$:

(48)

At $T_{CHF}$, the critical heat flux is equal to the sum of the film boiling component and the transition boiling component, to ensure the predicted boiling curve is continuous.

#### Henry correlation

The Henry correlation (Liu and Kazimi, 2006) for transition boiling was developed to address the heat transfer at cold zero power conditions and at high subcooling conditions. The heat flux in the transition boiling regime, determined by an interpolation between the critical heat flux and the minimum heat flux, is given as follows:

(49)

where

(50)

The minimum film boiling temperature for the Henry correlation is strongly affected by the surface condition as well as by the subcooling of the coolant and is given as:

(51)

where

(52)

(53)

• $q''_{min}$ is the minimum heat flux (W/m²)
• $q''_{CHF}$ is the critical heat flux (W/m²)
• $T_{CHF}$ is the temperature at critical heat flux (K)
• $T_{min}$ is the minimum stable film boiling temperature (K)
• $T_{hn}$ is the homogeneous nucleation temperature (K)
• $T_b$ is the bulk temperature (K)
• $T_w$ is the cladding wall temperature (K)
• $P$ is the pressure (Pa)
• $k_f$ is the thermal conductivity of subcooled liquid (W/m-K)
• $\rho_f$ is the density of subcooled liquid (kg/m³)
• $c_{pf}$ is the specific heat of subcooled liquid (kJ/kg-K)
• $k_w$ is the thermal conductivity of the cladding wall (W/m-K)
• $\rho_w$ is the density of the cladding wall (kg/m³)
• $c_{pw}$ is the specific heat of the cladding wall (kJ/kg-K)
• $n$ is an empirical parameter taken as 3.3

## Film Boiling

Four correlations, the Dougall-Rohsenow, Groenveld, Frederking, and Bishop-Sandberg-Tong correlations, are provided for modeling the heat transfer in the film boiling region. In the transition from the transition boiling regime to the film boiling regime, the intercept of the selected film boiling correlation and the selected transition boiling correlation is used to determine the minimum film boiling temperature and the minimum film boiling heat flux.
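Of the four film boiling correlations just listed, the Dougall-Rohsenow form is essentially a Dittus-Boelter expression evaluated with a homogeneous-mixture Reynolds number. A sketch of that textbook form, which may differ in detail from Bison's exact implementation:

```python
def dougall_rohsenow_h(G, D_e, x, k_g, mu_g, rho_g, rho_f, Pr_g):
    """Dougall-Rohsenow film boiling coefficient (textbook form):
    h = 0.023*(k_g/D_e)*[(G*D_e/mu_g)*(x + (1-x)*rho_g/rho_f)]^0.8 * Pr_g^0.4
    Vapor properties at saturation; intended for high flow, low quality.
    G in kg/m^2-s, D_e in m, k_g in W/m-K, mu_g in kg/m-s -> h in W/m^2-K."""
    Re_mix = (G * D_e / mu_g) * (x + (1.0 - x) * rho_g / rho_f)
    return 0.023 * (k_g / D_e) * Re_mix**0.8 * Pr_g**0.4
```

Because the mixture Reynolds number is linear in G, doubling the mass flux scales the coefficient by exactly 2^0.8, a convenient sanity check on any implementation.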
### Dougall-Rohsenow correlation

The Dougall-Rohsenow correlation (Dougall and Rohsenow, 1961; Rashid et al., 2004) for forced convection stable film boiling was developed for high flow rate and low quality (x < 0.3) flow. The heat transfer coefficient is given as:

$$h = 0.023\,\frac{k_g}{D_e}\left[\frac{G\,D_e}{\mu_g}\left(x + \frac{\rho_g}{\rho_f}\,(1-x)\right)\right]^{0.8} Pr_g^{0.4} \qquad (54)$$

where $G$ is the mass flux (kg/m²-sec), $D_e$ is the hydraulic diameter (m), $k_g$ is the thermal conductivity of vapor (W/m-K), $\mu_g$ is the viscosity of vapor (kg/m-sec), $\rho_g$ is the density of vapor (kg/m³), $\rho_f$ is the density of liquid (kg/m³), $c_{pg}$ is the specific heat of vapor (J/kg-K), and $x$ is the local quality. The vapor properties in the Prandtl number are evaluated at the saturation temperature. The data range for this correlation is given below.

Table 6: Parameter ranges for the Dougall-Rohsenow correlation

| Model Parameter | Parameter Value |
|---|---|
| Pressure | 0.1154 - 0.1634 MPa |
| Mass flux | 450.268 - 1109.396 kg/m²-sec |
| Heat flux | 45.426 - 131.862 kW/m² |
| Exit quality | up to 0.4 |
| Channel geometry | tubes |
| Diameter, inner | 0.004572 m, 0.01036 m |
| Length | 0.381 m |
| Fluid | freon |

### Groenveld correlation

The Groenveld correlation (Todreas and Kazimi, 1990; Rashid et al., 2004) for the forced convection stable film boiling heat transfer coefficient is:

$$h = a\,\frac{k_g}{D_e}\left[\frac{G\,D_e}{\mu_g}\left(x + \frac{\rho_g}{\rho_f}\,(1-x)\right)\right]^{b} Pr_{film}^{\,c}\,Y^{d} \qquad (55)$$

where the parameter Y is given as

(56)

whichever is larger, and where $G$ is the mass flux (kg/m²-sec), $D_e$ is the hydraulic diameter (m), $k_g$ is the thermal conductivity of vapor (W/m-K), $\mu_g$ is the viscosity of vapor (kg/m-sec), $\rho_g$ is the density of vapor (kg/m³), $\rho_f$ is the density of liquid (kg/m³), and $x$ is the local quality. The coefficients a, b, c, and d are given in Table 7 below. The Prandtl number of the film is given by

$$Pr_{film} = \frac{c_{p,g}\,\mu_g}{k_g}\bigg|_{T_{film}} \qquad (57)$$

where $c_{p,g}$ is the specific heat of vapor at the film temperature (J/kg-K), $\mu_g$ is the viscosity of vapor at the film temperature (kg/m-sec), and $k_g$ is the thermal conductivity of vapor at the film temperature (W/m-K). The vapor properties in the Prandtl number should be evaluated at the film temperature.
$$T_{film} = \frac{T_{sat} + T_w}{2} \qquad (58)$$

where $T_{sat}$ is the saturation temperature (K) and $T_w$ is the cladding wall temperature (K). The Prandtl number is currently evaluated at the saturation temperature in the code.

Table 7: Groenveld correlation coefficients a, b, c, d

| Parameter | Value |
|---|---|
| a | 0.0522 |
| b | 0.688 |
| c | 1.26 |
| d | -1.06 |

The applicable range of data for annuli geometry is shown in Table 8 below.

Table 8: Range of data for the Groenveld correlation

| Parameter | Data Range for Annuli Geometry |
|---|---|
| Hydraulic diameter (mm) | 1.5 - 6.3 |
| Pressure (MPa) | 3.4 - 10 |
| Mass flux (kg/m²-sec) | 800 - 4100 |
| Heat flux (kW/m²) | 450 - 2250 |
| Quality | 0.1 - 0.9 |

### Frederking correlation

The Frederking correlation (Liu and Kazimi, 2006) for the turbulent film boiling heat transfer coefficient during an RIA is:

(59)

where $h$ is the turbulent film boiling heat transfer coefficient (W/m²-K), $k_g$ is the thermal conductivity of vapor (W/m-K), $h'_{fg}$ is the modified latent heat of vaporization (kJ/kg), $g$ is the acceleration due to gravity = 9.8 m/s², $\rho_g$ is the density of vapor at the saturation temperature (kg/m³), $\rho_f$ is the density of liquid at the saturation temperature (kg/m³), $T_{sat}$ is the saturation temperature (K), and $T_w$ is the cladding wall temperature (K). The modified latent heat is given as

(60)

where $h_{fg}$ is the latent heat of vaporization (kJ/kg), $c_{pg}$ is the specific heat of vapor at the saturation temperature (kJ/kg-K), $T_{sat}$ is the saturation temperature (K), and $T_w$ is the cladding wall temperature (K).

### Bishop-Sandberg-Tong correlation

The Bishop-Sandberg-Tong correlation (Geelhood, 2014) for the film boiling heat transfer coefficient is:

(61)

where $k_{film}$ is the coolant thermal conductivity at the film temperature (W/m-K), $D_{hy}$ is the hydraulic diameter (m), $Re_{film}$ is the Reynolds number with fluid properties evaluated at the film temperature, $Pr_{film}$ is the Prandtl number with fluid properties evaluated at the film temperature, $\rho_g$ is the density of vapor at the saturation temperature (kg/m³), $\rho_f$ is the density of liquid at the saturation temperature (kg/m³), and $\rho_{bulk}$ is the bulk fluid density (kg/m³).
This correlation is defined by the properties of the vapor film at the wall and the film temperature. The film temperature is defined as

(62)

The film boiling heat flux for this correlation is:

(63)

The bulk fluid density is defined as

$$\rho_{bulk} = \alpha_e\,\rho_g + (1 - \alpha_e)\,\rho_f \qquad (64)$$

The equilibrium void fraction is defined as

$$\alpha_e = \frac{x_e\,\rho_f}{x_e\,\rho_f + (1 - x_e)\,\rho_g} \qquad (65)$$

where $x_e$ is the equilibrium quality (dimensionless).

## Logic to Determine Heat Transfer Regime

The boiling curve in the Bison code depends on the selected pre-CHF, CHF, and post-CHF correlations. The diagram in Figure 3 shows the criteria used in the selection of the different heat transfer regimes.

Figure 3: Schematic of heat transfer regime selection criteria

The Dittus-Boelter correlation is used for single-phase liquid forced convection and for single-phase vapor forced convection. The Thom or Jens-Lottes correlation is used for the sub-cooled boiling regime. The Thom, Jens-Lottes, or Chen correlation is used for the forced boiling convection regime. The Shrock-Grossman correlation is used for the forced boiling convection and vaporization regime. In the transition boiling regime, either the McDonough-Milich-King correlation or the modified Condie-Bengtson correlation is used. In the film boiling regime, the Dougall-Rohsenow or Groenveld correlation is used. $T_{onb}$ is the temperature at the onset of nucleate boiling. $T_{CHF}$ is the temperature at the critical heat flux. The selection among the different types of heat transfer correlations is described in the users manual. The logic described above is not applicable to the radiation heat transfer and reflood heat transfer modes; these can be activated by using the input heat transfer mode.

## FLECHT Reflood Heat Transfer Correlations

An empirical approach for modeling the reflooding phase of a LOCA is to use correlations derived from the Full Length Emergency Cooling Heat Transfer (FLECHT) tests (Cunningham et al., 2001; Cadek et al., 1972). Two reflood heat transfer correlations are implemented in the Bison code. The first correlation is provided in Cunningham et al.
(2001), and the second one is described in Cadek et al. (1972). The heat transfer correlations compute heat transfer coefficients during the reflooding phase of a LOCA as a function of flooding rate, cladding temperature at the start of flooding, fuel rod power at the start of flooding, flooding water temperature, pressure, rod elevation and time. The applicable ranges of these variables are shown in Table 9 and Table 10 for the heat transfer correlations given in Cunningham et al. (2001) and Cadek et al. (1972), respectively. The variables are defined as follows:

• = flooding rate (in/s)
• = peak cladding temperature at start of flooding (F)
• = fuel rod power at axial peak at start of flooding (kW/ft)
• = reactor vessel pressure (psia)
• = equivalent FLECHT elevation (ft)
• = flood water subcooling at inlet (F)
• = time after start of flooding as adjusted for variable flooding rate (s)
• = heat transfer coefficient (Btu/(hr-ft²-F))
• = radial power shape factor; = 1.0 for a nuclear fuel rod, = 1.1 for an electrical rod with radially uniform power
• = flow blockage ()

### Generalized FLECHT correlation

The generalized FLECHT correlation from Cunningham et al. (2001) divides the reflood heat transfer into four time periods: a period of radiation only, Period I, Period II, and Period III. The heat transfer due to radiation is modeled during the time range $t \geq 0$ and $t \leq t_{1}$. The heat transfer coefficient expression is given as (66) where (67) (68) (69) (70) (71) (72)

#### Period I

During Period I, the flow develops from the radiation-dominated pre-reflood condition to the heat transfer conditions of the reflooding phase. (73) where is defined as (74) (75) is defined as (76) The heat transfer coefficient during Period I is calculated as follows (77) where (78) (79) (80) (81) (82) (83) (84) (85) (86) (87) (88) (89)

#### Period II

During this period, the heat transfer coefficient reaches a plateau with a rather slow increase.
The time range for Period II is (90) where (91) The heat transfer coefficient during Period II is computed by the equation (92) (93) (94) (95) (96) (97) (98) (99) (100)

#### Period III

During this period, the flow pattern might have changed to the film boiling regime, and the heat transfer coefficient increases rapidly as the quench front approaches. The time range of Period III is (101) is the time of quenching. The heat transfer coefficient during Period III is calculated as follows (102) where (103) (104) (105)

#### Modification for Low Flooding Rates

The heat transfer coefficients for Periods I, II, and III are multiplied by a factor f to best match the data from tests performed at low flooding rates. The factor f is calculated as follows (106) where (107) (108) (109) (110) The above correlations are valid over the following ranges of parameters (Cunningham et al., 2001):

Table 9: Range of applicability of the generalized FLECHT correlation

| Variable | Applicable range (British units) | Applicable range (SI units) |
| --- | --- | --- |
| Flooding rate | 0.4 - 10 in/s | 0.0102 - 0.254 m/s |
| Reactor vessel pressure | 15 - 90 psia | 0.103 - 0.62 MPa |
| Inlet coolant subcooling | 16 - 189 F | 264.3 - 360.4 K |
| Initial cladding temperature | 300 - 2200 F | 420 - 1478 K |
| Flow blockage ratio | 0 - 75 % | 0 - 75 % |
| Equivalent elevation in FLECHT facility | 2 - 10 ft | 0.6096 - 3.048 m |

### WCAP-7931 FLECHT correlation

The WCAP-7931 correlation (Cadek et al., 1972) divides the reflood heat transfer into three time periods, designated as Period I, Period II, and Period III.
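Both FLECHT correlations are restricted to the parameter ranges tabulated in Table 9 above and Table 10 below, so a guard against extrapolation is a natural first step before evaluating them. A minimal sketch, using the Table 9 ranges in SI units; the dictionary keys are illustrative names, not Bison input parameters:

```python
# SI-unit applicability ranges of the generalized FLECHT correlation
# (Table 9).  Key names are illustrative, not Bison input names.
FLECHT_RANGES_SI = {
    "flooding_rate_m_s": (0.0102, 0.254),
    "vessel_pressure_mpa": (0.103, 0.62),
    "inlet_subcooling_k": (264.3, 360.4),
    "initial_clad_temp_k": (420.0, 1478.0),
    "flow_blockage_pct": (0.0, 75.0),
    "equiv_elevation_m": (0.6096, 3.048),
}

def out_of_range(inputs):
    """Return the names of any inputs outside the Table 9 ranges."""
    return [name for name, value in inputs.items()
            if not (FLECHT_RANGES_SI[name][0] <= value
                    <= FLECHT_RANGES_SI[name][1])]
```

Any input flagged by such a check means the correlation would be applied outside the data it was fitted to.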
#### Period I

The time range of Period I is (111) where is defined as (112) The quench time is defined as (113) is defined as (114) The heat transfer coefficient during Period I is calculated as follows (115) where (116) (117) (118) (119) (120)

#### Period II

The time range of Period II is (121) where (122) The heat transfer coefficient during Period II is computed by the equation (123) where (124) (125) (126) (127) (128) (129)

#### Period III

The time range of Period III is (130) The heat transfer coefficient during Period III is calculated as follows (131) where (132) (133) (134) The above correlations are valid over the following ranges of parameters:

Table 10: Range of applicability of the FLECHT correlation from the WCAP-7931 report

| Variable | Applicable range (British units) | Applicable range (SI units) |
| --- | --- | --- |
| Flooding rate | 0.4 - 10 in/s | 0.0102 - 0.254 m/s |
| Reactor vessel pressure | 15 - 90 psia | 0.103 - 0.62 MPa |
| Inlet coolant subcooling | 16 - 189 F | 264.3 - 360.4 K |
| Initial cladding temperature | 1200 - 2200 F | 922 - 1479 K |
| Flow blockage ratio | 0 - 75 % | 0 - 75 % |
| Equivalent elevation in FLECHT facility | 4 - 8 ft | 1.219 - 2.438 m |

## Properties for Water and Steam

Properties for water and steam consist of thermodynamic properties, transport properties, and other physical properties used in the heat transfer correlations. They are implemented based on standards specified by the International Association for the Properties of Water and Steam (IAPWS). The thermodynamic properties, or the steam tables, are implemented in the IAPWS95 library, included as a submodule in Bison.

## Sodium Coolant

Sodium coolant for fast reactors can also be simulated in Bison. The model uses the same framework as the above calculations for water/steam, but with appropriate correlations for liquid sodium.
The model uses the modified Schad correlation (Waltar et al., 2011) by default for triangular subchannels (135) where is the Nusselt number, is the Peclet number, and is the pitch-to-diameter ratio, and is applicable for . For , the term is set to 1.0. The Lyon's law correlation (Lyon, 1951) is generally used for heat transfer from a rod to flow within a surrounding circular tube for liquid metals (136) applicable for and . The Seban and Shimazaki correlation (Subbotin et al., 1963) is specific to liquid sodium heat transfer from a rod to fluid within a surrounding circular tube with constant rod wall temperature (137) and is applicable for , , and . Sodium properties are taken from the ANL/RE-95/2 report (Fink and Leibowitz, 1995): (138) where is thermal conductivity, is enthalpy, and units are SI.

## Radiation Heat Transfer

At high temperature, radiation heat transfer can occur from the cladding outer surface to the surrounding core structure components. In simulated LOCA tests at Halden, heat can be transferred to the heating element as well. Radiation heat transfer is described by the following equations: (139) (140) where and are the surface emissivities of the cladding and heater, respectively, and are the radii of the two surfaces, and is the Stefan-Boltzmann constant ().

## References

1. F. F. Cadek, D. P. Dominicis, H. C. Yeh, and R. H. Leyse. PWR FLECHT final report supplement. Technical Report WCAP-7931, Westinghouse, October 1972.
2. M. E. Cunningham, C. E. Beyer, P. G. Medvedev, and G. A. Berna. FRAPTRAN: a computer code for the transient analysis of oxide fuel rods. Technical Report NUREG/CR-6739 Vol. 1, Pacific Northwest National Laboratory, 2001.
3. R. L. Dougall and W. M. Rohsenow. Film-boiling heat transfer from a horizontal surface. Journal of Heat Transfer, 83:351-358, 1961.
4. J. K. Fink and L. Leibowitz. Thermodynamic and transport properties of sodium liquid and vapor. Technical Report ANL/RE-95/2, ANL Reactor Engineering Division, 1995.
5. K. J. Geelhood.
FRAPTRAN-1.5: A Computer Code for the Transient Analysis of Oxide Fuel Rods. Technical Report NUREG/CR-7023 Vol. 1, Rev. 1, U.S. Nuclear Regulatory Commission, 2014.
6. W. Liu and M. S. Kazimi. Modeling cladding-coolant heat transfer of high-burnup fuel during RIA. In Proceedings of ICONE-14, 2006.
7. R. N. Lyon. Liquid metal heat transfer coefficients. Chem. Eng. Prog., 47:75-79, 1951.
8. Y. Rashid, R. Dunham, and R. Montgomery. Fuel Analysis and Licensing Code: FALCON MOD01. Technical Report, Electric Power Research Institute, December 2004.
9. V. I. Subbotin, A. K. Papovyants, P. L. Kirillov, and N. N. Ivanovskii. A study of heat transfer to molten sodium in tubes. Soviet Atomic Energy, 13(4):991-994, 1963.
10. N. E. Todreas and M. S. Kazimi. Nuclear Systems I: Thermal Hydraulic Fundamentals. Hemisphere Publishing Corporation, New York, NY, USA, 1990.
11. L. S. Tong and Y. S. Tang. Boiling Heat Transfer and Two-Phase Flow. Taylor and Francis, Washington, DC, USA, 1997.
12. L. S. Tong and J. Weisman. Thermal Analysis of Pressurized Water Reactors. American Nuclear Society, La Grange Park, Illinois, USA, 1996.
13. A. E. Waltar, D. R. Todd, and P. V. Tsvetkov. Fast Spectrum Reactors. Springer US, 2011. ISBN 9781441995728. URL: https://books.google.com/books?id=z8z_RNUZSbEC.
https://mathoverflow.net/questions/221598/compact-hyperbolic-3-manifolds-with-prescribed-quaternion-algebra-quaternion-pa
# Compact hyperbolic 3-manifolds with prescribed quaternion algebra, quaternion parameters as ramification condition

What is an interesting class of examples of hyperbolic 3-manifolds, each of which satisfies the following conditions?

1. It is compact.
2. Its trace field contains a unique imaginary quadratic extension.
3. Its quaternion algebra is isomorphic to one of the form $\Big(\frac{a,b}{K}\Big)$, where $a,b\in K\cap\mathbb{R}$.

Since the word 'interesting' is not well-defined, I'll settle for any examples, but it would be cool if they have some nice combinatorial or geometric characterization.

Then there is a follow-up question (more for algebraic number theorists). How could I replace conditions 2 and 3 with something in terms of the algebra's ramification set? That is, there is a number field $F=K\cap\mathbb{R}$ so that the algebra is isomorphic to $\Big(\frac{a, b}{F(\sqrt{-d})}\Big)$ where $a,b\in F$, and $d\in F^+$. But yet it is still a division algebra (else the manifold is most likely not compact). Can this be (at least for some $F, d$ choices) phrased in terms of the divisors of $a$ and $b$?

I expect one would need to already know the definitions involved to answer this, but I will supply the arithmetic ones below for the sake of readers. After all, the more people who know this, the more people I have to talk to!

To a hyperbolic 3-manifold $M$ is associated a Kleinian group $\Gamma\cong\pi_1(M)$ represented in $\mathrm{PSL}_2(\mathbb{C})$. The trace field of $M$ is the field $\mathbb{Q}(\{\mathrm{tr}(\gamma)\mid\gamma\in\Gamma\})$, which I'll denote by $k_0 M$. Using the character variety and algebraic geometry, it follows that this is a number field (a finite extension of $\mathbb{Q}$), and by Mostow rigidity it is a manifold invariant. Using the same setup, the quaternion algebra of $M$ is defined as $$\big\{\sum_{i=1}^n t_i\gamma_i\mid t_i\in k_0M,\gamma_i\in\Gamma,n\in\mathbb{N}\big\}$$ and is commonly denoted by $A_0M$.
A quaternion algebra is a 4-dimensional central simple algebra, and it can be proven that $A_0M$ is such a thing using the Skolem-Noether theorem. This is a stronger manifold invariant. These algebras (provided the field is not of characteristic 2, which doesn't matter here since we're using number fields) necessarily take the following form. If $K$ is the field it is over, then the algebra looks like $$K\oplus Ki\oplus Kj\oplus Kij$$ where $i^2=a, j^2=b, ij=-ji$, with $a,b\in K\setminus\{0\}$. And we denote this by the Hilbert symbol $\Big(\frac{a,b}{K}\Big)$. A property of these algebras is that they are identified up to isomorphism by the ramification of their places (field embeddings and prime ideals) over $K$. Quaternion algebras of non-compact manifolds never have ramification over their primes. Quaternion algebras of compact manifolds typically do have ramification at their primes, but there are some strange examples where they don't.

• Let me mention that I'm writing my thesis right now and have a bunch of cool results for things that satisfy these conditions. Thanks to @BenLinowitz below, the results can be expressed much more neatly. I will post a link in the comments to relevant papers on the arXiv once they are there (next few months probably). – j0equ1nn Oct 23 '15 at 23:32
• I'm glad that my response was useful to you. While it is implicit in my response, I should add that there are infinitely many manifolds satisfying your conditions. In a moment I will add a paragraph making this explicit. – user1073 Oct 24 '15 at 1:32
• @BenLinowitz: I noticed what you said about the infinite class of examples, but thanks for including the explanation. It's nice for more people to see how quaternion orders play into arithmetic groups like this. – j0equ1nn Oct 24 '15 at 20:51
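The relations $i^2=a$, $j^2=b$, $ij=-ji$ quoted above determine the entire multiplication table of $\Big(\frac{a,b}{K}\Big)$. A minimal computational sketch (illustration only, not a tool from the question), storing an element as its coefficients $(t,x,y,z)$ with respect to the basis $1,i,j,ij$:

```python
# Multiply two elements of the quaternion algebra (a,b / K), each given as
# coefficients (t, x, y, z) of the basis 1, i, j, ij, using only the
# relations i^2 = a, j^2 = b, ij = -ji.
def quat_mul(u, v, a, b):
    t1, x1, y1, z1 = u
    t2, x2, y2, z2 = v
    return (
        t1*t2 + a*x1*x2 + b*y1*y2 - a*b*z1*z2,  # coefficient of 1
        t1*x2 + x1*t2 + b*(z1*y2 - y1*z2),      # coefficient of i
        t1*y2 + y1*t2 + a*(x1*z2 - z1*x2),      # coefficient of j
        t1*z2 + z1*t2 + (x1*y2 - y1*x2),        # coefficient of ij
    )
```

With $a=b=-1$ this recovers Hamilton's quaternions, and in general $(ij)^2=-ab$, which the code reproduces.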
Let $k$ denote the trace field of $M$ and $B=\left(\frac{a,b}{k}\right)$ be the associated quaternion algebra. It is known (see Maclachlan-Reid, Theorem 8.3.2) that $k$ has a unique complex place. Noting that every proper subfield of a number field with a unique complex place is totally real, we see that the only way for $k$ to contain an imaginary quadratic field is for $k$ to actually be an imaginary quadratic field. Furthermore, Theorem 8.2.3 of Maclachlan-Reid implies that $M$ will be compact so long as $B\neq \mathrm{M}_2(\textbf{Q}(\sqrt{-d}))$. Combining this with the observation from the previous paragraph, we see that your conditions 1 and 2 will be satisfied if and only if $B$ is a quaternion division algebra which is defined over an imaginary quadratic field. Now we need to incorporate your third condition, which seems to me to be the most interesting. Suppose that $M$ contains an immersed totally geodesic surface. Then the results of Section 9.5 of Maclachlan-Reid imply that there is an indefinite quaternion algebra $B'$, defined over $\textbf{Q}$, such that $B\cong B'\otimes_\textbf{Q} k$. Because $\left(\frac{a,b}{\textbf{Q}}\right)\otimes_\textbf{Q} k\cong \left(\frac{a,b}{k}\right)$, the Hilbert symbol of $B$ satisfies your condition 3. Putting all of this together, we see that if $M$ is an arithmetic hyperbolic $3$-manifold which is derived from a quaternion division algebra $B$ defined over an imaginary quadratic field and $M$ contains an immersed totally geodesic surface then $M$ satisfies your three conditions. An arithmetic hyperbolic $3$-manifold contains one totally geodesic surface if and only if it contains infinitely many commensurability classes of totally geodesic surfaces (this also follows from the results of Maclachlan-Reid, Section 9.5), so the aforementioned class of manifolds all contain infinitely many commensurability classes of totally geodesic surfaces. Whether this makes these manifolds interesting...I can't say. 
I do not know of a geometric characterization of your third condition, though I would like to point out Proposition 5 of Chinburg and Reid's paper Closed hyperbolic $3$-manifolds whose closed geodesics all are simple: Proposition: Let $M$ be an arithmetic hyperbolic $3$-manifold derived from a quaternion algebra $B=\left(\frac{a,b}{k}\right)$. If $M$ has a non-simple closed geodesic then $a,b$ can be chosen so that $a\in k$ and $b\in k\cap \textbf{R}$. This proposition shows that if the Hilbert symbol of $B$ cannot be written in a certain form then all closed geodesics of $M$ are simple. (Providing examples of such manifolds was the point of Chinburg and Reid's paper.) It is not known whether this Hilbert symbol obstruction is the only thing which prevents $M$ from having non-simple closed geodesics. If you believe that it is the only obstruction, then you would expect any arithmetic hyperbolic $3$-manifold $M$ satisfying your conditions to have lots of non-simple closed geodesics. Regarding your request to reinterpret condition 3 in terms of the primes which ramify in $B$, the results of Section 4 of the Chinburg-Reid paper should be very helpful and are meant to provide exactly such an interpretation. Added: There are infinitely many hyperbolic 3-manifolds which satisfy the OP's three conditions. One such family may be obtained as follows. Let $B$ be a rational quaternion division algebra which is split at the real place of $\bf{Q}$. Let $k$ be an imaginary quadratic field which does not embed into $B$. Then $A:=B\otimes_{\textbf{Q}} k$ is a quaternion division algebra over $k$. (Here I have used the fact that a quadratic field $L$ embeds into $B$ if and only if $B\otimes_\textbf{Q} L \cong \mathrm{M}_2(L)$.) Let $\mathcal O$ be a maximal order of $A$ and $\mathcal{O}^1$ the multiplicative subgroup of $\mathcal{O}^*$ generated by elements of reduced norm $1$.
Let $\psi: A\hookrightarrow \mathrm{M}_2(\textbf{C})$ be the map induced by the inclusion $A\hookrightarrow A\otimes_k \textbf{C}\cong \mathrm{M}_2(\textbf{C})$. Finally, let $\Gamma_\mathcal{O}$ denote the image in $\mathrm{PSL}_2(\textbf{C})$ of $\mathcal{O}^1$ under the map $\psi$ composed with the projection $P: \mathrm{SL}_2(\textbf{C})\rightarrow \mathrm{PSL}_2(\textbf{C})$. Then $\Gamma_\mathcal{O}$ is a discrete subgroup of $\mathrm{PSL}_2(\textbf{C})$ of finite covolume which is cocompact and has trace field $k$. Let $\Gamma$ be a finite index subgroup of $\Gamma_\mathcal{O}$ which is torsion-free and $M=\textbf{H}^3/\Gamma$ be the corresponding hyperbolic $3$-manifold. The arguments that I gave in my original response show that $M$ satisfies the three conditions of the OP's question. There are infinitely many commensurability classes of such $M$ because there are infinitely many imaginary quadratic fields $k$ which do not embed into $B$. • I had a short conversation with Alan Reid where he told me that I was basically thinking about examples with immersed totally geodesic surfaces. I understood what he was saying for about 10 minutes and then lost it, so thanks so much for this. I wonder now about non-arithmetic ones. I shall also try to look at the Chinburg-Reid paper but I don't have access on that link. I will try emailing Reid.. – j0equ1nn Oct 23 '15 at 2:44
http://eprint.iacr.org/2001/023/20010309:162339
## Cryptology ePrint Archive: Report 2001/023

Martin Hirt and Ueli Maurer

Abstract: We present a very efficient multi-party computation protocol unconditionally secure against an active adversary. The security is maximal, i.e., active corruption of up to $t<n/3$ of the $n$ players is tolerated. The communication complexity for securely evaluating a circuit with $m$ multiplication gates over a finite field is $\mathcal{O}(mn^2)$ field elements, including the communication required for simulating broadcast. This corresponds to the complexity of the best known protocols for the passive model, where the corrupted players are guaranteed not to deviate from the protocol. Even in this model, it seems to be unavoidable that for every multiplication gate every player must send a value to every other player, and hence the complexity of our protocol may well be optimal. The constant overhead factor for robustness is small and the protocol is practical.

Category / Keywords: cryptographic protocols / multi-party computation, optimal efficiency, unconditional security
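The $\mathcal{O}(mn^2)$ figure follows directly from the abstract's intuition: for each of the $m$ multiplication gates, every one of the $n$ players sends a field element to every other player. A back-of-the-envelope count of that traffic (illustration only):

```python
# Field elements exchanged if, for each of m multiplication gates, every one
# of the n players sends one element to each of the other n - 1 players.
def field_elements_sent(m, n):
    return m * n * (n - 1)
```

For example, a circuit with a million multiplication gates evaluated by 10 players gives 9 x 10^7 field elements, i.e. the m*n^2 scaling stated in the abstract.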
https://jpalliativecare.com/impact-of-covid-19-pandemic-on-palliative-care-workers-an-international-cross-sectional-study/
Original Article 27(2); 299-305 doi: 10.25259/IJPC_6_21

# Impact of COVID-19 Pandemic on Palliative Care Workers: An International Cross-sectional Study

Department of Palliative Medicine, Medical Faculty, RWTH Aachen University, Aachen, Germany; International Association for Hospice and Palliative Care, Houston, Texas, United States. Corresponding author: Liliana De Lima, International Association for Hospice and Palliative Care, Houston, Texas, United States. ldelima@iahpc.com

Licence: This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Pastrana T, De Lima L, Pettus K, Ramsey A, Napier G, Wenk R, et al. Impact of COVID-19 pandemic on palliative care workers: An international cross-sectional study. Indian J Palliat Care 2021;27(2):299-305.

## Objectives:

The COVID-19 pandemic and the measures taken to mitigate its spread have affected countries in different ways. Healthcare workers, in particular, have been impacted by the pandemic and by these measures.
This study aims to explore how COVID-19 has impacted palliative care (PC) workers around the world.

## Materials and Methods:

Online survey of members of the International Association for Hospice and PC during the initial months of the COVID-19 pandemic. Convenience sampling was used. Statistical descriptive and contingency analyses and Chi-square tests with P < 0.05 were conducted.

## Results:

Seventy-nine participants (RR = 16%) from 41 countries responded. Over 93% of those who provide direct patient care reported feeling very or somewhat competent in PC provision for patients with COVID-19. Eighty-four percent felt unsafe or only somewhat safe when caring for patients with COVID-19. Level of safety was associated with competence (P < 0.001). Over 80% reported being highly or somewhat affected in their ability to continue working in their PC job, in providing care to non-COVID patients and in staff availability in their workplace. About 37% reported that availability of and access to essential medicines for PC were highly or somewhat affected, more so in low-income countries (P = 0.003).
## INTRODUCTION

The "tsunami of suffering"[1] unleashed by the global pandemic of SARS-CoV-2 disease 2019 (COVID-19) underscores the crucial role of palliative care (PC) in the comprehensive relief of health-related suffering[2] and the need to integrate PC in epidemic response and preparedness plans. Health systems that integrate PC in their COVID-19 response prioritise symptom management and appropriate communication to relieve the acute physical, psychosocial and spiritual suffering of affected patients and families.[3] PC is a component of health care, including in acute conditions and emergency situations.[4] There is no precedent for such a global pandemic in the modern world; however, the role of PC is applicable in COVID-19 and other similar diseases. Since the first COVID-19 case was reported in December 2019 by Dong et al.,[5] the pandemic has affected countries in different ways. Governments have implemented control measures in an effort to minimise transmission and mortality and mitigate the impact on health systems. Many countries enforced lockdowns, allowing only essential workers to leave their homes. Health care systems in high-income countries have focused on expanding intensive care resources and delaying non-essential interventions to expand their ability to treat patients with COVID-19 and respiratory failure. All these measures have had both positive and negative effects on society in general and on health workers in particular.[6-9] The importance of PC in pandemics and humanitarian crises and emergencies has been reported and is increasingly being recognised.[10-12] In an effort to evaluate and assess the impact of the COVID-19 pandemic on PC workers, the International Association for Hospice and PC (IAHPC) conducted a cross-sectional online survey using Survey Monkey with its individual members. The survey consisted of multiple choice and open-ended questions.
For example, participants could select among several options, including Highly affected; Somewhat affected, Not affected at all; Don’t know/Unsure. We expected that participants would report being highly affected or somewhat affected. The objective of the study was to explore how COVID-19 has impacted PC workers around the world. ## MATERIALS AND METHODS An ethics review board of the Fundacion FEMEBA in Argentina approved the study. The study consisted of a 20-question self-assessment survey, grouped into three blocks: The first block “Personal capacity and safety” assessed the impact of COVID-19 on participants (5 items). Questions included how respondents assessed their level of competency in caring for patients with COVID-19, how safe they felt when caring for those patients using a 3-point scale: “Not at all safe,” “Somewhat safe” and “Very safe,” and the availability of personal protective equipment (PPE) with the following scale “Appropriate,” “Adequate,” “Insufficient” and “We do not have any PPE.” A second block “Impact of the COVID-19 pandemic in care provision” of five questions assessed the impact on their ability to work and the impact on their institutions, using a 3-point answer scale (“Not affected at all,” “Somewhat affected” and “Highly affected” with the additional option “Don’t know/unsure”). This block covered the following domains: (1) Ability to continue working in usual role/job in PC, (2) service provision to non-COVID-19 PC patients, (3) staff availability in their own setting and (4) access to essential medicines for pain relief and PC. Respondents were provided space to elaborate on the situation and describe any adaptive/ coping strategies they may have implemented. A third block of five questions referenced the global policy documents and resolutions relevant to the COVID-19 pandemic and the advocacy strategies of national PC associations. 
Participants were given the opportunity to provide comments in text boxes, explaining or expanding on their responses. Five IAHPC staff members piloted the survey for content validity and recommended changes that were implemented prior to distribution. This paper describes the quantitative results from the first two blocks of questions described above. A separate report with the qualitative analysis of the comments will be prepared and submitted for publication. The survey was developed using SurveyMonkey© and distributed to IAHPC individual members by email. The survey was opened on 28 May and closed on 30 June 2020. An invitation to participate was sent on the 1st day to 979 individuals through email followed by a reminder on 22 June. Participants were provided with general information about the study as well as the objectives of the survey. Before responding, they had to confirm that their participation was voluntary and that they were 18 years or older. The survey was not anonymous, and participants were asked if an IAHPC staff member could contact them for follow-up interviews. As a gesture of gratitude, IAHPC extended the annual memberships of those who completed the survey by 3 months. Data were collected in Excel. Quantitative data were cleared of all incomplete data sets and exported to SPSS (v. 25, IBM Corporation, Armonk, USA) for the analyses. Data were stored in a secure account in the cloud, using Microsoft Office OneDrive. A statistical descriptive analysis as well as contingency analysis of all selected questions was conducted. The countries were dichotomised in high/more resourced (high-income countries [HICs] and upper-middle-income countries [UMICs]) and low/fewer resourced (lower-middle-income countries [LMICs] and low-income countries [LICs]). 
According to the World Bank, in 2019, countries with a GNI per capita of $1,035 or less were classified as low income, $1,036-$4,045 as lower-middle income, $4,046-$12,535 as upper-middle income, and $12,536 or more as high income.[13] Chi-square tests were used for the latter, considering P < 0.05 statistically significant.

## RESULTS

The email was opened by 494 members. Seventy-nine participants (response rate = 16%) from 41 countries responded to the survey, representing between 14% and 22% of all the countries in each income category. Participants were mostly from HICs (34.2%) followed by LMICs (33%). Table 1 shows the distribution of participants by income group according to the World Bank classification. The regional representation was proportional to the IAHPC membership distribution[14] in each of the World Health Organisation's (WHO) regions. Most of the participants were located in countries in the Americas (WHO region PAHO, n = 24; 30.4%) followed by Africa (WHO region AFRO, n = 17; 21.5%) and the Western Pacific (WHO region WPRO, n = 16; 20.3%).

### Personal capacity and safety

Twenty-three (29.1%) of the participants provide direct patient care for COVID-19 patients. Over 93% of those reported feeling very or somewhat competent in the provision of PC for patients with COVID-19 [Table 2]. No significant association was found between level of competence and having cared for patients with COVID-19 (Chi2 [2, n = 59] = 0.139, P = 0.933) nor between level of competence and country income group (Chi2 [4, n = 63] = 0.829, P = 0.935). More than half of the participants (n = 45; 57%) reported availability of PPE as adequate or appropriate. Participants in countries with more resources (HIC and UMIC) reported this significantly more often than those in countries with fewer resources (LMIC and LIC) (Chi2 [2, n = 67] = 10.946, P = 0.004).
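The grouping and significance testing described in the Materials and Methods can be sketched as follows. The income thresholds are the World Bank 2019 values quoted above; the chi-square helper is a generic, dependency-free Pearson statistic (the counts in any example are hypothetical, not the study's data):

```python
# World Bank 2019 income classification (GNI per capita, USD) and a
# dependency-free Pearson chi-square statistic for a contingency table.
def income_group(gni_per_capita):
    if gni_per_capita <= 1035:
        return "LIC"
    if gni_per_capita <= 4045:
        return "LMIC"
    if gni_per_capita <= 12535:
        return "UMIC"
    return "HIC"

def chi_square(table):
    """Pearson chi-square statistic; df = (rows - 1) * (cols - 1)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    return sum((obs - row_tot[i] * col_tot[j] / n) ** 2
               / (row_tot[i] * col_tot[j] / n)
               for i, row in enumerate(table)
               for j, obs in enumerate(row))
```

The p-value is then read from the chi-square distribution with the appropriate degrees of freedom (e.g. scipy.stats.chi2.sf) and compared against the study's 0.05 threshold.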
A large percentage (84.2%) of participants who provide direct patient care felt either unsafe or only somewhat safe when caring for patients with COVID-19 (n = 9 and 39, respectively). A highly significant association was found between the perceived level of safety and the level of competence (Chi2 [4, n = 57] = 27.282, P < 0.001) and the availability of PPE (Chi2 [4, n = 52] = 14.208, P = 0.007). No significant association was found between the level of safety and country income group (Chi2 [4, n = 57] = 2.433, P = 0.296).

### Impact of the COVID-19 pandemic on care provision

Figure 1 summarises participants' responses regarding the impact of the pandemic on their institutions' and their personal ability to continue providing care to patients with PC needs.

### Ability to continue working

A large percentage (n = 66; 83.5%) of the participants reported being highly affected or somewhat affected in their ability to continue working in their PC role/job. A significant association was found between caring for patients with COVID-19 and being highly affected by the pandemic in their work (Chi2 [2, n = 68] = 8.958, P = 0.011). Ability to continue working was not associated with feeling competent (Chi2 [4, n = 63] = 3.454, P = 0.485) or feeling safe (Chi2 [4, n = 57] = 2.815, P = 0.589), and was unrelated to country income group (Chi2 [4, n = 57] = 7.309, P = 0.120).

### Care provision to non-COVID PC patients

The vast majority of the participants reported that the COVID-19 pandemic highly affected or somewhat affected (42% and 40%, respectively) their care provision to non-COVID PC patients in community or home settings. No significant association was found between the impact on service provision to non-COVID PC patients and country income group (Chi2 [2, n = 75] = 7.309, P = 0.178).
### Staff availability

More than half of the participants (n = 44; 55.7%) reported that the COVID-19 pandemic "somewhat affected" staff availability in their workplace (as a result of death, sick leave, furlough, termination or inability to work related to lockdowns, e.g., childcare), and 15.2% (n = 12) reported it as highly affected.

### Access to essential medicines for pain relief and PC in the work setting

About 30% (n = 11) reported that the availability of and access to essential medicines for PC and pain relief in their work setting were highly affected, and 24.1% (n = 19) reported availability as somewhat affected by the pandemic. Access to essential medicines for pain relief and PC for COVID and non-COVID PC patients was significantly less impacted in countries with more resources (HICs and UMICs) compared to LMICs and LICs (Chi2 [2, n = 67] = 11.893, P = 0.003).

## DISCUSSION

In assessing the impact of the COVID-19 pandemic, we found that all aspects related to "personal capacity and safety" and "care provision" were affected.

### Personal capacity and protection

Only one-third of the PC professionals responding to the survey reported caring for patients with COVID-19, indicating both the limited integration of PC with communicable disease divisions and possibly reflecting a lack of awareness of the need for such integration.[15] Numerous publications have called for the integration of PC in rapid response plans and recognise the benefits of this type of comprehensive strategy.[1,10,16,17] On a positive note, a large majority of respondents reported feeling very or somewhat competent to provide PC to COVID-19 patients. The PC community has called for the inclusion of basic PC training in all medical and nursing schools[18,19] for many years.
This includes the knowledge and skills to appropriately manage symptoms frequently occurring in COVID-19 patients, such as dyspnoea, depression, diarrhoea and anxiety, and the competency to communicate appropriately with patients and families regarding goals of care and expectations of treatment and interventions.[3]

Feelings of safety were related to perceptions regarding the adequacy of PPE as well as perceptions of competence. According to the WHO, only 26% of member states report having occupational safety plans for healthcare workers.[20] Almost 25% of the respondents (all located in LMICs and LICs) cited insufficient availability of PPE or none at all. When asked about strategies and adaptive responses, participants reported using innovative approaches, such as the uptake and integration of telemedicine and the manufacture of inexpensive PPE from large plastic garbage bags. Respondents' comments will form the basis of a qualitative paper based on the survey analysis.

### Impact of the COVID-19 pandemic on care provision

A large percentage of the participants reported being highly or somewhat affected in their ability to continue in their usual role/job in PC, regardless of their respective country's income group, reflecting the global scale of this pandemic. Similarly, a vast majority of the participants reported that the COVID-19 pandemic highly or somewhat affected their care provision to non-COVID PC patients in the patients' own setting, regardless of the country's income level. Both responses underscore the need to develop and implement strategies to ensure that patients with non-communicable diseases and other conditions continue to access and receive much-needed treatment and care. These include implementing safety protocols for consultation services, proactively reaching out to patients who may have missed appointments and using telemedicine technology when possible.
A large majority of the participants reported that the pandemic affected staff availability, as their coworkers dropped out of the workforce due to quarantine, sickness or death, or were unable to work due to lockdown effects, for example, the closure of in-person classes in schools forcing parents to stay at home to care for children. This resulted in work overload, feelings of burnout and increased levels of anxiety and stress among healthcare workers.

Respondents in countries with fewer resources more frequently reported a negative impact of the pandemic on access to essential medicines for pain relief and PC than their counterparts in HICs. This significant finding underscores the inequity of the high burden of health-related suffering borne by patients in poorly resourced settings. The finding is confirmed by numerous pre-pandemic reports and extensive studies showing limited availability of and access to essential medicines for pain relief and PC in LICs and LMICs.[21-24] Some medicines, such as opioids, may be used for non-COVID and COVID patients alike, including for pain relief and breathlessness. The global PC community[25-27] and UN special organisations such as the WHO, the Commission on Narcotic Drugs,[28] the Human Rights Council[29] and the International Narcotics Control Board[30] have for years been calling on member states to improve availability and ensure access to essential PC medicines. Some of these essential medicines have been regulated under the international drug control conventions for more than half a century, which makes improving system-level access, particularly during a pandemic, a high-level, evidence-based and multistakeholder process. In member states that have yet to undertake this process, shortages and stockouts are endemic.
In many countries and regions where survey participants live and work, access to morphine for the treatment of respiratory distress associated with COVID-19 is limited or nonexistent.[31] The COVID-19 pandemic has resulted in enormous suffering across the world.[32] Although there have been some national and regional reports on the preparedness of PC services to respond to the pandemic,[33,34] this study is, to our knowledge, the first to report on the impact of the COVID-19 pandemic on PC workers around the world, with participation of individuals in different geographical regions and income groups. The extent of the impact is perceived and experienced differently in different countries, and this study provides some insight into this differential impact.

### Study limitations

Limitations of this study are inherent to any convenience sample or cross-sectional study. Self-reporting bias may also have affected responses. The content of the questionnaire was validated, but no statistical test was carried out for further validation. The IAHPC is an international membership organisation with members located in all regions of the world and, thus, one of the few with such a database of PC workers. This sample may not be representative of the global PC workforce, and thus the results are not globally generalisable. However, this paper provides a glimpse into the challenges and situations faced by PC providers around the world during the COVID-19 pandemic. The response rate was low, and there may be several reasons. During the time when the survey was conducted: (1) Many healthcare workers were struggling with a high workload due to the pandemic. (2) Numerous surveys and webinars were being disseminated and offered, possibly affecting the willingness, ability and interest of participants to respond. (3) Many workers were under lockdown and ordered to stay at home.
Workers from LICs often face technical difficulties with internet access, which affects their ability to participate. (4) Some may not feel comfortable sharing information on what may be considered failures or limitations of appropriate protection, human resource management or response preparedness in their workplaces or health systems. In spite of this, a response rate below 10% is not uncommon for online surveys.[35]

The survey was implemented during a limited time frame in the 1st months of the epidemic trajectory, but the course of the pandemic varies between countries regarding onset, speed, severity and response. Therefore, it was not possible to relate individual responses to the phase of the pandemic in each participant's country or extrapolate any data for specific pandemic stages. The follow-up qualitative analysis of the respondents' comments will add to a more in-depth interpretation of the data.

## CONCLUSION

### Declaration of patient consent

Patient consent was not required as there are no patients in this study.

### Financial support and sponsorship

Nil.

### Conflicts of interest

There are no conflicts of interest.

## References

1. , , , , . The key role of palliative care in response to the COVID-19 tsunami of suffering. Lancet. 2020;395:1467-9. 2. , , , , , , et al. Redefining palliative care-A new consensus-based definition. J Pain Symptom Manage. 2020;60:754-64. 3. . Clinical Management of COVID-19 - Interim guidance. . Geneva: WHO; Available from: https://apps.who.int/iris/rest/bitstreams/1278777/retrieve [Last accessed on 2020 Oct 03] 4. . Integrating Palliative care and Symptom Relief Into Responses to Humanitarian Emergencies and Crises: A WHO Guide Geneva: World Health Organization; . 5. , , . An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect Dis. 2020;20:533-4. 6. . The Distributional Effects of COVID-19 and Mitigation Policies Globalization and Monetary Policy Institute Working Paper No 400. .
Available from: https://ssrn.com/abstract=3686276 or http://dx.doi.org/10.24149/gwp400 [Last accessed on 2020 Oct 03] 7. , , , , , , et al. Impact on mental health care and on mental health service users of the COVID-19 pandemic: A mixed methods survey of UK mental health care staff. Soc Psychiatry Psychiatr Epidemiol. 2021;56:25-37. 8. , , , , , , et al. Effect of COVID-19 lockdown on alcohol consumption in patients with pre-existing alcohol use disorder. Lancet Gastroenterol Hepatol. 2020;5:886-7. 9. , , . Physical and mental health impacts of COVID-19 on healthcare workers: A scoping review. Int J Emerg Med. 2020;13:40. 10. , , , , , , et al. The role and response of palliative care and hospice services in epidemics and pandemics: A rapid review to inform practice during the COVID-19 pandemic. J Pain Symptom Manage. 2020;60:e31-40. 11. , , , , , , et al. Palliative care in humanitarian crises: A review of the literature. J Int Humanitarian Action. 2018;3:5. 12. , , , , , , et al. Palliative care in humanitarian crises: Always something to offer. Lancet. 2017;389:1498-9. 13. . World Bank Country and Lending Groups. . Available from: https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups [Last accessed on 2020 Sep 13] 14. . IAHPC Members. . Available from: https://hospicecare.com/members-section/iahpc-members-list/ [Last accessed on 2020 May 17] 15. , , . Integration of palliative care into COVID-19 pandemic planning. BMJ Support Palliat Care. 2021;11:40-4. 16. , , , . Pandemic palliative care: Beyond ventilators and saving lives. CMAJ. 2020;192:E400-4. 17. , . To face coronavirus disease 2019, surgeons must embrace palliative care. JAMA Surg. 2020;155:681-2. 18. , , , , , . Nursing education on palliative care across Europe: Results and recommendations from the EAPC Taskforce on preparation for practice in palliative care nursing across the EU based on an online-survey and country reports. Palliat Med. 
2021;35:130-41. 19. , , . Primary palliative care education for trainees in U.S. medical residencies and fellowships: A Scoping Review. J Palliat Med. 2021;24:354-75. 20. . Weekly Operational Update on COVID-19. . Available from: https://www.who.int/docs/default-source/coronaviruse/weekly-updates/wou-9-september-2020-cleared.pdf?sfvrsn=d39784f7_2 [Last accessed on 2020 Sep 25] 21. , , , , , , et al. Use of and barriers to access to opioid analgesics: A worldwide, regional, and national study. Lancet. 2016;387:1644-56. 22. , , , , , . Solving the global crisis in access to pain relief: Lessons from country actions. Am J Public Health. 2019;109:58-60. 23. , , , , . The Global Opioid Policy Initiative (GOPI) project to evaluate the availability and accessibility of opioids for the management of cancer pain in Africa, Asia, Latin America and the Caribbean, and the Middle East: Introduction and methodology. Ann Oncol. 2013;24:i7-13. 24. , , . Global disparities in access to pain relief In: , ed. Pain: A Review Guide. Cham: Springer International Publishing; . p. 1185-9. 25. , . The Declaration Montreal: Access to pain management is a fundamental human right. Pain. 2011;152:2673-4. 26. , , , , , , et al. Alleviating the access abyss in palliative care and pain relief-an imperative of universal health coverage: The Lancet Commission report. Lancet. 2018;391:1391-454. 27. , , , , , , et al. An interdisciplinary working group to advocate universal palliative care and pain relief access. J Palliat Med. 2020;23:882-3. 28. . Ensuring Availability of Controlled Medications for the Relief of Pain and Preventing Diversion and Abuse Striking the Right Balance to Achieve the Optimal Public Health Outcome. . Discussion Paper Based on a Scientific Workshop New York: UN. Available from: https://www.unodc.org/docs/treatment/Pain/Ensuring_availability_of_controlled_medications_FINAL_15_March_CND_version.pdf [Last accessed on 2020 Sep 20] 29. . 
Right to Pain Relief: 5.5 Billion People Have no Access to Treatment, Warn UN Experts World Hospice and Palliative Care Day - Saturday 10 October 2015. . Available from: https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=16590 [Last accessed on 2020 Aug 30]
https://atcoder.jp/contests/abc067/tasks/abc067_b
Contest Duration: ~ (local time) (100 minutes)

B - Snake Toy / Time Limit: 2 sec / Memory Limit: 256 MB / Score: 200 points

### Problem Statement

Snuke has N sticks. The length of the i-th stick is l_i.

Snuke is making a snake toy by joining K of the sticks together. The length of the toy is represented by the sum of the lengths of the individual sticks that compose it. Find the maximum possible length of the toy.

### Constraints

• 1 \leq K \leq N \leq 50
• 1 \leq l_i \leq 50
• l_i is an integer.

### Input

Input is given from Standard Input in the following format:

N K
l_1 l_2 l_3 ... l_{N}

### Sample Input 1

5 3
1 2 3 4 5

### Sample Output 1

12

You can make a toy of length 12 by joining the sticks of lengths 3, 4 and 5, which is the maximum possible length.

### Sample Input 2

15 14
50 26 27 21 41 7 42 35 7 5 5 36 39 1 45

### Sample Output 2

386
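A straightforward approach (an unofficial solution sketch, not part of the problem page): since every stick length is positive, the sum is maximised by joining the K longest sticks.

```python
def max_snake_length(k, lengths):
    # Greedy: sort descending and sum the K longest sticks.
    return sum(sorted(lengths, reverse=True)[:k])

# Sample 1: joining the sticks of lengths 3, 4 and 5 gives 12
print(max_snake_length(3, [1, 2, 3, 4, 5]))  # 12
# Sample 2
print(max_snake_length(14, [50, 26, 27, 21, 41, 7, 42, 35, 7, 5, 5, 36, 39, 1, 45]))  # 386
```

Given the constraints (N ≤ 50), sorting is more than fast enough within the 2-second limit.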
https://science.astron.nl/telescopes/lofar/observing-with-lofar/regular-proposals/instructions/
# Instructions

## Proposal Call to the Worldwide Community

International LOFAR Telescope Cycle 18: 01 June 2022 – 30 November 2022

Submission deadline: Wednesday 09 March 2022, 12 UT. Submission only via the online tool NorthStar.

** Proposers must ensure their justification files adhere to the instructions given in NorthStar, including restrictions on formats and proposal length, repeated below here. **

### WORLD-WIDE ACCESS:

Time on the ILT is available to scientists from the worldwide community. The single-cycle time offered in this call is allocated in tandem between National Consortia and the ILT-PC. Scientific excellence and breadth as well as the technical strength of the proposals are taken into account in the evaluations. The ILT-PC will define restricted data access rights (default period 1 year) based on the specific science goals and arguments in the proposal. Other groups may be allocated simultaneous access for different science.

### SINGLE-CYCLE ONLY:

For Cycle 18, only single-cycle proposals are invited in any supported general-user mode. Single-cycle projects should achieve their science goals with an allocation only in the upcoming semester (1 June 2022 – 30 November 2022). Proposals to supplement Long-Term science already reviewed following previous deadlines are not eligible for the present call; pursuit of follow-up science related to results already in hand may be proposed.

### LST AND OTHER RESOURCE AVAILABILITY:

LST availability: The availability of observing time as a function of LST, after taking into account the already active long-term projects, is presented here. Some further limitations due to tentative allocations made to Long Term proposals are not shown, since the ILT-PC may still tension off the merits of Cycle 18 proposals for that time.

Processing time and/or data storage capacity: These can be limiting resources.
Each proposal must request processing time to match the observing time within appropriate documented ratios (see documentation online), or should justify how processing will proceed elsewhere. Support staff effort required to support the proposed projects is taken into account during allocation. An online tool is available to understand whether a proposal entails a low, medium, or high support load of the Science Data Centre Operations staff. During the allocation process, the ILT-PC will be advised about the amounts of support effort requested and available, and these will be used as boundary conditions for proposal allocations; support will be treated as a scarce resource in the same way as has become customary on LOFAR for processing and data storage resources.  Advice is available online on how proposers can reduce the support load of their project by a prudent choice of observing mode. The proposal may also indicate that the support will be carried out by members of the proposing team; in that case, both the expertise and the work plan must be carefully argued in the proposal. Priority classes: Due to ongoing upgrade work, system availability and stability will vary with time. The ILT-PC will classify observing allocations as A and/or B priority time; roughly half in each. Allocations in A priority will have a high likelihood to yield a successful observation; in case such an observation suffers substantial failures, it will be repeated (once) at a later date. Allocations in priority B will likely be realized partially; they are carried out on a best-efforts basis, where the total of all priority B allocations requires an optimistic availability scenario. ### SYSTEM CAPABILITIES: The ILT is a powerful radio telescope for frequencies below 240 MHz that offers state-of-the-art observing capabilities thanks to its phased-array technology with digital beam-forming. 
LOFAR delivers correlated visibility data for synthesis imaging, plus (in)coherently added single- and multiple-station data (several beam-formed modes) as well as transient buffer read-out, for example for studies of pulsars, transients and cosmic rays. LOFAR capabilities are described in detail online. Note the restrictions on the functionalities offered in Cycle 18, as described on this page. Proposals should request only the available system observing modes and functionalities described online.

### CO-OBSERVING OPTION FOR STANDARD HBA OR LBA IMAGING WITH LOFAR SKY SURVEYS:

The possibility is offered to carry out imaging in co-observing mode with the LOFAR Survey projects, in their standard operating mode. This has the advantage for co-observing PIs that calibration and standard imaging are a routine process under guidance by the survey teams (see details here for HBA and here for LBA).

### PILOT LINC PROJECT:

ASTRON is currently putting LINC, the direction-independent calibration pipeline, into production; it produces direction-independent calibrated visibilities, wide-band images of the target field and diagnostic plots. Details about this pipeline, including its performance, are given online. While LINC cannot yet be widely offered in Cycle 18, we will select a sample of appropriate projects that will be given the opportunity to obtain data products processed through this pipeline. PIs who are interested in taking advantage of this option for the reduction of their data should therefore clearly indicate that in the technical justification of their proposals.

### FURTHER PROPOSAL REQUIREMENTS:

•     For all proposals, the technical case must argue both the optimal and the minimal required total amount of time and other resources (and any requirements on cadence or time span). If the stated minimum time or other requirements cannot be allocated, the proposal will not be carried out at all.
•     Proposals for multiple observations must list and argue the preferred priority order; this will be considered by the ILT-PC along with other factors in making (possibly partial) allocations. •     Proposers are required to verify whether any relevant data are already available in the LOFAR Long Term Archive. In view of the novel and evolving character of the ILT, proposers are strongly urged to get in contact with Science Data Center Operations through the JIRA helpdesk well ahead of the deadline. Novice groups may wish to seek or request to be connected to suitable collaborators, and should also consider keeping the scope of initial projects modest while they become familiar with the complexities of data handling and analysis. The Science Data Centre Operations group will explore the possibility for a few users unfamiliar with the reduction of LOFAR data to come to ASTRON (or have online interaction) for assistance with this. If this is desired, it must be specified in the proposal. Limited travel subsidies for eligible users can be supported through the Horizon-Europe ORP project. Further details are given online. Dr. R.C. Vermeulen Director, International LOFAR Telescope ## Target List Order If proposals request multiple observations (e.g. multiple fields/pointings, or multiple instrumental settings) these will be taken in all cases to be listed in decreasing order of preference/priority of the proposers. In case only a partial allocation can be made, the ILT-PC may, based on its science assessment, decide to deviate from the proposed priorities, but the proposer priorities must anyway be clear in advance. It will be assumed when dynamic scheduling is carried out that the list of targets is in priority order. ## Target Declination Limits As a phased-array system installed on level ground, LOFAR has greatest sensitivity when observing at high elevations.  
Below approximately 30 degrees elevation, sensitivity drops significantly, such that the Sun becomes the only viable target for interferometric observation below about 10 degrees elevation. Commissioning observations have managed successful imaging of a target at -7 degrees declination, but imaging is not straightforward and the following points need to be noted:

• The thermal noise cannot be attained at these declinations;
• Short baselines have to be flagged;
• Some additional flagging of data may be required.

Furthermore, the shorter length of time that such targets are above a useable horizon can severely limit the u-v coverage attainable. Therefore, for interferometric observations, -5 degrees declination should be regarded as a lower limit, and targets should preferably be above the celestial equator. Proposers wishing to image targets below the celestial equator are expected to justify that their observing programme can attain the sensitivity and/or u-v coverage required.

Pulsar observations have been successfully carried out in beam-formed mode at declinations down to -29 degrees. In this mode, the main limitation is the sensitivity required and the duration of observation needed to attain this sensitivity.

## Support availability and user shared-support mode during Cycle 18

To calculate the support level required for your project, please use this tool. To maximize the telescope observing hours on sky, more observing hours have been offered than can be supported by ASTRON. Therefore, users can assist in running their own projects in a "user shared-support mode". To this aim, proposers should state this in the proposal, indicating that they are able to provide experienced personnel for the support activities of their full project (see details below). In this model, ASTRON personnel would only have a supervisory role. In addition, this page explains which functionality requires a higher or lower support load.
Project support consists of several labour-intensive or critical activities, such as:

• scheduling
• preparing observations
• reporting
• data handling

Depending on the specific characteristics of a project, the amount of support required can vary. When preparing a proposal, a tool is provided to determine the support load of a project (high, average or low). A detailed assessment of the amount of support needed (in hours) will be performed in the review process itself by ASTRON personnel. This amount will be treated as a limited resource. Projects requiring a high degree of support should seriously consider using the "user shared-support" mode. More details about the project features that can raise the degree of support are explained below.

User shared-support mode

Science teams that have proven expertise in particular LOFAR observing modes and/or observing support procedures can provide personnel to handle the support workload for their full project, i.e. users are expected to handle their own projects for the full budget of allocated telescope time. This should be clearly stated in the proposal. Before the start of each cycle, these users will be updated by ASTRON on current procedures and tools to handle the data flow for their project.

Examples of project features requiring high levels of support include, but are not limited to:

• Short observations or sets of observations, especially when scheduled on different days, e.g., 2-hour interferometric observations, "lucky imaging" and "interleaved observations".
• Parallel observations (e.g., commonly used setups for scintillometry and solar studies).
• Restrictive scheduling constraints, e.g., at specific orbital phases, or commensal with other telescopes;
• Responsive Telescope
• Updates to target lists during the cycle
• Manual execution of system tasks (e.g.
dynamic spectrum pipeline, manual ingests)
• Multi-beam observations with >4 beams

Projects can lower the required level of support from ASTRON by adopting the following criteria:

• A large fraction of user support in administering the project ["user shared-support mode", explained in the previous section]
• In particular, projects satisfying all or most of the requirements below are considered low support:
  • can use randomly occurring time slots with no LST constraint and flexible duration
  • can be rapidly started, e.g. because their setup does not require changes or can be automatically selected
  • are conducted almost entirely by the project team with little or no involvement from Observatory staff
  • have significant independent processing resources and place little or no pressure on the ILT data processing queue
  • have significant flexibility in terms of the minimum number of stations required.

## Instructions for Justification File Preparation

Instructions for the NorthStar justification file: proposers should make their case in a fully self-contained science justification, uploaded as an A4 pdf file. The page limit for the pdf file varies per call for proposals. The document should include the science justification, additional technical information that is not provided in the "technical questions" section within the NorthStar tool, and any desired ancillary material such as figures and tables. Page limits depend on the amount of observing time requested. The total of the science and technical justification, including any desired figures and tabular material, has a page limit of 4 pages for small requests, and the following rules may increase this limit to at most 10 pages:

1. Single-cycle proposals: the base allowance is 3 pages, plus 1 page per 250 hours of observing time requested (request 1 hour: 4 pages, from 251 hours: 5 pages, ..., from 1001 hours: 8 pages).
2. Long-term proposals follow the same rule as above, but 2 extra pages are allowed.
3. Progress reports for active long-term projects have a limit of 3 pages.

EXAMPLE: the maximum, in case >1000 hrs are requested, is 8 pages for single-cycle proposals and 10 pages for long-term proposals. Proposals exceeding the page limits will be rejected.

For long-term proposals, any specific requirement about how the observing time should be distributed between the four cycles must be clearly specified in the justification document. If this information cannot be found, it will be assumed that the observing time can be distributed equally between the four cycles.

Northstar accepts uploads of a single pdf file with a minimum font size of 11 pt (12 pt recommended). The pdf can be generated from, e.g., Microsoft Word, LaTeX, and through several other routes at the proposer's choice. An example LaTeX template is provided below with the appropriate sections and descriptions.

Note: all PIs should check the pdf of the proposal BEFORE submission, making sure that text does not overlap with headers and, in general, that no layout issues are present that make text unreadable. Proposals affected by layout issues may be rejected.

For a proposal to be considered fully submitted, the pdf file should contain at least the following section:

1. Scientific Rationale: scientific justification of the proposal

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Example TEX file %%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% How to generate the pdf %%%%%
%-> latex template.tex
%-> dvipdf template.dvi
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[a4paper,12pt]{article}
\oddsidemargin=-0.54cm
\evensidemargin=-0.54cm
\topmargin=-1cm
\textwidth=17cm
\textheight=22cm
\pagestyle{empty}
\begin{document}
%%%% Title of proposal
\begin{center}
{\bf \Large Proposal Title} \\*[3mm]
\end{center}
\section{Scientific Rationale}
%%%% TEXT OF JUSTIFICATION HERE!
%%%% Non-mandatory sections
% \section{Technical Addendum}
%%%% Additional technical justification which is not covered in the "Technical questions" section of the Northstar tool
%%%% Example sections for additional technical information
% \subsection{Sensitivity and instrument setup}
% \subsection{Data volumes/rates}
% \subsection{Processing requirements}
% \subsection{Data storage and LTA requirements}
\end{document}
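The page-limit rules above are simple enough to mechanize. The sketch below is an illustrative helper (my own, not an official ASTRON tool) that computes the limit from the requested observing time:

```python
import math

def page_limit(hours: float, long_term: bool = False) -> int:
    """Science-justification page limit per the rules above:
    base 3 pages plus 1 page per started block of 250 h, capped at 8;
    long-term proposals are allowed 2 extra pages (overall cap 10)."""
    pages = min(3 + math.ceil(hours / 250), 8)
    if long_term:
        pages += 2
    return pages

# request 1 hour -> 4 pages; from 251 hours -> 5; from 1001 hours -> 8
assert page_limit(1) == 4 and page_limit(251) == 5 and page_limit(1001) == 8
```

This reproduces the worked example: a single-cycle request over 1000 hrs tops out at 8 pages, and the same request as a long-term proposal at 10.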
https://encyclopediaofmath.org/wiki/Belousov-Zhabotinskii_reaction
# Belousov–Zhabotinskii reaction

The Belousov–Zhabotinskii reaction has revealed most of the well-known routes to chaos, including period-doubling, intermittency, quasi-periodicity, frequency locking, and fractal torus [a1], [a2], [a3], [a4], [a5]. However, although the data have been shown to display unambiguous features of deterministic chaos, the understanding of the nature and the origin of the observed behaviour has been incomplete. In 1976, O.E. Rössler suggested an intuitive interpretation to explain chemical chaos. His feeling was that non-periodic wandering trajectories might arise in chemical systems from a pleated slow manifold (Fig.a1a), if the flow on the lower surface of the pleat had the property of returning trajectories to a small neighbourhood of an unstable focus lying on the upper surface. More recently, Rössler's terminology of spiral-type, screw-type and funnel-type strange attractors has been revisited in terms of chaotic orbits that occur in nearly homoclinic conditions [a6]. By Shil'nikov's theorem, there exist uncountably many non-periodic trajectories in systems that display a homoclinic orbit bi-asymptotic to a saddle-focus at $O$, provided that $\rho/\lambda < 1$, where the eigenvalues of $O$ are $(-\lambda, \rho \pm i\omega)$. This subset of chaotic trajectories is actually in one-to-one correspondence with a shift automorphism with an infinite number of symbols. Since homoclinic orbits are structurally unstable objects lying on codimension-one hypersurfaces in the constraint space, one can reasonably hope to cross these hypersurfaces when following a one-parameter path. The bifurcation structure encountered near homoclinic orbits involves infinite sequences of saddle-node and period-doubling bifurcations. Some numerical and experimental evidence for Shil'nikov homoclinic chaos in non-equilibrium chemical systems is obtained in [a7], [a8].
Figure: b110260a A homoclinic re-injection process resulting from a slow manifold effect

Figure: b110260b The homoclinic re-injection process observed in a Belousov–Zhabotinskii experiment; only a part of the time series is shown

Figure: b110260c Homoclinic orbit computed in a $3$-dimensional Rössler-like model

## Poincaré map model for homoclinic chaos.

To connect Rössler's intuition to Shil'nikov's mathematical results, one can use a geometrical approach to construct a Poincaré return map model for homoclinic chaos [a7], [a8]. This approach consists in constructing a simple model of the flow that retains the two essential features of the flow near homoclinicity: i) an orbit close to a homoclinic orbit spends most of its time near the saddle-focus $O$, where the dynamics is approximately linear; and ii) the non-linear properties of the flow are such that the trajectories return to a small neighbourhood of $O$. Assuming the re-injection process to be a rigid motion, one obtains a $2$-dimensional Poincaré map model, which reduces to the following $1$-dimensional mapping in the infinite area contraction limit:

$$X' = \sqrt{X^2 + \widetilde{y}^2}\, e^{(\arctan(\widetilde{y}/X) + k\pi)\rho/\omega} - X_H + \widetilde{x},$$

where the parameters $\lambda$, $\rho$, $\omega$ characterize the dynamics near the saddle-focus, while $X_H$, $\widetilde{x}$ and $\widetilde{y}$ parametrize the non-linear re-injection process. $\widetilde{x}$ and $\widetilde{y}$ measure the deviation from homoclinicity ($\widetilde{x} = \widetilde{y} = 0$). $k$ is an integer, corresponding to the number of half-turns the trajectory completes around the saddle-focus in between two successive crossings of the Poincaré plane, which must be chosen transverse to the re-injection process. Fig.a2a illustrates the $1$-dimensional mapping of a "spiral-type" strange attractor at homoclinicity.
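For concreteness, the mapping is easy to evaluate numerically. The sketch below is my own illustration (parameter names are mine, and the half-turn number $k$ is passed in by hand, whereas in the full model it is fixed by the re-injection geometry); it also exhibits the geometric ratio $\exp(-2\pi\rho/\omega)$ between branches $k$ and $k+2$ at homoclinicity:

```python
import math

def poincare_map(X, k, rho, omega, X_H, x_dev, y_dev):
    """One evaluation of the 1-D mapping:
    X' = sqrt(X^2 + y~^2) * exp((arctan(y~/X) + k*pi) * rho/omega) - X_H + x~."""
    amplitude = math.sqrt(X * X + y_dev * y_dev)
    phase = math.atan(y_dev / X) + k * math.pi
    return amplitude * math.exp(phase * rho / omega) - X_H + x_dev

# At homoclinicity (x~ = y~ = 0, and dropping X_H), branches k and k + 2
# differ exactly by the factor exp(-2*pi*rho/omega):
rho, omega = 0.1, 1.0
a = poincare_map(1.0, 0, rho, omega, 0.0, 0.0, 0.0)
b = poincare_map(1.0, 2, rho, omega, 0.0, 0.0, 0.0)
assert abs(a / b - math.exp(-2 * math.pi * rho / omega)) < 1e-12
```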
This piecewise-linear mapping contains an infinite number of increasing branches, which converge geometrically to $X = 0$ with ratio $\delta = \exp(-2\pi\rho/\omega)$. A symbol $k = 2m$ (even) is assigned to each of these branches. "Spiral-type" strange attractors are such that the re-injection process returns the trajectories on one side of the saddle-focus only. Therefore, the number of half-turns of the trajectories around it is either even, $k = 2m$ (increasing branches), or odd, $k = 2m+1$ (decreasing branches). Since for "screw-type" strange attractors the re-injection process returns the trajectories on both sides of $O$, the corresponding $1$-dimensional mapping exhibits infinite sequences of increasing ($k = 2m$) and decreasing ($k = 2m+1$) branches, which both converge geometrically to $X = 0$, as shown in Fig.a2b. At finite distance from homoclinicity, the $1$-dimensional mapping of "screw-type" strange attractors contains only a finite number of increasing and decreasing branches, separated by a quadratic well (Fig.a2c). When approaching homoclinicity, the successive disappearances (at the bottom) and re-appearances (at the top) of the well in the invariant square govern the creation of new branches via a cascade of saddle-node bifurcations [a6].

Figure: b110260d "Spiral-type" strange attractor at homoclinicity

Figure: b110260e "Screw-type" strange attractor at homoclinicity

Figure: b110260f "Screw-type" strange attractor at finite distance from homoclinicity

Figure: b110260g Sets of iterates of the $1$-dimensional mapping model

## Homoclinic chaos in a seven-variable Oregonator model.

The controversy about the deterministic or stochastic character of chemical chaos has been essentially perpetuated by numerical simulations [a1], [a2], [a3], [a4], [a5]. In fact, none among all the chemical models proposed has been able to reproduce the scenarios to chaos observed in bench experiments.
Recently, some success has been recorded in reproducing most of the alternating periodic-chaotic sequences detected in experiments with a seven-variable Oregonator model which retains the main steps of the F.K.N. model of the Belousov–Zhabotinskii reaction [a4], [a5]. While these simulations of experimental sequences have dispelled all doubts about the deterministic nature of chemical chaos, the striking resemblance of some $1$-dimensional mappings obtained with this Oregonator model (Fig.a3) to the multi-branched $1$-dimensional mapping model (Fig.a2) strongly indicates that the "screw-type" strange attractors observed along these sequences are the precursors to Shil'nikov's homoclinic chaos [a7], [a8], [a6].

Figure: b110260h Poincaré map

Figure: b110260i The corresponding $1$-dimensional mapping

Figure: b110260j Simulation of a "screw-type" strange attractor with a seven-variable Oregonator model

## Experimental evidence for homoclinic chaos.

To establish definitively the homoclinic nature of chemical chaos, some convincing experimental results have been obtained with the Belousov–Zhabotinskii reaction [a7], [a8]. On increasing the flow rate from low values, the thermodynamic branch has been traced up to a critical value (subcritical Hopf bifurcation), where a discontinuous transition leads to non-periodic oscillations (Fig.a4a). Apparently at random, a large-amplitude oscillation occurs and is followed by an episode of small-amplitude oscillations, the envelope of which increases nearly exponentially. The trajectory returns once in a while to the vicinity of a saddle-focus, and close enough (Fig.a1b) that the phase portrait looks like a "spiral-type" strange attractor (Fig.a4b). This observation is confirmed when reconstructing a $2$-dimensional Poincaré map (Fig.a4c): the whole set of experimental points is, to a good approximation, located along a smooth curve.
This attests that the dynamics is attracted to a nearly $2$-dimensional fractal surface with a strong transverse packing of the sheets of the attractor. The corresponding $1$-dimensional mapping is shown in Fig.a4d; a symbol $m$ ($k = 2m+1$) has been assigned to each point of this $1$-dimensional mapping according to the number $m$ of small-amplitude oscillations which immediately follow the homoclinic re-injection. Thus, $m$ is the number of small-amplitude oscillations in a basic pattern in the time series (Fig.a4a). The distribution of the symbols in Fig.a4d displays some ordering which appears to be consistent with the theoretical ordering for a "spiral-type" strange attractor in nearly homoclinic conditions. Within the experimental uncertainty, the points associated with the same symbol fall roughly into vertical strips; moreover, $m$ increases (from $0$ to $8$) from the left to the right of the figure, which indicates that the chronology of patterns in the time series in Fig.a4a conforms to the symbolic dynamics predicted by Shil'nikov's theory of homoclinic chaos. The reproduction in Fig.a5 of the experimental situation in a low-dimensional differential system of Rössler type consolidates the homoclinic appellation for the data shown in Fig.a4. This is the first experimental proof of the homoclinic nature of the chemical chaos occurring in the Belousov–Zhabotinskii experiment [a7], [a8], [a6].
Figure: b110260k Time series

Figure: b110260l Phase portrait

Figure: b110260m Poincaré map

Figure: b110260n $1$-dimensional mapping

Figure: b110260o Experimental homoclinic chaos observed in the Belousov–Zhabotinskii reaction

Figure: b110260p Time series

Figure: b110260q Phase portrait

Figure: b110260r Poincaré map

Figure: b110260s $1$-dimensional mapping

Figure: b110260t Homoclinic chaos computed with a $3$-dimensional ordinary differential system in conditions close to the subcritical Hopf bifurcation of the origin $O$ (in order to mimic the experimental situation in Fig.a4)

#### References

[a1] "Nonlinear phenomena in chemical dynamics", C. Vidal (ed.), A. Pacault (ed.), Springer (1981)
[a2] C. Vidal, A. Pacault, "Nonequilibrium dynamics in chemical systems", Springer (1984)
[a3] "Spatial inhomogeneities and transient behaviour in chemical kinetics", P. Gray (ed.), G. Nicolis (ed.), F. Baras (ed.), P. Borckmans (ed.), S.K. Scott (ed.), Manchester Univ. Press (1990)
[a4] A. Arneodo, F. Argoul, P. Richetti, J.C. Roux, "The Belousov–Zhabotinskii reaction: a paradigm for theoretical studies of dynamical systems", in H.G. Bothe (ed.), W. Ebeling (ed.), A.M. Zurzhanski (ed.), M. Peschel (ed.), Dynamical Systems and Environmental Models, Akademie Verlag (1987) pp. 122
[a5] F. Argoul, A. Arneodo, P. Richetti, J.C. Roux, H.L. Swinney, "Chaos in chemical systems: from hints to confirmation", Acc. Chem. Res., 20 (1987) pp. 436
[a6] P. Gaspard, A. Arneodo, R. Kapral, C. Sparrow, "Homoclinic chaos", Physica D, 62 (1993) pp. 1–372
[a7] F. Argoul, A. Arneodo, P. Richetti, "Symbolic dynamics in the Belousov–Zhabotinskii reaction: from Rössler's intuition to experimental evidence for Shil'nikov's homoclinic chaos", in G. Baier (ed.), M. Klein (ed.), A Chaotic Hierarchy, World Sci. (1991) pp. 79 (Phys. Lett. 120A (1987), 269)
[a8] A. Arneodo, F. Argoul, J. Elezgaray, P. Richetti, "Homoclinic chaos in chemical systems", Physica D, 62 (1993) pp. 134

How to Cite This Entry: Belousov-Zhabotinskii reaction. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Belousov-Zhabotinskii_reaction&oldid=46008 This article was adapted from an original article by A. Arneodo, F. Argoul, P. Richetti (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
http://mathhelpforum.com/calculus/111913-derivatives-inverse-trigonometry-functions.html
# Math Help - derivatives of inverse trigonometry functions

1. ## derivatives of inverse trigonometry functions

Find the derivatives of the following:

1) f(x) = cos⁻¹(1/x − 1)
2) f(x) = sin⁻¹(2x + 1)
3) f(x) = sin⁻¹(π/x)

Thank you for any help given!!

2. Originally Posted by iiharthero: Find the derivatives of the following: 1) f(x) = cos⁻¹(1/x − 1) 2) f(x) = sin⁻¹(2x + 1) 3) f(x) = sin⁻¹(π/x). Thank you for any help given!!

In all 3 questions you have to use the chain rule. I'll show you how to do 1) and leave the rest for you:

1. $f(x)=\arccos(x)~\implies~f'(x)=\dfrac{-1}{\sqrt{1-x^2}}$

$f(x)=\arccos\left(\dfrac1x-1\right)~\implies~f'(x)=\dfrac{-1}{\sqrt{1-\left(\dfrac1x-1\right)^2}} \cdot \left(-\dfrac1{x^2}\right)$

2. Simplify: $\dfrac{-1}{\sqrt{1-\dfrac1{x^2}+\dfrac2x-1}} \cdot \left(-\dfrac1{x^2}\right) = \dfrac1{\dfrac1{|x|} \cdot \sqrt{2x-1}} \cdot \dfrac1{x^2} = \dfrac1{|x| \cdot \sqrt{2x-1}}$
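As a quick numerical sanity check of the simplified answer (my own sketch, not part of the thread), the closed form $f'(x) = \dfrac{1}{|x|\sqrt{2x-1}}$ can be compared against a central finite difference on $f(x) = \arccos\left(\dfrac1x - 1\right)$:

```python
import math

def f(x):
    # f(x) = arccos(1/x - 1), defined for x >= 1/2 (taking x > 0)
    return math.acos(1.0 / x - 1.0)

def f_prime_closed(x):
    # The thread's simplified derivative: 1 / (|x| * sqrt(2x - 1))
    return 1.0 / (abs(x) * math.sqrt(2.0 * x - 1.0))

def f_prime_numeric(x, h=1e-6):
    # Central finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

for x in (0.75, 1.0, 1.5, 3.0):
    assert abs(f_prime_closed(x) - f_prime_numeric(x)) < 1e-4
```

The two agree to within the finite-difference error at several sample points, which supports the simplification.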
https://brilliant.org/problems/battle-of-castles/
# Battle of Castles

Logic Level 4

What is white's strongest move in this position?

Details and Assumptions

White pawns are moving upwards (the bottom left square is a1). Take $$(1,1)$$ to be the bottom left square (so $$(8,8)$$ is the top right). Assign the following values:

• King = 10
• Queen = 9
• Rook = 5
• Bishop = 4
• Knight = 3
• Pawn = 1

Submit your answer as $$v \times x \times y \times P$$, where $$v$$ is the value (as defined above) of the piece which is moved, $$(x,y)$$ is the square the piece is moved onto, and $$P$$ is the value of the piece the pawn is promoted to, should the move be a pawn promotion. If the move is not a promotion, let $$P = 1$$.

Credit: This position is pulled from a problem on chess.com's Tactics Trainer.
https://www.eolymp.com:443/en/contests/20266/problems/219334
Circle

The radius r of a circle is given. Find the circumference and the area of the circle.

Input
A double, the radius of the circle r (r > 0).

Output
Print in one line the circumference and the area of the circle, each with 4 decimal digits.

Time limit: 1 second
Memory limit: 128 MiB

Input example #1
1.234

Output example #1
7.7535 4.7839
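A minimal solution sketch in Python (the exact I/O handling accepted by the judge is up to the submitter):

```python
import math

def circle(r: float) -> str:
    """Circumference 2*pi*r and area pi*r^2, each with 4 decimal digits."""
    return f"{2 * math.pi * r:.4f} {math.pi * r * r:.4f}"

if __name__ == "__main__":
    print(circle(float(input())))  # input 1.234 -> "7.7535 4.7839"
```

For the sample input 1.234 this reproduces the sample output exactly.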
https://deepai.org/publication/preferential-multi-context-systems
# Preferential Multi-Context Systems

Multi-context systems (MCS) presented by Brewka and Eiter can be considered as a promising way to interlink decentralized and heterogeneous knowledge contexts. In this paper, we propose preferential multi-context systems (PMCS), which provide a framework for incorporating a total preorder relation over contexts in a multi-context system. In a given PMCS, its contexts are divided into several parts according to the total preorder relation over them; moreover, only information flows from a context to ones of the same part or less preferred parts are allowed to occur. As such, the first l preferred parts of a PMCS always fully capture the information exchange between contexts of these parts, and then compose another meaningful PMCS, termed the l-section of that PMCS. We generalize the equilibrium semantics for an MCS to the (maximal) l_≤-equilibrium, which represents belief states at least acceptable for the l-section of a PMCS. We also investigate inconsistency analysis in PMCS and related computational complexity issues.

## 1 Introduction

Many (if not all) real-world applications of sharing and reasoning knowledge are characterized by heterogeneous contexts, especially with the advent of the world wide web. Research in representing contexts and information flow between contexts has gained much attention recently in artificial intelligence [11, 4, 7, 8, 5, 13] as well as in applications such as requirements engineering [10, 15, 14]. Instead of finding a universal knowledge representation for all contexts, it has been increasingly recognized that it may be desirable to allow each context to choose a suitable representation tool of its own to capture its knowledge precisely. For example, in some frameworks such as Viewpoints for eliciting and analyzing software requirements, developers often encourage stakeholders to use their own familiar terms and notations to express their demands so as to elicit requirements as fully as possible [10, 15]. Moreover, the heterogeneous nature of context representations may allow different monotonic or non-monotonic reasoning mechanisms to occur together in a given system. For example, as stated in [4], there is growing interest in combining ontologies based on description logics with non-monotonic formalisms in semantic web applications.
However, the diversity of representations of contexts in such cases brings some important challenges to accessing each individual context as well as to interlinking these contexts [5]. Nonmonotonic multi-context systems presented by Brewka and Eiter [4] can be considered a promising way to deal with these challenges [5]. Instead of attempting to translate all contexts with different formalisms into a unifying formalism, they leave the logics of contexts untouched and interlink contexts by modeling the inter-contextual information exchange in a uniform way. To be more precise, information flow among contexts is articulated by so-called bridge rules in a declarative way. Similar to logic programming rules, each bridge rule consists of two parts, the head of the rule and the body of the rule (possibly empty). More importantly, each bridge rule allows access to other contexts in its body. This makes it capable of adding information represented by its head to a context by exchanging information with other contexts. Semantically, equilibria representing acceptable belief states for multi-context systems are also given by Brewka and Eiter [4]. Multi-context systems can be viewed as the first step towards interlinking distributed and heterogeneous contexts effectively. The way they operate on contextual knowledge bases is limited to adding information to a context when the corresponding bridge rules are applicable [5]. To be more applicable to real-world applications, it is advisable to generalize multi-context systems from some perspectives. For example, Brewka et al. have considerably generalized multi-context systems to managed multi-context systems (mMCS) by allowing flexible operations on context knowledge bases [5]. Essentially, managed multi-context systems focus on managed contexts, which are contexts together with possible operations on them.
Combining preferences and contexts is still an interesting issue in reasoning about contextual knowledge [3]. In particular, preferences on contexts have an important influence on information exchange between contexts and on inter-contextual knowledge integration in many real-world applications. For example, it is intuitive to revise a less reliable knowledge base by accessing more reliable ones, but in the general case we cannot use information deriving from less reliable sources to revise more reliable knowledge bases. In legal reasoning, consequences of applying a law to a case can be rebutted by those of applying another law of higher level when there is a conflict, and not vice versa. In such cases, it may be advisable to take into account preferences on contexts in characterizing inter-contextual information exchange in multi-context systems. Moreover, taking into account the preference relation on contexts makes some subsets of more preferred contexts satisfying some given constraints more significant when the whole set of contexts does not satisfy the constraints. For example, in a multi-party negotiation, an agreement between the most important parties is often preferred if it is difficult to achieve an agreement between all parties. In an incremental software development, only requirements with priorities higher than a given level are concerns of developers at a given stage. To address these issues, we combine a multi-context system with a total preorder relation on its contexts to develop a preferential multi-context system (PMCS) in this paper. A preferential multi-context system is given in the form of a sequence of sets of contexts such that the location of a set signifies its preference level. Without loss of generality, we assume that the smaller the location index of a set is, the more preferred the contexts in that set are. We call each set of contexts in that sequence a stratum.
Moreover, we assume that information flow cannot be from less preferred strata to more preferred ones. That is, any bridge rule of a given context does not allow any access to other strictly less preferred contexts in its body. As such, the first several strata also compose a new preferential multi-context system such that all the contexts involved in it are strictly more preferred than ones out of it. We call such a new preferential multi-context system a section of that system. We are interested in all sections as well as the whole preferential multi-context system, and then propose $l_{\leq}$-equilibria to represent belief sets acceptable for at least the contexts in the first $l$ strata. In particular, the maximal consistent section describes a maximal section that has an equilibrium. Actually, it plays an important role in inconsistency analysis in a given preferential multi-context system, because it can be considered as a maximally reliable part of that preferential multi-context system. We are more interested in finding diagnoses and inconsistency explanations compatible with the maximal consistent section instead of all ones. Finally, we discuss computational complexity issues. The rest of this paper is organized as follows. We give a brief introduction to multi-context systems in Section 2. We propose preferential multi-context systems in Section 3. In Section 4, we discuss inconsistency analysis in preferential multi-context systems. We discuss complexity issues in Section 5. In Section 6, we compare our work with some closely related work. Finally, we conclude this paper in Section 7.

## 2 Preliminaries

In this section, we review the details of the definitions of multi-context systems presented by Brewka and Eiter [4] and inconsistency analysis in multi-context systems presented in [8]. The material is largely taken from [4] and [8]. The goal of multi-context systems is to combine arbitrary monotonic and nonmonotonic logics.
Here a logic is referred to as a triple $L = (\mathbf{KB}_L, \mathbf{BS}_L, \mathbf{ACC}_L)$, where $\mathbf{KB}_L$ is the set of well-formed knowledge bases of $L$, which characterizes the syntax of $L$; $\mathbf{BS}_L$ is the set of belief sets; and $\mathbf{ACC}_L : \mathbf{KB}_L \rightarrow 2^{\mathbf{BS}_L}$ is a function describing the semantics of the logic by assigning to each knowledge base (a set of formulas) a set of acceptable sets of beliefs [4].

###### Definition 2.1

[4] Let $\{L_1, \ldots, L_n\}$ be a set of logics. An $L_k$-bridge rule over $\{L_1, \ldots, L_n\}$, $1 \leq k \leq n$, is of the form

(k:s)←(r1:p1),⋯,(rj:pj),not (rj+1:pj+1),⋯,not (rm:pm)

where $1 \leq r_l \leq n$, $p_l$ is an element of some belief set of $L_{r_l}$, and for each $kb \in \mathbf{KB}_{L_k}$, $kb \cup \{s\} \in \mathbf{KB}_{L_k}$. Similar to logic programming rules, we call the left (resp. right) part of a bridge rule $r$ the head (resp. body) of $r$.

###### Definition 2.2

[4] A multi-context system $M = (C_1, \ldots, C_n)$ consists of a collection of contexts $C_i = (L_i, kb_i, br_i)$, where $L_i = (\mathbf{KB}_i, \mathbf{BS}_i, \mathbf{ACC}_i)$ is a logic, $kb_i \in \mathbf{KB}_i$ is a knowledge base, and $br_i$ is a set of $L_i$-bridge rules over $\{L_1, \ldots, L_n\}$.

A multi-context system is finite if all knowledge bases and sets of bridge rules are finite [4]. Given a bridge rule $r$, we use $head(r)$ to denote the head of $r$. Further, let $c(r) = \{r_1, \ldots, r_m\}$; obviously, $c(r)$ is exactly the set of contexts involved in the body of $r$. We use $br_M$ to denote the set of all bridge rules in $M$, i.e., $br_M = \bigcup_{i=1}^{n} br_i$. For any set $R \subseteq br_M$, we use $cf(R)$ to denote the set of all the rules in $R$ in unconditional form, i.e., $cf(R) = \{head(r) \leftarrow \mid r \in R\}$. Let $R$ be a set of bridge rules; we use $M[R]$ to denote the MCS obtained from $M$ by replacing $br_M$ with $R$. For a set $\mathcal{R}$ of sets of bridge rules, we use $\bigcup \mathcal{R}$ to denote the union of all sets in $\mathcal{R}$.

A belief state for $M = (C_1, \ldots, C_n)$ is a sequence $S = (S_1, \ldots, S_n)$ such that each $S_i \in \mathbf{BS}_i$. A bridge rule of the above form is applicable in a belief state $S$ iff $p_l \in S_{r_l}$ for $1 \leq l \leq j$, and $p_l \notin S_{r_l}$ for $j+1 \leq l \leq m$. We use $app_i(S)$ to denote the set of all $L_i$-bridge rules that are applicable in belief state $S$.

###### Definition 2.3

[4] A belief state $S = (S_1, \ldots, S_n)$ of $M$ is an equilibrium iff, for $1 \leq i \leq n$, $S_i \in \mathbf{ACC}_i(kb_i \cup \{head(r) \mid r \in app_i(S)\})$.

Essentially, an equilibrium is a belief state which contains an acceptable belief set for each context, given the belief sets for the other contexts [4].

###### Example 2.1

Let $M$ be an MCS, where $L_1$ is a propositional logic, whilst both $L_2$ and $L_3$ are ASP logics. Suppose that • , ; • , ; • , . Consider . Note that all bridge rules are applicable in , except .
Evidently, we can check that is an equilibrium of .

Note that a given multi-context system is not guaranteed to have an equilibrium. Inconsistency in an MCS is referred to as the lack of an equilibrium [8]. We use to denote that is inconsistent, i.e., has no equilibrium. In this paper, we assume that every context is consistent if no bridge rules apply, i.e., .

###### Example 2.2

Let be an MCS, where is a propositional logic, whilst both and are ASP logics. Suppose that
• , ;
• , ;
• , .
Note that all bridge rules are applicable, except . The three applicable bridge rules in turn add to , and then activate . So, has no equilibrium, i.e., .

To analyze inconsistency, inspired by debugging approaches used in the nonmonotonic reasoning community, Eiter et al. have introduced two notions for explaining inconsistency, i.e., diagnoses and inconsistency explanations for multi-context systems [8]. Roughly speaking, diagnoses provide a consistency-based formulation for explaining inconsistency, by finding a part of the bridge rules which needs to be changed (deactivated or added in unconditional form) to restore consistency in a multi-context system, whilst inconsistency explanations provide an entailment-based formulation for inconsistency, by identifying a part of the bridge rules which is needed to cause inconsistency [8].

###### Definition 2.4

[8] Given an MCS , a diagnosis of is a pair , , s.t. . is the set of all such diagnoses.

Essentially, a diagnosis captures exactly a pair of sets of bridge rules such that inconsistency will disappear if we deactivate the rules in the first set and add the rules in the second set in unconditional form [8].

###### Definition 2.5

[8] is the set of all pointwise subset-minimal diagnoses of an MCS , where the pointwise subset relation holds iff and .

###### Example 2.3

Consider again. Then

D±m(M1)={({r1},∅),({r2},∅),({r3},∅),(∅,{r4})}.
This means we need only deactivate one of , , and , or add unconditionally, in order to restore consistency for .

###### Definition 2.6

[8] Given an MCS , an inconsistency explanation of is a pair of sets of bridge rules s.t. for all where and , it holds that . By we denote the set of all inconsistency explanations of , and by the set of all pointwise subset-minimal ones.

Essentially, an inconsistency explanation captures a pair of sets of bridge rules such that the rules in the first set cause an inconsistency relevant to the MCS, and this inconsistency cannot be resolved by adding bridge rules unconditionally unless we use at least one bridge rule in the second set [8].

###### Example 2.4

Consider again. Then

E±m(M1)={({r1,r2,r3},{r4})}.

This means that the inconsistency in is caused by , , and together; moreover, it can be resolved by adding unconditionally.

Note that both addition and removal of knowledge can prevent inconsistency in nonmonotonic reasoning. So, a diagnosis consists of two sets of bridge rules: the set of bridge rules to be removed and the set to be added unconditionally. As pointed out in [8], for scenarios where removal of bridge rules is preferred to unconditional addition of rules, we may focus on diagnoses of the form only.

###### Definition 2.7

[8] Given an MCS , an -diagnosis of is a set s.t. . The set of all -diagnoses (resp., -minimal -diagnoses) is (resp., ).

Similarly, we need only focus on inconsistency explanations of the form if adding rules unconditionally is less preferred.

###### Definition 2.8

[8] Given an MCS , an -inconsistency explanation of is a set s.t. each where , satisfies . The set of all -inconsistency explanations (resp., -minimal -inconsistency explanations) is (resp., ).

###### Example 2.5

Consider again. Then

D−m(M1)={{r1},{r2},{r3}},  E+m(M1)={{r1,r2,r3}}.
More interestingly, Eiter et al. have obtained the following duality relation between diagnoses and inconsistency explanations:

###### Theorem 2.1

[8] Given an inconsistent MCS ,

⋃D±m(M)=⋃E±m(M), and ⋃D−m(M)=⋃E+m(M).

This duality theorem shows that the unions of all minimal diagnoses and all minimal inconsistency explanations coincide, i.e., diagnoses and inconsistency explanations represent dual aspects of inconsistency in an MCS [8].

## 3 Preferential Multi-context Systems

In this section we formally introduce a class of MCSs that allows us to consider preference information on contexts, called preferential multi-context systems, or simply PMCSs. As explained in the introduction, the motivation for such MCSs is that in many practical applications, it is often the case that one context has higher priority than another. For example, the ontology SNOMED CT (one context) will have higher priority than Wikipedia (another context) for medical doctors.

In the setting of MCSs, a PMCS is a pair such that the following conditions are satisfied:

1. is an MCS that has a splitting .
2. is a total preorder on the set . (A binary relation on some set is a total preorder relation if it is reflexive, transitive, and total, i.e., for all , we have that: (reflexivity); if and , then (transitivity); or (totality).)

Recall that is a splitting for if for all and for all . Informally, means that a context in is always preferred to a context in . We assume that the smaller a subscript is, the more preferred is. Then we use instead of from now on.

In a PMCS, preference information controls the information flow from one context to another. Specifically, a context can be impacted only by more or equally preferred ones. This notion is formally defined as follows.

###### Definition 3.1

Let be a total preorder relation on the set of contexts .

1. The set of bridge rules of is compatible with the preorder relation on if for all , for all .
2.
The set of bridge rules of is compatible with the preorder relation on if is compatible with for all .

Essentially, the compatibility of with implies that only information exchange between and some s satisfying for each may activate possible change of in . Given an MCS and a total preorder relation on contexts in , we say that is compatible with iff is compatible with .

###### Definition 3.2 (Preferential multi-context system)

A preferential multi-context system (PMCS) is a pair , where is an MCS, and is a total preorder relation on contexts in such that is compatible with .

A PMCS is represented in the form of a sequence such that for , iff for some : , and . In particular, we may consider an MCS as a special PMCS , which contains only one stratum, i.e., .

Essentially, preferential multi-context systems take into account the impact of the preference relation over contexts on inter-contextual information exchange. Only information flow from a context to equally or less preferred ones is allowed to occur in preferential multi-context systems.

Let be a PMCS. Then the -cut of for each , denoted , is defined as . Correspondingly, we call the -section of . Note that the compatibility of and ensures that each -section of is also a PMCS. Correspondingly, each -cut of is an MCS. Informally speaking, given a PMCS, the -section is exactly the PMCS consisting of the first strata in , in which all the contexts are preferred to ones in for each . This implies that the -section of a PMCS exactly captures the inter-contextual information exchange between contexts preferred to ones in .

A belief state for is a sequence such that is a belief state of for all , where is a concatenation operator. In particular, we use to denote .

###### Definition 3.3

A belief state of is an equilibrium of iff is an equilibrium of .

###### Example 3.1

Consider a PMCS , where and are propositional logics, and the others are ASP logics. Suppose that
• , ;
• , ;
• , ;
• , ;
• , .
Consider .
Then all bridge rules are applicable in except . Moreover, it is easy to check that is an equilibrium of .

On the other hand, we can use a directed graph to illustrate the information flow in a (preferential) multi-context system , where , and if s.t. . For example, the information flow in is illustrated in Figure 1. Note that in such an information flow graph, there is at most one edge between any two contexts belonging to different strata; moreover, such an edge must be from a preferred context to another context.

As mentioned in [5], inter-contextual information exchange among decentralized and heterogeneous contexts can cause an MCS to be inconsistent. Moreover, inconsistency in an MCS renders the system useless. However, in the case of preferential multi-context systems, inconsistency need not be considered totally undesirable. Allowing for preferences on contexts, we are more interested in some consistent sections of an inconsistent PMCS, which are significant in some applications. To address this issue, we generalize the notion of equilibrium to an -equilibrium for a PMCS as follows.

###### Definition 3.4 (l≤-equilibrium)

Given a PMCS and a number . A belief state of is an -equilibrium of iff is an equilibrium of the -section of .

Roughly speaking, an -equilibrium of a preferential multi-context system represents belief sets acceptable for at least all the contexts in the first strata of , given the belief sets for the other contexts. Note that an -equilibrium of must be an -equilibrium for all . In particular, an equilibrium of is an -equilibrium of for all . But the converse does not hold.

###### Definition 3.5 (l<-equilibrium)

Given a PMCS and a number . A belief state of is called an -equilibrium of iff
• is an -equilibrium of ,
• but is not an -equilibrium of if .
Essentially, an -equilibrium of a preferential multi-context system represents belief sets acceptable for all the contexts in the first strata of , but not for at least one context in the -th stratum if , given the belief sets for the other contexts. Evidently, any equilibrium of is an -equilibrium according to this definition.

###### Definition 3.6 (Maximal l<-equilibrium)

Given a PMCS and a number . A belief state of is called a maximal -equilibrium of iff
• is an -equilibrium of ,
• for any -equilibrium of , .

Actually, a maximal -equilibrium of a preferential multi-context system is indeed an equilibrium of that system if the system is consistent; otherwise, it represents belief sets acceptable for the contexts in a section which cannot remain consistent if we add the next stratum to it.

###### Example 3.2

Consider a PMCS

(M3,≤s)=⟨(C1,C2),(C3,C4),(C5),(C6)⟩,

where , , and are propositional logics, and the others are ASP logics. Suppose that
• , ;
• , ;
• , ;
• , ;
• , ;
• , .

Evidently, all bridge rules are applicable except . Moreover, applying , , and in turn adds to , and then activates . On the other hand, applying , , and in turn adds to , and then results in both and occurring in . So, has no equilibrium, i.e., . Moreover, this also implies that its -section has no equilibrium, i.e., . However, both the -section and the -section of are consistent. We can check that
• is an -equilibrium, but not an -equilibrium; so, it is an -equilibrium.
• is an -equilibrium;
• is a maximal -equilibrium of .

An occurrence of inconsistency in a multi-context system makes that system useless. However, considering preferences in preferential multi-context systems makes things better. The section corresponding to a maximal -equilibrium may be interesting and useful in the presence of inconsistency, because it fully captures the meaningful information exchange among the contexts involved in this section.
## 4 Inconsistency Analysis

Now an interesting question arises: how do we measure the degree of inconsistency of a PMCS? Note that the value points out the stratum where we first meet inconsistency if a given inconsistent PMCS has a maximal -equilibrium. In particular, abusing notation, we say that has a maximal -equilibrium if it has no maximal -equilibrium for any given . Then is exactly the inconsistency rank for stratified knowledge bases presented in [1, 2] in essence. Bearing this in mind, we present the following inconsistency measure.

###### Definition 4.1

Given a PMCS . The degree of inconsistency of , denoted , is defined as

DI((M,≤s)) = 1 − l/m,

if has the maximal -equilibrium, where .

Actually, the degree of inconsistency of is a slight adaptation of the inconsistency rank such that
• ;
• iff is consistent;
• iff .

Note that the first two properties are called Normalization and Consistency, respectively [12]. The third property says that a PMCS reaches the upper bound iff there is no consistent section.

###### Example 4.1

Consider again. Note that

DI((M3,≤s)) = 1 − 2/4 = 1/2,

because it has a maximal -equilibrium, as illustrated above.

The measure allows us to get a rough picture of the inconsistency in . In many applications, we need more information about the inconsistency. For example, we need to know which contexts and bridge rules of a given PMCS are involved in the inconsistency in order to restore consistency of the PMCS. Note that any two contexts are considered equally preferred in inconsistency handling in the case of multi-context systems. However, preferences over contexts play an important role in dealing with inconsistency among these contexts, especially in making tradeoff decisions on resolving inconsistency when we take preferences into account.
Generally, the more preferred contexts are considered more reliable when an inconsistency occurs in a preferential multi-context system; moreover, remaining unchanged is preferred to any action of revision for such contexts. For example, in requirements engineering, when two requirements with different priority levels contradict each other, the less preferred requirement will in most cases be revised to accommodate the other.

Given a PMCS, each section actually splits the whole set of contexts into two parts, i.e., itself and a set of other strictly less preferred contexts. Moreover, each consistent section fully captures information exchange among contexts which are strictly preferred to ones not included in that section. Generally, such a section may be considered one of the plausible parts of that PMCS. Allowing for this, we are more interested in a section that contains as many of the more preferred strata as possible. Moreover, any changes of bridge rules for restoring consistency should not affect information exchange among contexts in such a section. In this sense, identifying a consistent section with the maximal number of strata is central to inconsistency analysis in a preferential multi-context system.

###### Definition 4.2 (Maximal consistent section)

Given a PMCS , the -section of , is called a maximal consistent section of , if
• ;
• for all .

Informally speaking, the maximal consistent section of a PMCS can be considered a reliable part of that PMCS. We use to denote the maximal consistent section of . Evidently, given an inconsistent PMCS , a maximal -equilibrium of is exactly an equilibrium of the -section , because less preferred contexts cannot bring new information to more preferred contexts in a PMCS. This implies that finding the maximal consistent section may not be harder than finding a maximal -equilibrium.

###### Example 4.2

Consider again. The -section is its maximal consistent section.
As mentioned above, Eiter et al. have proposed diagnoses and inconsistency explanations for a multi-context system. We use the following example to demonstrate what happens when we apply these to a preferential multi-context system.

###### Example 4.3

Consider again. Note that all of the following sets of rules are -minimal -diagnoses of :
• , ,;
• , , ;
• , , .

Note that all of the -minimal -diagnoses contain one bridge rule of the maximal consistent section, except . That is, according to for all , we would need to deactivate some information exchange in the maximal consistent section to restore consistency in . In contrast, leaves information exchange in the maximal consistent section unchanged. Allowing for the preference relation over contexts, is more significant for inconsistency handling in .

The example above illustrates that diagnoses not involving the maximal consistent section in inconsistency are more preferred. Allowing for the duality relation between diagnoses and explanations, we hold the same opinion on inconsistency explanations. Moreover, compatibility with more preferred knowledge is considered one of the useful strategies in preferential knowledge revision and integration [1, 2]. Next we adapt diagnoses and inconsistency explanations, respectively, to accommodate the maximal consistent section.

###### Definition 4.3

Given a PMCS , a diagnosis of is compatible with the maximal consistent section of if .

Note that if we focus on the maximal consistent section of a preferential multi-context system, then the set of bridge rules of all contexts outside the section exactly composes a diagnosis of inconsistency for that system, because . This guarantees that there exists at least one diagnosis compatible with the maximal consistent section.

###### Example 4.4

Consider again. All of , and are diagnoses compatible with the maximal consistent section.

Furthermore, we consider minimal diagnoses compatible with the maximal consistent section of a given PMCS.
###### Definition 4.4 (c-diagnosis)

Given a PMCS , an -diagnosis of , is called an -diagnosis of , if and . The set of all -diagnoses of is .

Essentially, an -diagnosis of is an -minimal -diagnosis that is compatible with the maximal consistent section of , i.e., none of the bridge rules of the maximal consistent section of is involved in .

###### Example 4.5

Consider again. Then is the unique -diagnosis compatible with the maximal consistent section, i.e., . Note that for all , and . So, , but not vice versa.

###### Definition 4.5 (c-inconsistency explanation)

Given a PMCS , an -inconsistency explanation of , is a set s.t. each , satisfies . The set of all -minimal -inconsistency explanations of is .

Essentially, an -inconsistency explanation focuses on the set of other bridge rules needed to cause an inconsistency given a set of bridge rules of the maximal consistent section. Both -inconsistency explanations and -diagnoses capture the inconsistency under the assumption that no bridge rule of the maximal consistent section should be revised or modified to restore consistency.

###### Example 4.6

Consider again. Then both and are -minimal -inconsistency explanations compatible with the maximal consistent section; moreover, .

More interestingly, we have the following weak duality relation between -diagnoses and -inconsistency explanations.

###### Proposition 4.1

Given an inconsistent PMCS , then

⋃E+c((M,≤s))=⋃D−c((M,≤s)).

#### Proof

This is a direct consequence of Theorem 2.1 in essence. The main part of this proof is the same as that of Theorem 2.1 provided in [8]. Let be a PMCS and its maximal consistent section. The complement of w.r.t. is denoted . We first prove that holds. Let ; then . We show that there exists with , for . Consider ; then and . Let . Then for all , . Suppose that there exists with and . Then , and , then . So, . Then we prove that holds. Let ; then . We show that there exists with , for . Consider . Let . Assume that , then
https://www.cferthorney.com/archive/2010/05/rhel-6-beta-review/
# RHEL 6 beta - a review

Once again I am late to the party here (Well, kinda). This is a beta release review and as such everything is subject to change prior to the final release. Please note: screenshots to be added at a later date, due to upload problems on my wireless connection.

Red Hat Enterprise Linux is (as the name suggests) an enterprise-class distribution, with support for 7 years from release. This has advantages and disadvantages. One of the biggest advantages is the stability and long-term support for businesses. It's also a major disadvantage, though. Take for example RHEL 5 and PHP. Red Hat chose 5.1.6 as the PHP version. Whilst this was stable, PHP 5.2 introduced a number of new features and security fixes I feel should have been pushed to an enterprise distribution. Red Hat do port security fixes back to the stable release, but do not upgrade major versions, leaving web development houses like the company I work for to either run an "officially unsupported" version of PHP via a number of unofficial RPM sources, or compile it themselves. Now I personally am not afraid of compiling from source, and have done so many times, but it is not for the faint of heart, and requires some knowledge of the systems you are running.

## Setup

I downloaded the distribution from ftp://ftp.redhat.com/pub/redhat/rhel/beta/6/i386/iso/RHEL6.0-20100414.0-AP-i386-DVD1.iso (I used 32-bit simply because my laptop is installed with 32-bit Windows 7, and I have yet to get round to reinstalling it). Once downloaded I used an sha1sum checker to ensure the download was correct. I used sha1 over sha256 simply because I have a sha1sum.exe on my Windows 7 laptop and not a sha256sum.
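On a Linux box the same check can be scripted with GNU coreutils. This is a minimal sketch on a stand-in file (the checksum below is the SHA-1 of the empty stand-in, not Red Hat's published value — compare against the .sha1 list that accompanies the real ISO):

```shell
# Stand-in for the downloaded image; substitute the real ISO filename.
ISO=sample.iso
: > "$ISO"

# Expected checksum: here, the SHA-1 of the empty stand-in file.
EXPECTED=da39a3ee5e6b4b0d3255bfef95601890afd80709

ACTUAL=$(sha1sum "$ISO" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
fi
```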
After this I loaded up VMWare Player (my chosen virtual machine console, due to my own experience with ESXi and VMWare Server) and set up the machine as follows: Create New Virtual Machine -> Install from Disc image file -> (in my case C:\Users\{user}\Downloads\RHEL6.0-20100414.0-AP-i386-DVD1.iso). VMWare Player (understandably) does not automatically recognise this beta release's version, so I chose Guest operating system -> Linux -> Red Hat Enterprise Linux 5. I set the VM name as RHEL 6, and used the default location for the install (on Windows 7, C:\Users\{user}\Documents\Virtual Machines\{machine name}).

I set the virtual hard drive up to be the default 20 GB. As this is a test VM for demonstration purposes I stored it as a single file, although if I was looking at using this VM in a production environment I would probably use the "split into 2GB files" option to make backing up and migrating slightly easier. I used the default 1024MB memory, again as this was a demonstration machine. Normally I would choose something like 2GB for a web server, 8GB for a memcache-based machine, and 4GB for a dedicated database server. These are just examples of what I would use and are by no means optimal settings. (8GB, for example, would not be suitable for my 32-bit Windows 7 system, and would be more likely placed on my more powerful ESXi server with 32GB memory available.)

## Installation

Rather than trying to decide between a server review and a desktop review, I went for a 2-install approach: 1 server, 1 desktop. I'll run through the install process (with screenshots) for the server, simply because that was the install I did first. I will then describe the different packages selected for the desktop install. I chose the English language install, and the United Kingdom keyboard, as that is where I'm based. I was confident the ISO was OK (due to the sha1 check I had performed) so I skipped the ISO test offered, and I also chose a local CDRom/image for the install source.
As this is a beta release, Red Hat kindly give you a warning that this is beta software. In order to perform this review I need to accept this. Now we are offered how we want to use storage for our new VM. In this case we want basic storage, so that is selected. I'm then asked what I want to call the machine. I've chosen the hostname CFCSVM1, which refers to my company's initials and Virtual Machine 1. I then selected London, Europe as the timezone. It then asked me to set the root password. I set this as I normally would and proceeded to the partitioning screen. I accepted the defaults, and then wrote the changes to disk.

Red Hat have obviously done a lot of work on speeding up the formatting tools, as this was a lot faster than the early RHEL 5 releases, and also the CentOS releases I moved to once Red Hat were charging for access to RHEL 5. The 20 GB space I allocated was formatted to ext4 in a matter of about 2 minutes. On the similarly set up CentOS 5.4 VM I created for comparison, this took around 10 mins to complete.

Now we are asked what sort of machine we want to set up, and whether we want to customise it. I'm setting this server up as a web server, so I selected web server, and then "customize now" so I could ensure MySQL and PHP were installed as I wanted. After this the install itself started; about 10 minutes later (I forgot to start my stopwatch exactly as the install started - the time on this was 9 mins 05 seconds when I pressed the reboot button) the install was finished, and my server was rebooted and ready to roll.

## Server initial impressions

In typical Red Hat style, you are only given the root user by default. I still feel this is a mistake on Red Hat's part, as in my opinion you should never log in as root.
A system set up using sudo, with either individual users added to the sudoers file or (if you wish) the wheel group added to the sudoers file, is a much more comprehensive setup, as this means the root password need never be known by any one individual. It can be written on a piece of paper in a sealed envelope to be opened in emergencies. Resetting root passwords is trivial if you have physical access to the box anyway, so this small step is not a huge barrier to root access, but it does help.

I was surprised by some of the default package choices for the server install. As far as I am concerned, whilst Plymouth does give wonderful graphical bootup screens to machines with supported graphics cards (at time of writing, this was mostly ATI cards, due to the closed source nature of NVIDIA's Linux drivers), I do not see the point in installing such fancy graphics on a server install that has no X server installed. I also question a Java runtime by default in a server environment, especially as the webserver setup does not include Tomcat or (as far as I can see) anything that actually depends on a Java runtime (indeed an rpm -q --whatrequires java returns "no package requires java"). A full list of RPMs installed in the default webserver (with PHP, Web Server and MySQL additionally installed) can be found here.

The installed kernel is 2.6.32-19. Red Hat's policy is to stick with the same major kernel version for the life of the RHEL release, and backport features and security fixes as required, so it is good to see a recently released kernel version in the mix.
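The wheel-group approach described above boils down to two steps. This sketch writes the sudoers fragment to a demo file rather than running visudo against the real /etc/sudoers, and the username alice is hypothetical:

```shell
# 1. Add the user to the wheel group (run as root on a real system):
#      /usr/sbin/usermod -aG wheel alice
#
# 2. Grant the wheel group sudo rights. On a real system edit with
#    'visudo'; here we write the fragment to a demo file instead.
SUDOERS_DEMO=sudoers.demo
cat > "$SUDOERS_DEMO" <<'EOF'
## Allow members of the wheel group to run all commands
%wheel  ALL=(ALL)  ALL
EOF
grep -q '^%wheel' "$SUDOERS_DEMO" && echo "wheel rule present"
```

With that rule in place, no individual needs the root password for day-to-day administration.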
Other important upgrades include Python to 2.6 (this is a great relief, as 2.4 was aging rather, and many of the Python-based tools I've used of late require 2.5 or above). When looking over the list of packages included in RHEL 6 (default installs or otherwise) I found Karanbir Singh's (one of the CentOS core developers) article at http://www.karan.org/blog/index.php/2010/04/25/first-look-at-the-rhel-6-package-list to be of great use. Like him I am a little disappointed to see Exim go; although I suspect that unofficial yum repositories will re-add it, it is a shame it is no longer supported by official channels.

One other thing which surprised (and annoyed) me slightly was that when I booted into the server setup my network card had not been enabled. A previous RHEL 6 setup I had created had enabled my network card by default, so this was a rather unpleasant shock. When I looked at the network configuration it appeared that the ONBOOT parameter for eth0 was set to No. I set this to yes and ran /sbin/service network restart. This only got me an IPv6 address. In case I had inadvertently selected a wrong option I reinstalled the VM. Whilst I realise IPv6 support is essential now for an enterprise distro with 7 years of support to it, not enabling IPv4 by default does seem a little bizarre. If I select the online repositories I can easily enable IPv4 support (and indeed this does carry over as I would expect), so why not allow me to configure IPv4 by default, as was an option in Fedora 12 (which is the Fedora version this RHEL is based on)?

A quick google showed this was a bit of an issue for many people: http://centos.org/modules/newbb/viewtopic.php?topic_id=25876&viewmode=flat&order=ASC showed me it was an issue with NetworkManager and the network service running together. I had set this box up as X-less, meaning interfacing with NetworkManager was not as easy as when I had Gnome or KDE installed.
*sigh* http://centos.org/modules/newbb/viewtopic.php?topic_id=25876&viewmode=flat&order=ASC post 13 gives a workaround - ifup eth0 && dhclient -4. Another workaround is to manually edit /etc/sysconfig/network-scripts/ifcfg-eth0 and add BOOTPROTO=dhcp and ONBOOT=yes. This is not ideal, and in fact is a big negative to RHEL 6 at present. It looks as though it's actually easier to enable IPv4 by configuring it during the install process (select an online repo during the repository selection) than to manually edit ifcfg-eth0 afterwards; especially as the release notes state RHEL 6 uses NetworkManager by default and not the older Network Administration Tool. On the desktop, as NetworkManager is installed by default, and NM does not yet support IPv6 properly, this issue should not exist. According to post 30 in the above mentioned centos.org topic, Red Hat are aware of this and it will be fixed by the time RHEL 6 is released. I could not find a bug number for this issue, however. One would also think, if NetworkManager were so critical to RHEL 6, it would be installed by default on any of the default install images - this is not the case on a webserver install (with MySQL and PHP as additional selections).

From a performance side, whilst it's noticeably snappier than RHEL 5 was at this stage, some things that I would not expect to take a long time do. For example sudo su - there's a delay of about 15 seconds on my machine - something that doesn't happen on a CentOS 5 image with the same virtual setup. Apache and PHP seem to play nicely together, and the inclusion of APC as a default option is a great addition, especially with caching due to become more and more important over the next few years as web apps get more and more complex. I was slightly disappointed to see PHP 5.3.1 instead of the latest stable 5.2 - however, given that in 7 years' time most people are likely to have upgraded to PHP 5.3 (or greater), this slight annoyance can easily be overlooked.
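The second workaround amounts to a two-line change in the interface config. A sketch, written to a demo file here — on a real RHEL 6 box the path is /etc/sysconfig/network-scripts/ifcfg-eth0 and the final commands would be run as root:

```shell
# Demo copy of the interface config; use the real path on an actual system.
IFCFG=ifcfg-eth0.demo
cat > "$IFCFG" <<'EOF'
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
EOF
cat "$IFCFG"
# Then bring the interface up with an IPv4 lease:
#   ifup eth0 && dhclient -4 eth0
```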
Overall I think this is an improvement on RHEL 5, although this is clearly still a beta product, with the networking issues I mentioned earlier. Again, in 7 years' time I doubt IPv4 will be an issue; but at the moment it is a bit of a bugbear to live without it.

## Desktop initial impressions

I did a reinstall of the OS for this setup, so I could set up a default Gnome-based desktop. This time I did not do any customisation. I wanted to set up a raw desktop. I did however enable the IPv4 settings in the installation process, as I didn't want to have to fiddle with NetworkManager - I find NM to be great for wireless networks but not much else. Hopefully this will be resolved once RHEL 6 goes "gold" later this year.

The number of default packages doubles to over 1000. Not a huge surprise there, and the install time increased proportionately. The install time was about 20 mins (again my stopwatch wasn't started bang on the install start; it was 19 mins 05 seconds).

The install boots OK. I had some problems with screen resolution, but I had been forewarned about these in the above mentioned CentOS forum post. Gnome 2.28 has been chosen, which was the same as used in Fedora 12. First boot provided me the opportunity to create my personal user, something I feel should have a command-line-only option, as logging in as root, as I have already said, is a big no-no in my eyes.

Again I question some of the default choices - does CVS need to be installed by default these days? I doubt it… (rpm -q --whatrequires cvs reports gettext, but gettext is not required by anything - one wonders why…)

OpenOffice 3.1 is a welcome addition, as in 7 years' time this will almost certainly be an aging office package. Given RHEL is more often used on servers than desktops, I have to wonder if this really matters.

## Overall

RHEL 6 is a welcome release, especially given RHEL 5 is now nearly 3½ years through its 7 years of support.
Hopefully it won't be so long between releases, and they will look to match Ubuntu's strict 2-year LTS release policy. Whilst I am all for stability, and appreciate the 7 years of support for enterprise companies, I like to be relatively up to date (especially with languages such as PHP, which move on a long way in 3½ years - RHEL has ended up skipping a major version between RHEL 5 and RHEL 6). I'd give this release 7 out of 10. Lots to be positive about as a server (despite the problems with IPv4), but I have to wonder how important the desktop release will be for enterprises, as not many businesses use Linux on the desktop (certainly not in the UK anyway) and those who do often use the free forks of RHEL such as CentOS anyway. Thoughts, as always, appreciated.
http://textop.org/wiki/index.php?title=Logical_equality
# Logical equality The article below may contain errors of fact, bias, grammar, etc. The Citizendium Foundation and the participants in the Citizendium project make no representations about the reliability of this article or, generally, its suitability for any purpose. We make this disclaimer of all Citizendium article versions that have not been specifically approved. [Image: XNOR logic gate symbol] Logical equality is a logical operator that corresponds to equality in boolean algebra and to the logical biconditional in propositional calculus. It gives the functional value true if both functional arguments have the same logical value, and false if they are different. It is customary practice in various applications, if not always technically precise, to indicate the operation of logical equality on the logical operands x and y by any of the following forms: $\begin{matrix} x \leftrightarrow y & \quad & \quad & x \Leftrightarrow y \\ x \ \mbox{EQ} \ y & \quad & \quad & x = y \end{matrix}$ Some logicians, however, draw a firm distinction between a functional form, like those in the lefthand column, which they interpret as an application of a function to a pair of arguments - and thus a mere indication that the value of the compound expression depends on the values of the component expressions - and an equational form, like those in the righthand column, which they interpret as an assertion that the arguments have equal values, in other words, that the functional value of the compound expression is true. In mathematics, the plus sign "+" almost invariably indicates an operation that satisfies the axioms assigned to addition in the type of algebraic structure that is known as a field.
For boolean algebra, this means that the logical operation signified by "+" is not the same as the inclusive disjunction signified by "∨" but is actually equivalent to the logical inequality operator signified by "≠", or what amounts to the same thing, the exclusive disjunction signified by "XOR". Naturally, these variations in usage have caused some failures to communicate between mathematicians and switching engineers over the years. At any rate, one has the following array of corresponding forms for the symbols associated with logical inequality: $\begin{matrix} x + y & \quad & \quad & x \not\equiv y \\ x \ \mbox{XOR} \ y & \quad & \quad & x \ne y \end{matrix}$ This explains why "EQ" is often called "XNOR" in the combinational logic of circuit engineers, since it is the Negation of the XOR operation. Another rationalization of the admittedly circuitous name "XNOR" is that one begins with the "both false" operator NOR and then adds the eXception, "or both true".

## Definition

Logical equality is an operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both operands are false or both operands are true. The truth table of p EQ q (also written as p = q, p ↔ q, or p ≡ q) is as follows:

| p | q | p = q |
|---|---|-------|
| F | F | T     |
| F | T | F     |
| T | F | F     |
| T | T | T     |

## Alternative descriptions

The form (x = y) is equivalent to the form (x ∧ y) ∨ (¬x ∧ ¬y). $(x = y) = \lnot(x + y) = (x \land y) \lor (\lnot x \land \lnot y)$ For the operands x and y, the truth table of the logical equality operator $x \leftrightarrow y$ is as follows:

| x \ y | T | F |
|-------|---|---|
| T     | T | F |
| F     | F | T |

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section titled GNU FDL text.
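The definition above can be checked mechanically. A minimal sketch in Python (the function name `xnor` is my own, not from the article):

```python
from itertools import product

def xnor(p: bool, q: bool) -> bool:
    """Logical equality (EQ / XNOR): true iff both operands agree."""
    return p == q  # same as (p and q) or (not p and not q)

# Reproduce the truth table from the Definition section.
for p, q in product([False, True], repeat=2):
    assert xnor(p, q) == ((p and q) or (not p and not q))
    assert xnor(p, q) == (not (p != q))  # XNOR is the negation of XOR
```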
http://openstudy.com/updates/4f303f10e4b0fc09381ecaea
## anonymous 4 years ago Could someone explain how 4 + x^4 - 2 + (1/x^4) is equal to (x^2 + 1/(x^2))^2 please? 1. anonymous is the first part really $4+x^4-2$?? 2. anonymous yes, + 1/(x^4) 3. anonymous so first part is really $2+x^4$ and you have $2+x^4+\frac{1}{x^4}$ 4. anonymous then to add you have a denominator of $x^4$ so you need to write $\frac{2x^4}{x^4}+\frac{x^8}{x^4}+\frac{1}{x^4}$ $\frac{x^8+2x^4+1}{x^4}$ $\frac{(x^4+1)^2}{x^4}$ or $\left( \frac{x^4+1}{x^2}\right)^2$ 5. anonymous Oh! Thanks! :)
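To connect the final form in the answer back to the expression in the question, note that the fraction inside the square splits term by term:

```latex
\left(\frac{x^4+1}{x^2}\right)^2
  = \left(x^2 + \frac{1}{x^2}\right)^2
  = x^4 + 2 + \frac{1}{x^4},
```

which is exactly $4 + x^4 - 2 + \frac{1}{x^4}$, since $4 - 2 = 2$.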
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-3-section-3-11-hyperbolic-functions-3-11-exercises-page-265/38
## Calculus: Early Transcendentals 8th Edition $$f'(t)=\frac{2\cosh t}{(1-\sinh t)^2}$$ $f'(t)=\frac{d}{dt}\frac{1+\sinh t}{1-\sinh t}$ Using the quotient rule: $f'(t)=\frac{\cosh t(1-\sinh t)-(1+\sinh t)(-\cosh t)}{(1-\sinh t)^2}$ $=\frac{\cosh t-\cosh t\sinh t+\cosh t+\cosh t\sinh t}{(1-\sinh t)^2}$ $=\frac{2\cosh t}{(1-\sinh t)^2}$
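As a numerical sanity check of this result (a finite-difference sketch, not part of the original solution), the closed form can be compared against a central-difference approximation at a sample point:

```python
import math

def f(t: float) -> float:
    """f(t) = (1 + sinh t) / (1 - sinh t), defined away from sinh t = 1."""
    return (1 + math.sinh(t)) / (1 - math.sinh(t))

def f_prime(t: float) -> float:
    """Closed form derived above: 2 cosh t / (1 - sinh t)^2."""
    return 2 * math.cosh(t) / (1 - math.sinh(t)) ** 2

# Central difference should agree with the closed form to high precision.
t, h = 0.3, 1e-6
numeric = (f(t + h) - f(t - h)) / (2 * h)
assert abs(numeric - f_prime(t)) < 1e-6
```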
https://codecov.io/gh/mtennekes/tmap/src/master/R/tm_layers.R
1 check_deprecated_layer_fun_args <- function(auto.palette.mapping, max.categories, midpoint) { 2 2 if (!is.null(auto.palette.mapping)) { 3 0 warning("The argument auto.palette.mapping is deprecated. Please use midpoint for numeric data and stretch.palette for categorical data to control the palette mapping.", call. = FALSE) 4 0 if (auto.palette.mapping && is.null(midpoint)) midpoint <- 0 # for backwards compatability 5 2 } 6 2 if (!is.null(max.categories)) warning("The argument max.categories is deprecated. It can be specified with tmap_options.", call. = FALSE) 7 2 midpoint 8 } 9 10 #' Add text labels 11 #' 12 #' Creates a \code{\link{tmap-element}} that adds text labels. 13 #' 14 #' @param text name of the variable in the shape object that contains the text labels 15 #' @param size relative size of the text labels (see note). Either one number, a name of a numeric variable in the shape data that is used to scale the sizes proportionally, or the value \code{"AREA"}, where the text size is proportional to the area size of the polygons. 16 #' @param col color of the text labels. Either a color value or a data variable name. If multiple values are specified, small multiples are drawn (see details). 17 #' @param root root number to which the font sizes are scaled. Only applicable if \code{size} is a variable name or \code{"AREA"}. If \code{root=2}, the square root is taken, if \code{root=3}, the cube root etc. 18 #' @param clustering value that determines whether the text labels are clustered in \code{"view"} mode. One of: \code{TRUE}, \code{FALSE}, or the output of \code{\link[leaflet:markerClusterOptions]{markerClusterOptions}}. 19 #' @param size.lim vector of two limit values of the \code{size} variable. Only text labels are drawn whose value is greater than or equal to the first value. Text labels whose values exceed the second value are drawn at the size of the second value. Only applicable when \code{size} is the name of a numeric variable of \code{shp}. 
See also \code{size.lowerbound} which is a threshold of the relative font size. 20 #' @param sizes.legend vector of text sizes that are shown in the legend. By default, this is determined automatically. 21 #' @param sizes.legend.labels vector of labels for that correspond to \code{sizes.legend}. 22 #' @param sizes.legend.text vector of example text to show in the legend next to sizes.legend.labels. By default "Abc". When \code{NA}, examples from the data variable whose sizes are close to the sizes.legend are taken and \code{"NA"} for classes where no match is found. 23 #' @param n preferred number of color scale classes. Only applicable when \code{col} is a numeric variable name. 24 #' @param style method to process the color scale when \code{col} is a numeric variable. Discrete options are \code{"cat"}, \code{"fixed"}, \code{"sd"}, \code{"equal"}, \code{"pretty"}, \code{"quantile"}, \code{"kmeans"}, \code{"hclust"}, \code{"bclust"}, \code{"fisher"}, \code{"jenks"}, and \code{"log10_pretty"}. A numeric variable is processed as a categorical variable when using \code{"cat"}, i.e. each unique value will correspond to a distinct category. For the other discrete options (except \code{"log10_pretty"}), see the details in \code{\link[classInt:classIntervals]{classIntervals}}. Continuous options are \code{"cont"}, \code{"order"}, and \code{"log10"}. The first maps the values of \code{col} to a smooth gradient, the second maps the order of values of \code{col} to a smooth gradient, and the third uses a logarithmic transformation. 25 #' @param breaks in case \code{style=="fixed"}, breaks should be specified. The \code{breaks} argument can also be used when \code{style="cont"}. In that case, the breaks are mapped evenly to the sequential or diverging color palette. 26 #' @param interval.closure value that determines whether where the intervals are closed: \code{"left"} or \code{"right"}. Only applicable if \code{col} is a numeric variable. 
27 #' @param palette a palette name or a vector of colors. See \code{tmaptools::palette_explorer()} for the named palettes. Use a \code{"-"} as prefix to reverse the palette. The default palette is taken from \code{\link{tm_layout}}'s argument \code{aes.palette}, which typically depends on the style. The type of palette from \code{aes.palette} is automatically determined, but can be overwritten: use \code{"seq"} for sequential, \code{"div"} for diverging, and \code{"cat"} for categorical. 28 #' @param labels labels of the color classes, applicable if \code{col} is a data variable name 29 #' @param labels.text Example text to show in the legend next to the \code{labels}. When \code{NA} (default), examples from the data variable are taken and \code{"NA"} for classes where they don't exist. 30 #' @param midpoint The value mapped to the middle color of a diverging palette. By default it is set to 0 if negative and positive values are present. In that case, the two sides of the color palette are assigned to negative respectively positive values. If all values are positive or all values are negative, then the midpoint is set to \code{NA}, which means that the value that corresponds to the middle color class (see \code{style}) is mapped to the middle color. Only applies when \code{col} is a numeric variable. If it is specified for sequential color palettes (e.g. \code{"Blues"}), then this color palette will be treated as a diverging color palette. 31 #' @param stretch.palette Logical that determines whether the categorical color palette should be stretched if there are more categories than colors. If \code{TRUE} (default), interpolated colors are used (like a rainbow). If \code{FALSE}, the palette is repeated. 32 #' @param contrast vector of two numbers that determine the range that is used for sequential and diverging palettes (applicable when \code{auto.palette.mapping=TRUE}). Both numbers should be between 0 and 1. 
The first number determines where the palette begins, and the second number where it ends. For sequential palettes, 0 means the brightest color, and 1 the darkest color. For diverging palettes, 0 means the middle color, and 1 both extremes. If only one number is provided, this number is interpreted as the endpoint (with 0 taken as the start). 33 #' @param colorNA colour for missing values. Use \code{NULL} for transparency. 34 #' @param textNA text used for missing values. 35 #' @param showNA logical that determines whether missing values are named in the legend. By default (\code{NA}), this depends on the presence of missing values. 36 #' @param colorNULL colour for polygons that are shown on the map that are out of scope 37 #' @param fontface font face of the text labels. By default, determined by the fontface argument of \code{\link{tm_layout}}. 38 #' @param fontfamily font family of the text labels. By default, determined by the fontfamily argument of \code{\link{tm_layout}}. 39 #' @param alpha transparency number between 0 (totally transparent) and 1 (not transparent). By default, the alpha value of the \code{fontcolor} is used (normally 1). 40 #' @param case case of the font. Use "upper" to generate upper-case text, "lower" to generate lower-case text, and \code{NA} to leave the text as is. 41 #' @param shadow logical that determines whether a shadow is depicted behind the text. The color of the shadow is either white or yellow, depending of the \code{fontcolor}. 42 #' @param bg.color background color of the text labels. By default, \code{bg.color=NA}, so no background is drawn. 43 #' @param bg.alpha number between 0 and 1 that specifies the transparency of the text background (0 is totally transparent, 1 is solid background). 44 #' @param size.lowerbound lowerbound for \code{size}. Only applicable when \code{size} is not a constant. 
If \code{print.tiny} is \code{TRUE}, then all text labels which relative text is smaller than \code{size.lowerbound} are depicted at relative size \code{size.lowerbound}. If \code{print.tiny} is \code{FALSE}, then text labels are only depicted if their relative sizes are at least \code{size.lowerbound} (in other words, tiny labels are omitted). 45 #' @param print.tiny boolean, see \code{size.lowerbound} 46 #' @param scale text size multiplier, useful in case \code{size} is variable or \code{"AREA"}. 47 #' @param auto.placement logical (or numeric) that determines whether the labels are placed automatically. If \code{TRUE}, the labels are placed next to the coordinate points with as little overlap as possible using the simulated annealing algorithm. Therefore, it is recommended for labeling spatial dots or symbols. If a numeric value is provided, this value acts as a parameter that specifies the distance between the coordinate points and the text labels in terms of text line heights. 48 #' @param remove.overlap logical that determines whether the overlapping labels are removed 49 #' @param along.lines logical that determines whether labels are rotated along the spatial lines. Only applicable if a spatial lines shape is used. 50 #' @param overwrite.lines logical that determines whether the part of the lines below the text labels is removed. Only applicable if a spatial lines shape is used. 51 #' @param just justification of the text relative to the point coordinates. Either one of the following values: \code{"left"} , \code{"right"}, \code{"center"}, \code{"bottom"}, and \code{"top"}, or a vector of two values where first value specifies horizontal and the second value vertical justification. Besides the mentioned values, also numeric values between 0 and 1 can be used. 0 means left justification for the first value and bottom justification for the second value. Note that in view mode, only one value is used. 
52 #' @param xmod horizontal position modification of the text (relatively): 0 means no modification, and 1 corresponds to the height of one line of text. Either a single number for all polygons, or a numeric variable in the shape data specifying a number for each polygon. Together with \code{ymod}, it determines position modification of the text labels. In most coordinate systems (projections), the origin is located at the bottom left, so negative \code{xmod} move the text to the left, and negative \code{ymod} values to the bottom. 53 #' @param ymod vertical position modification. See xmod. 54 #' @param title.size title of the legend element regarding the text sizes 55 #' @param title.col title of the legend element regarding the text colors 56 #' @param legend.size.show logical that determines whether the legend for the text sizes is shown 57 #' @param legend.col.show logical that determines whether the legend for the text colors is shown 58 #' @param legend.format list of formatting options for the legend numbers. Only applicable if \code{labels} is undefined. Parameters are: 59 #' \describe{ 60 #' \item{fun}{Function to specify the labels. It should take a numeric vector, and should return a character vector of the same size. By default it is not specified. If specified, the list items \code{scientific}, \code{format}, and \code{digits} (see below) are not used.} 61 #' \item{scientific}{Should the labels be formatted scientifically? If so, square brackets are used, and the \code{format} of the numbers is \code{"g"}. Otherwise, \code{format="f"}, and \code{text.separator}, \code{text.less.than}, and \code{text.or.more} are used. Also, the numbers are automatically rounded to millions or billions if applicable.} 62 #' \item{format}{By default, \code{"f"}, i.e. the standard notation \code{xxx.xxx}, is used. If \code{scientific=TRUE} then \code{"g"}, which means that numbers are formatted scientifically, i.e. 
\code{n.dddE+nn} if needed to save space.} 63 #' \item{digits}{Number of digits after the decimal point if \code{format="f"}, and the number of significant digits otherwise.} 64 #' \item{big.num.abbr}{Vector that defines whether and which abbrevations are used for large numbers. It is a named numeric vector, where the name indicated the abbreviation, and the number the magnitude (in terms on numbers of zero). Numbers are only abbrevation when they are large enough. Set it to \code{NA} to disable abbrevations. The default is \code{c("mln" = 6, "bln" = 9)}. For layers where \code{style} is set to \code{log10} or \code{log10_pretty}, the default is \code{NA}.} 65 #' \item{text.separator}{Character string to use to separate numbers in the legend (default: "to").} 66 #' \item{text.less.than}{Character value(s) to use to translate "Less than". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}} 67 #' \item{text.or.more}{Character value(s) to use to translate "or more". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}} 68 #' \item{text.align}{Value that determines how the numbers are aligned, \code{"left"}, \code{"center"} or \code{"right"}}. By default \code{"left"} for legends in portrait format (\code{legend.is.portrait = TRUE}), and \code{"center"} otherwise. 69 #' \item{text.to.columns}{Logical that determines whether the text is aligned to three columns (from, text.separator, to). By default \code{FALSE}.} 70 #' \item{...}{Other arguments passed on to \code{\link[base:formatC]{formatC}}} 71 #' } 72 #' @param legend.size.is.portrait logical that determines whether the legend element regarding the text sizes is in portrait mode (\code{TRUE}) or landscape (\code{FALSE}) 73 #' @param legend.size.reverse logical that determines whether the items of the legend regarding the text sizes are shown in reverse order, i.e. 
from bottom to top when \code{legend.size.is.portrait = TRUE} and from right to left when \code{legend.size.is.portrait = FALSE} 74 #' @param legend.hist logical that determines whether a histogram is shown regarding the text colors 75 #' @param legend.hist.title title for the histogram. By default, one title is used for both the histogram and the normal legend for text colors. 76 #' @param legend.col.is.portrait logical that determines whether the legend element regarding the text colors is in portrait mode (\code{TRUE}) or landscape (\code{FALSE}) 77 #' @param legend.col.reverse logical that determines whether the items of the legend regarding the text colors are shown in reverse order, i.e. from bottom to top when \code{legend.col.is.portrait = TRUE} and from right to left when \code{legend.col.is.portrait = FALSE} 78 #' @param legend.size.z index value that determines the position of the legend element regarding the text sizes with respect to other legend elements. The legend elements are stacked according to their z values. The legend element with the lowest z value is placed on top. 79 #' @param legend.col.z index value that determines the position of the legend element regarding the text colors. (See \code{legend.size.z}) 80 #' @param legend.hist.z index value that determines the position of the histogram legend element. (See \code{legend.size.z}) 81 #' @param group name of the group to which this layer belongs in view mode. Each group can be selected or deselected in the layer control item. Set \code{group = NULL} to hide the layer in the layer control item. By default, it will be set to the name of the shape (specified in \code{\link{tm_shape}}). 82 #' @param auto.palette.mapping deprecated. It has been replaced by \code{midpoint} for numeric variables and \code{stretch.palette} for categorical variables. 83 #' @param max.categories deprecated. It has moved to \code{\link{tmap_options}}. 
84 #' @note The absolute fontsize (in points) is determined by the (ROOT) viewport, which may depend on the graphics device. 85 #' @export 86 #' @example ./examples/tm_text.R 87 #' @seealso \href{../doc/tmap-getstarted.html}{\code{vignette("tmap-getstarted")}} 88 #' @references Tennekes, M., 2018, {tmap}: Thematic Maps in {R}, Journal of Statistical Software, 84(6), 1-39, \href{https://doi.org/10.18637/jss.v084.i06}{DOI} 89 #' @return \code{\link{tmap-element}} 90 tm_text <- function(text, size=1, col=NA, root=3, 91 clustering=FALSE, 92 size.lim=NA, 93 sizes.legend = NULL, 94 sizes.legend.labels = NULL, 95 sizes.legend.text = "Abc", 96 n = 5, style = ifelse(is.null(breaks), "pretty", "fixed"), 97 breaks = NULL, 98 interval.closure = "left", 99 palette = NULL, 100 labels = NULL, 101 labels.text = NA, 102 midpoint = NULL, 103 stretch.palette = TRUE, 104 contrast = NA, 105 colorNA = NA, 106 textNA = "Missing", 107 showNA = NA, 108 colorNULL = NA, 109 fontface=NA, 110 fontfamily=NA, alpha=NA, case=NA, shadow=FALSE, bg.color=NA, bg.alpha=NA, size.lowerbound=.4, print.tiny=FALSE, scale=1, auto.placement=FALSE, remove.overlap=FALSE, along.lines=FALSE, overwrite.lines=FALSE, just="center", xmod=0, ymod=0, 111 title.size = NA, 112 title.col = NA, 113 legend.size.show=TRUE, 114 legend.col.show=TRUE, 115 legend.format=list(), 116 legend.size.is.portrait=FALSE, 117 legend.col.is.portrait=TRUE, 118 legend.size.reverse=FALSE, 119 legend.col.reverse=FALSE, 120 legend.hist=FALSE, 121 legend.hist.title=NA, 122 legend.size.z=NA, 123 legend.col.z=NA, 124 legend.hist.z=NA, 125 group = NA, 126 auto.palette.mapping = NULL, 127 max.categories = NULL) { 128 2 midpoint <- check_deprecated_layer_fun_args(auto.palette.mapping, max.categories, midpoint) 129 130 2 g <- list(tm_text=c(as.list(environment()), list(call=names(match.call(expand.dots = TRUE)[-1])))) 131 2 class(g) <- "tmap" 132 2 g 133 } 134 135 #' Draw iso (contour) lines with labels 136 #' 137 #' This function is a wrapper of 
\code{\link{tm_lines}} and \code{\link{tm_text}} aimed to draw isopleths, which can be created with \code{\link[tmaptools:smooth_map]{smooth_map}}. 138 #' 139 #' @param col line color. See \code{\link{tm_lines}}. 140 #' @param text text to display. By default, it is the variable named \code{"level"} of the shape that is created with \code{\link[tmaptools:smooth_map]{smooth_map}} 141 #' @param size text size (see \code{\link{tm_text}}) 142 #' @param remove.overlap see \code{\link{tm_text}} 143 #' @param along.lines see \code{\link{tm_text}} 144 #' @param overwrite.lines see \code{\link{tm_text}} 145 #' @param group name of the group to which this layer belongs in view mode. Each group can be selected or deselected in the layer control item. Set \code{group = NULL} to hide the layer in the layer control item. By default, it will be set to the name of the shape (specified in \code{\link{tm_shape}}). 146 #' @param ... arguments passed on to \code{\link{tm_lines}} or \code{\link{tm_text}} 147 #' @export 148 #' @seealso \code{\link[tmaptools:smooth_map]{smooth_map}} 149 tm_iso <- function(col=NA, text="level", size=.5, 150 remove.overlap=TRUE, along.lines=TRUE, overwrite.lines=TRUE, 151 group = NA, ...) { 152 0 args <- list(...) 153 0 argsL <- args[intersect(names(formals("tm_lines")), names(args))] 154 0 argsT <- args[intersect(names(formals("tm_text")), names(args))] 155 156 0 do.call("tm_lines", c(list(col=col), argsL)) + 157 0 do.call("tm_text", c(list(text=text, size=size, 158 0 remove.overlap=remove.overlap, 159 0 along.lines=along.lines, 160 0 overwrite.lines = overwrite.lines), 161 0 argsT)) 162 } 163 164 #' Draw spatial lines 165 #' 166 #' Creates a \code{\link{tmap-element}} that draw spatial lines. 167 #' 168 #' Small multiples can be drawn in two ways: either by specifying the \code{by} argument in \code{\link{tm_facets}}, or by defining multiple variables in the aesthetic arguments. The aesthetic arguments of \code{tm_lines} are \code{col} and \code{lwd}. 
#' In the latter case, the arguments, except for the ones starting with \code{legend.}, can be specified for small multiples as follows. If the argument normally only takes a single value, such as \code{n}, then a vector of those values can be specified, one for each small multiple. If the argument normally can take a vector, such as \code{palette}, then a list of those vectors (or values) can be specified, one for each small multiple.
#'
#' @param col color of the lines. Either a color value or a data variable name. If multiple values are specified, small multiples are drawn (see details).
#' @param lwd line width. Either a numeric value or a data variable. In the latter case, the class of the highest values (see \code{style}) will get the line width defined by \code{scale}. If multiple values are specified, small multiples are drawn (see details).
#' @param lty line type.
#' @param alpha transparency number between 0 (totally transparent) and 1 (not transparent). By default, the alpha value of \code{col} is used (normally 1).
#' @param scale line width multiplier number.
#' @param lwd.legend vector of line widths that are shown in the legend. By default, this is determined automatically.
#' @param lwd.legend.labels vector of labels that correspond to \code{lwd.legend}.
#' @param n preferred number of color scale classes. Only applicable when \code{lwd} is the name of a numeric variable.
#' @param style method to process the color scale when \code{col} is a numeric variable. Discrete options are \code{"cat"}, \code{"fixed"}, \code{"sd"}, \code{"equal"}, \code{"pretty"}, \code{"quantile"}, \code{"kmeans"}, \code{"hclust"}, \code{"bclust"}, \code{"fisher"}, \code{"jenks"}, and \code{"log10_pretty"}. A numeric variable is processed as a categorical variable when using \code{"cat"}, i.e. each unique value will correspond to a distinct category. For the other discrete options (except \code{"log10_pretty"}), see the details in \code{\link[classInt:classIntervals]{classIntervals}}. Continuous options are \code{"cont"}, \code{"order"}, and \code{"log10"}. The first maps the values of \code{col} to a smooth gradient, the second maps the order of values of \code{col} to a smooth gradient, and the third uses a logarithmic transformation.
#' @param breaks in case \code{style == "fixed"}, breaks should be specified. The \code{breaks} argument can also be used when \code{style == "cont"}. In that case, the breaks are mapped evenly to the sequential or diverging color palette.
#' @param interval.closure value that determines where the intervals are closed: \code{"left"} or \code{"right"}. Only applicable if \code{col} is a numeric variable.
#' @param palette a palette name or a vector of colors. See \code{tmaptools::palette_explorer()} for the named palettes. Use a \code{"-"} as prefix to reverse the palette. The default palette is taken from \code{\link{tm_layout}}'s argument \code{aes.palette}, which typically depends on the style. The type of palette from \code{aes.palette} is automatically determined, but can be overwritten: use \code{"seq"} for sequential, \code{"div"} for diverging, and \code{"cat"} for categorical.
#' @param labels labels of the classes.
#' @param midpoint The value mapped to the middle color of a diverging palette. By default it is set to 0 if negative and positive values are present. In that case, the two sides of the color palette are assigned to negative and positive values, respectively. If all values are positive or all values are negative, then the midpoint is set to \code{NA}, which means that the value that corresponds to the middle color class (see \code{style}) is mapped to the middle color. Only applies when \code{col} is a numeric variable. If it is specified for a sequential color palette (e.g. \code{"Blues"}), then this color palette will be treated as a diverging color palette.
#' @param stretch.palette Logical that determines whether the categorical color palette should be stretched if there are more categories than colors. If \code{TRUE} (default), interpolated colors are used (like a rainbow). If \code{FALSE}, the palette is repeated.
#' @param contrast vector of two numbers that determine the range that is used for sequential and diverging palettes (applicable when \code{auto.palette.mapping = TRUE}). Both numbers should be between 0 and 1. The first number determines where the palette begins, and the second number where it ends. For sequential palettes, 0 means the brightest color, and 1 the darkest color. For diverging palettes, 0 means the middle color, and 1 both extremes. If only one number is provided, this number is interpreted as the endpoint (with 0 taken as the start).
#' @param colorNA color used for missing values. Use \code{NULL} for transparency.
#' @param textNA text used for missing values.
#' @param showNA logical that determines whether missing values are named in the legend. By default (\code{NA}), this depends on the presence of missing values.
#' @param colorNULL color for polygons that are shown on the map but are out of scope.
#' @param title.col title of the legend element regarding the line colors.
#' @param title.lwd title of the legend element regarding the line widths.
#' @param legend.col.show logical that determines whether the legend for the line colors is shown.
#' @param legend.lwd.show logical that determines whether the legend for the line widths is shown.
#' @param legend.format list of formatting options for the legend numbers. Only applicable if \code{labels} is undefined. Parameters are:
#' \describe{
#' \item{fun}{Function to specify the labels. It should take a numeric vector and return a character vector of the same size. By default it is not specified. If specified, the list items \code{scientific}, \code{format}, and \code{digits} (see below) are not used.}
#' \item{scientific}{Should the labels be formatted scientifically? If so, square brackets are used, and the \code{format} of the numbers is \code{"g"}. Otherwise, \code{format = "f"}, and \code{text.separator}, \code{text.less.than}, and \code{text.or.more} are used. Also, the numbers are automatically rounded to millions or billions if applicable.}
#' \item{format}{By default, \code{"f"}, i.e. the standard notation \code{xxx.xxx}, is used. If \code{scientific = TRUE} then \code{"g"}, which means that numbers are formatted scientifically, i.e. \code{n.dddE+nn}, if needed to save space.}
#' \item{digits}{Number of digits after the decimal point if \code{format = "f"}, and the number of significant digits otherwise.}
#' \item{big.num.abbr}{Vector that defines whether and which abbreviations are used for large numbers. It is a named numeric vector, where the name indicates the abbreviation and the number the magnitude (in terms of the number of zeros). Numbers are only abbreviated when they are large enough. Set it to \code{NA} to disable abbreviations. The default is \code{c("mln" = 6, "bln" = 9)}. For layers where \code{style} is set to \code{log10} or \code{log10_pretty}, the default is \code{NA}.}
#' \item{text.separator}{Character string used to separate numbers in the legend (default: "to").}
#' \item{text.less.than}{Character value(s) used to translate "Less than". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}.}
#' \item{text.or.more}{Character value(s) used to translate "or more". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}.}
#' \item{text.align}{Value that determines how the numbers are aligned: \code{"left"}, \code{"center"} or \code{"right"}. By default \code{"left"} for legends in portrait format (\code{legend.is.portrait = TRUE}), and \code{"center"} otherwise.}
#' \item{text.to.columns}{Logical that determines whether the text is aligned to three columns (from, text.separator, to). By default \code{FALSE}.}
#' \item{...}{Other arguments passed on to \code{\link[base:formatC]{formatC}}.}
#' }
#' @param legend.col.is.portrait logical that determines whether the legend element regarding the line colors is in portrait mode (\code{TRUE}) or landscape (\code{FALSE}).
#' @param legend.lwd.is.portrait logical that determines whether the legend element regarding the line widths is in portrait mode (\code{TRUE}) or landscape (\code{FALSE}).
#' @param legend.col.reverse logical that determines whether the items of the legend regarding the line colors are shown in reverse order, i.e. from bottom to top when \code{legend.col.is.portrait = TRUE} and from right to left when \code{legend.col.is.portrait = FALSE}.
#' @param legend.lwd.reverse logical that determines whether the items of the legend regarding the line widths are shown in reverse order, i.e. from bottom to top when \code{legend.lwd.is.portrait = TRUE} and from right to left when \code{legend.lwd.is.portrait = FALSE}.
#' @param legend.hist logical that determines whether a histogram is shown regarding the line colors.
#' @param legend.hist.title title for the histogram. By default, one title is used for both the histogram and the normal legend for line colors.
#' @param legend.col.z index value that determines the position of the legend element regarding the line colors with respect to other legend elements. The legend elements are stacked according to their z values. The legend element with the lowest z value is placed on top.
#' @param legend.lwd.z index value that determines the position of the legend element regarding the line widths (see \code{legend.col.z}).
#' @param legend.hist.z index value that determines the position of the legend element regarding the histogram (see \code{legend.col.z}).
#' @param id name of the data variable that specifies the indices of the lines. Only used for \code{"view"} mode (see \code{\link{tmap_mode}}).
#' @param popup.vars names of data variables that are shown in the popups in \code{"view"} mode. If \code{NA} (default), only aesthetic variables (i.e. those specified by \code{col} and \code{lwd}) are shown. If no aesthetic variables are specified, all variables are shown. Set \code{popup.vars} to \code{FALSE} to disable popups. When a vector of variable names is provided, the names (if specified) are printed in the popups.
#' @param popup.format list of formatting options for the popup values. See the argument \code{legend.format} for options. Only applicable for numeric data variables. If one list of formatting options is provided, it is applied to all numeric variables of \code{popup.vars}. Alternatively, a (named) list of lists can be provided. In that case, each list of formatting options is applied to the named variable.
#' @param group name of the group to which this layer belongs in view mode. Each group can be selected or deselected in the layer control item. Set \code{group = NULL} to hide the layer in the layer control item. By default, it will be set to the name of the shape (specified in \code{\link{tm_shape}}).
#' @param auto.palette.mapping deprecated. It has been replaced by \code{midpoint} for numeric variables and \code{stretch.palette} for categorical variables.
#' @param max.categories deprecated. It has moved to \code{\link{tmap_options}}.
#' @export
#' @seealso \href{../doc/tmap-getstarted.html}{\code{vignette("tmap-getstarted")}}
#' @references Tennekes, M., 2018, {tmap}: Thematic Maps in {R}, Journal of Statistical Software, 84(6), 1-39, \href{https://doi.org/10.18637/jss.v084.i06}{DOI}
#' @example ./examples/tm_lines.R
#' @return \code{\link{tmap-element}}
tm_lines <- function(col = NA, lwd = 1, lty = "solid", alpha = NA,
                     scale = 1,
                     lwd.legend = NULL,
                     lwd.legend.labels = NULL,
                     n = 5, style = ifelse(is.null(breaks), "pretty", "fixed"),
                     breaks = NULL,
                     interval.closure = "left",
                     palette = NULL,
                     labels = NULL,
                     midpoint = NULL,
                     stretch.palette = TRUE,
                     contrast = NA,
                     colorNA = NA,
                     textNA = "Missing",
                     showNA = NA,
                     colorNULL = NA,
                     title.col = NA,
                     title.lwd = NA,
                     legend.col.show = TRUE,
                     legend.lwd.show = TRUE,
                     legend.format = list(),
                     legend.col.is.portrait = TRUE,
                     legend.lwd.is.portrait = FALSE,
                     legend.col.reverse = FALSE,
                     legend.lwd.reverse = FALSE,
                     legend.hist = FALSE,
                     legend.hist.title = NA,
                     legend.col.z = NA,
                     legend.lwd.z = NA,
                     legend.hist.z = NA,
                     id = NA,
                     popup.vars = NA,
                     popup.format = list(),
                     group = NA,
                     auto.palette.mapping = NULL,
                     max.categories = NULL) {
  midpoint <- check_deprecated_layer_fun_args(auto.palette.mapping, max.categories, midpoint)
  g <- list(tm_lines = c(as.list(environment()), list(call = names(match.call(expand.dots = TRUE)[-1]))))
  class(g) <- "tmap"
  g
}


#' Draw polygons
#'
#' Creates a \code{\link{tmap-element}} that draws the polygons. \code{tm_fill} fills the polygons: either a fixed color is used, or a color palette is mapped to a data variable. \code{tm_borders} draws the borders of the polygons. \code{tm_polygons} fills the polygons and draws the polygon borders.
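#'
#' As a minimal usage sketch (illustrative only; it assumes the \code{World} dataset shipped with tmap, which includes the numeric variable \code{"HPI"}):
#'
#' \preformatted{
#' data(World)
#' tm_shape(World) +
#'   tm_polygons("HPI")
#' }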
#'
#' Small multiples can be drawn in two ways: either by specifying the \code{by} argument in \code{\link{tm_facets}}, or by defining multiple variables in the aesthetic arguments. The aesthetic argument of \code{tm_fill} (and \code{tm_polygons}) is \code{col}. In the latter case, the arguments, except for \code{thres.poly} and the ones starting with \code{legend.}, can be specified for small multiples as follows. If the argument normally only takes a single value, such as \code{n}, then a vector of those values can be specified, one for each small multiple. If the argument normally can take a vector, such as \code{palette}, then a list of those vectors (or values) can be specified, one for each small multiple.
#'
#' @name tm_fill
#' @rdname tm_polygons
#' @param col For \code{tm_fill}, it is one of
#' \itemize{
#' \item a single color value
#' \item the name of a data variable that is contained in \code{shp}. Either the data variable contains color values, or values (numeric or categorical) that will be depicted by a color palette (see \code{palette}). In the latter case, a choropleth is drawn.
#' \item \code{"MAP_COLORS"}. In this case polygons will be colored such that adjacent polygons do not get the same color. See the underlying function \code{\link[tmaptools:map_coloring]{map_coloring}} for details.}
#' For \code{tm_borders}, it is a single color value that specifies the border line color. If multiple values are specified, small multiples are drawn (see details).
#' @param alpha transparency number between 0 (totally transparent) and 1 (not transparent). By default, the alpha value of \code{col} is used (normally 1).
#' @param palette a palette name or a vector of colors. See \code{tmaptools::palette_explorer()} for the named palettes. Use a \code{"-"} as prefix to reverse the palette. The default palette is taken from \code{\link{tm_layout}}'s argument \code{aes.palette}, which typically depends on the style. The type of palette from \code{aes.palette} is automatically determined, but can be overwritten: use \code{"seq"} for sequential, \code{"div"} for diverging, and \code{"cat"} for categorical.
#' @param convert2density logical that determines whether \code{col} is converted to a density variable. Should be \code{TRUE} when \code{col} consists of absolute numbers. The area size is either approximated from the shape object, or given by the argument \code{area}.
#' @param area name of the data variable that contains the area sizes in square kilometers.
#' @param n preferred number of classes (in case \code{col} is a numeric variable).
#' @param style method to process the color scale when \code{col} is a numeric variable. Discrete options are \code{"cat"}, \code{"fixed"}, \code{"sd"}, \code{"equal"}, \code{"pretty"}, \code{"quantile"}, \code{"kmeans"}, \code{"hclust"}, \code{"bclust"}, \code{"fisher"}, \code{"jenks"}, and \code{"log10_pretty"}. A numeric variable is processed as a categorical variable when using \code{"cat"}, i.e. each unique value will correspond to a distinct category. For the other discrete options (except \code{"log10_pretty"}), see the details in \code{\link[classInt:classIntervals]{classIntervals}}. Continuous options are \code{"cont"}, \code{"order"}, and \code{"log10"}. The first maps the values of \code{col} to a smooth gradient, the second maps the order of values of \code{col} to a smooth gradient, and the third uses a logarithmic transformation.
#' @param breaks in case \code{style == "fixed"}, breaks should be specified. The \code{breaks} argument can also be used when \code{style == "cont"}. In that case, the breaks are mapped evenly to the sequential or diverging color palette.
#' @param interval.closure value that determines where the intervals are closed: \code{"left"} or \code{"right"}. Only applicable if \code{col} is a numeric variable.
#' @param labels labels of the classes.
#' @param midpoint The value mapped to the middle color of a diverging palette. By default it is set to 0 if negative and positive values are present. In that case, the two sides of the color palette are assigned to negative and positive values, respectively. If all values are positive or all values are negative, then the midpoint is set to \code{NA}, which means that the value that corresponds to the middle color class (see \code{style}) is mapped to the middle color. Only applies when \code{col} is a numeric variable. If it is specified for a sequential color palette (e.g. \code{"Blues"}), then this color palette will be treated as a diverging color palette.
#' @param stretch.palette Logical that determines whether the categorical color palette should be stretched if there are more categories than colors. If \code{TRUE} (default), interpolated colors are used (like a rainbow). If \code{FALSE}, the palette is repeated.
#' @param contrast vector of two numbers that determine the range that is used for sequential and diverging palettes (applicable when \code{auto.palette.mapping = TRUE}). Both numbers should be between 0 and 1. The first number determines where the palette begins, and the second number where it ends. For sequential palettes, 0 means the brightest color, and 1 the darkest color. For diverging palettes, 0 means the middle color, and 1 both extremes. If only one number is provided, this number is interpreted as the endpoint (with 0 taken as the start).
#' @param colorNA color used for missing values. Use \code{NULL} for transparency.
#' @param textNA text used for missing values.
#' @param showNA logical that determines whether missing values are named in the legend. By default (\code{NA}), this depends on the presence of missing values.
#' @param colorNULL color for polygons that are shown on the map but are out of scope.
#' @param thres.poly number that specifies the threshold at which polygons are taken into account. The number itself corresponds to the proportion of the area sizes of the polygons to the total polygon size. By default, all polygons are drawn. To ignore polygons that are not visible in a normal plot, a value like \code{1e-05} is recommended.
#' @param title title of the legend element.
#' @param legend.show logical that determines whether the legend is shown.
#' @param legend.format list of formatting options for the legend numbers. Only applicable if \code{labels} is undefined. Parameters are:
#' \describe{
#' \item{fun}{Function to specify the labels. It should take a numeric vector and return a character vector of the same size. By default it is not specified. If specified, the list items \code{scientific}, \code{format}, and \code{digits} (see below) are not used.}
#' \item{scientific}{Should the labels be formatted scientifically? If so, square brackets are used, and the \code{format} of the numbers is \code{"g"}. Otherwise, \code{format = "f"}, and \code{text.separator}, \code{text.less.than}, and \code{text.or.more} are used. Also, the numbers are automatically rounded to millions or billions if applicable.}
#' \item{format}{By default, \code{"f"}, i.e. the standard notation \code{xxx.xxx}, is used. If \code{scientific = TRUE} then \code{"g"}, which means that numbers are formatted scientifically, i.e. \code{n.dddE+nn}, if needed to save space.}
#' \item{digits}{Number of digits after the decimal point if \code{format = "f"}, and the number of significant digits otherwise.}
#' \item{big.num.abbr}{Vector that defines whether and which abbreviations are used for large numbers. It is a named numeric vector, where the name indicates the abbreviation and the number the magnitude (in terms of the number of zeros). Numbers are only abbreviated when they are large enough. Set it to \code{NA} to disable abbreviations. The default is \code{c("mln" = 6, "bln" = 9)}. For layers where \code{style} is set to \code{log10} or \code{log10_pretty}, the default is \code{NA}.}
#' \item{text.separator}{Character string used to separate numbers in the legend (default: "to").}
#' \item{text.less.than}{Character value(s) used to translate "Less than". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}.}
#' \item{text.or.more}{Character value(s) used to translate "or more". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}.}
#' \item{text.align}{Value that determines how the numbers are aligned: \code{"left"}, \code{"center"} or \code{"right"}. By default \code{"left"} for legends in portrait format (\code{legend.is.portrait = TRUE}), and \code{"center"} otherwise.}
#' \item{text.to.columns}{Logical that determines whether the text is aligned to three columns (from, text.separator, to). By default \code{FALSE}.}
#' \item{...}{Other arguments passed on to \code{\link[base:formatC]{formatC}}.}
#' }
#' @param legend.is.portrait logical that determines whether the legend is in portrait mode (\code{TRUE}) or landscape (\code{FALSE}).
#' @param legend.reverse logical that determines whether the items are shown in reverse order, i.e. from bottom to top when \code{legend.is.portrait = TRUE} and from right to left when \code{legend.is.portrait = FALSE}.
#' @param legend.hist logical that determines whether a histogram is shown.
#' @param legend.hist.title title for the histogram. By default, one title is used for both the histogram and the normal legend.
#' @param legend.z index value that determines the position of the legend element with respect to other legend elements. The legend elements are stacked according to their z values. The legend element with the lowest z value is placed on top.
#' @param legend.hist.z index value that determines the position of the histogram legend element.
#' @param id name of the data variable that specifies the indices of the polygons. Only used for \code{"view"} mode (see \code{\link{tmap_mode}}).
#' @param popup.vars names of data variables that are shown in the popups in \code{"view"} mode. If \code{convert2density = TRUE}, the derived density variable name is suffixed with \code{_density}. If \code{NA} (default), only aesthetic variables (i.e. those specified by \code{col} and \code{lwd}) are shown. If no aesthetic variables are specified, all variables are shown. Set \code{popup.vars} to \code{FALSE} to disable popups. When a vector of variable names is provided, the names (if specified) are printed in the popups.
#' @param popup.format list of formatting options for the popup values. See the argument \code{legend.format} for options. Only applicable for numeric data variables. If one list of formatting options is provided, it is applied to all numeric variables of \code{popup.vars}. Alternatively, a (named) list of lists can be provided. In that case, each list of formatting options is applied to the named variable.
#' @param group name of the group to which this layer belongs in view mode. Each group can be selected or deselected in the layer control item. Set \code{group = NULL} to hide the layer in the layer control item. By default, it will be set to the name of the shape (specified in \code{\link{tm_shape}}).
#' @param auto.palette.mapping deprecated. It has been replaced by \code{midpoint} for numeric variables and \code{stretch.palette} for categorical variables.
#' @param max.categories deprecated. It has moved to \code{\link{tmap_options}}.
#' @param ... for \code{tm_polygons}, these arguments are passed to either \code{tm_fill} or \code{tm_borders}. For \code{tm_fill}, these arguments are passed on to \code{\link[tmaptools:map_coloring]{map_coloring}}.
#' @keywords choropleth
#' @export
#' @example ./examples/tm_fill.R
#' @seealso \href{../doc/tmap-getstarted.html}{\code{vignette("tmap-getstarted")}}
#' @references Tennekes, M., 2018, {tmap}: Thematic Maps in {R}, Journal of Statistical Software, 84(6), 1-39, \href{https://doi.org/10.18637/jss.v084.i06}{DOI}
#' @return \code{\link{tmap-element}}
tm_fill <- function(col = NA,
                    alpha = NA,
                    palette = NULL,
                    convert2density = FALSE,
                    area = NULL,
                    n = 5,
                    style = ifelse(is.null(breaks), "pretty", "fixed"),
                    breaks = NULL,
                    interval.closure = "left",
                    labels = NULL,
                    midpoint = NULL,
                    stretch.palette = TRUE,
                    contrast = NA,
                    colorNA = NA,
                    textNA = "Missing",
                    showNA = NA,
                    colorNULL = NA,
                    thres.poly = 0,
                    title = NA,
                    legend.show = TRUE,
                    legend.format = list(),
                    legend.is.portrait = TRUE,
                    legend.reverse = FALSE,
                    legend.hist = FALSE,
                    legend.hist.title = NA,
                    legend.z = NA,
                    legend.hist.z = NA,
                    id = NA,
                    popup.vars = NA,
                    popup.format = list(),
                    group = NA,
                    auto.palette.mapping = NULL,
                    max.categories = NULL,
                    ...) {
  midpoint <- check_deprecated_layer_fun_args(auto.palette.mapping, max.categories, midpoint)
  g <- list(tm_fill = c(as.list(environment()), list(map_coloring = list(...), call = names(match.call(expand.dots = TRUE)[-1]))))
  class(g) <- "tmap"
  g
}


#' @name tm_borders
#' @rdname tm_polygons
#' @param lwd border line width (see \code{\link[graphics:par]{par}})
#' @param lty border line type (see \code{\link[graphics:par]{par}})
#' @export
tm_borders <- function(col = NA, lwd = 1, lty = "solid", alpha = NA, group = NA) {
  g <- list(tm_borders = as.list(environment()))
  class(g) <- "tmap"
  g
}

#' @name tm_polygons
#' @rdname tm_polygons
#' @param border.col border line color
#' @param border.alpha transparency number of the border lines between 0 (totally transparent) and 1 (not transparent). By default, the alpha value of \code{border.col} is used (normally 1).
#' @export
tm_polygons <- function(col = NA,
                        alpha = NA,
                        border.col = NA,
                        border.alpha = NA,
                        group = NA,
                        ...) {
  args <- list(...)
  argsFill <- c(list(col = col, alpha = alpha, group = group), args[setdiff(names(args), c("lwd", "lty"))])
  argsBorders <- c(list(col = border.col, alpha = border.alpha), args[intersect(names(args), names(formals("tm_borders")))])
  g <- do.call("tm_fill", argsFill) + do.call("tm_borders", argsBorders)
  g$tm_fill$call <- names(match.call(expand.dots = TRUE)[-1])
  g
}


#' Draw a raster
#'
#' Creates a \code{\link{tmap-element}} that draws a raster. For coloring, there are three options: 1) a fixed color is used, 2) a color palette is mapped to a data variable, 3) RGB values are used. The function \code{tm_raster} is designed for options 1 and 2, while \code{tm_rgb} is used for option 3.
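#'
#' As a minimal usage sketch (illustrative only; it assumes the \code{land} raster object shipped with tmap, which includes an \code{"elevation"} layer):
#'
#' \preformatted{
#' data(land)
#' tm_shape(land) +
#'   tm_raster("elevation")
#' }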
#'
#' Small multiples can be drawn in two ways: either by specifying the \code{by} argument in \code{\link{tm_facets}}, or by defining multiple variables in the aesthetic arguments. The aesthetic argument of \code{tm_raster} is \code{col}. In the latter case, the arguments, except for the ones starting with \code{legend.}, can be specified for small multiples as follows. If the argument normally only takes a single value, such as \code{n}, then a vector of those values can be specified, one for each small multiple. If the argument normally can take a vector, such as \code{palette}, then a list of those vectors (or values) can be specified, one for each small multiple.
#'
#' @param col three options: a single color value, the name of a data variable that is contained in \code{shp}, or the name of a variable in \code{shp} that contains color values. In the second case, the values (numeric or categorical) will be depicted by a color palette (see \code{palette}). If multiple values are specified, small multiples are drawn (see details). By default, it is a vector of the names of all data variables, unless the \code{by} argument of \code{\link{tm_facets}} is defined (in that case, the default color is taken from the tmap option \code{aes.color}). Note that the number of small multiples is limited by \code{tmap_options("limits")}.
#' @param r raster band for the red channel. It should be an integer between 1 and the number of raster layers.
#' @param g raster band for the green channel. It should be an integer between 1 and the number of raster layers.
#' @param b raster band for the blue channel. It should be an integer between 1 and the number of raster layers.
#' @param a raster band for the alpha channel. It should be an integer between 1 and the number of raster layers.
#' @param alpha transparency number between 0 (totally transparent) and 1 (not transparent). By default, the alpha value of \code{col} is used (normally 1).
#' @param palette a palette name or a vector of colors. See \code{tmaptools::palette_explorer()} for the named palettes. Use a \code{"-"} as prefix to reverse the palette. The default palette is taken from \code{\link{tm_layout}}'s argument \code{aes.palette}, which typically depends on the style. The type of palette from \code{aes.palette} is automatically determined, but can be overwritten: use \code{"seq"} for sequential, \code{"div"} for diverging, and \code{"cat"} for categorical.
#' @param n preferred number of classes (in case \code{col} is a numeric variable).
#' @param style method to process the color scale when \code{col} is a numeric variable. Discrete options are \code{"cat"}, \code{"fixed"}, \code{"sd"}, \code{"equal"}, \code{"pretty"}, \code{"quantile"}, \code{"kmeans"}, \code{"hclust"}, \code{"bclust"}, \code{"fisher"}, \code{"jenks"}, and \code{"log10_pretty"}. A numeric variable is processed as a categorical variable when using \code{"cat"}, i.e. each unique value will correspond to a distinct category. For the other discrete options (except \code{"log10_pretty"}), see the details in \code{\link[classInt:classIntervals]{classIntervals}}. Continuous options are \code{"cont"}, \code{"order"}, and \code{"log10"}. The first maps the values of \code{col} to a smooth gradient, the second maps the order of values of \code{col} to a smooth gradient, and the third uses a logarithmic transformation.
#' @param breaks in case \code{style == "fixed"}, breaks should be specified. The \code{breaks} argument can also be used when \code{style == "cont"}. In that case, the breaks are mapped evenly to the sequential or diverging color palette.
#' @param interval.closure value that determines where the intervals are closed: \code{"left"} or \code{"right"}. Only applicable if \code{col} is a numeric variable.
#' @param labels labels of the classes.
#' @param midpoint The value mapped to the middle color of a diverging palette. By default it is set to 0 if negative and positive values are present. In that case, the two sides of the color palette are assigned to negative and positive values, respectively. If all values are positive or all values are negative, then the midpoint is set to \code{NA}, which means that the value that corresponds to the middle color class (see \code{style}) is mapped to the middle color. Only applies when \code{col} is a numeric variable. If it is specified for a sequential color palette (e.g. \code{"Blues"}), then this color palette will be treated as a diverging color palette.
#' @param stretch.palette Logical that determines whether the categorical color palette should be stretched if there are more categories than colors. If \code{TRUE} (default), interpolated colors are used (like a rainbow). If \code{FALSE}, the palette is repeated.
#' @param contrast vector of two numbers that determine the range that is used for sequential and diverging palettes (applicable when \code{auto.palette.mapping = TRUE}). Both numbers should be between 0 and 1. The first number determines where the palette begins, and the second number where it ends. For sequential palettes, 0 means the brightest color, and 1 the darkest color. For diverging palettes, 0 means the middle color, and 1 both extremes. If only one number is provided, this number is interpreted as the endpoint (with 0 taken as the start).
#' @param saturation number that determines how much saturation (also known as chroma) is used: \code{saturation = 0} is greyscale and \code{saturation = 1} is normal. This saturation value is multiplied by the overall saturation of the map (see \code{\link{tm_layout}}).
#' @param interpolate should the raster image be interpolated? By default \code{FALSE} for \code{tm_raster} and \code{TRUE} for \code{tm_rgb}.
#' @param colorNA color used for missing values. Use \code{NULL} for transparency.
#' @param textNA text used for missing values.
#' @param showNA logical that determines whether missing values are named in the legend. By default (\code{NA}), this depends on the presence of missing values.
#' @param colorNULL color for polygons that are shown on the map but are out of scope.
#' @param title title of the legend element.
#' @param legend.show logical that determines whether the legend is shown.
#' @param legend.format list of formatting options for the legend numbers. Only applicable if \code{labels} is undefined. Parameters are:
#' \describe{
#' \item{fun}{Function to specify the labels. It should take a numeric vector and return a character vector of the same size. By default it is not specified. If specified, the list items \code{scientific}, \code{format}, and \code{digits} (see below) are not used.}
#' \item{scientific}{Should the labels be formatted scientifically? If so, square brackets are used, and the \code{format} of the numbers is \code{"g"}. Otherwise, \code{format = "f"}, and \code{text.separator}, \code{text.less.than}, and \code{text.or.more} are used. Also, the numbers are automatically rounded to millions or billions if applicable.}
#' \item{format}{By default, \code{"f"}, i.e. the standard notation \code{xxx.xxx}, is used. If \code{scientific = TRUE} then \code{"g"}, which means that numbers are formatted scientifically, i.e. \code{n.dddE+nn}, if needed to save space.}
#' \item{digits}{Number of digits after the decimal point if \code{format = "f"}, and the number of significant digits otherwise.}
#' \item{big.num.abbr}{Vector that defines whether and which abbreviations are used for large numbers. It is a named numeric vector, where the name indicates the abbreviation and the number the magnitude (in terms of the number of zeros). Numbers are only abbreviated when they are large enough. Set it to \code{NA} to disable abbreviations. The default is \code{c("mln" = 6, "bln" = 9)}. For layers where \code{style} is set to \code{log10} or \code{log10_pretty}, the default is \code{NA}.}
#' \item{text.separator}{Character string used to separate numbers in the legend (default: "to").}
#' \item{text.less.than}{Character value(s) used to translate "Less than". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}.}
#' \item{text.or.more}{Character value(s) used to translate "or more". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}.}
#' \item{text.align}{Value that determines how the numbers are aligned: \code{"left"}, \code{"center"} or \code{"right"}. By default \code{"left"} for legends in portrait format (\code{legend.is.portrait = TRUE}), and \code{"center"} otherwise.}
#' \item{text.to.columns}{Logical that determines whether the text is aligned to three columns (from, text.separator, to). By default \code{FALSE}.}
#' \item{...}{Other arguments passed on to \code{\link[base:formatC]{formatC}}.}
#' }
#' @param legend.is.portrait logical that determines whether the legend is in portrait mode (\code{TRUE}) or landscape (\code{FALSE}).
#' @param legend.reverse logical that determines whether the items of the legend are shown in reverse order, i.e. from bottom to top when \code{legend.is.portrait = TRUE} and from right to left when \code{legend.is.portrait = FALSE}.
#' @param legend.hist logical that determines whether a histogram is shown.
#' @param legend.hist.title title for the histogram. By default, one title is used for both the histogram and the normal legend.
#' @param legend.z index value that determines the position of the legend element with respect to other legend elements. The legend elements are stacked according to their z values. The legend element with the lowest z value is placed on top.
#' @param legend.hist.z index value that determines the position of the histogram legend element.
#' @param group name of the group to which this layer belongs in view mode. Each group can be selected or deselected in the layer control item. Set \code{group = NULL} to hide the layer in the layer control item. By default, it will be set to the name of the shape (specified in \code{\link{tm_shape}}).
#' @param auto.palette.mapping deprecated. It has been replaced by \code{midpoint} for numeric variables and \code{stretch.palette} for categorical variables.
#' @param max.categories deprecated. It has moved to \code{\link{tmap_options}}.
#' @param max.value for \code{tm_rgb}, the maximum value per layer. By default 255.
#' @param ... for \code{tm_rgb} and \code{tm_rgba}, arguments passed on to \code{tm_raster}
#' @name tm_raster
#' @rdname tm_raster
#' @export
#' @example ./examples/tm_raster.R
#' @seealso \href{../doc/tmap-getstarted.html}{\code{vignette("tmap-getstarted")}}
#' @references Tennekes, M., 2018, {tmap}: Thematic Maps in {R}, Journal of Statistical Software, 84(6), 1-39, \href{https://doi.org/10.18637/jss.v084.i06}{DOI}
#' @return \code{\link{tmap-element}}
tm_raster <- function(col = NA,
                      alpha = NA,
                      palette = NULL,
                      n = 5,
                      style = ifelse(is.null(breaks), "pretty", "fixed"),
                      breaks = NULL,
                      interval.closure = "left",
                      labels = NULL,
                      midpoint = NULL,
                      stretch.palette = TRUE,
                      contrast = NA,
                      saturation = 1,
                      interpolate = NA,
                      colorNA = NULL,
                      textNA = "Missing",
                      showNA = NA,
                      colorNULL = NULL,
                      title = NA,
                      legend.show = TRUE,
                      legend.format = list(),
                      legend.is.portrait = TRUE,
                      legend.reverse = FALSE,
                      legend.hist = FALSE,
                      legend.hist.title = NA,
                      legend.z = NA,
                      legend.hist.z = NA,
                      group = NA,
                      auto.palette.mapping = NULL,
                      max.categories = NULL,
                      max.value = 255) {
  midpoint <- check_deprecated_layer_fun_args(auto.palette.mapping, max.categories, midpoint)
  g <- list(tm_raster = as.list(environment()))
  g$tm_raster$is.RGB <- FALSE
  g$tm_raster$rgb.vars <- NULL
  class(g) <- "tmap"
  g
}

#' @name tm_rgb
#' @rdname tm_raster
#' @export
tm_rgb <- function(r = 1, g = 2, b = 3, alpha = NA, saturation = 1, interpolate = TRUE, max.value = 255, ...) {
  h <- do.call("tm_raster", c(list(alpha = alpha, saturation = saturation, interpolate = interpolate, max.value = max.value), list(...)))
  h$tm_raster$is.RGB <- TRUE
  h$tm_raster$rgb.vars <- c(r, g, b)
  class(h) <- "tmap"
  h
}

#' @name tm_rgba
#' @rdname tm_raster
#' @export
tm_rgba <- function(r = 1, g = 2, b = 3, a = 4, alpha = NA, saturation = 1, interpolate = TRUE, max.value = 255, ...) {
  h <- do.call("tm_raster", c(list(alpha = alpha, saturation = saturation, interpolate = interpolate, max.value = max.value), list(...)))
  h$tm_raster$is.RGB <- TRUE
  h$tm_raster$rgb.vars <- c(r, g, b, a)
  class(h) <- "tmap"
  h
}


#' Draw symbols
#'
#' Creates a \code{\link{tmap-element}} that draws symbols, including dots. The color, size, and shape of the symbols can be mapped to data variables.
#'
#' Small multiples can be drawn in two ways: either by specifying the \code{by} argument in \code{\link{tm_facets}}, or by defining multiple variables in the aesthetic arguments, which are \code{size}, \code{col}, and \code{shape}. In the latter case, the arguments, except for the ones starting with \code{legend.}, can be specified for small multiples as follows. If the argument normally only takes a single value, such as \code{n}, then a vector of those values can be specified, one for each small multiple.
If the argument normally can take a vector, such as \code{palette}, then a list of those vectors (or values) can be specified, one for each small multiple. 538 #' 539 #' A shape specification is one of the following three options. To specify multiple shapes (needed for the \code{shapes} argument), a vector or list of these shape specification is required. The shape specification options can also be mixed. For the \code{shapes} argument, it is possible to use a named vector or list, where the names correspond to the value of the variable specified by the \code{shape} argument. 540 #' \enumerate{ 541 #' \item{A numeric value that specifies the plotting character of the symbol. See parameter \code{pch} of \code{\link[graphics:points]{points}} and the last example to create a plot with all options.} 542 #' \item{A \code{\link[grid:grid.grob]{grob}} object, which can be a ggplot2 plot object created with \code{\link[ggplot2:ggplotGrob]{ggplotGrob}}. To specify multiple shapes, a list of grob objects is required. See example of a proportional symbol map with ggplot2 plots}. 543 #' \item{An icon specification, which can be created with \code{\link{tmap_icons}}.} 544 #' } 545 #' For small multiples, a list of these shape specification(s) should be provided. 546 #' 547 #' @name tm_symbols 548 #' @rdname tm_symbols 549 #' @param size a single value or a \code{shp} data variable that determines the symbol sizes. The reference value \code{size=1} corresponds to the area of symbols that have the same height as one line of text. If a data variable is provided, the symbol sizes are scaled proportionally (or perceptually, see \code{perceptual}) where by default the symbol with the largest data value will get \code{size=1} (see also \code{size.max}). If multiple values are specified, small multiples are drawn (see details). 550 #' @param col color(s) of the symbol. Either a color (vector), or categorical variable name(s). 
If multiple values are specified, small multiples are drawn (see details). 551 #' @param shape shape(s) of the symbol. Either direct shape specification(s) or a data variable name(s) that is mapped to the symbols specified by the \code{shapes} argument. See details for the shape specification. 552 #' @param alpha transparency number between 0 (totally transparent) and 1 (not transparent). By default, the alpha value of the \code{col} is used (normally 1). 553 #' @param border.col color of the symbol borders. 554 #' @param border.lwd line width of the symbol borders. If \code{NA}, no symbol borders are drawn. 555 #' @param border.alpha transparency number, regarding the symbol borders, between 0 (totally transparent) and 1 (not transparent). By default, the alpha value of the \code{col} is used (normally 1). 556 #' @param scale symbol size multiplier number. 557 #' @param perceptual logical that determines whether symbols are scales with a perceptually (\code{TRUE}) or mathematically (\code{FALSE}, default value). The perceived area of larger symbols is often underestimated. Flannery (1971) experimentally derived a method to compensate this for symbols, which is enabled by this argument. 558 #' @param clustering value that determines whether the symbols are clustered in \code{"view"} mode. It does not work proportional bubbles (i.e. \code{tm_bubbles}). One of: \code{TRUE}, \code{FALSE}, or the output of \code{\link[leaflet:markerClusterOptions]{markerClusterOptions}}. 559 #' @param size.max value that is mapped to \code{size=1}. By default (\code{NA}), the maximum data value is chosen. Only applicable when \code{size} is the name of a numeric variable of \code{shp} 560 #' @param size.lim vector of two limit values of the \code{size} variable. Only symbols are drawn whose value is greater than or equal to the first value. Symbols whose values exceed the second value are drawn at the size of the second value. 
Only applicable when \code{size} is the name of a numeric variable of \code{shp} 561 #' @param sizes.legend vector of symbol sizes that are shown in the legend. By default, this is determined automatically. 562 #' @param sizes.legend.labels vector of labels for that correspond to \code{sizes.legend}. 563 #' @param n preferred number of color scale classes. Only applicable when \code{col} is a numeric variable name. 564 #' @param style method to process the color scale when \code{col} is a numeric variable. Discrete options are \code{"cat"}, \code{"fixed"}, \code{"sd"}, \code{"equal"}, \code{"pretty"}, \code{"quantile"}, \code{"kmeans"}, \code{"hclust"}, \code{"bclust"}, \code{"fisher"}, \code{"jenks"}, and \code{"log10_pretty"}. A numeric variable is processed as a categorical variable when using \code{"cat"}, i.e. each unique value will correspond to a distinct category. For the other discrete options (except \code{"log10_pretty"}), see the details in \code{\link[classInt:classIntervals]{classIntervals}}. Continuous options are \code{"cont"}, \code{"order"}, and \code{"log10"}. The first maps the values of \code{col} to a smooth gradient, the second maps the order of values of \code{col} to a smooth gradient, and the third uses a logarithmic transformation. 565 #' @param breaks in case \code{style=="fixed"}, breaks should be specified. The \code{breaks} argument can also be used when \code{style="cont"}. In that case, the breaks are mapped evenly to the sequential or diverging color palette. 566 #' @param interval.closure value that determines whether where the intervals are closed: \code{"left"} or \code{"right"}. Only applicable if \code{col} is a numeric variable. 567 #' @param palette a palette name or a vector of colors. See \code{tmaptools::palette_explorer()} for the named palettes. Use a \code{"-"} as prefix to reverse the palette. The default palette is taken from \code{\link{tm_layout}}'s argument \code{aes.palette}, which typically depends on the style. 
The type of palette from \code{aes.palette} is automatically determined, but can be overwritten: use \code{"seq"} for sequential, \code{"div"} for diverging, and \code{"cat"} for categorical. 568 #' @param labels labels of the classes 569 #' @param midpoint The value mapped to the middle color of a diverging palette. By default it is set to 0 if negative and positive values are present. In that case, the two sides of the color palette are assigned to negative respectively positive values. If all values are positive or all values are negative, then the midpoint is set to \code{NA}, which means that the value that corresponds to the middle color class (see \code{style}) is mapped to the middle color. Only applies when \code{col} is a numeric variable. If it is specified for sequential color palettes (e.g. \code{"Blues"}), then this color palette will be treated as a diverging color palette. 570 #' @param stretch.palette Logical that determines whether the categorical color palette should be stretched if there are more categories than colors. If \code{TRUE} (default), interpolated colors are used (like a rainbow). If \code{FALSE}, the palette is repeated. 571 #' @param contrast vector of two numbers that determine the range that is used for sequential and diverging palettes (applicable when \code{auto.palette.mapping=TRUE}). Both numbers should be between 0 and 1. The first number determines where the palette begins, and the second number where it ends. For sequential palettes, 0 means the brightest color, and 1 the darkest color. For diverging palettes, 0 means the middle color, and 1 both extremes. If only one number is provided, this number is interpreted as the endpoint (with 0 taken as the start). 572 #' @param colorNA colour for missing values. Use \code{NULL} for transparency. 573 #' @param textNA text used for missing values of the color variable. 574 #' @param showNA logical that determines whether missing values are named in the legend. 
By default (\code{NA}), this depends on the presence of missing values. 575 #' @param colorNULL colour for polygons that are shown on the map that are out of scope 576 #' @param shapes palette of symbol shapes. Only applicable if \code{shape} is a (vector of) categorical variable(s). See details for the shape specification. By default, the filled symbols 21 to 25 are taken. 577 #' @param shapes.legend symbol shapes that are used in the legend (instead of the symbols specified with \code{shape}. Especially useful when \code{shapes} consist of grobs that have to be represented by neutrally colored shapes (see also \code{shapes.legend.fill}. 578 #' @param shapes.legend.fill Fill color of legend shapes (see \code{shapes.legend}) 579 #' @param shapes.labels Legend labels for the symbol shapes 580 #' @param shapeNA the shape (a number or grob) for missing values. By default a cross (number 4). Set to \code{NA} to hide symbols for missing values. 581 #' @param shape.textNA text used for missing values of the shape variable. 582 #' @param shape.showNA logical that determines whether missing values are named in the legend. By default (\code{NA}), this depends on the presence of missing values. 583 #' @param shapes.n preferred number of shape classes. Only applicable when \code{shape} is a numeric variable name. 584 #' @param shapes.style method to process the shape scale when \code{shape} is a numeric variable. See \code{style} argument for options 585 #' @param shapes.breaks in case \code{shapes.style=="fixed"}, breaks should be specified 586 #' @param shapes.interval.closure value that determines whether where the intervals are closed: \code{"left"} or \code{"right"}. Only applicable if \code{shape} is a numeric variable. 587 #' @param legend.max.symbol.size Maximum size of the symbols that are drawn in the legend. 
For circles and bubbles, a value larger than one is recommended (and used for \code{tm_bubbles}) 588 #' @param just justification of the symbols relative to the point coordinates. The first value specifies horizontal and the second value vertical justification. Possible values are: \code{"left"} , \code{"right"}, \code{"center"}, \code{"bottom"}, and \code{"top"}. Numeric values of 0 specify left alignment and 1 right alignment. The default value is \code{c("center", "center")}. For icons, this value may already be speficied (see \code{\link{tmap_icons}}). The \code{just}, if specified, will overrides this. 589 #' @param jitter number that determines the amount of jittering, i.e. the random noise added to the position of the symbols. 0 means no jittering is applied, any positive number means that the random noise has a standard deviation of \code{jitter} times the height of one line of text line. 590 #' @param xmod horizontal position modification of the symbols, in terms of the height of one line of text. Either a single number for all polygons, or a numeric variable in the shape data specifying a number for each polygon. Together with \code{ymod}, it determines position modification of the symbols. See also \code{jitter} for random position modifications. In most coordinate systems (projections), the origin is located at the bottom left, so negative \code{xmod} move the symbols to the left, and negative \code{ymod} values to the bottom. 591 #' @param ymod vertical position modification. See xmod. 592 #' @param icon.scale scaling number that determines how large the icons (or grobs) are in plot mode in comparison to proportional symbols (such as bubbles). In view mode, the size is determined by the icon specification (see \code{\link{tmap_icons}}) or, if grobs are specified by \code{grob.width} and \code{grob.heigth} 593 #' @param grob.dim vector of four values that determine how grob objects (see details) are shown in view mode. 
The first and second value are the width and height of the displayed icon. The third and fourth value are the width and height of the rendered png image that is used for the icon. Generally, the third and fourth value should be large enough to render a ggplot2 graphic successfully. Only needed for the view mode. 594 #' @param title.size title of the legend element regarding the symbol sizes 595 #' @param title.col title of the legend element regarding the symbol colors 596 #' @param title.shape title of the legend element regarding the symbol shapes 597 #' @param legend.size.show logical that determines whether the legend for the symbol sizes is shown 598 #' @param legend.col.show logical that determines whether the legend for the symbol colors is shown 599 #' @param legend.shape.show logical that determines whether the legend for the symbol shapes is shown 600 #' @param legend.format list of formatting options for the legend numbers. Only applicable if \code{labels} is undefined. Parameters are: 601 #' \describe{ 602 #' \item{fun}{Function to specify the labels. It should take a numeric vector, and should return a character vector of the same size. By default it is not specified. If specified, the list items \code{scientific}, \code{format}, and \code{digits} (see below) are not used.} 603 #' \item{scientific}{Should the labels be formatted scientifically? If so, square brackets are used, and the \code{format} of the numbers is \code{"g"}. Otherwise, \code{format="f"}, and \code{text.separator}, \code{text.less.than}, and \code{text.or.more} are used. Also, the numbers are automatically rounded to millions or billions if applicable.} 604 #' \item{format}{By default, \code{"f"}, i.e. the standard notation \code{xxx.xxx}, is used. If \code{scientific=TRUE} then \code{"g"}, which means that numbers are formatted scientifically, i.e. 
\code{n.dddE+nn} if needed to save space.} 605 #' \item{digits}{Number of digits after the decimal point if \code{format="f"}, and the number of significant digits otherwise.} 606 #' \item{big.num.abbr}{Vector that defines whether and which abbrevations are used for large numbers. It is a named numeric vector, where the name indicated the abbreviation, and the number the magnitude (in terms on numbers of zero). Numbers are only abbrevation when they are large enough. Set it to \code{NA} to disable abbrevations. The default is \code{c("mln" = 6, "bln" = 9)}. For layers where \code{style} is set to \code{log10} or \code{log10_pretty}, the default is \code{NA}.} 607 #' \item{text.separator}{Character string to use to separate numbers in the legend (default: "to").} 608 #' \item{text.less.than}{Character value(s) to use to translate "Less than". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}} 609 #' \item{text.or.more}{Character value(s) to use to translate "or more". When a character vector of length 2 is specified, one for each word, these words are aligned when \code{text.to.columns = TRUE}} 610 #' \item{text.align}{Value that determines how the numbers are aligned, \code{"left"}, \code{"center"} or \code{"right"}}. By default \code{"left"} for legends in portrait format (\code{legend.is.protrait = TRUE}), and \code{"center"} otherwise. 611 #' \item{text.to.columns}{Logical that determines whether the text is aligned to three columns (from, text.separator, to). 
By default \code{FALSE}.} 612 #' \item{...}{Other arguments passed on to \code{\link[base:formatC]{formatC}}} 613 #' } 614 #' @param legend.size.is.portrait logical that determines whether the legend element regarding the symbol sizes is in portrait mode (\code{TRUE}) or landscape (\code{FALSE}) 615 #' @param legend.col.is.portrait logical that determines whether the legend element regarding the symbol colors is in portrait mode (\code{TRUE}) or landscape (\code{FALSE}) 616 #' @param legend.shape.is.portrait logical that determines whether the legend element regarding the symbol shapes is in portrait mode (\code{TRUE}) or landscape (\code{FALSE}) 617 #' @param legend.size.reverse logical that determines whether the items of the legend regarding the symbol sizes are shown in reverse order, i.e. from bottom to top when \code{legend.size.is.portrait = TRUE} and from right to left when \code{legend.size.is.portrait = FALSE} 618 #' @param legend.col.reverse logical that determines whether the items of the legend regarding the symbol colors are shown in reverse order, i.e. from bottom to top when \code{legend.col.is.portrait = TRUE} and from right to left when \code{legend.col.is.portrait = FALSE} 619 #' @param legend.shape.reverse logical that determines whether the items of the legend regarding the symbol shapes are shown in reverse order, i.e. from bottom to top when \code{legend.shape.is.portrait = TRUE} and from right to left when \code{legend.shape.is.portrait = FALSE} 620 #' @param legend.hist logical that determines whether a histogram is shown regarding the symbol colors 621 #' @param legend.hist.title title for the histogram. By default, one title is used for both the histogram and the normal legend for symbol colors. 622 #' @param legend.size.z index value that determines the position of the legend element regarding the symbol sizes with respect to other legend elements. The legend elements are stacked according to their z values. 
The legend element with the lowest z value is placed on top. 623 #' @param legend.col.z index value that determines the position of the legend element regarding the symbol colors. (See \code{legend.size.z}) 624 #' @param legend.shape.z index value that determines the position of the legend element regarding the symbol shapes. (See \code{legend.size.z}) 625 #' @param legend.hist.z index value that determines the position of the histogram legend element. (See \code{legend.size.z}) 626 #' @param id name of the data variable that specifies the indices of the symbols. Only used for \code{"view"} mode (see \code{\link{tmap_mode}}). 627 #' @param popup.vars names of data variables that are shown in the popups in \code{"view"} mode. If \code{NA} (default), only aesthetic variables (i.e. specified by \code{col} and \code{lwd}) are shown). If they are not specified, all variables are shown. Set popup.vars to \code{FALSE} to disable popups. When a vector of variable names is provided, the names (if specified) are printed in the popups. 628 #' @param popup.format list of formatting options for the popup values. See the argument \code{legend.format} for options. Only applicable for numeric data variables. If one list of formatting options is provided, it is applied to all numeric variables of \code{popup.vars}. Also, a (named) list of lists can be provided. In that case, each list of formatting options is applied to the named variable. 629 #' @param title shortcut for \code{title.col} for \code{tm_dots} 630 #' @param legend.show shortcut for \code{legend.col.show} for \code{tm_dots} 631 #' @param legend.is.portrait shortcut for \code{legend.col.is.portrait} for \code{tm_dots} 632 #' @param legend.z shortcut for \code{legend.col.z shortcut} for \code{tm_dots} 633 #' @param group name of the group to which this layer belongs in view mode. Each group can be selected or deselected in the layer control item. Set \code{group = NULL} to hide the layer in the layer control item. 
By default, it will be set to the name of the shape (specified in \code{\link{tm_shape}}). 634 #' @param auto.palette.mapping deprecated. It has been replaced by \code{midpoint} for numeric variables and \code{stretch.palette} for categorical variables. 635 #' @param max.categories deprecated. It has moved to \code{\link{tmap_options}}. 636 #' @keywords symbol map 637 #' @export 638 #' @example ./examples/tm_symbols.R 639 #' @references Flannery J (1971). The Relative Effectiveness of Some Common Graduated Point Symbols in the Presentation of Quantitative Data. Canadian Cartographer, 8(2), 96-109. 640 #' @references Tennekes, M., 2018, {tmap}: Thematic Maps in {R}, Journal of Statistical Software, 84(6), 1-39, \href{https://doi.org/10.18637/jss.v084.i06}{DOI} 641 #' @seealso \href{../doc/tmap-getstarted.html}{\code{vignette("tmap-getstarted")}} 642 #' @return \code{\link{tmap-element}} 643 tm_symbols <- function(size=1, col=NA, 644 shape=21, 645 alpha=NA, 646 border.col=NA, 647 border.lwd=1, 648 border.alpha=NA, 649 scale=1, 650 perceptual=FALSE, 651 clustering=FALSE, 652 size.max=NA, 653 size.lim=NA, 654 sizes.legend = NULL, 655 sizes.legend.labels = NULL, 656 n = 5, style = ifelse(is.null(breaks), "pretty", "fixed"), 657 breaks = NULL, 658 interval.closure = "left", 659 palette = NULL, 660 labels = NULL, 661 midpoint = NULL, 662 stretch.palette = TRUE, 663 contrast = NA, 664 colorNA = NA, 665 textNA = "Missing", 666 showNA = NA, 667 colorNULL = NA, 668 shapes = 21:25, 669 shapes.legend = NULL, 670 shapes.legend.fill = NA, 671 shapes.labels = NULL, 672 shapeNA = 4, 673 shape.textNA = "Missing", 674 shape.showNA = NA, 675 shapes.n = 5, shapes.style = ifelse(is.null(shapes.breaks), "pretty", "fixed"), 676 shapes.breaks = NULL, 677 shapes.interval.closure = "left", 678 legend.max.symbol.size = .8, 679 just=NA, 680 jitter=0, 681 xmod = 0, 682 ymod = 0, 683 icon.scale = 3, 684 grob.dim = c(width=48, height=48, render.width=256, render.height=256), 685 title.size = NA, 
686 title.col = NA, 687 title.shape=NA, 688 legend.size.show=TRUE, 689 legend.col.show=TRUE, 690 legend.shape.show=TRUE, 691 legend.format=list(), 692 legend.size.is.portrait=FALSE, 693 legend.col.is.portrait=TRUE, 694 legend.shape.is.portrait=TRUE, 695 legend.size.reverse=FALSE, 696 legend.col.reverse=FALSE, 697 legend.shape.reverse=FALSE, 698 legend.hist=FALSE, 699 legend.hist.title=NA, 700 legend.size.z=NA, 701 legend.col.z=NA, 702 legend.shape.z=NA, 703 legend.hist.z=NA, 704 id=NA, 705 popup.vars=NA, 706 popup.format=list(), 707 group = NA, 708 auto.palette.mapping = NULL, 709 max.categories = NULL) { 710 2 midpoint <- check_deprecated_layer_fun_args(auto.palette.mapping, max.categories, midpoint) 711 2 g <- list(tm_symbols=c(as.list(environment()), list(are.dots=FALSE, are.markers=FALSE, call=names(match.call(expand.dots = TRUE)[-1])))) 712 2 class(g) <- "tmap" 713 2 g 714 715 } 716 717 #' @rdname tm_symbols 718 #' @export 719 tm_squares <- function(size=1, 720 col=NA, 721 shape=22, 722 scale=4/3, 723 ...) { 724 0 g <- do.call("tm_symbols", c(list(size=size, col=col, shape=shape, scale=scale), list(...))) 725 0 g 726 } 727 728 #' @rdname tm_symbols 729 #' @export 730 tm_bubbles <- function(size=1, 731 col=NA, 732 shape=21, 733 scale=4/3, 734 legend.max.symbol.size=1, 735 ...) { 736 2 g <- do.call("tm_symbols", c(list(size=size, col=col, shape=shape, scale=scale, legend.max.symbol.size=legend.max.symbol.size), list(...))) 737 2 g 738 } 739 740 741 742 #' @rdname tm_symbols 743 #' @export 744 tm_dots <- function(col=NA, 745 size=.02, 746 shape=16, 747 title = NA, 748 legend.show=TRUE, 749 legend.is.portrait=TRUE, 750 legend.z=NA, ...) 
{ 751 0 g <- do.call("tm_symbols", c(list(size=size, col=col, shape=shape, 752 0 title.col=title, 753 0 legend.col.show=legend.show, 754 0 legend.col.is.portrait=legend.is.portrait, 755 0 legend.col.z=legend.z), list(...))) 756 0 g$tm_symbols$are.dots <- TRUE 757 0 g 758 } 759 760 761 #' @rdname tm_symbols 762 #' @param text text of the markers. Shown in plot mode, and as popup text in view mode. 763 #' @param text.just justification of marker text (see \code{just} argument of \code{\link{tm_text}}). Only applicable in plot mode. 764 #' @param markers.on.top.of.text For \code{tm_markers}, should the markers be drawn on top of the text labels? 765 #' @param ... arguments passed on to \code{tm_symbols}. For \code{tm_markers}, arguments can also be passed on to \code{tm_text}. In that case, they have to be prefixed with \code{text.}, e.g. the \code{col} argument should be names \code{text.col} 766 #' @export 767 tm_markers <- function(shape=marker_icon(), 768 col=NA, 769 border.col=NULL, 770 clustering=TRUE, 771 text=NULL, 772 text.just="top", 773 markers.on.top.of.text=TRUE, 774 group = NA, 775 ...) { 776 0 args <- list(...) 777 0 argsS <- args[intersect(names(formals("tm_symbols")), names(args))] 778 779 # all text label items are preceeded with "text." 
780 0 argsT <- args[intersect(paste("text", names(formals("tm_text")), sep="."), names(args))] 781 0 argsT[c("text.text", "text.text.just")] <- NULL # already explicit arguments 782 0 argsT_names <- names(argsT) 783 784 0 names(argsT) <- substr(argsT_names, 6, nchar(argsT_names)) 785 786 0 if (is.null(text)) { 787 0 tmT <- NULL 788 0 } else { 789 0 tmT <- do.call("tm_text", c(list(text=text, just=text.just, clustering = clustering), argsT)) 790 0 } 791 792 0 tmS <- do.call("tm_symbols", c(list(shape=shape, col=col, border.col=border.col, clustering = clustering), argsS)) 793 794 0 g <- if (markers.on.top.of.text) { 795 0 tmS + tmT 796 0 } else { 797 0 tmT + tmS 798 0 } 799 0 g$tm_symbols$are.markers <- TRUE 800 0 g 801 } 802 803 #' Draw simple features 804 #' 805 #' Creates a \code{\link{tmap-element}} that draws simple features. Basically, it is a stack of \code{\link{tm_polygons}}, \code{\link{tm_lines}} and \code{\link{tm_dots}}. In other words, polygons are plotted as polygons, lines as lines and points as dots. 806 807 #' @param col color of the simple features. See the \code{col} argument of \code{\link{tm_polygons}}, \code{\link{tm_lines}} and \code{\link{tm_symbols}}. 808 #' @param size size of the dots. See the \code{size} argument \code{\link{tm_symbols}}. By default, the size is similar to dot size (see \code{\link{tm_dots}}) 809 #' @param shape shape of the dots. See the \code{shape} argument \code{\link{tm_symbols}}. By default, dots are shown. 810 #' @param lwd width of the lines. See the \code{lwd} argument of \code{\link{tm_lines}} 811 #' @param lty type of the lines. See the \code{lty} argument of \code{\link{tm_lines}} 812 #' @param alpha transparency number. See \code{alpha} argument of \code{\link{tm_polygons}}, \code{\link{tm_lines}} and \code{\link{tm_symbols}} 813 #' @param palette palette. 
See \code{palette} argument of \code{\link{tm_polygons}}, \code{\link{tm_lines}} and \code{\link{tm_symbols}}
#' @param border.col color of the borders. See \code{border.col} argument of \code{\link{tm_polygons}} and \code{\link{tm_symbols}}.
#' @param border.lwd line width of the borders. See \code{border.lwd} argument of \code{\link{tm_polygons}} and \code{\link{tm_symbols}}.
#' @param border.lty line type of the borders. See \code{border.lty} argument of \code{\link{tm_polygons}} and \code{\link{tm_symbols}}.
#' @param border.alpha transparency of the borders. See \code{border.alpha} argument of \code{\link{tm_polygons}} and \code{\link{tm_symbols}}.
#' @param group name of the group to which this layer belongs in view mode. Each group can be selected or deselected in the layer control item. Set \code{group = NULL} to hide the layer in the layer control item. By default, it will be set to the name of the shape (specified in \code{\link{tm_shape}}).
#' @param ... other arguments passed on to \code{\link{tm_polygons}}, \code{\link{tm_lines}} and \code{\link{tm_symbols}}
#' @keywords simple features
#' @export
#' @example ./examples/tm_sf.R
#' @seealso \href{../doc/tmap-getstarted.html}{\code{vignette("tmap-getstarted")}}
#' @return \code{\link{tmap-element}}
tm_sf <- function(col=NA, size=.02, shape = 16, lwd=1, lty = "solid", alpha=NA, palette=NULL, border.col=NA, border.lwd=1, border.lty = "solid", border.alpha=NA, group = NA, ...) {
  args <- list(...)

  argsFill <- c(list(col = col, alpha = alpha, palette = palette), args[intersect(names(args), names(formals("tm_fill")))])
  argsBorders <- c(list(col = border.col, alpha = border.alpha, lty = border.lty))
  argsLines <- c(list(col = col, lwd = lwd, lty = lty, alpha = alpha, palette = palette), args[intersect(names(args), names(formals("tm_lines")))])
  argsSymbols <- c(list(col = col, size = size, shape = shape, alpha = alpha, palette = palette, border.col = border.col, border.lwd = border.lwd, border.alpha = border.alpha), args[intersect(names(args), names(formals("tm_symbols")))])

  g <- do.call("tm_fill", argsFill) + do.call("tm_borders", argsBorders) + do.call("tm_lines", argsLines) + do.call("tm_symbols", argsSymbols)

  called_names <- names(match.call(expand.dots = TRUE)[-1])

  g$tm_fill$call <- called_names
  g$tm_fill$from_tm_sf <- TRUE
  g$tm_lines$call <- called_names
  g$tm_symbols$call <- called_names

  g
}

#' @rdname tm_tiles
#' @export
tm_basemap <- function(server=NA, group = NA, alpha = NA) {
  g <- list(tm_basemap=c(as.list(environment()), list(grouptype = "base")))
  class(g) <- "tmap"
  g
}

#' Draw a tile layer
#'
#' Creates a \code{\link{tmap-element}} that draws a tile layer. This feature is only available in view mode. For plot mode, a tile image can be retrieved by \code{\link[tmaptools:read_osm]{read_osm}}. The function \code{tm_basemap} draws the tile layer as basemap (i.e. as bottom layer), whereas \code{tm_tiles} draws the tile layer as overlay layer (where the stacking order corresponds to the order in which this layer is called). Note that basemaps are shown by default (see details).
#'
#' When \code{tm_basemap} is not specified, the default basemaps are shown, which can be configured by the \code{basemaps} argument in \code{\link{tmap_options}}. By default (for style \code{"white"}) three basemaps are drawn: \code{c("Esri.WorldGrayCanvas", "OpenStreetMap", "Esri.WorldTopoMap")}. To disable basemaps, add \code{tm_basemap(NULL)} to the plot, or set \code{tmap_options(basemaps = NULL)}. Similarly, when \code{tm_tiles} is not specified, the overlay maps specified by the \code{overlays} argument in \code{\link{tmap_options}} are shown as front layer. By default, this argument is set to \code{NULL}, so no overlay maps are shown by default. See examples.
#'
#' @param server name of the provider or an URL. The list of available providers can be obtained with \code{leaflet::providers}. See \url{http://leaflet-extras.github.io/leaflet-providers/preview} for a preview of those. When a URL is provided, it should be in template format, e.g. \code{"http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"}. Use \code{NULL} in \code{tm_basemap} to disable the basemaps.
#' @param group name of the group to which this layer belongs in view mode. Each group can be selected or deselected in the layer control item. Set \code{group = NULL} to hide the layer in the layer control item. By default, it will be set to the name of the shape (specified in \code{\link{tm_shape}}). Tile layers generated with \code{tm_basemap} will be base groups whereas tile layers generated with \code{tm_tiles} will be overlay groups.
#' @param alpha alpha transparency of the tiles
#' @export
#' @rdname tm_tiles
#' @name tm_tiles
#' @example ./examples/tm_tiles.R
tm_tiles <- function(server, group = NA, alpha = 1) {
  if (missing(server)) stop("Please specify server (name or url)")
  g <- list(tm_tiles=c(as.list(environment()), list(grouptype = "overlay")))
  class(g) <- "tmap"
  g
}
https://www.wikihow.com/Calculate-Combinations
# How to Calculate Combinations

Permutations and combinations have uses in math classes and in daily life. Thankfully, they are easy to calculate once you know how. Unlike permutations, where group order matters, in combinations the order doesn't matter.[1] Combinations tell you how many ways there are to combine a given number of items in a group. To calculate combinations, you just need to know the number of items you're choosing from, the number of items to choose, and whether or not repetition is allowed (in the most common form of this problem, repetition is not allowed).

### Method 1 of 2: Calculating Combinations Without Repetition

1. Consider an example problem where order does not matter and repetition is not allowed. In this kind of problem, you won't use the same item more than once.
   • For instance, you may have 10 books, and you'd like to find the number of ways to combine 6 of those books on your shelf. In this case, you don't care about order - you just want to know which groupings of books you could display, assuming you only use any given book once.
   • This kind of problem is often labeled as ${\displaystyle {}_{n}C_{r}}$, ${\displaystyle C(n,r)}$, ${\displaystyle {\binom {n}{r}}}$, or "n choose r".
   • In all of these notations, ${\displaystyle n}$ is the number of items you have to choose from (your sample) and ${\displaystyle r}$ is the number of items you're going to select.[2]
2. Know the formula: ${\displaystyle {}_{n}C_{r}={\frac {n!}{(n-r)!r!}}}$.[3][4]
   • The formula is similar to the one for permutations but not exactly the same. Permutations can be found using ${\displaystyle {}_{n}P_{r}={\frac {n!}{(n-r)!}}}$.
The combination formula is slightly different because order no longer matters; therefore, you divide the permutations formula by ${\displaystyle r!}$ in order to eliminate the redundancies.[5] You are essentially reducing the result by the number of orderings that would be considered a different permutation but the same combination (because order doesn't matter for combinations).[6][7]
3. Plug in your values for ${\displaystyle n}$ and ${\displaystyle r}$.
   • In the case above, you would have this formula: ${\displaystyle {}_{n}C_{r}={\frac {10!}{(10-6)!6!}}}$. It would simplify to ${\displaystyle {}_{n}C_{r}={\frac {10!}{(4!)(6!)}}}$.
4. Solve the equation to find the number of combinations. You can do this either by hand or with a calculator.
   • If you have a calculator available, find the factorial setting and use that to calculate the number of combinations. If you're using Google Calculator, click on the x! button each time after entering the necessary digits.
   • If you have to solve by hand, keep in mind that for each factorial, you start with the main number given and then multiply it by the next smallest number, and so on until you get down to 1.
   • For the example, you can calculate 10! with (10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1), which gives you 3,628,800. Find 4! with (4 * 3 * 2 * 1), which gives you 24. Find 6! with (6 * 5 * 4 * 3 * 2 * 1), which gives you 720.
   • Then multiply the two factorials in the denominator together. In this example, you should have 24 * 720, so 17,280 will be your denominator.
   • Divide the factorial of the total by the denominator, as described above: 3,628,800/17,280.
   • In the example case, you'd get 210. This means that there are 210 different ways to combine the books on a shelf, without repetition and where order doesn't matter.

### Method 2 of 2: Calculating Combinations with Repetition

1. Consider an example problem where order does not matter but repetition is allowed. In this kind of problem, you can use the same item more than once.
   • For instance, imagine that you're going to order 5 items from a menu offering 15 items; the order of your selections doesn't matter, and you don't mind getting multiples of the same item (i.e. repetitions are allowed).
   • This kind of problem can be labeled as ${\displaystyle {}_{n+r-1}C_{r}}$. You would generally use ${\displaystyle n}$ to represent the number of options you have to choose from and ${\displaystyle r}$ to represent the number of items you're going to select.[8] Remember, in this kind of problem, repetition is allowed and the order isn't relevant.
   • This is the least common and least understood type of combination or permutation, and isn't generally taught as often.[9] Where it is covered, it is often also known as a k-selection, a k-multiset, or a k-combination with repetition.[10]
2. Know the formula: ${\displaystyle {}_{n+r-1}C_{r}={\frac {(n+r-1)!}{(n-1)!r!}}}$.[11][12]
3. Plug in your values for ${\displaystyle n}$ and ${\displaystyle r}$.
   • In the example case, you would have this formula: ${\displaystyle {}_{n+r-1}C_{r}={\frac {(15+5-1)!}{(15-1)!5!}}}$. It would simplify to ${\displaystyle {}_{n+r-1}C_{r}={\frac {19!}{(14!)(5!)}}}$.
4. Solve the equation to find the number of combinations. You can do this either by hand or with a calculator.
   • If you have a calculator available, find the factorial setting and use that to calculate the number of combinations. If you're using Google Calculator, click on the x! button each time after entering the necessary digits.
   • If you have to solve by hand, keep in mind that for each factorial, you start with the main number given and then multiply it by the next smallest number, and so on until you get down to 1.
   • For the example problem, your solution should be 11,628. There are 11,628 different ways you could order any 5 items from a selection of 15 items on a menu, where order doesn't matter and repetition is allowed.

## Community Q&A

• Question: For 6 starters, 10 mains and 7 desserts, how many different three-course meals are there?
You can use what is called a "counting method", or create a tree. You don't actually need the formula; you can simply do 6 x 10 x 7 and you will get the answer, which is 420.
• Question: How many combinations of 2 are in 32 numbers?
There are 496 combinations without repetition. Here's the formula: 32!/((32-2)! * 2!) = (32 * 31)/2! = 496.

## Tips

• Some graphing calculators offer a button to help you solve combinations without repetition quickly. It usually looks like nCr. If your calculator has one, hit your ${\displaystyle n}$ value first, then the combination button, and then your ${\displaystyle r}$ value.[13]
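Both formulas above can be checked in a few lines of Python. `math.comb` computes "n choose r" directly, and the with-repetition count is just `comb(n + r - 1, r)`; the helper functions below spell out the factorial formulas from the article:

```python
from math import comb, factorial

def combinations(n, r):
    """n choose r via the factorial formula: n! / ((n - r)! * r!)."""
    return factorial(n) // (factorial(n - r) * factorial(r))

def combinations_with_repetition(n, r):
    """Multisets of size r from n options: (n + r - 1)! / ((n - 1)! * r!)."""
    return factorial(n + r - 1) // (factorial(n - 1) * factorial(r))

# Method 1 example: choosing 6 of 10 books, order irrelevant, no repeats.
print(combinations(10, 6))                  # 210, same as math.comb(10, 6)

# Method 2 example: 5 picks from a 15-item menu, repeats allowed.
print(combinations_with_repetition(15, 5))  # 11628, same as math.comb(19, 5)
```

Integer division (`//`) keeps the results exact; both divisions are always exact because the binomial coefficients are integers.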
http://blog.mikael.johanssons.org/going-to-tbilisi.html
Going to T'bilisi Published: Fri 25 May 2007 In about 23 hours, I'll step on to the train in Jena, heading for T'bilisi, Georgia. On Monday, I'll give a talk on my research into $$A_\infty$$-structures in group cohomology. If you're curious, I already put the slides up on the web. I'll try to blog from T'bilisi, but I don't know what connectivity I'll have at all.
https://www.thenakedscientists.com/forum/index.php?topic=42986.50
# Are some people born good at maths?

• 57 Replies • 12453 Views

#### Geezer

• Neilep Level Member
• 8328
• "Vive la résistance!"

##### Re: Are some people born good at maths?
« Reply #50 on: 09/02/2012 16:05:37 »
but, but, but, isn't it just a "slope" which is a ratio of uppyness to alongyness?

There ain'ta no sanity clause, and there ain'ta no centrifugal force æther.

#### imatfaal

• Neilep Level Member
• 2787
• rouge moderator

##### Re: Are some people born good at maths?
« Reply #51 on: 09/02/2012 17:38:07 »
Yes - but when you deal with limits approaching zero or off to infinity, things become very complicated; instantaneous measurements, calculations etc. can become very screwy. As I tried to say and failed - it is the limit as the change in x tends to zero.

$$\frac {dy}{dx}(x_0)= \displaystyle \lim_{h \to 0} \frac {y(x_0 +h) - y(x_0)}{h}$$

Mad mathematicians would tell you that the ratio of two infinitesimals can be anything you choose (kind of like the ratio of two infinities, it is not well defined).

> No, AFAIK that's definitely wrong. The reason that calculus works is that the LIMIT of dy/dx as you take the deltas towards zero is (in most normal cases) completely well defined. In some cases limits can take two values in different directions where the curve jumps, so the curve has to be smooth where you differentiate; trying to differentiate a fractal doesn't do anything very good!

Maybe I didn't explain well - but it is not wrong. There is a big difference between the limit as I have it above and the ratio of two variables as they tend to zero. dx is not the $$\lim_{\Delta x \rightarrow 0}\Delta x$$ as this is zero.

There's no sense in being precise when you don't even know what you're talking about. John Von Neumann
At the surface, we may appear as intellects, helpful people, friendly staff or protectors of the interwebs. Deep down inside, we're all trolls.
CaptainPanic @ sf.n

#### Geezer

• Neilep Level Member
• 8328
• "Vive la résistance!"

##### Re: Are some people born good at maths?
« Reply #52 on: 09/02/2012 17:57:35 »
> yes - but when you deal with limits approaching zero or off to infinity things become very complicated - instantaneous measurements calculations etc can become very screwy.

WHOOSH!! I think I better stick to uppyness and alongyness.

#### Geezer

• Neilep Level Member
• 8328
• "Vive la résistance!"

##### Re: Are some people born good at maths?
« Reply #53 on: 09/02/2012 18:46:00 »
Actually, isn't this, to some extent, what the OP was referring to? These points are interesting to the mathematically inclined, but to the average engineer who just wants to solve a stinking problem, they might appear slightly esoteric and academic. It's like any tool. The experts will test it to the limits and understand all of its corner cases. The rest of us will use it like any other hammer, and if it doesn't work, we'll get a bigger one.

#### JP

• Neilep Level Member
• 3366

##### Re: Are some people born good at maths?
« Reply #54 on: 10/02/2012 02:43:45 »
Except there are certain rare circumstances in which the hammer will fail catastrophically, and you can only know them by understanding how the hammer was built. You can use it without worrying too much, but sometimes... (By the way, as a physicist I'm very guilty of using derivatives as fractions and throwing infinity around with reckless abandon.)

#### imatfaal

• Neilep Level Member
• 2787
• rouge moderator

##### Re: Are some people born good at maths?
« Reply #55 on: 10/02/2012 09:48:31 »
Actually, isn't this, to some extent, what the OP was referring to?
These points are interesting to the mathematically inclined, but to the average engineer who just wants to solve a stinking problem, they might appear slightly esoteric and academic. It's like any tool. The experts will test it to the limits and understand all of its corner cases. The rest of us will use it like any other hammer, and if it doesn't work, we'll get a bigger one.

Almost totally agree - but with the same proviso as JP. The hard-core scientists I know are all very clear on the rules and the derivation of those rules, and then break them almost all the time - I swear I once saw someone cancelling the ds in dy/dx.

#### Geezer

• Neilep Level Member
• 8328
• "Vive la résistance!"

##### Re: Are some people born good at maths?
« Reply #56 on: 11/02/2012 21:48:26 »
BTW, I posted a video in the "Fireman's Hose" topic, and I was wondering how you would figure out how to produce maximum thrust from such a device. I'm pretty sure it would take a bit of calculus.

#### sasha44

• Jr. Member
• 16

##### Re: Are some people born good at maths?
« Reply #57 on: 12/02/2012 05:23:48 »
Ha haa, you guys should have seen our Applied Mathematics lecturer's face when a student asked what the use is of us Biological Science students learning applied mathematics. She just said "I don't know, I only teach mathematics" :O I mean, at least say you can use it to find the area under a graph!
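The limit definition quoted in the thread can be sanity-checked numerically: shrink h and watch the difference quotient settle onto the true derivative. A minimal sketch, using y = x² at x₀ = 3 (where dy/dx is exactly 6) as an arbitrary example:

```python
def difference_quotient(y, x0, h):
    """(y(x0 + h) - y(x0)) / h: an ordinary ratio for any nonzero h;
    only its limit as h -> 0 is the derivative dy/dx at x0."""
    return (y(x0 + h) - y(x0)) / h

y = lambda x: x ** 2
x0 = 3.0

# The quotient approaches 6 as h shrinks; it never requires dividing 0 by 0.
for h in [1.0, 0.1, 0.001, 1e-6]:
    print(h, difference_quotient(y, x0, h))
```

This is exactly imatfaal's point: every value printed is a well-defined ratio of finite numbers, and the derivative is the limit of those ratios, not the ratio 0/0 itself.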
https://learn.careers360.com/ncert/question-in-fig-657-d-is-a-point-on-hypotenuse-ac-of-d-abc-such-that-bd-perpendicular-ac-dm-perpendicular-bc-and-dn-perpendicular-ab-prove-that-dn-square-equals-dm-an/
# Q. In Fig. 6.57, D is a point on hypotenuse AC of $\triangle ABC$, such that BD $\perp$ AC, DM $\perp$ BC and DN $\perp$ AB. Prove that $DN^2 = DM \cdot AN$

Q2 (2)  In Fig. 6.57, D is a point on hypotenuse AC of $\triangle ABC$, such that BD $\perp$ AC, DM $\perp$ BC and DN $\perp$ AB. Prove that: $DN^2 = DM \cdot AN$

In $\triangle DBN$: $\angle 5+\angle 7=90^\circ \qquad (1)$

In $\triangle DAN$: $\angle 6+\angle 8=90^\circ \qquad (2)$

Since BD $\perp$ AC, $\angle ADB=90^\circ$, so

$\angle 5+\angle 6=90^\circ \qquad (3)$

From equations 1 and 3, we get $\angle 6=\angle 7$.

From equations 2 and 3, we get $\angle 5=\angle 8$.

In $\triangle DNA$ and $\triangle BND$:

$\angle 6=\angle 7$ and $\angle 5=\angle 8$

$\therefore \triangle DNA \sim \triangle BND$  (by the AA similarity criterion)

$\Rightarrow \frac{AN}{DN}=\frac{DN}{NB}$

$\Rightarrow \frac{AN}{DN}=\frac{DN}{DM}$  (NB = DM, since BMDN is a rectangle: $\angle NBM$, $\angle DNB$ and $\angle DMB$ are all right angles)

$\Rightarrow DN^2 = AN \cdot DM$

Hence proved.
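The result can also be checked numerically with coordinates (a sanity check, not a substitute for the proof): put the right angle B at the origin, project B onto AC to get D, and compare DN² with DM · AN. The leg lengths 3 and 4 are arbitrary choices.

```python
# Right triangle with the right angle at B: B = (0, 0), A on the y-axis, C on the x-axis.
a, c = 3.0, 4.0                      # arbitrary leg lengths AB and BC
A, B, C = (0.0, a), (0.0, 0.0), (c, 0.0)

# D = foot of the perpendicular from B onto line AC (projection of B onto AC).
t = a * a / (a * a + c * c)
D = (t * c, a - t * a)

# N, M = feet of perpendiculars from D to AB (the y-axis) and BC (the x-axis).
N = (0.0, D[1])
M = (D[0], 0.0)

DN = D[0]                            # horizontal distance from D to AB
DM = D[1]                            # vertical distance from D to BC
AN = a - D[1]

print(DN * DN, DM * AN)              # the two sides of DN^2 = DM * AN agree
```

With a = 3, c = 4 both sides come out to 2.0736; changing a and c leaves the equality intact.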
http://srutisj.in/2017-09-03-Stock-Market-Analysis/
# Data Project - Stock Market Analysis

## Risk Analysis Monte Carlo

We have already learnt about the Monte Carlo method in the previous post - Monte Carlo. In this post I am going to discuss how that method can be used for determining the maximum risk involved in investing in a stock.

# Import libraries
import pandas as pd
from pandas import Series, DataFrame
import numpy as np

# For visualization
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline

# For reading stock data from Yahoo
from datetime import datetime
from __future__ import division
import pandas_datareader.data as web

tech_list = ['AAPL','GOOG','MSFT','AMZN']

# Set up end and start times for the data grab
end = datetime.now()
start = datetime(end.year - 1, end.month, end.day)

for stock in tech_list:
    # Grab the data and set each ticker's DataFrame as a global variable
    globals()[stock] = web.DataReader(stock, 'yahoo', start, end)

# Descriptive & exploratory analysis
AAPL.describe()

        Open        High        Low         Close       Volume        Adj Close
count   252.000000  252.000000  252.000000  252.000000  2.520000e+02  252.000000
mean    115.796667  116.627698  115.155357  115.973928  3.193192e+07  115.217969
std     15.948756   15.931637   15.978002   15.994669   1.433693e+07  16.482845
min     90.000000   91.669998   89.470001   90.339996   1.147590e+07  89.008370
25%     104.767498  105.959999  104.060001  105.499998  2.352475e+07  104.394087
50%     113.225003  114.055001  112.389999  113.069999  2.818830e+07  112.006823
75%     128.062502  129.665000  127.875000  128.832501  3.588788e+07  128.276348
max     147.539993  148.089996  146.839996  147.509995  1.119850e+08  147.509995

# Let's see a historical view of the closing price for the last one year
AAPL['Adj Close'].plot(legend=True, figsize=(10,4))

# Now let's plot the total volume of stock being traded each day over the past year
AAPL['Volume'].plot(legend=True, figsize=(10,4))

# Calculate and plot the simple moving average (SMA) for 10, 20 and 50 days
moving_avg_day = [10, 20, 50]
for ma in moving_avg_day:
    column_name = "MA for %s days" % (str(ma))
    AAPL[column_name] = AAPL['Adj Close'].rolling(ma).mean()

AAPL[['Adj Close','MA for 10 days','MA for 20 days','MA for 50 days']].plot(subplots=False, figsize=(15,6))

# Now let's analyse the Daily Return
Analysis as our first step to determining the volatility of the stock prices.

# We'll use pct_change to find the percent change for each day
AAPL['Daily Return'] = AAPL['Adj Close'].pct_change()

# Then we'll plot the daily return percentage
AAPL['Daily Return'].plot(figsize=(12,4), legend=True, linestyle='--', marker='o')

# Average daily return; using dropna to eliminate NaN values
sns.distplot(AAPL['Daily Return'].dropna(), bins=100, color='red')

# Now let's do a comparative study to analyze the returns of all the stocks in our list
# Grab all the closing prices for the tech stocks into one new DataFrame
closing_df = web.DataReader(['AAPL','GOOG','MSFT','AMZN'], 'yahoo', start, end)['Adj Close']

# Let's take a quick look
closing_df.head()

            AAPL       AMZN        GOOG        MSFT
Date
2016-05-05  91.865625  659.090027  701.429993  48.660218
2016-05-06  91.353293  673.950012  711.119995  49.098687
2016-05-09  91.422261  679.750000  712.900024  48.786888
2016-05-10  92.042972  703.070007  723.179993  49.712543
2016-05-11  91.146389  713.229980  715.289978  49.741774

# Make a new DataFrame for storing stock daily return
tech_rets = closing_df.pct_change()

# Using jointplot to compare the daily returns of Google and Microsoft
sns.jointplot('GOOG','MSFT',tech_rets,kind='scatter')

<seaborn.axisgrid.JointGrid at 0x24cfc1bf080>

The correlation coefficient of 0.6 indicates a strong correlation between the daily return values of Microsoft and Google. This implies that if Google's stock value increases, Microsoft's stock value, and hence its daily return, tends to increase as well, and vice versa. This analysis can be extended further by first segmenting stocks by industry, and then analysing within each industry to guide the choice of stock.

# We can also simply call pairplot on our DataFrame for a visual analysis of all the comparisons
sns.pairplot(tech_rets.dropna())

<seaborn.axisgrid.PairGrid at 0x24cfc6a4dd8>

# Another analysis on the closing prices for each stock and their individual comparisons
close_fig = sns.PairGrid(closing_df)

# Using map_upper we can specify what the upper triangle will look like
close_fig.map_upper(plt.scatter, color='purple')

# We can also define the lower triangle in the figure, including the plot type (kde) and the color map (BluePurple)
close_fig.map_lower(sns.kdeplot, cmap='cool_d')

# Finally we'll define the diagonal as a series of histogram plots of the closing price
close_fig.map_diag(plt.hist, bins=30)

<seaborn.axisgrid.PairGrid at 0x24cfdf8a160>

# For a quick look at the correlation analysis for all stocks, we can plot the daily returns
from seaborn.linearmodels import corrplot as cor
cor(tech_rets.dropna(), annot=True)

C:\Users\saj16\Anaconda3\lib\site-packages\seaborn\linearmodels.py:1290: UserWarning: The corrplot function has been deprecated in favor of heatmap and will be removed in a forthcoming release. Please update your code.
C:\Users\saj16\Anaconda3\lib\site-packages\seaborn\linearmodels.py:1356: UserWarning: The symmatplot function has been deprecated in favor of heatmap and will be removed in a forthcoming release. Please update your code.

<matplotlib.axes._subplots.AxesSubplot at 0x24cfff485f8>

## Risk Analysis: Bootstrapping method, Monte Carlo method

# It is important to visualise the risk against the expected return for each stock before starting the analysis
rets = tech_rets.dropna()
area = np.pi * 20

plt.scatter(rets.mean(), rets.std(), alpha=0.5, s=area)

# Setting the x and y limits and the axis titles
plt.ylim([0.01, 0.025])
plt.xlim([-0.003, 0.004])
plt.xlabel('Expected returns')
plt.ylabel('Risk')

for label, x, y in zip(rets.columns, rets.mean(), rets.std()):
    plt.annotate(
        label,
        xy=(x, y), xytext=(50, 50),
        textcoords='offset points', ha='right', va='bottom',
        arrowprops=dict(arrowstyle='-', connectionstyle='arc3,rad=-0.3'))

As seen in the above plot, the lower the expected return, the lower the risk involved.
Next let's determine the "Value at Risk", that is, the worst daily loss that can be expected with 95% confidence.

rets['AAPL'].quantile(0.05)

-0.014682686093512198

The 0.05 empirical quantile of daily returns is at -0.015. That means that with 95% confidence, the worst daily loss will not exceed 1.5%. If we have a 1 million dollar investment, our one-day 5% VaR is 0.015 * 1,000,000 = $15,000.

## Value at Risk using the Monte Carlo method

Using Monte Carlo, we run many trials with random market conditions, then calculate the portfolio losses for each trial. After this, we use the aggregation of all these simulations to establish how risky the stock is.

Let's start with a brief explanation of what we're going to do:

We will use geometric Brownian motion (GBM), which is technically known as a Markov process. This means that the stock price follows a random walk and is consistent with (at the very least) the weak form of the efficient market hypothesis (EMH): past price information is already incorporated and the next price movement is "conditionally independent" of past price movements. In other words, you can't perfectly predict the future solely based on the previous price of a stock.

The equation for geometric Brownian motion is:

$\frac{\Delta S}{S} = \mu\,\Delta t + \sigma \epsilon \sqrt{\Delta t}$

where S is the stock price, mu is the expected return (which we calculated earlier), sigma is the standard deviation of the returns, t is time, and epsilon is a random variable (a standard normal draw). We can multiply both sides by the stock price (S) to rearrange the formula and solve for the change in the stock price:

$\Delta S = S\,(\mu\,\Delta t + \sigma \epsilon \sqrt{\Delta t})$

Now we see that the change in the stock price is the current stock price multiplied by two terms.
The first term is known as "drift", which is the average daily return multiplied by the change in time. The second term is known as "shock": in each time period the stock will "drift" and then experience a "shock" which randomly pushes the stock price up or down. By simulating this series of drift and shock steps thousands of times, we can begin to simulate where we might expect the stock price to be.

# Set up time horizon and delta values
days = 365
dt = 1/days

# Calculate mu (drift) from the expected return data we got for GOOG
mu = rets.mean()['GOOG']

# Calculate volatility of the stock from the std() of the daily returns
sigma = rets.std()['GOOG']

def stock_monte_carlo(start_price, days, mu, sigma):
    '''Takes in a starting stock price, days of simulation, mu and sigma,
    and returns a simulated price array.'''
    price = np.zeros(days)
    price[0] = start_price

    shock = np.zeros(days)
    drift = np.zeros(days)

    # Calculate and return the price array for the number of days
    for x in range(1, days):
        # Random shock centred on zero; the deterministic drift mu*dt is added separately below
        shock[x] = np.random.normal(loc=0, scale=sigma * np.sqrt(dt))
        drift[x] = mu * dt
        price[x] = price[x-1] + (price[x-1] * (drift[x] + shock[x]))

    return price

# Get start price from GOOG.head()
start_price = 560.85

for run in range(100):
    plt.plot(stock_monte_carlo(start_price, days, mu, sigma))

plt.xlabel("Days")
plt.ylabel("Price")
plt.title('Monte Carlo Analysis for Google')

simulations = np.zeros(100)
for run in range(100):
    simulations[run] = stock_monte_carlo(start_price, days, mu, sigma)[days-1]

# Now we'll define q as the 1% empirical quantile; this basically means that 99% of the values should fall above it
q = np.percentile(simulations, 1)

# Now let's plot the distribution of the end prices
plt.hist(simulations, bins=200)

# Using plt.figtext to fill in some additional information onto the plot

# Starting price
plt.figtext(0.6, 0.8, s="Start price: $%.2f" % start_price)

# Mean ending price
plt.figtext(0.6, 0.7, "Mean final price: $%.2f" %
simulations.mean())

# Value at risk (within 99% confidence interval)
plt.figtext(0.6, 0.6, "VaR(0.99): $%.2f" % (start_price - q,))

# Display 1% quantile
plt.figtext(0.15, 0.6, "q(0.99): $%.2f" % q)

# Plot a line at the 1% quantile result
plt.axvline(x=q, linewidth=4, color='r')

# Title
plt.title(u"Final price distribution for Google Stock after %s days" % days, weight='bold')

This basically means that for every initial share you purchase, there is about $13.61 at risk 1% of the time, as predicted by the Monte Carlo simulation.
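The pipeline above depends on downloaded market data, but the Monte Carlo VaR step itself can be reproduced self-contained with NumPy alone. This sketch uses made-up values for mu, sigma and the start price (not the fitted GOOG numbers), vectorises the drift-plus-shock recurrence across many runs, and reads VaR off the 1% quantile of final prices:

```python
import numpy as np

rng = np.random.default_rng(42)

days, runs = 365, 10_000
dt = 1 / days
mu, sigma = 0.0005, 0.01            # illustrative drift and volatility, not fitted values
start_price = 560.85

# One row per simulation run: each step multiplies by (1 + drift + shock).
shocks = rng.normal(loc=0.0, scale=sigma * np.sqrt(dt), size=(runs, days - 1))
growth = 1 + mu * dt + shocks
final_prices = start_price * growth.prod(axis=1)

q = np.percentile(final_prices, 1)  # 1% empirical quantile of final prices
var_99 = start_price - q            # value at risk at 99% confidence, per share

print(f"mean final price: {final_prices.mean():.2f}")
print(f"q(0.99): {q:.2f}  VaR(0.99): {var_99:.2f}")
```

Vectorising over runs replaces the Python loop with one array product per run, so 10,000 trials finish in milliseconds; the per-run loop version gives the same distribution, just slower.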
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=4667
## WeBWorK Problems

### checking an answer checker accepts the correct answer

by Alex Jordan - Number of replies: 1

If you have an answer checker hash... Actually, let me pause here; I'm not sure I am using the right vocabulary. What I mean is something like when $m is a math object, and the object $m->cmp(). So when you have that thing, how can you verify that it actually accepts $m as a correct answer?

This is for situations where the answer checker is custom. Some random version of the problem may be buggy and not take $m. I'd like to wrap it all in a loop and check that if $m is not counted as correct, it should re-randomize. I'm thinking it could be something like:

    do {
        ...code...
    } until ($m->cmp($m) == 1);

but my experiments are suggesting that is not the right syntax.

In reply to Alex Jordan

### Re: checking an answer checker accepts the correct answer

by Glenn Rice -

I think what you are looking for is

    do {
        ...code...
    } until ($m->cmp->evaluate($m)->{score} == 1);

Note that if you are using a custom checker you will need to pass that as an option to the cmp method here. Also note that the check if the score is equal to 1 may not be correct if your custom checker gives partial credit.
https://blog.csdn.net/nameofcsdn/article/details/113072863
# Bitmask DP (状态压缩DP)

POJ 1185 Artillery Positions (炮兵阵地)

POJ 1321 Chessboard Problem (棋盘问题)

POJ 2411 Mondriaan's Dream

POJ 3254 Corn Fields

POJ 2441 Arrange the Bulls

HDU 1565 Grid Selection (方格取数(1))

# Part 2: OJ Practice

## POJ 1185 Artillery Positions (炮兵阵地)

Description Input Output

Sample Input

    5 4
    PHPP
    PPHH
    PPPP
    PHPP
    PHHP

Sample Output

    6

    bool ok(int n) // ensure there are at least two 0s between any two 1s
    {
      int a = 3, b = 5;
      for (int i = 0; i < 10; i++)
      {
        if ((n & a) == a) return false;
        if ((n & b) == b) return false;
        a *= 2;
        b *= 2;
      }
      return true;
    }

    #include<iostream>
    #include<string.h>
    using namespace std;

    long long sum[2][1024][1024];
    int list[100], a, b, n, m, f[1024];
    char c;
    int ok[60] = { 0, 1, 2, 4, 8, 9, 16, 17, 18, 32, 33, 34, 36, 64, 65, 66, 68, 72, 73,
      128, 129, 130, 132, 136, 137, 144, 145, 146, 256, 257, 258, 260, 264, 265,
      272, 273, 274, 288, 289, 290, 292, 512, 513, 514, 516, 520, 521, 528, 529,
      530, 544, 545, 546, 548, 576, 577, 578, 580, 584, 585 };
    int r1, r2, r3;

    int main()
    {
      cin >> n >> m;
      memset(list, 0, sizeof(list));
      memset(sum, 0, sizeof(sum));
      memset(f, 0, sizeof(f));
      for (int i = 0; i < 1024; i++)
        for (int j = 0; j < m; j++)
          if (i & (1 << j)) f[i]++;
      for (int i = 0; i < n; i++)
      {
        for (int j = 0; j < m; j++)
        {
          cin >> c;
          if (c == 'P') list[i] += (1 << j);
        }
      }
      long long s = 0;
      if (n == 1)
      {
        for (int j = 0; j < 60; j++)
          if ((ok[j] & list[0]) == ok[j] && s < f[ok[j]]) s = f[ok[j]];
        cout << s;
        return 0;
      }
      for (int i = 0; i < n - 1; i++) // rows i and i+1
      {
        a = i % 2, b = 1 - a;
        for (int j = 0; j < 60; j++) // row i
        {
          r2 = ok[j];
          if ((r2 & list[i]) != r2) continue;
          for (int k = 0; k < 60; k++) // row i+1
          {
            r3 = ok[k];
            if ((r3 & list[i + 1]) != r3 || (r3 & r2) != 0) continue;
            sum[b][r2][r3] = f[r3];
            if (i == 0)
            {
              sum[b][r2][r3] += f[r2];
              continue;
            }
            for (int l = 0; l < 60; l++) // row i-1
            {
              r1 = ok[l];
              if ((r1 & list[i - 1]) == r1 && (r1 & r2) == 0 && (r1 & r3) == 0
                && sum[b][r2][r3] < sum[a][r1][r2] + f[r3])
              {
                sum[b][r2][r3] = sum[a][r1][r2] + f[r3];
              }
            }
          }
        }
      }
      for (int j = 0; j < 60; j++)
        for (int k = 0; k < 60; k++)
          if (s < sum[b][ok[j]][ok[k]]) s = sum[b][ok[j]][ok[k]];
      cout << s;
      return 0;
    }

## POJ 1321 Chessboard Problem (棋盘问题)
Description Input Output

Sample Input

    2 1
    #.
    .#
    4 4
    ...#
    ..#.
    .#..
    #...
    -1 -1

Sample Output

    2
    1

    #include<iostream>
    #include<string.h>
    using namespace std;

    int list[8]; // the input board state
    char c;
    int sum;
    int result; // the final answer

    int main()
    {
      int n, k;
      while (cin >> n >> k)
      {
        if (n == -1) break;
        memset(list, 0, sizeof(list));
        result = 0;
        for (int i = 0; i < n; i++)
        {
          for (int j = 0; j < n; j++)
          {
            cin >> c;
            if (c == '#') list[i] += (1 << j);
          }
        }
        // 8 nested loops; note the order, because if n < 7 then i7 can only be 0
        for (int i7 = 0; i7 < (1 << n); i7++)
        {
          if (i7 != (i7 & (-i7))) continue;
          if ((i7 & list[7]) != i7) continue;
          for (int i6 = 0; i6 < (1 << n); i6++)
          {
            if (i6 != (i6 & (-i6))) continue; // at most one piece per row
            if ((i6 & list[6]) != i6) continue; // pieces may only go on '#' cells
            if (i6 & i7) continue; // at most one piece per column
            for (int i5 = 0; i5 < (1 << n); i5++)
            {
              if (i5 != (i5 & (-i5))) continue;
              if ((i5 & list[5]) != i5) continue;
              if (i5 & (i6 | i7)) continue;
              for (int i4 = 0; i4 < (1 << n); i4++)
              {
                if (i4 != (i4 & (-i4))) continue;
                if ((i4 & list[4]) != i4) continue;
                if (i4 & (i5 | i6 | i7)) continue;
                for (int i3 = 0; i3 < (1 << n); i3++)
                {
                  if (i3 != (i3 & (-i3))) continue;
                  if ((i3 & list[3]) != i3) continue;
                  if (i3 & (i4 | i5 | i6 | i7)) continue;
                  for (int i2 = 0; i2 < (1 << n); i2++)
                  {
                    if (i2 != (i2 & (-i2))) continue;
                    if ((i2 & list[2]) != i2) continue;
                    if (i2 & (i3 | i4 | i5 | i6 | i7)) continue;
                    for (int i1 = 0; i1 < (1 << n); i1++)
                    {
                      if (i1 != (i1 & (-i1))) continue;
                      if ((i1 & list[1]) != i1) continue;
                      if (i1 & (i2 | i3 | i4 | i5 | i6 | i7)) continue;
                      for (int i0 = 0; i0 < (1 << n); i0++)
                      {
                        if (i0 != (i0 & (-i0))) continue;
                        if ((i0 & list[0]) != i0) continue;
                        if (i0 & (i1 | i2 | i3 | i4 | i5 | i6 | i7)) continue;
                        sum = 0; // how many pieces the whole board holds
                        if (i0) sum++;
                        if (i1) sum++;
                        if (i2) sum++;
                        if (i3) sum++;
                        if (i4) sum++;
                        if (i5) sum++;
                        if (i6) sum++;
                        if (i7) sum++;
                        if (sum == k) result++;
                      }
                    }
                  }
                }
              }
            }
          }
        }
        cout << result << endl;
      }
      return 0;
    }

(I really went all out to make this code look this good...)

    #include<iostream>
    #include<string.h>
    using namespace std;

    int list[8]; // the input board state
    int r[8][256];
    char c;
    int sum;
    int k;

    bool ok(int n)
    {
      int s = 0;
      for (int a = 1; a < 256; a *= 2)
        if (n & a) s++;
      if (s > k) return false;
      return true;
    }

    int main()
    {
      int n;
      while (cin >> n >> k)
      {
        if (n == -1) break;
        memset(list, 0, sizeof(list));
        memset(r, 0, sizeof(r));
        for (int i = 0; i < n; i++) // input
        {
          for (int j = 0; j < n; j++)
          {
            cin >> c;
            if (c == '#') list[i] += (1 << j);
          }
        }
        for (int i = 0; i < (1 << n); i++)
          if ((i & list[0]) == i && ok(i) && (i & (-i)) == i) r[0][i] = 1;
        for (int i = 1; i < n; i++)
          for (int j = 0; j < (1 << n); j++)
            if (ok(j))
              for (int kk = 0; kk < (1 << n); kk++)
                if ((j & kk) == kk && ((j ^ kk) & list[i]) == (j ^ kk)
                  && ((j ^ kk) & (-(j ^ kk))) == (j ^ kk))
                  r[i][j] += r[i - 1][kk];
        sum = 0;
        for (int i = 0; i < (1 << n); i++)
        {
          int s = 0;
          for (int a = 1; a < 256; a *= 2)
            if (i & a) s++;
          if (s == k) sum += r[n - 1][i];
        }
        cout << sum << endl;
      }
      return 0;
    }

Here, r[i][j] is the number of configurations in which the state of the first i rows is j.

## POJ 2411 Mondriaan's Dream

Description

Squares and rectangles fascinated the famous Dutch painter Piet Mondriaan. One night, after producing the drawings in his 'toilet series' (where he had to use his toilet paper to draw on, for all of his paper was filled with squares and rectangles), he dreamt of filling a large rectangle with small rectangles of width 2 and height 1 in varying ways. Expert as he was in this material, he saw at a glance that he'll need a computer to calculate the number of ways to fill the large rectangle whose dimensions were integer values, as well. Help him, so that his dream won't turn into a nightmare!

Input

The input contains several test cases. Each test case is made up of two integer numbers: the height h and the width w of the large rectangle. Input is terminated by h=w=0. Otherwise, 1<=h,w<=11.

Output

For each test case, output the number of different ways the given rectangle can be filled with small rectangles of size 2 times 1. Assume the given large rectangle is oriented, i.e. count symmetrical tilings multiple times.
Sample Input

    1 2
    1 3
    1 4
    2 2
    2 3
    2 4
    2 11
    4 11
    0 0

Sample Output

    1
    0
    1
    2
    3
    5
    144
    51205

    #include<iostream>
    #include<string.h>
    using namespace std;

    int h, w;
    long long dp[11][2048]; // totals over the first i rows
    long long sum;

    // For each row, a horizontal tile contributes 0 and a vertical tile 1,
    // so 00011 is impossible while 00100 is possible.
    bool ok(int n) // every run of 0s must come in adjacent pairs
    {
      n += (1 << w); // because the parity of the width is unknown
      while (n)
      {
        if (n % 2) n /= 2;
        else
        {
          if (n % 4) return false;
          n /= 4;
        }
      }
      return true;
    }

    int main()
    {
      while (cin >> h >> w)
      {
        if (h == 0) break;
        if (h == 1)
        {
          cout << (w + 1) % 2 << endl;
          continue;
        }
        if (h % 2 && w % 2)
        {
          cout << 0 << endl;
          continue;
        }
        memset(dp, 0, sizeof(dp));
        for (int i = 0; i < (1 << w); i++)
          if (ok(i)) dp[0][i] = 1;
        for (int i = 1; i < h - 1; i++)
        {
          for (int j = 0; j < (1 << w); j++)
          {
            for (int k = 0; k < (1 << w); k++)
            {
              if (j & k) continue;
              if (ok(j ^ k))
              {
                dp[i][j] += dp[i - 1][k];
              }
            }
          }
        }
        sum = 0;
        for (int k = 0; k < (1 << w); k++)
          if (ok(k)) sum += dp[h - 2][k];
        cout << sum << endl;
      }
      return 0;
    }

## POJ 3254 Corn Fields

Description

Farmer John has purchased a lush new rectangular pasture composed of M by N (1 ≤ M ≤ 12; 1 ≤ N ≤ 12) square parcels. He wants to grow some yummy corn for the cows on a number of squares. Regrettably, some of the squares are infertile and can't be planted. Canny FJ knows that the cows dislike eating close to each other, so when choosing which squares to plant, he avoids choosing squares that are adjacent; no two chosen squares share an edge. He has not yet made the final choice as to which squares to plant. Being a very open-minded man, Farmer John wants to consider all possible options for how to choose the squares for planting. He is so open-minded that he considers choosing no squares as a valid option! Please help Farmer John determine the number of ways he can choose the squares to plant.

Input

Line 1: Two space-separated integers: M and N. Lines 2..M+1: Line i+1 describes row i of the pasture with N space-separated integers indicating whether a square is fertile (1 for fertile, 0 for infertile).

Output

Line 1: One integer: the number of ways that FJ can choose the squares modulo 100,000,000.

Sample Input

    2 3
    1 1 1
    0 1 0

Sample Output

    9

    #include<iostream>
    #include<string.h>
    using namespace std;

    int list[13];
    int r[13][4096];

    bool ok(int n) // no two adjacent cells within a row
    {
      for (int i = 3; i <= 3072; i *= 2)
        if ((n & i) == i) return false;
      return true;
    }

    int main()
    {
      int m, n;
      int a;
      cin >> m >> n;
      memset(list, 0, sizeof(list));
      for (int i = 1; i <= m; i++) // input
      {
        for (int j = 0; j < n; j++)
        {
          cin >> a;
          if (a) list[i] += (1 << j);
        }
      }
      memset(r, 0, sizeof(r));
      for (int i = 1; i <= m; i++) // solve
        for (int j = 0; j < (1 << n); j++)
          if (ok(j) && (j & list[i]) == j) // this row alone is feasible
          {
            if (i == 1) r[1][j] = 1;
            else
              for (int k = 0; k < (1 << n); k++)
                if ((k & j) == 0) r[i][j] = (r[i - 1][k] + r[i][j]) % 100000000;
          }
      int s = 0;
      for (int i = 0; i < (1 << n); i++) s = (s + r[m][i]) % 100000000; // sum over the last row
      cout << s;
      return 0;
    }

## POJ 2441 Arrange the Bulls

Description

Farmer Johnson's Bulls love playing basketball very much. But none of them would like to play basketball with the other bulls because they believe that the others are all very weak. Farmer Johnson has N cows (we number the cows from 1 to N) and M barns (we number the barns from 1 to M), which is his bulls' basketball fields. However, his bulls are all very captious; they only like to play in some specific barns, and don't want to share a barn with the others. So it is difficult for Farmer Johnson to arrange his bulls, and he wants you to help him. Of course, finding one solution is easy, but your task is to find how many solutions there are. You should know that a solution is a situation where every bull can play basketball in a barn he likes and no two bulls share a barn. To make the problem a little easy, it is assumed that the number of solutions will not exceed 10000000.
Input

The first line of input contains two integers N and M (1 <= N <= 20, 1 <= M <= 20). Then come N lines. The i-th line first contains an integer P (1 <= P <= M) referring to the number of barns cow i likes to play in. Then follow P integers, which give the numbers of these P barns.

Output

Print a single integer in a line, which is the number of solutions.

Sample Input

    3 4
    2 1 4
    2 1 3
    2 2 4

Sample Output

    4

    #include<iostream>
    #include<string.h>
    using namespace std;

    int list[20]; // the input data for each row
    int r[20][1048576];
    int n, m;

    int f(int num) // count how many bits of num are 1
    {
      int sum = 0;
      for (int i = 1; i < (1 << m); i *= 2)
        if (num & i) sum++;
      return sum;
    }

    int main()
    {
      cin >> n >> m;
      int a, b;
      memset(list, 0, sizeof(list));
      for (int i = 0; i < n; i++)
      {
        cin >> a;
        for (int j = 0; j < a; j++)
        {
          cin >> b;
          list[i] += (1 << (b - 1));
        }
      }
      for (int i = 1; i < (1 << m); i++)
        if ((i & (-i)) == i && (i & list[0]) == i) r[0][i] = 1;
      int k;
      for (int i = 1; i < n; i++)
        for (int j = 1; j < (1 << m); j++)
          if (f(j) == i + 1)
            for (int t = 1; t < (1 << m); t *= 2)
              if ((t & list[i]) == t && (t & j) == t) r[i][j] += r[i - 1][t ^ j];
      int sum = 0;
      for (int i = 0; i < (1 << m); i++) sum += r[n - 1][i];
      cout << sum;
      return 0;
    }

    #include<iostream>
    #include<string.h>
    #include<queue>
    using namespace std;

    int list[20]; // the input data for each row
    queue<int> q;
    int r[1048576];
    int n, m;

    int main()
    {
      cin >> n >> m;
      int a, b;
      memset(list, 0, sizeof(list));
      memset(r, 0, sizeof(r));
      for (int i = 0; i < n; i++)
      {
        cin >> a;
        for (int j = 0; j < a; j++)
        {
          cin >> b;
          list[i] += (1 << (b - 1));
        }
      }
      int i, j, t;
      q.push(0);
      r[0] = 1;
      int sum = 0;
      while (!q.empty())
      {
        i = q.front();
        q.pop();
        j = 0;
        for (int ii = 1; ii < (1 << m); ii *= 2)
          if (i & ii) j++;
        if (j == n) sum += r[i];
        else
        {
          for (t = 1; t < (1 << m); t *= 2)
          {
            if (i & t) continue;
            if ((t & list[j]) == t)
            {
              if (r[t | i] == 0) q.push(t | i);
              r[t | i] += r[i];
            }
          }
        }
      }
      cout << sum;
      return 0;
    }

(Incidentally, the function f is still used here; it is just inlined into main rather than written as a separate function.)

## HDU 1565 Grid Selection (方格取数(1))

Input Output

Sample Input

    3
    75 15 21
    75 15 28
    34 70 5

Sample Output

    188

    #include<iostream>
    using namespace std;

    int n, num[20000], s = 1, maxx[20][20000], number[20];

    void build(int k)
    {
      if (k > 20) return;
      for (int i = 0; num[i] < 1 << (k - 2); i++) num[++s] = num[i] + (1 << (k - 1));
      build(k + 1);
    }

    int getsum(int j)
    {
      int sum = 0;
      for (int i = 0; i < n; i++) sum += ((j & (1 << i)) > 0) * number[i];
      return sum;
    }

    int main()
    {
      num[0] = 0, num[1] = 1;
      build(2);
      while (cin >> n)
      {
        if (n == 0)
        {
          cout << "0\n";
          continue;
        }
        for (int i = 0; i < n; i++)
          for (int j = 0; num[j] < (1 << n); j++) maxx[i][j] = 0;
        for (int i = 0; i < n; i++) cin >> number[i];
        for (int j = 0; num[j] < (1 << n); j++) maxx[0][j] = getsum(num[j]);
        for (int i = 1; i < n; i++)
        {
          for (int i = 0; i < n; i++) cin >> number[i];
          for (int j = 0; num[j] < (1 << n); j++)
            for (int k = 0; num[k] < (1 << n); k++)
              if ((num[j] & num[k]) == 0 && maxx[i][j] < maxx[i - 1][k] + getsum(num[j]))
                maxx[i][j] = maxx[i - 1][k] + getsum(num[j]);
        }
        int ans = 0;
        for (int j = 0; num[j] < (1 << n); j++)
          if (ans < maxx[n - 1][j]) ans = maxx[n - 1][j];
        cout << ans << endl;
      }
      return 0;
    }
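A pair of bit tricks recurs throughout these solutions and is worth isolating: `x == (x & -x)` tests that at most one bit of `x` is set (used above to enforce "one piece per row/column"), and scanning powers of two counts set bits (the `f[]` table and the `f()` function). A minimal sketch, with names of my own choosing:

```cpp
#include <cassert>

// True iff x has at most one set bit: x & -x isolates the lowest set bit,
// so the comparison only holds when that bit is all of x (or x == 0).
bool atMostOneBit(int x) { return x == (x & (-x)); }

// Count set bits by scanning powers of two, as the f[] table does above.
int popcount(int x) {
    int s = 0;
    for (int b = 1; b > 0 && b <= x; b <<= 1)
        if (x & b) s++;
    return s;
}
```

Both run in O(number of bits); a production solution would use `__builtin_popcount` instead, but the explicit loop mirrors the code above.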
https://archive.lib.msu.edu/crcmath/math/math/b/b248.htm
## Birthday Problem

Consider the probability $q(n)$ that no two people out of a group of $n$ will have matching birthdays out of 365 equally possible birthdays. Start with an arbitrary person's birthday, then note that the probability that the second person's birthday is different is $1-\frac{1}{365}$, that the third person's birthday is different from the first two is $1-\frac{2}{365}$, and so on, up through the $n$th person. Explicitly,

$$q(n)=\left(1-\frac{1}{365}\right)\left(1-\frac{2}{365}\right)\cdots\left(1-\frac{n-1}{365}\right)\qquad (1)$$

But this can be written in terms of Factorials as

$$q(n)=\frac{365!}{365^n\,(365-n)!}\qquad (2)$$

so the probability that two people out of a group of $n$ do have the same birthday is therefore

$$p(n)=1-q(n)=1-\frac{365!}{365^n\,(365-n)!}\qquad (3)$$

If 365-day years have been assumed, i.e., the existence of leap days is ignored, then the number of people needed for there to be at least a 50% chance that two share birthdays is the smallest $n$ such that $p(n)\ge 1/2$. This is given by $n=23$, since

$$p(23)\approx 0.507\qquad (4)$$

The number of people needed to obtain $p(n)\ge 1/2$ for years of $d=1$, 2, ... days are 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, ... (Sloane's A033810). The probability can be estimated as

$$p(n)\approx 1-e^{-n(n-1)/730}\qquad (5)$$

and also as (6), where the latter has error (7).

In general, let $P_k(n)$ denote the probability that a birthday is shared by exactly $k$ (and no more) people out of a group of $n$ people. Then the probability that a birthday is shared by $k$ or more people is given by

$$Q_k(n)=1-\sum_{i=1}^{k-1}P_i(n)\qquad (8)$$

$P_2(n)$ can be computed explicitly as (9), where the closed form involves a Binomial Coefficient, a Gamma Function, and an Ultraspherical Polynomial. This gives the explicit formula as (10). $P_3(n)$ cannot be computed in entirely closed form, but a partially reduced form is (11), where (12) and a Generalized Hypergeometric Function appear. In general, $P_k(n)$ can be computed using the Recurrence Relation (13) (Finch). However, the time to compute this recursive function grows exponentially with $k$ and so rapidly becomes unwieldy. The minimal number of people to give a 50% probability of having at least $k$ coincident birthdays is 1, 23, 88, 187, 313, 460, 623, 798, 985, 1181, 1385, 1596, 1813, ... (Sloane's A014088; Diaconis and Mosteller 1989).
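The exact no-match product above is easy to check numerically. A small sketch (plain Python, no dependencies) that computes $p(n)$ and finds the smallest group size giving at least a 50% chance of a shared birthday:

```python
from math import prod

def p_shared(n, days=365):
    """Probability that at least two of n people share a birthday."""
    if n > days:
        return 1.0  # pigeonhole: a match is certain
    # q(n) = prod_{i=1}^{n-1} (1 - i/days), the no-match probability
    q = prod(1 - i / days for i in range(1, n))
    return 1 - q

# Smallest n with p(n) >= 1/2
n = 1
while p_shared(n) < 0.5:
    n += 1
print(n, round(p_shared(n), 4))  # 23 0.5073
```

The jump across 1/2 happens between 22 people (p ≈ 0.476) and 23 (p ≈ 0.507), matching the value in equation (4).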
A good approximation to the number of people $n$ such that $p(n)$ is some given value can be given by solving the equation (14) for $n$ and taking the Ceiling Function of the result (Diaconis and Mosteller 1989). For $p=1/2$ and $k=1$, 2, 3, ..., this formula gives $n=1$, 23, 88, 187, 313, 459, 722, 797, 983, 1179, 1382, 1592, 1809, ..., which differ from the true values by from 0 to 4. A much simpler but also poorer approximation for $n$ such that $p=1/2$ for $k\ge 3$ is given by

$$n=47(k-1.5)^{3/2}\qquad (15)$$

(Diaconis and Mosteller 1989), which gives 86, 185, 307, 448, 606, 778, 965, 1164, 1376, 1599, 1832, ... for $k=3$, 4, ....

The "almost" birthday problem, which asks the number of people needed such that two have a birthday within a day of each other, was considered by Abramson and Moser (1970), who showed that 14 people suffice. An approximation for the minimum number of people needed to get a 50-50 chance that two have a match within $k$ days out of $d$ possible is given by (16) (Sevast'yanov 1972, Diaconis and Mosteller 1989).

References

Abramson, M. and Moser, W. O. J. "More Birthday Surprises." Amer. Math. Monthly 77, 856-858, 1970.

Ball, W. W. R. and Coxeter, H. S. M. Mathematical Recreations and Essays, 13th ed. New York: Dover, pp. 45-46, 1987.

Bloom, D. M. "A Birthday Problem." Amer. Math. Monthly 80, 1141-1142, 1973.

Bogomolny, A. "Coincidence." http://www.cut-the-knot.com/do_you_know/coincidence.html.

Clevenson, M. L. and Watkins, W. "Majorization and the Birthday Inequality." Math. Mag. 64, 183-188, 1991.

Diaconis, P. and Mosteller, F. "Methods of Studying Coincidences." J. Amer. Statist. Assoc. 84, 853-861, 1989.

Feller, W. An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd ed. New York: Wiley, pp. 31-32, 1968.

Finch, S. "Puzzle #28 [June 1997]: Coincident Birthdays." http://www.mathsoft.com/mathcad/library/puzzle/soln28/soln28.html.

Gehan, E. A. "Note on the Birthday Problem." Amer. Stat. 22, 28, Apr. 1968.

Heuer, G. A. "Estimation in a Certain Probability Problem." Amer. Math. Monthly 66, 704-706, 1959.

Hocking, R. L. and Schwertman, N. C. "An Extension of the Birthday Problem to Exactly k Matches." College Math. J. 17, 315-321, 1986.

Hunter, J. A. H. and Madachy, J. S. Mathematical Diversions. New York: Dover, pp. 102-103, 1975.

Klamkin, M. S. and Newman, D. J. "Extensions of the Birthday Surprise." J. Combin. Th. 3, 279-282, 1967.

Levin, B. "A Representation for Multinomial Cumulative Distribution Functions." Ann. Statistics 9, 1123-1126, 1981.

McKinney, E. H. "Generalized Birthday Problem." Amer. Math. Monthly 73, 385-387, 1966.

Mises, R. von. "Über Aufteilungs- und Besetzungs-Wahrscheinlichkeiten." Revue de la Faculté des Sciences de l'Université d'Istanbul, N. S. 4, 145-163, 1939. Reprinted in Selected Papers of Richard von Mises, Vol. 2 (Ed. P. Frank, S. Goldstein, M. Kac, W. Prager, G. Szegö, and G. Birkhoff). Providence, RI: Amer. Math. Soc., pp. 313-334, 1964.

Riesel, H. Prime Numbers and Computer Methods for Factorization, 2nd ed. Boston, MA: Birkhäuser, pp. 179-180, 1994.

Sayrafiezadeh, M. "The Birthday Problem Revisited." Math. Mag. 67, 220-223, 1994.

Sevast'yanov, B. A. "Poisson Limit Law for a Scheme of Sums of Dependent Random Variables." Th. Prob. Appl. 17, 695-699, 1972.

Sloane, N. J. A. Sequences A014088 and A033810 in "An On-Line Version of the Encyclopedia of Integer Sequences." http://www.research.att.com/~njas/sequences/eisonline.html.

Stewart, I. "What a Coincidence!" Sci. Amer. 278, 95-96, June 1998.

Tesler, L. "Not a Coincidence!" http://www.nomodes.com/coincidence.html.
https://claritychallenge.org/docs/icassp2023/data/cec2_scenario
Modelling the scenario

The scenario

We want entrants to improve speech in the presence of background noise; see Figure 1. On the left there is a person with a quantified hearing loss who is listening to speech from the target talker on the right. Both people are in a living room. There is interfering noise from a number of sources (a TV and washing machine in this case). The speech and noise are sensed by microphones on the hearing aids of the listener. The task is to take these microphone feeds and the listener's hearing characteristics, and produce signals for the hearing aid processor that will make the speech more intelligible. We will evaluate the success of the processing using a combination of objective metrics for speech intelligibility and quality.

Baseline system and software tools

Challenge entrants are supplied with an end-to-end baseline system. Figure 2 shows a simplified schematic, which comprises:

• A scene generator (blue box) creates speech in noise (SPIN).

• A listener is chosen (green ellipse), so the processing can be individualised for each listener with quantified hearing characteristics.

• The speech is enhanced (pink box). The entrants are tasked to improve this.

• The hearing aid we provide then amplifies the improved speech (yellow box).

• The amplified and improved speech that is emitted by your hearing aid is then passed to the prediction stage (red boxes). The HASPI and HASQI scores are the output objective metrics for intelligibility and quality, respectively (Kates and Arehart, 2021; Kates and Arehart, 2014).

• All software tools will be available as a single GitHub repository. The software is split into core components, e.g. HASPI and HASQI, and additional tools, e.g. a hearing loss model. All software is open-source and in Python.

Room geometry

• Cuboid rooms with dimensions length $L$ by width $W$ by height $H$.

• Length $L$ set using a uniform probability distribution random number generator with $3 < L \le 8$ (m).
• Height $H$ set using a Gaussian distribution random number generator with a mean of 2.7 m and standard deviation of 0.8 m.

• Area $L \times W$ set using a Gaussian distribution random number generator with mean 17.7 m$^2$ and standard deviation of 5.5 m$^2$.

Room materials

One of the walls of the room is randomly selected for the location of the door. The door can be at any position with the constraint of being at least 20 cm from the corner of the wall. A window is placed on one of the other three walls. The window could be at any position of the wall but at 1.9 m height and at 0.4 m from any corner. The curtains are simulated to the side of the window. For larger rooms, a second window and curtains are simulated following a similar methodology. A sofa is simulated at a random position as a layer on the wall and the floor. Finally, a rug is simulated at a random location on the floor.

The listener has position $\vec{r} = (x_r,y_r,z_r)$. This is positioned within the room using uniform probability distribution random number generators for the x and y coordinates (see Figure 2 for origin location). There are constraints to ensure that the receiver is not too close to the wall:

• $-W/2+1 \le x_r \le W/2-1$

• $1 \le y_r \le L-1$

• $z_r$ either 1.2 m (sitting) or 1.6 m (standing).

The listener is initially oriented away from the target and will turn to be roughly facing the target talker around the time when the target speech starts:

• Orientation of listener at start of the sample is ~25° from facing the target (standard deviation = 5°), limited to ±2 standard deviations.

• Start of rotation is between −0.635 s and 0.865 s (rectangular probability).

• The rotation lasts for 200 ms (standard deviation = 10 ms).

• Orientation after rotation is 0-10° (random with rectangular probability distribution).
The target talker

The target talker has position $\vec{t} = (x_t,y_t,z_t)$. The target talker is positioned within the room using uniform probability distribution random number generators for the coordinates. Constraints ensure the target is not too close to the wall or receiver. It is set to have the same height as the receiver.

• $-W/2+1 \le x_t \le W/2-1$

• $1 \le y_t \le L-1$

• $|r-t| > 1$

• $z_t=z_r$

A speech directivity pattern is used, which is directed at the listener. The target speech starts between 1.0 and 1.5 seconds into the mixed sound files (rectangular probability distribution).

The interferers

The interferers have position $\vec{i_{1,2,3}} = (x_i,y_i,z_i)$. Each interferer is modelled as an omnidirectional point source. They will be radiating speech, noise or music. They are placed within the room using uniform probability distribution random number generators for the coordinates. The following constraints ensure the interferer is not too close to the wall or listener. However, interferers are independently positioned with no constraint on their position relative to each other. They are set to be at the same height as the listener. Note, this means that the interferers can be at any angle relative to the listener.

• $-W/2+1 \le x_i \le W/2-1$

• $1 \le y_i \le L-1$

• $|r-i| \gt 1$

• $z_i = z_r$

The interferers are present over the whole mixed sound file.

Signal-to-noise ratio (SNR)

The SNRs of the mixtures are engineered to achieve a suitable range of speech intelligibility values. A desired signal-to-noise ratio, SNR$_D$ (dB), is chosen at random. This is generated with a uniform probability distribution between limits determined by pilot listening tests. The better ear SNR (BE_SNR) models the better ear effect in binaural listening. It is calculated for the reference channel (channel 1, which corresponds to the front microphone of the hearing aid). This value is used to scale all interferer channels. The procedure is described below.
For the reference channel,

• The segment of the summed interferers that overlaps with the target (without padding), $i'$, and the target (without padding), $t'$, are extracted.

• Speech-weighted SNRs are calculated for each ear, SNR$_L$ and SNR$_R$:

• Signals $i'$ and $t'$ are separately convolved with a speech-weighting filter, h (specified below).

• The rms is calculated for each convolved signal.

• SNR$_L$ and SNR$_R$ are calculated as the ratio of these rms values.

• The BE_SNR is selected as the maximum of the two SNRs: BE_SNR = max(SNR$_L$, SNR$_R$).

Then per channel,

• The summed interferer signal, i, is scaled by the BE_SNR:

• $i = i \times$ BE_SNR

• Finally, i is scaled as follows:

• $i = i \times 10^{-\mathrm{SNR}_D/20}$

The speech-weighting filter is an FIR designed using the host window method [2, 3]. The frequency response is shown in Figure 2. The specification is:

• Frequency (Hz) = [0, 150, 250, 350, 450, 4000, 4800, 5800, 7000, 8500, 9500, 22050]

• Magnitude of transfer function at each frequency = [0.0001, 0.0103, 0.0261, 0.0419, 0.0577, 0.0577, 0.046, 0.0343, 0.0226, 0.0110, 0.0001, 0.0001]

References

1. Schröder, D. and Vorländer, M., 2011, January. RAVEN: A real-time framework for the auralization of interactive virtual environments. In Proceedings of Forum Acusticum 2011 (pp. 1541-1546). Denmark: Aalborg.

2. Abed, A.H.M. and Cain, G.D., 1978. Low-pass digital filtering with the host windowing design technique. Radio and Electronic Engineer, 48(6), pp.293-300.

3. Abed, A.E. and Cain, G., 1984. The host windowing technique for FIR digital filter design. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(4), pp.683-694.
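As a concrete illustration, the two interferer-scaling steps described above can be sketched as follows. This is a simplified single-channel version of my own: it uses a plain rms in place of the speech-weighted SNR (the filter h and the left/right ears are omitted), so the function names are assumptions, not the baseline's API:

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def scale_interferer(target, interferer, snr_d_db):
    """Scale the summed interferer so the mixture hits the desired SNR.

    Step 1 normalizes by the (here unweighted) better-ear SNR, bringing
    the mixture to 0 dB SNR; step 2 applies the desired SNR_D in dB.
    """
    be_snr = rms(target) / rms(interferer)   # stand-in for max(SNR_L, SNR_R)
    i = interferer * be_snr                  # i = i * BE_SNR
    i = i * 10 ** (-snr_d_db / 20)           # i = i * 10^(-SNR_D / 20)
    return i

t = np.sin(np.linspace(0, 100, 44100))
noise = 3.0 * np.random.default_rng(1).normal(size=44100)
i_scaled = scale_interferer(t, noise, snr_d_db=6.0)
```

After scaling, the (unweighted) SNR of `t` against `i_scaled` is exactly 6 dB, regardless of the original levels, which is the point of the two-step normalization.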
https://akoratana.xyz/
# The Butterfly Effect

### for the idea in all of us

In this series, we are observing the semantic errors of a hippocampal simulation of neurointerfaces, and the sampling grid approach used to model its unsupervised feature maps. This section will get into the linear algebra and calculus behind the sampling grids and how they relate to a variate error in the final system.

### Parametrized Sampling Grid

A sampling grid, neuroanatomically a receptive network, will be parameterized to allow the mutation of various neurobiological parameters, such as dopamine, oxytocin, or adrenaline, and produce a synthetic, reactionary response in the neurointerface stack. A modulation of the initial sampling grid will be used to classify the transformations to their respective location in the comprehensive memory field. In order to perform the spatial transform of the normalized input feature map, a sampler must sample a set of parameters from ${ \tau }_{ \theta }({ G }_{ i })$, where $G$ represents a static translational grid of the applied transforms. The input feature map $U$, the raw equivalent of the receptive fields, along with its primed resultant of the ${ f }_{ loc }(x) = V$ function, will be accounted for as well in the translational grid. Each coordinate in $G$ is represented as ${ \left( { x }_{ j }^{ s },{ y }_{ j }^{ s } \right) }_{ j }$, giving a gradient dimensionality $j$ to the spatial grid input. A gradient dimensionality allows the sparse network to have an infinite number of spatial perspectives, as I will soon post about concentric bias simulation for mental illnesses. Each coordinate in ${ \tau }_{ \theta }({ G }_{ i })$ represents a spatial location in the input where the sampling kernel can concentrically be applied to get a projected and subsequent value in $V$.
This, for stimuli transforms, can be written as:

$${ V }_{ i }^{ c }(j)=\frac { \sum _{ n }^{ H }{ \sum _{ m }^{ W }{ { U }_{ nm }^{ c } }\, k\left( { x }_{ i }^{ s }-m;{ \Phi }_{ x } \right) k\left( { y }_{ i }^{ s }-n;{ \Phi }_{ y } \right) } }{ \left< { j }\middle|{ { H }^{ ' } }\middle|{ { W }^{ ' } } \right> } \quad \forall i\in \left[ 1\dots { H }^{ ' }{ W }^{ ' } \right],\ \forall c\in \left[ 1\dots C \right]$$

Here, $\Phi$ represents the parameterized potential of the sampling kernel of the spatial transformer, which will be used to forward neuroanatomical equivalences through recall gradients. The choice of sampling kernel can be varied as long as all levels of gradients can be simplified to functions of ${ \left( { x }_{ j }^{ s },{ y }_{ j }^{ s } \right) }_{ j }$. For the purposes of our experimentation, a bilinear sampling kernel will be used to co-parallelly process inputs, allowing for a larger parametrization of learning transforms.

To allow backpropagation of loss through this sampling mechanism, gradients must be defined with respect to $U$ and $G$. This observation was initially established as a means to allow sub-differentiable sampling in a similar bilinear sampling method:

$$\frac { \partial { V }_{ i }^{ c } }{ \partial { U }_{ nm }^{ c } } =\sum _{ n }^{ H }{ \sum _{ m }^{ W }{ \max _{ j }{ (0,1-\left| { x }_{ i }^{ s }-m \right| ) } \max _{ j }{ (0,1-\left| { y }_{ i }^{ s }-n \right| ) } } }$$

$$\frac { \partial { V }_{ i }^{ c } }{ \partial { x }_{ i }^{ s } } =\sum _{ n }^{ H }{ \sum _{ m }^{ W }{ { U }_{ nm }^{ c }\max _{ j }{ (0,1-\left| { y }_{ i }^{ s }-n \right| ) } \begin{cases} 0 & \text{if } \left| m-{ x }_{ i }^{ s } \right| \ge 1 \\ 1 & \text{if } m\ge { x }_{ i }^{ s } \\ -1 & \text{if } m<{ x }_{ i }^{ s } \end{cases} } }$$

Therefore, loss gradients can be attributed not only to the spatial transformers, but also to the input feature map, the sampling grid, and, finally, back to the parameters $\Phi$ and $\theta$.
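As a concrete illustration (a minimal NumPy sketch, not the author's modified sampler: the function names are mine, the grid here uses plain pixel coordinates, and the kernel is the unmodified bilinear kernel $k(d)=\max(0,1-|d|)$ from the equations above), the transform $\tau_\theta(G)$ and the sampling step can be written as:

```python
import numpy as np

def affine_grid(theta, H_out, W_out):
    """Generate sampling coordinates tau_theta(G): theta is a 2x3 affine
    matrix applied to a regular output grid given in pixel coordinates."""
    ys, xs = np.meshgrid(np.arange(H_out), np.arange(W_out), indexing="ij")
    G = np.stack([xs.ravel(), ys.ravel(), np.ones(H_out * W_out)])  # 3 x H'W'
    return theta @ G  # 2 x H'W': row 0 holds x^s, row 1 holds y^s

def bilinear_sample(U, x_s, y_s):
    """Evaluate V_i^c by summing U over the bilinear kernel max(0, 1 - |d|)."""
    H, W, C = U.shape
    V = np.zeros((len(x_s), C))
    for i, (x, y) in enumerate(zip(x_s, y_s)):
        for n in range(H):                      # sum over rows
            wy = max(0.0, 1.0 - abs(y - n))
            if wy == 0.0:
                continue
            for m in range(W):                  # sum over columns
                wx = max(0.0, 1.0 - abs(x - m))
                if wx > 0.0:
                    V[i] += wx * wy * U[n, m]
    return V
```

With the identity transform, sampling reproduces the input feature map exactly; fractional coordinates blend the two nearest samples, which is what makes the mechanism sub-differentiable.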
The bilinear sampler has been slightly modified in this case to allow concentric recall functions to be applied to its resultant fields. It is worth noting that, due to this feature, the spatial network's representation of the learned behavior is unique in the rate and method of preservation, much like how each person is unique in their ability to learn and process information. The observable synthetic activation complexes can also be modeled by monitoring these parameters as they elastically adapt to the stimulus. The knowledge of how to transform is encoded in localization networks, which fundamentally are non-static as well.

### Sparse Learning Recall Networks

Recall-based functions are classically indicative of a mirror neuron system in which each approximation of the neural representation remains equally utilized, functioning as a load-balancing mechanism. Commonly attributed to the preemptive execution of a planned task, the retention of memory in mirror neural systems tends to be modular in persistence and metaphysical in nature. Sparse neural systems interpret signals from cortical portions of the brain, allowing learned behaviors from multiple portions of the brain to execute simultaneously, as observed in Fink's studies on cerebral memory structures. It is theorized that the schematic representation of memory in these portions of the brain exists in memory fields only after a number of transformations have occurred in response to the incoming stimulus. Within these transformations lies the inherent differentiating factor in functional learning behavior: specifically, those transformations which cause the flawed memory functions in patients with such mental illnesses.

#### Semantic Learning Transformation

Now, similar to my fluid intelligence paper, we will need to semantically represent all types of ideas in a way that most directly allows future transformations and biases to be included.
For this, we will use a mutated version of the semantic lexical transformations. The transformation of raw stimulus, in this case a verbal and unstructured story-like input, into a recallable and normalized memory field will be simulated by a spatial transformer network. These mutations in raw input are the inherent reason for differentiated recall mechanisms between all humans. An altered version of the spatial transformer network, as developed by Jaderberg et al. in Google's DeepMind initiative, will be used to explicitly allow the spatial manipulation of data within the neural stack. Recall gradients mapped from our specialized network find their activation complexes similar to those of the prefrontal cortex in the brain, tasked with directing and encoding raw stimulus.

##### The Spatial Transformer Network (Unsupervised)

Originally designed for pixel transformations inside a neural network, the sampling grid, or the input feature map, will be parameterized to fit the translational needs of comprehension. The formulation of such a network will incorporate an elastic set of spatial transformers, each with a localisation network and a grid generator. Together, these will function as the receptive fields interfacing with the hypercolumns. These transformer networks allow us to parameterize any type of raw stimulus to be parsed and propagated through a more abstracted and generalized network capable of modulating fluid outputs.
The localisation network will take a mutated input feature map $U\in { \textbf{R}}^{ { H }_{ i }\times { W }_{ i }\times { C }_{ i } }$, with width $W$, height $H$, and channels $C$, and output ${\theta }_{i }$, where $i$ represents a differentiated gradient-dimensional resultant prioritized for storage in the stack. This net feature map allows the convolution of learned transformations to a neural stack in a compartmentalized system. A key characteristic of this modular transformation, as noted in Jaderberg's spatial networks, is that the number of parameters of the transformations in the input feature map, i.e. the size of $\theta$, can vary depending on the transformation type. This allows the sparse network to easily retain the elasticity needed to react to any type of stimulus, giving opportunity for compartmentalized learning space. The net dimensionality of the transformation ${ \tau }_{ \theta }$ on the feature map can be represented as $\theta ={ f }_{ loc }\left( x \right)$. In any case, ${ f }_{ loc }\left( \cdot \right)$ can take any form, especially that of a learning network. For example, for a simple affine transform, $\theta$ will be 6-dimensional, and ${ f }_{ loc }\left( \cdot \right)$ will take the form of a convolutional network or a fully connected network (Andrews, Integrating Representations). The form of ${ f }_{ loc }\left( \cdot \right)$ is unbounded and nonrestrictive in domain, allowing all forms of memory persistence to coexist in the spatial stack.

If you didn't get a chance to see my TED talk live, the video has just been produced and uploaded onto the TEDx channel on YouTube (below). The talk is about some of my work in artificial intelligence: specifically the results we've observed in our research on synthetic neurointerfaces. Our goal was to functionally and synthetically model the human neocortical columns in an artificial intelligence to give a more differentiable insight into the cognitive behaviors we, as humans, exhibit on a daily basis.
If you would like to know more, I have published the working paper here. Please let me know what you all think in the comments section below or on YouTube; I would love all the feedback I can get!

Earlier this week, I published my working paper on simulating synthetic neurointerfaces. It's been quite a journey getting here, and I apologize for the delay in posting about it. I'm going to submit the paper to the 2017 International Conference on Learning Representations (ICLR). What I have posted is a working paper, meaning that there will be more drafts and revisions to come before January. If you have any questions, please feel free to contact me. I would also like to give a disclaimer that my work comes purely from a mathematical and computer science background. This is a draft, and field experts helped me with the computational neuroscience portion of this project. In the end, my goal was to make the brain itself a formal system, and I have treated the brain as such throughout.

I'm very excited about this project not only because of its potential but because of what it's already shown us. We are now able to get some basic neural representations of simple cognitive functions and modulate the functional anatomy of a synthetic neocortical column with ease, a step that we couldn't achieve otherwise. In this study, we explore the potential of an unbounded, self-organizing spatial network to simulate the translational awareness lent by the brain's neocortical hypercolumns as a means to better understand the nature of awareness and memory. We modularly examine prefrontal cortical function, amygdalar responses, and cortical activation complexes to model a synthetic recall system capable of functioning as a compartmentalized and virtual equivalent of the human memory functions.
The produced neurointerfaces are able to consistently reproduce the reductive learning quotients of humans at various learning complexities and increase generalizing potentials across all learned behaviors. The cognitive system is validated by examining its persistence under the induction of various mental illnesses and mapping the synthetic changes to their equivalent neuroanatomical mutations. The resultant set of neurointerfaces is a form of artificial general intelligence that produces wave forms empirically similar to those of a patient's brain. The interfaces also allow us to pinpoint, geometrically and neuroanatomically, the source of any functional behavior.

The rest of the paper can be found here: https://www.researchgate.net/publication/308421342_Synthetic_Neurointerfaces_Simulating_Neural_Hypercolumns_in_Unbounded_Spatial_Networks

I'm getting ready to release my work on persisting synthetic neurointerfaces in unbounded spatial networks. I truly believe that computational tools such as this can be used to study the structure of intelligent computation in high-dimensional neural systems. What I tried to emulate in this project was a neuron-by-neuron representation of some basic cognitive functions by persisting a memory field in which self-organizing neocortical hypercolumns could be functionally represented. The project was inspired by biological neural dynamical systems and foundationally rooted in some of the brilliant work Google's DeepMind project has been doing.

Before I publish any results, I would like to give a special thanks to my mentor and long-time friend, Dr. Celia Rhodes Davis. I would also like to especially thank the Stanford Department of Computational Neuroscience (Center for Brain, Mind & Computation) for functioning as an advisory board throughout my independent research and as a sounding board for general guidance.
Below are a problem definition, goals, and a small sneak peek regarding the immediate potential and execution of my project:

### Introduction

The interface between the neuroanatomical activation of neocortical hypercolumns and their expressive function is a realm largely unobserved, due to the inability to efficiently and ethically study causal relationships between previously exclusively observed phenomena. The field of general neuroscience explores the anatomical significance of cortical portions of the brain, extending anatomy as a means to explain the persistence of various nervous and physically expressive systems. Psychological approaches focus purely on *expressive* behaviors as a means to extend, with greater fidelity, the existence and constancy of the brain-mind interface. The interface between the anatomical realms of the mind and their expressive behaviors is a field widely unexplored, with surgeries such as the lobotomy and other controversial, experimental, and life-threatening procedures at the forefront of such study. However, understanding these neurological interfaces has the potential to function as a window into the neural circuitry of mental illnesses, opening the door for cures and an ultimately more complete understanding of our brain.

### Goals

We propose a method to simulate unbounded memory fields upon which recall functions can be parameterized. This model will be able to simulate cortical functions of the amygdala in its reaction to various, unfiltered stimuli. An observer network will be created in parallel to analyze geometric anomalies in the neuroanatomical interface during memory recall functions, and to extend equivalences between recall function parameters and memory recall gradients. This enables it to extend hypotheses to neuroanatomical functions.
In order to understand the evolutionary imperative of a fluid intelligent cognitive system, it is necessary to examine the function of artificial neural networks (ANNs) as they stand today. Broadly defined, artificial neural networks are models of sets of neurons used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. This approach has thus far resulted in a standard design of ANNs persisted in a two-dimensional model, and this fundamental structure is used for all variants of the neural network family, including deep learning and convolutional learning models. The approach is fundamentally restrictive in the sense that all learned attributes lie on the same plane, meaning all regressive learned attributes, when compared mathematically, persist as functions of a singular dimensionality. The function of this system is therefore limited to a single type of learned regression, with strong biases against learning new regressions.

The capacity for fluid intelligent intuition in humans allows us to compartmentalize these discrete learned attributes and fluidly find relations between them. This capacity is especially critical in deriving unsupervised intelligence from polymorphic unstructured data. Simply put, if we, as humans, learned with the same characteristics as an existing ANN model, it would result in an intrinsically stovepiped way of learning. However, humans have a much more sophisticated fluid intelligent capacity. This project is an attempt at creating a fundamentally new way of designing cognitive systems: one that attempts to mimic and enhance human learning patterns.

### Idea Disparity

The process of node generation from unstructured data requires a foundation to find statistical distributions of words of a set A consisting of each of the aggregated documents.
The dynamic set A will be a finite and elastic set of documents that will serve the purpose of representing the first layer of temporal memory without any sub-categorizations. Using a hybrid version of the collapsed Gibbs sampler, we are able to integrate out a set of variables into which we can assign distributions of words. Hierarchical Bayesian models yield multimodal distributions of words. This bag-of-words approach allows us to view the words of each subset distribution as statistical members of a larger set rather than lexical members of a semantic set. The equivalence is set up as x~y between a permutation of possible node types.

We begin by tokenizing the documents within A as inputs to our Bayesian Gibbs sampler. As an initial dimension to work off of, the derived distributions function similarly to those generated by Latent Dirichlet Allocation (LDA) methods. We use an LDA model to find topic distributions in social media data. In essence, this approach is a hybrid of the LDA classifier method. Instead of topic distributions, we are able to find probabilities of each word given each node type. The sampler finds these conditional probabilities using the bag-of-words approach, in which each word is viewed as a statistical element within a vocabulary rather than a coherent part of a larger context.

In the figure above, we demonstrate the hybrid Latent Dirichlet Allocation classifier as it finds the probability of a statistical element within a subset, Z, of the population set of documents, A. Each significant subset, Z, of our document collection, A, now becomes a contender for becoming a node within our graph.

### Unsupervised Multinetwork

The topic distributions of the current snapshot of nodes (of intermixed types) are then forwarded to an unsupervised neural network with a range of 10-20 hidden layers.
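To make the collapsed Gibbs sampling step concrete, here is a toy sketch of plain LDA in NumPy (my own illustration, not the hybrid sampler described above; hyperparameters and names are assumptions). Each token's topic is resampled from the collapsed conditional, which is proportional to the document-topic count times the topic-word count:

```python
import numpy as np

def lda_gibbs(docs, n_topics, vocab_size, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA.
    docs: list of documents, each a list of word ids.
    Returns (topic-word counts, doc-topic counts)."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # doc-topic counts
    nkw = np.zeros((n_topics, vocab_size))  # topic-word counts
    nk = np.zeros(n_topics)                 # tokens per topic
    z = [rng.integers(n_topics, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):          # count initial random assignments
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                 # remove this token's assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # collapsed conditional p(z = k | everything else)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return nkw, ndk
```

The resulting `nkw` rows, once normalized, are the per-topic word distributions that the post treats as candidate node types.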
A flexible preconditioned version of the conjugate gradient back-propagation method is used, where the next optimal location vector $\alpha$ is taken relative to its position in the gradient of the linearized distribution sets, and where the trained value is a set of vectors whose magnitudes determine the distance of each distribution from the others in the subset. The hybrid gradient descent algorithm helps minimize the cross-entropy values during classification. A separate, adequate network is trained and maintained for each subset of the original document set.

The distributions with the greatest distance are then passed to another clustering algorithm based around minimizing the Davies–Bouldin index between cluster components while still maintaining the statistical significance between cluster distributions derived in the LDA phase:

$$DB=\frac{1}{n}\sum_{x=1}^{n}\max_{y\neq x}\frac{\sigma_{x}+\sigma_{y}}{d\left(c_{x},c_{y}\right)}$$

where $n$ is the number of clusters, $c_{x}$ is the centroid of cluster $x$, $\sigma_{x}$ is the average distance of all elements in cluster $x$ to centroid $c_{x}$, and $d\left(c_{x},c_{y}\right)$ is the distance between centroids.

Fluid intelligence: the capacity to think logically and solve problems in novel situations, independent of acquired knowledge.

Psychology has found the basis of fluid intelligence in the juxtaposition of layered memory and application as a means to essentially "connect two fluid ideas with an abstractly analogous property". Such a mathematical design would therefore have to be able to derive temporal relationships with weighted bonds between two coherently disparate concepts through the means of similar properties. These properties within node types will have to be self-defined and self-propagated within idea types.
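The Davies–Bouldin index described above (lower is better: tight clusters far apart) can be computed directly from points and labels; a small NumPy sketch, with Euclidean distances assumed:

```python
import numpy as np

def davies_bouldin(points, labels):
    """Davies-Bouldin index: mean over clusters of the worst-case ratio of
    summed within-cluster scatter to between-centroid distance."""
    clusters = sorted(set(labels))
    centroids = [points[labels == k].mean(axis=0) for k in clusters]
    # sigma_x: average distance of cluster members to their centroid
    scatter = [np.linalg.norm(points[labels == k] - c, axis=1).mean()
               for k, c in zip(clusters, centroids)]
    n = len(clusters)
    total = 0.0
    for i in range(n):
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(n) if j != i]
        total += max(ratios)
    return total / n
```

Minimizing this quantity while holding the LDA-derived significance constraints is the clustering objective the post refers to.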
A considerable amount of work venturing into this field has culminated in the prevalence of statistical methods for extracting probabilistic models dependent on large amounts of unstructured data. These Bayesian data analytic techniques often result in an understanding that is superficial in the context of a true relational understanding. Furthermore, this "bag-of-words" approach to large amounts of unstructured data (quantifiable by correct relationships derived between the idea nodes) often yields a single-dimensional understanding of the topics at hand. Traditionally, when these topics are transformed, it is difficult to extract hierarchy and queryable relations using matrix transformations from a derived data set.

The project that I will be describing in the subsequent posts is an effort to change the approach from which dynamic fluid intelligence is derived, finding a backbone in streaming big data. Ideally, this model would be able to take a layered, multi-dimensional approach to the autonomous identification of properties of dynamically changing ideas from portions of said data set. It would also be able to find types of relationships, ultimately deriving a set of previously undefined relational schemas through unsupervised machine learning techniques that would allow for a queryable graph with properties and nodes initially undefined.

## Hive

The Apache Hive project gives a Hadoop developer a view of the data in the Hadoop Distributed File System; in essence, it lets you work with files in Hadoop as if they were database tables. Using a SQL-like language, Hive lets you create summarizations of your data, perform ad-hoc queries, and analyze large datasets in the Hadoop cluster. The overall approach with Hive is to project a table structure onto the dataset and then manipulate it with HiveQL. The table structure effectively projects a structured schema onto unstructured data.
If we are using data in HDFS (which we are), our operations can be scaled across all the data nodes and we can manipulate huge datasets.

## HCatalog

The function of Apache HCatalog is to hold location information and metadata about the data in a Hadoop single-node system or cluster. This allows scripts and MapReduce jobs to be decoupled from data location and metadata. Basically, this project is what catalogs and sets pointers to data bits in different nodes. In our "Hello World" analogy, HCatalog would tell us where, and on which node, "Hello" is, and where, and on which node, "World" is. Since HCatalog can be used with other Hadoop technologies like Pig and Hive, it can also help those tools catalog and index their data. For our purposes, we can now reference data by name, and we can share or inherit the location and metadata between nodes and Hadoop sub-units.

## Pig

Apache Pig is a high-level scripting language that expresses data analysis and infrastructure processes. When a Pig script is executed, it is translated into a series of MapReduce jobs which are then sent to the Hadoop infrastructure (single node or cluster) through the MapReduce program. Pig's user-defined functions can be written in Java. This is the final layer of the cake on top of MapReduce, giving the developer more control and a higher level of precision in creating the MapReduce jobs which later translate into data processing on a Hadoop cluster.

## Ambari

Apache Ambari is an operational framework for provisioning and managing Hadoop clusters of multiple nodes or single nodes. Ambari is an effort to clean up the messy scripts and views of Hadoop and give a clean look for management and incubating.

## YARN

YARN is basically the new version of MapReduce in Hadoop 2.0. It is the Hadoop operating system that is overlaid on top of the system's base operating system (CentOS). YARN provides a global ResourceManager and a per-application ApplicationMaster in its newest iteration.
The new idea behind this newer version of MapReduce is to split the functions of the JobTracker into two separate parts. This results in tighter control of the system and ultimately in more efficiency and ease of use. The illustration shows that an application run natively in Hadoop can utilize YARN as a cluster resource management tool, along with its MapReduce 2.0 features, as a bridge to the HDFS.

## Oozie

Apache Oozie is effectively just a calendar for running Hadoop processes. For Hadoop, it is a system to manage a workflow through the Oozie Coordinator, triggering workflow jobs from MapReduce or YARN. Oozie is also a scalable system, along with Hadoop and its other sub-products. Its workflow scheduler system runs on top of the cluster's resource manager (YARN) and takes commands from user programs.

## Hadoop Distributed File System (HDFS)

The Hadoop Distributed File System is the foundation for any Hadoop cluster and/or single-node implementation. The HDFS is the underlying difference between a normal MySQL database and a Hadoop implementation, and this small change in approaching the data makes all the difference. A standard MySQL server serves the purpose for any small endeavor and can support a sizable infrastructure with no problems. The method for processing data usually follows a linear thought pattern. Take the example of the phrase "Hello world". In a very rough representation, a MySQL server would save the entire phrase on one hard disk. Then, when the data was needed, the CPU would send a request for the data, the hard disk would spin, and the data would be read and processed. This traditional approach to managing a database hits a few key problems with no rational and affordable solution. The largest problem faced in this system is a mechanical one: at a certain point of complexity and size, a single hard disk can no longer physically spin fast enough to keep up with the seek capabilities of a single CPU.
This problem leads to two solutions: make a better hard disk, or rethink the way data is processed. Hadoop offers a radical new way of dealing with the problem. A Hadoop cluster implements a parallel computing cluster using inexpensive, standard pieces of hardware, distributed among many servers running in parallel. The philosophy behind Hadoop is basically to bring the computing to the data. To implement this successfully, the system distributes pieces of the same block of data among multiple servers, so each data node holds part of the overall data and can process the little data that it holds. The benefit of this scheme becomes visible when the system is scaled up to an infrastructure of Google's size: the system no longer has the physical barrier of the spinning disks, but rather just a problem of storage capacity (which is a very solvable and good problem to have).
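The "bring the computing to the data" idea can be sketched in miniature: each simulated node maps over only its local shard, and the per-node results are merged in a reduce step. This is a plain Python illustration of the MapReduce word-count pattern, not Hadoop's actual API:

```python
from collections import Counter
from itertools import chain

def map_phase(shard):
    """Each node emits (word, 1) pairs for its local shard of the data."""
    return [(word, 1) for word in shard.split()]

def reduce_phase(pairs):
    """Aggregate the per-node pairs into a global word count."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# The "Hello world" data split across two simulated data nodes
shards = ["Hello world Hello", "world world"]
mapped = chain.from_iterable(map_phase(s) for s in shards)
result = reduce_phase(mapped)
# result == {"Hello": 2, "world": 3}
```

In a real cluster, the map calls run in parallel on the nodes that already hold the shards, and only the small intermediate pairs travel over the network.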
# The maximum number of equivalence relations on the set A={1,2,3} are

\begin{array}{ll}(A)\;1\qquad & (B)\; 2\\(C)\; 3\qquad & (D)\; 5\end{array}

Toolbox:
• A relation R in a set A is called $\mathbf{reflexive}$ if $(a,a) \in R$ for every $a\in A$
• A relation R in a set A is called $\mathbf{symmetric}$ if $(a_1,a_2) \in R\;\Rightarrow\; (a_2,a_1)\in R$ for $a_1,a_2 \in A$
• A relation R in a set A is called $\mathbf{transitive}$ if $(a_1,a_2) \in R$ and $(a_2,a_3) \in R \; \Rightarrow \;(a_1,a_3)\in R$ for all $a_1,a_2,a_3 \in A$

Step 1: Every equivalence relation on A corresponds to a partition of A into disjoint classes. The smallest is the identity relation $R_1 = \{(1,1),(2,2),(3,3)\}$, corresponding to the partition $\{1\},\{2\},\{3\}$; it is reflexive, symmetric and transitive.

Step 2: Merging exactly two elements into one class gives three more equivalence relations, e.g. $R_2 = \{(1,1),(2,2),(3,3),(1,2),(2,1)\}$ for the partition $\{1,2\},\{3\}$. It is reflexive as $(a,a)\in R$ for all $a \in \{1,2,3\}$; it is symmetric as $(a,b)\in R \Rightarrow (b,a)\in R$; and it is transitive as $(1,2)\in R,\ (2,1)\in R \Rightarrow (1,1)\in R$. Similarly, $R_3$ and $R_4$ merge $\{1,3\}$ and $\{2,3\}$ respectively.

Step 3: The relation $R_5 = \{(1,1),(2,2),(3,3),(1,2),(1,3),(2,1),(2,3),(3,1),(3,2)\} = A\times A$, corresponding to the single class $\{1,2,3\}$, is also reflexive, symmetric and transitive.

Thus the maximum number of equivalence relations on the set $A=\{1,2,3\}$ is 5, so the answer is (D).

answered Mar 5, 2013, edited Mar 27, 2013 by meena.p
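The count can be confirmed by brute force: enumerate all $2^9$ subsets of $A\times A$ and keep those that satisfy the three Toolbox properties. A short Python check (the count equals the Bell number $B(3)=5$, the number of partitions of a 3-element set):

```python
from itertools import product

def is_equivalence(rel, A):
    """Check reflexivity, symmetry and transitivity of rel on A."""
    refl = all((a, a) in rel for a in A)
    symm = all((b, a) in rel for (a, b) in rel)
    trans = all((a, d) in rel
                for (a, b) in rel for (c, d) in rel if b == c)
    return refl and symm and trans

A = {1, 2, 3}
pairs = [(a, b) for a in A for b in A]
# Try every subset of A x A and count the equivalence relations
count = sum(
    is_equivalence({p for p, keep in zip(pairs, bits) if keep}, A)
    for bits in product([0, 1], repeat=len(pairs))
)
# count == 5
```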
# [Solved]: Lambda Calculus Type Inference

Problem Detail: I'm currently trying to learn how to infer most general types in the lambda calculus, and due to the lack of information on the subject I could find on Google, I'm forced to attempt what I think is logical and ask here whether it's correct. It's most likely not, so please correct me! Here are a couple of exercises I've got; the idea is to reduce the redex and infer the most general type:

1. $(λf.λg.λx.g\space x(f\space x)) (λy.y)$
2. $λz.λx.((λy.λw.y w)x z)$

After I reduce them:

1. $λg.λx.(g x) x$
2. $λz.λx.x z$

And my attempt to infer the most general type:

1. Since $g \space x$ is applied to $x$, I assumed $g$ must be $g :: a\rightarrow (a \rightarrow b)$ while $x :: a$. Then $(λg.λx.(g\space x)\space x)$ must be of type $(a\rightarrow (a\rightarrow b))\rightarrow (a\rightarrow b)$.
2. Since $x$ is applied to $z$, let's say $z :: a$ and $x :: a\rightarrow b$; therefore the function is $a\rightarrow (a\rightarrow b)\rightarrow b$.

#### Answered By : Andrej Bauer

Let me assume that you are asking about basic type inference for $\lambda$-calculus with parametric polymorphism a la Hindley-Milner (it's not entirely clear from your question). I would recommend the Types and Programming Languages textbook by Benjamin Pierce as a general reference for this sort of thing. In there you can look up parametric polymorphism and Hindley-Milner type inference. These are the buzzwords you should be Googling. And when you do, it should be easy to find many resources.

P.S. You should not reduce before type inference. You should just do type inference on the original terms.

    GHCi, version 7.6.3: http://www.haskell.org/ghc/ :? for help
    Prelude> :t \g -> \x -> (g x) x
    \g -> \x -> (g x) x :: (t1 -> t1 -> t) -> t1 -> t
    Prelude> :t \z -> \x -> x z
    \z -> \x -> x z :: t1 -> (t1 -> t) -> t

Yup, you were correct.
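For readers without GHCi at hand, the inference the answer relies on can be sketched as a tiny unification-based type inferencer (a simplification of Hindley-Milner for pure lambda terms: the occurs check and let-polymorphism are omitted for brevity, and the term encoding is my own):

```python
import itertools

_fresh = itertools.count()

def fresh():
    return ("var", next(_fresh))

def prune(t, subst):
    # Follow substitution chains for type variables
    while t[0] == "var" and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    a, b = prune(a, subst), prune(b, subst)
    if a == b:
        return
    if a[0] == "var":
        subst[a] = b
    elif b[0] == "var":
        subst[b] = a
    else:  # both arrows: unify domains, then codomains
        unify(a[1], b[1], subst)
        unify(a[2], b[2], subst)

def infer(term, env, subst):
    tag = term[0]
    if tag == "v":                           # variable lookup
        return env[term[1]]
    if tag == "lam":                         # \x. body gets a fresh argument type
        tv = fresh()
        body = infer(term[2], {**env, term[1]: tv}, subst)
        return ("arrow", tv, body)
    f = infer(term[1], env, subst)           # application: f must be arg -> result
    arg = infer(term[2], env, subst)
    res = fresh()
    unify(f, ("arrow", arg, res), subst)
    return res

def show(t, subst, names):
    t = prune(t, subst)
    if t[0] == "var":
        return names.setdefault(t, "abcdefgh"[len(names)])
    left = show(t[1], subst, names)
    if prune(t[1], subst)[0] == "arrow":     # parenthesize arrows on the left
        left = "(" + left + ")"
    return left + " -> " + show(t[2], subst, names)

def typeof(term):
    subst = {}
    return show(infer(term, {}, subst), subst, {})
```

Running it on the two reduced terms reproduces the GHCi output, modulo variable names: `λg.λx.(g x) x` gets `(a -> a -> b) -> a -> b` and `λz.λx.x z` gets `a -> (a -> b) -> b`.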
## Inventor General Discussion

Valued Contributor Posts: 71 Registered: 06-21-2013 Message 1 of 5 (550 Views)

# 2D equation curve for a parabolic spiral

06-22-2013 05:19 PM Can anyone please advise me what the 2D or 3D equation curve for a parabolic spiral, otherwise known as a Fermat's spiral, would be? I have found this info but do not know how to apply it: the Fermat equation $x^n + y^n = z^n$; therefore in terms of the affine plane its equation is $x^n + y^n = 1$. The Fermat curve is non-singular and has genus $(n-1)(n-2)/2$. I have also found this: $r^{2}=a^{2}\theta$

*Expert Elite* Posts: 28,046 Registered: 04-20-2006 Message 2 of 5 (519 Views)

# Re: 2D equation curve for a parabolic spiral

06-23-2013 07:00 AM in reply to: CJ.10E-10mm Are you familiar with using Equation Curves in general? A while back I posted some examples that my students submitted. I think an Autodesk employee has also posted some examples. At one time there were several that were included in the Samples install; there should be a thread here with those attached. Do a search and come back with more questions if you can't find the examples to help you work it out.

Autodesk Inventor 2014 Certified Professional Certified SolidWorks Professional Inventor Professional 2015-SP1 64-bit http://www.autodesk.com/edcommunity http://home.pct.edu/~jmather/content/DSG322/inventor_surface_tutorials.htm

Valued Contributor Posts: 71 Registered: 06-21-2013 Message 3 of 5 (505 Views)

# Re: 2D equation curve for a parabolic spiral

06-23-2013 09:17 AM in reply to: CJ.10E-10mm Thank you, I will search later today. Equation Curves do appear to a novice like some form of 'black art.' So I am wondering if there is a 'user friendly' app that allows one to configure the equation for any given shape in an Excel spreadsheet? Thank you once again.
Distinguished Mentor Posts: 909 Registered: 04-11-2005 Message 4 of 5 (496 Views)

# Re: 2D equation curve for a parabolic spiral

06-23-2013 11:09 AM in reply to: CJ.10E-10mm If you can create x,y,z coordinates in a spreadsheet, IV will import them and draw a spline through them.

Valued Contributor Posts: 71 Registered: 06-21-2013 Message 5 of 5 (478 Views)

# Re: 2D equation curve for a parabolic spiral

06-23-2013 03:11 PM in reply to: rdyson Thank you, rdyson. Is there any chance you could assist me with my question on: Variable pitch helical coil with smooth blend, please? Thank you once again.
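rdyson's spreadsheet suggestion can be automated: from $r^2 = a^2\theta$ we get $r = a\sqrt{\theta}$ (positive branch), so $x = a\sqrt{\theta}\cos\theta$ and $y = a\sqrt{\theta}\sin\theta$. A short sketch; the scale factor, sweep, and point count here are arbitrary example values, not anything Inventor requires:

```python
# Generate x,y,z points for a Fermat spiral (r^2 = a^2 * theta) and write
# them to a CSV file that can be imported and fitted with a spline.
import csv
import math

a = 5.0                       # scale factor in r^2 = a^2 * theta
sweep = 3 * 2 * math.pi       # sweep three full turns
n = 200                       # number of points

points = []
for i in range(n):
    theta = sweep * i / (n - 1)
    r = a * math.sqrt(theta)  # positive branch of r = ±a*sqrt(theta)
    points.append((r * math.cos(theta), r * math.sin(theta), 0.0))

with open("fermat_spiral.csv", "w", newline="") as f:
    csv.writer(f).writerows(points)
```

Alternatively, if Inventor's 2D equation curve accepts a polar form in your version, entering r as a function of t (r = a*sqrt(t)) directly may avoid the point-import step entirely.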
http://openstudy.com/updates/4f20a4f0e4b076dbc348e100
## anonymous 4 years ago How to put y=4cos(x) in X= form?

1. anonymous $x=\cos^{-1} (y/4)$
2. anonymous It is only allowed in said domain in order to make it the inverse function (it needs to pass the horizontal line test)
3. anonymous Inverse: y = 4cos(x); y/4 = 4cos(x)/4; y/4 = cos(x); take the inverse of each side: arccos(y/4) = arccos(cos(x)), so arccos(y/4) = x. Remember that arccos is the inverse function of cos(x) and is limited to the domain [0, pi]
4. anonymous ignore the mistake i made in the variables
5. anonymous Here is the problem I'm working at:
6. anonymous Find the volume of the solid generated by revolving the described region about the given axis: The region in the first quadrant bounded above by the line y=4 and by the curve y=4sin(x) for the interval 0≤x≤π/2, about the line y=4
7. anonymous I think I'm using the cross-section method, but am not too sure on the radius.
8. anonymous Which I think would be (4-4sin(x))
9. anonymous Wait do you want 4sin(x) to equal 4?
10. anonymous I'm honestly not too sure on where to start with this problem.
11. anonymous I'm confused by your question but if you want it to equal four you can use the unit circle and think where is cos(x) = 1, the answer being pi
12. anonymous Are you familiar with solids of rotation?
13. anonymous tbh no I know trig functions though but meh you should ask in chat for help
14. anonymous haha alright, thanks for the help
15. anonymous
16. anonymous
17. anonymous @Cinar, do you know which method I would use here?
18. anonymous little bit (: I am trying to find it
19. anonymous Sweet, thanks
20. anonymous what is the rotation axis?
21. anonymous it is y=4
22. anonymous so I'm thinking the radius is (4-4sin(x))
23. anonymous Any luck?
24. anonymous nope
25. anonymous $V=\pi \int\limits_{0}^{\pi/2}(4-4\sin x)^2dx$
26. anonymous
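Note the region only spans $0\le x\le\pi/2$, so the disk-method volume is $V=\pi\int_0^{\pi/2}(4-4\sin x)^2\,dx$, which evaluates in closed form to $12\pi^2-32\pi\approx 17.90$. A quick numeric check, using only the standard library:

```python
# Numeric check of V = pi * integral from 0 to pi/2 of (4 - 4 sin x)^2 dx
# against the closed form 12*pi^2 - 32*pi, via composite Simpson's rule.
import math

def simpson(f, lo, hi, n=1000):       # n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

f = lambda x: (4 - 4 * math.sin(x)) ** 2
V = math.pi * simpson(f, 0.0, math.pi / 2)
closed_form = 12 * math.pi ** 2 - 32 * math.pi
print(V, closed_form)   # both ~ 17.904
```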
https://homework.cpm.org/category/CCI_CT/textbook/int1/chapter/5/lesson/5.2.1/problem/5-54
5-54. Interpret each of the following effects on the function, $f(x)$. For example, $f(x + 2)$ is “the output for the input that is $2$ greater than $x$”. 1. $f(c-4)$ 1. $f(0.5b)$ What operation is being done to the input variable $b$? 1. $f(d) + 12$ $12$ more than the output when the input is $d$.
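The hints above can be made concrete with any sample function; $f(x)=x^2$ below is an arbitrary choice, used only to make the input/output effects visible:

```python
# Input transformations change what goes INTO f; output transformations
# change what comes OUT.  f(x) = x**2 is an arbitrary example function.
f = lambda x: x ** 2

x = 6
print(f(x + 2))    # output for the input 2 greater than x: f(8) = 64
print(f(x - 4))    # output for the input 4 less than x:    f(2) = 4
print(f(0.5 * x))  # output for half the input:             f(3.0) = 9.0
print(f(x) + 12)   # 12 more than the output for x:         36 + 12 = 48
```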
https://socratic.org/questions/what-is-the-freezing-point-of-a-0-05500-m-aqueous-solution-of-nano-3-if-the-mola
# What is the freezing point of a 0.05500 m aqueous solution of NaNO_3, if the molal freezing-point-depression constant of water is 1.86°C/m?

##### 1 Answer

Dec 28, 2016

We know the depression of freezing point $\Delta T_f = K_f \times b \times i$, where

$K_f \to \text{cryoscopic constant} = 1.86\,^\circ\text{C/m}$

$b \to \text{molal concentration} = 0.055\ m$

$i \to \text{van 't Hoff factor} = 2$ for $NaNO_3$*

*As $NaNO_3$ dissociates as follows, producing 2 ions per formula unit:

$NaNO_3 \rightleftharpoons Na^+ + NO_3^-$

So $\Delta T_f = 1.86 \times 0.055 \times 2 \approx 0.2$

So the freezing point of the given aqueous solution will be $-0.2^{\circ} C$.
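The arithmetic checks out directly; a quick verification using the values from the problem statement:

```python
# Freezing-point depression: delta_Tf = Kf * b * i.
# Kf for water, the molality, and i = 2 for NaNO3 come from the problem.
Kf = 1.86          # °C/m, cryoscopic constant of water
b = 0.05500        # mol solute per kg water
i = 2              # NaNO3 -> Na+ + NO3-  (2 ions per formula unit)

delta_Tf = Kf * b * i
freezing_point = 0.0 - delta_Tf    # pure water freezes at 0 °C
print(round(delta_Tf, 4), round(freezing_point, 2))   # 0.2046 -0.2
```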
https://www.zbmath.org/authors/?q=ai%3Alesniewski.andrzej
# zbMATH — the first resource for mathematics

## Lesniewski, Andrzej

Author ID: lesniewski.andrzej
Published as: Lesniewski, Andrezej; Lesniewski, Andrzej; Leśniewski, Andrezj; Leśniewski, Andrzej
External Links: MGP
Documents Indexed: 55 Publications since 1983

#### Co-Authors

3 single-authored; 23 Klimek, Slawomir; 18 Jaffe, Arthur Michael; 6 Borthwick, David; 6 Rinaldi, Maurizio; 5 Osterwalder, Konrad; 4 Rzezuchowski, Tadeusz; 4 Weitsman, Jonathan; 3 Wieczerkowski, Christian; 2 King, Christopher K.; 2 Rubin, Ron; 1 Ernst, Kaspar; 1 Feng, Ping; 1 Grzybowski, Jerzy; 1 Kondracki, Witold III; 1 Lang, Guido; 1 Lewenstein, Maciej; 1 Maitra, Neepa; 1 Salwen, Nathan; 1 Upmeier, Harald; 1 Wisniowski, Marek

#### Serials

17 Communications in Mathematical Physics; 8 Journal of Mathematical Physics; 7 Journal of Functional Analysis; 5 Letters in Mathematical Physics; 4 Annals of Physics; 3 Demonstratio Mathematica; 3 $K$-Theory; 1 Reviews in Mathematical Physics; 1 Canadian Mathematical Bulletin; 1 Commentarii Mathematici Helvetici; 1 Mathematica Bohemica; 1 Journal of Convex Analysis; 1 Journal of Mathematics and Applications

#### Fields

33 Quantum Theory (81-XX); 25 Functional analysis (46-XX); 19 Global analysis, analysis on manifolds (58-XX); 15 Operator theory (47-XX); 6 Dynamical systems and ergodic theory (37-XX); 4 Ordinary differential equations (34-XX); 4 Differential geometry (53-XX); 3 Nonassociative rings and algebras (17-XX); 3 Partial differential equations (35-XX); 3 Calculus of variations and optimal control; optimization (49-XX); 2 Algebraic geometry (14-XX); 2 Functions of a complex variable (30-XX); 2 Several complex variables and analytic spaces (32-XX); 2 Convex and discrete geometry (52-XX); 2 General topology (54-XX); 2 Manifolds and cell complexes (57-XX); 2 Statistical mechanics, structure of matter (82-XX); 1 Linear and multilinear algebra; matrix theory (15-XX); 1 Category theory, homological algebra (18-XX); 1 $K$-theory (19-XX); 1 Topological groups, Lie groups (22-XX); 1 Algebraic topology (55-XX); 1 Mechanics of particles and systems (70-XX); 1 Information and communication, circuits (94-XX)
https://www.intechopen.com/books/recent-advances-in-autism-spectrum-disorders-volume-i/genetic-evaluation-of-individuals-with-autism-spectrum-disorders
Open access peer-reviewed chapter

# Genetic Evaluation of Individuals with Autism Spectrum Disorders

By Eric C. Larsen, Catherine Croft Swanwick and Sharmila Banerjee-Basu

Submitted: May 22nd 2012. Reviewed: September 28th 2012. Published: March 6th 2013

DOI: 10.5772/53900

## 1. Introduction

While the genetic component of Autism Spectrum Disorders (ASD) has been clearly established from various lines of study, the multitude of genes and chromosomal loci associated with ASD has made identification of the underlying molecular mechanisms of pathogenesis difficult to resolve. A range of diverse methodologies and study types have identified both rare and common genetic variants in ASD candidate genes and chromosomal loci. Moreover, the recent development of high-throughput next generation sequencing (NGS) technologies and the increasing usage of chromosomal microarray analysis (CMA) has led to a significant expansion in the number of single nucleotide variants (SNVs) and copy number variants (CNVs) potentially affecting one or more genes that have been identified in ASD individuals. This, in turn, has given critical insight into the molecular and cellular processes that may be preferentially targeted for disruption by genetic lesions in ASD patients. However, it is important to note that there is no genetic test available for the diagnosis of ASD. Rather, genetic testing is primarily aimed at identifying genetic variants potentially responsible for disease pathogenesis in a given individual diagnosed with ASD. Furthermore, the utility of NGS and CMA in genetic evaluation of ASD individuals is dependent on proper interpretation and reporting of test results.
In this chapter we will discuss 1) genetic testing technologies currently available for the identification of genetic variation in ASD cases, 2) the genes and genomic loci targeted by single nucleotide and copy number variants that have been linked to ASD susceptibility, 3) the bioinformatics tools that enable researchers to process the enormous amount of genetic data associated with ASD, and 4) challenges that exist in the interpretation and reporting of genetic evaluation results in ASD cases.

## 2. Genetic screening technologies for the evaluation of ASD cases

Autism spectrum disorders (ASD) are among the most highly heritable neurodevelopmental disorders, and extensive research has been focused on identifying the underlying genetic basis of these disorders. It has become apparent that ASD is a genetically heterogeneous disorder, with hundreds of genes and chromosomal rearrangements identified that confer varying degrees of risk for disease. Initially, susceptibility genes and genomic loci were identified by costly, low-throughput techniques, such as automated Sanger sequencing and conventional cytogenetic techniques. The need for lower-cost, higher-throughput genetic screening technologies capable of identifying genome-wide variation in individuals with genetically complex diseases, such as ASD, has driven improvements in pre-existing techniques and the development of new technologies. The genetic screening technologies presently available to clinical geneticists and researchers are capable of providing lower-cost, high-throughput genetic data that have significantly expanded our knowledge of genetic variation, both in the general population and in ASD individuals in particular. The first major high-throughput studies aimed at identifying CNVs and SNVs in ASD cohorts were published in 2010 and 2011, respectively [1, 2].
Here we describe in greater detail two of these genetic screening technologies that have become widely used in the genetic evaluation of ASD cases: next generation sequencing (NGS) and chromosomal microarray (CMA).

### 2.1. Next generation sequencing

Next-generation sequencing (NGS) is a term used to describe a collection of high-throughput sequencing technologies that have enabled clinicians to screen larger amounts of genetic material at lower cost than traditional sequencing technologies, such as automated Sanger sequencing [3]. NGS is typically used to identify single nucleotide variants (SNVs), as well as small insertions or deletions in candidate genes. However, NGS can also be used to identify copy number variants (CNVs), as was recently demonstrated in a report detailing whole exome sequencing in a cohort of ASD cases [4], as well as balanced chromosomal rearrangements, which are typically not detected by genome-wide microarrays [5, 6]. Since 2011, six research articles have been published that have identified rare variants in both existing and novel ASD susceptibility genes using NGS techniques [2, 4, 7-10], a fact that illustrates how extensively these techniques have been adopted by the ASD research community. As a result of these studies, the number of potential ASD-linked genes has increased dramatically (Table 1). Furthermore, the minimal overlap of candidate genes across these studies further illustrates the genetic heterogeneity of ASD. NGS techniques are typically divided into three categories, each with its own advantages and disadvantages (summarized in Table 2). These techniques vary in terms of genetic coverage (the size of the sequenced target, which can range from one or a few genes to the entire genome) and genetic resolution (sensitivity in detection of variants per sequencing target). In general, the smaller the genetic coverage, the higher the genetic resolution.
It should be noted that, as the size of the target for sequencing increases, so does the number of both false-positive and false-negative variants [11]; this is one of the many considerations that must be taken into account in deciding which NGS technique to utilize.

| Exome Report | Total # of genes | # of unique genes | # of overlapping genes |
|---|---|---|---|
| O'Roak 2011 | 21 | 21 | 0 |
| Sanders 2012 | 170 | 166 | 4 |
| O'Roak 2012 | 240 | 227 | 13 |
| Neale 2012 | 173 | 168 | 5 |
| Chahrour 2012 | 53 | 53 | 0 |
| Iossifov 2012 | 363 | 338 | 25 |
| Total | 1020 | 973 | 47 |

### Table 1.

Six recent scientific articles describing rare genetic variants identified in ASD cases by next generation sequencing (NGS) have led to a dramatic increase in the number of potential ASD candidate genes and further illustrate the genetic heterogeneity of ASD. Overlapping genes are genes in which a rare variant was identified in more than one exome report and are used as a measure of genetic heterogeneity.

| | Availability | Cost | Advantages | Disadvantages |
|---|---|---|---|---|
| Targeted gene panels | Commercially available | ~$5000 | Highest resolution of NGS approaches; contains a number of well-characterized ASD susceptibility genes (syndromic and non-syndromic) | Unable to detect variants in genes not included in the panel |
| Whole exome sequencing | Available only in research settings | ~$1000 | Estimated to detect the majority of disease-causing variants | Unable to detect variants in non-coding regions |
| Whole genome sequencing | Available only in research settings | ~$4000-$5000 | Greatest coverage of the genome (both coding and non-coding regions) | Lowest resolution of NGS techniques; higher cost than whole exome sequencing |

### Table 2.

A summary of the benefits and drawbacks of the three types of next generation sequencing (NGS) techniques in the genetic evaluation of ASD cases. Cost estimates of whole-exome and whole-genome sequencing in [14].

#### 2.1.1. Targeted gene panels

Targeted gene panels generally test for 50-100 genes that have been demonstrated to be strongly associated with a particular disease.
Such gene panels are already extensively used to screen individuals for a wide range of cancers and inherited diseases for which causative genes have been identified. A number of commercially-available ASD gene panels have recently been designed to target both genes strongly associated with non-syndromic ASD as well as syndromic genes (genes that cause syndromes in which a subset of affected individuals also develop ASD, such as FMR1, MECP2, and CACNA1C, which cause Fragile X, Rett, and Timothy syndromes, respectively). For example, the Greenwood Genetic Center offers a 62-gene syndromic gene panel that covers the coding region and flanking intronic boundaries of 62 ASD-linked genes for \$5500 (http://www.ggc.org/images/pdfs/syndromicautism62-genengspanel.pdf). While targeted gene panels offer the smallest coverage of the human genome of the three NGS approaches, they offer the highest resolution. One of the major drawbacks to the use of targeted gene panels for a genetically heterogeneous disorder such as ASD is the inability to detect mutations in genes outside of those included in the panel.

#### 2.1.2. Whole exome sequencing

Whole exome sequencing, which is also known as targeted exome capture, is designed to specifically identify variants in protein-coding regions of the human genome. Although these protein-coding regions, called exons, constitute a very small percentage of the human genome, it is estimated that they contain up to 85% of disease-causing mutations [12, 13]. However, whole exome sequencing will fail to detect any potentially pathogenic variants in non-coding regions of the human genome. This NGS method also provides lower resolution than targeted gene panels. Nonetheless, whole exome sequencing is increasingly being used to identify potentially pathogenic rare single-gene variants in individuals with ASD [2, 4, 7-10].

#### 2.1.3. Whole genome sequencing

In contrast to whole exome sequencing, which only covers protein-coding regions of the human genome, whole genome sequencing provides coverage of the entire genome, allowing for the sequencing of both coding and non-coding genomic regions. As such, single nucleotide changes and small insertions/deletions within non-coding regions of the genome can be detected by this method. While whole genome sequencing covers the largest amount of the human genome of all NGS techniques, it offers the lowest resolution of the three NGS technologies. Whole genome sequencing is also more costly than whole exome sequencing, although the difference in cost between these two techniques has fallen from 10- to 20-fold [13] to 4- to 5-fold [14].

### 2.2. Chromosomal microarray

Microscopically-visible chromosomal rearrangements have long been implicated in the onset and pathogenesis of neurodevelopmental disorders, including ASD. Indeed, many of the most strongly ASD-linked chromosomal deletions and duplications, collectively referred to as copy number variants (CNVs), were discovered through the use of conventional cytogenetic techniques such as G-banded karyotyping, fluorescent in situ hybridization (FISH), and microsatellite analysis. For example, duplications of chromosome 15q11-q13 were first implicated in ASD in the mid-1990s by these methods [15-17]. Likewise, these methods identified chromosomal rearrangements on the long arm of chromosome 22 in ASD cases [18, 19]. However, conventional cytogenetic techniques are impractical in the identification of copy number variation throughout the human genome in large case cohorts. While G-banded karyotyping is capable of detecting large chromosomal deletions and duplications (~1 Mb and larger), it lacks the sensitivity to detect smaller CNVs.
Alternatively, the use of techniques such as FISH is generally limited to screening a particular chromosomal region, so while they are useful for examining copy number variation at genomic loci of interest in larger case populations, they are impractical for the purposes of identifying deletions and duplications throughout the genome. In the last decade, technological and computational advances have allowed clinical geneticists and researchers to detect submicroscopic chromosomal deletions and duplications throughout the human genome in large case cohorts that would not be detected by traditional cytogenetic techniques. Chromosomal microarray (CMA) is a term frequently used to include all types of array-based whole genome copy number analyses, with the two most widely used being array-comparative genomic hybridization (aCGH) and single nucleotide polymorphism (SNP) arrays. CMA has been demonstrated to provide a higher diagnostic yield than G-banded karyotyping (15-20% compared to ~3%) due to its ability to detect submicroscopic deletions and duplications, and it has been proposed that CMA should replace conventional cytogenetic techniques as a first-tier diagnostic tool for individuals with congenital abnormalities and developmental disorders, including ASD [20]. High-throughput genome-wide aCGH and SNP arrays are now regularly used in the detection of CNVs in large ASD cohorts [1, 21-24]. aCGH and SNP arrays employ similar methodologies in the detection of CNVs (Figure 1). The first step involves labeling the DNA of the ASD patient with a fluorophore, thereby creating a test sample. The test sample is then mixed with an equal amount of DNA from a normal reference sample that has been labeled with a different fluorophore.
This mixed DNA sample is added to a glass slide containing thousands of oligonucleotide probes corresponding to different chromosomal regions that cover the human genome; in the case of SNP arrays, the oligonucleotide probes are specific for common polymorphisms found in the general population. The sensitivity of CMA has been greatly increased in recent years by the development of arrays employing a larger number of smaller oligonucleotide probes, allowing clinical geneticists and researchers to detect even smaller copy number changes than before without compromising genomic coverage. The test and reference DNA samples hybridize with the probes on the slide, and the fluorescence intensities of the test and reference DNA can then be measured. Following analysis with software that is typically specific to the platform being used, one or more algorithms are used to call CNVs. The ratio between the two fluorescence intensities is used to identify copy number changes. For example, if the test-to-reference ratio is 1 (yellow in the example below), then there is no change in copy number at the chromosomal region corresponding to a given probe. If the test-to-reference fluorescence ratio is > 1 for a particular probe (green in the example below), then the ASD patient carries a duplication in the chromosomal region corresponding to that probe. If the test-to-reference ratio is < 1 (red in the example below), then the patient carries a deletion at that site of the genome. Despite the recommended use of CMA as a first-tier genetic evaluation tool in place of conventional cytogenetic techniques, it should be noted that aCGH is unable to detect balanced chromosomal rearrangements and other chromosomal abnormalities that have traditionally been detected by karyotype analysis [25].
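The ratio-based calling logic described above can be sketched in a few lines of Python. The probe names, intensity values, and the log2-ratio cutoff below are purely illustrative; real platforms apply platform-specific normalization and calling algorithms:

```python
import math

# Illustrative sketch of ratio-based CNV calling from array probe data.
# The threshold and all probe values are hypothetical, not taken from
# any real aCGH or SNP array platform.

def call_cnv(test_intensity, reference_intensity, threshold=0.25):
    """Classify a probe as duplication, deletion, or no change based on
    the log2 test-to-reference fluorescence ratio."""
    log2_ratio = math.log2(test_intensity / reference_intensity)
    if log2_ratio > threshold:
        return "duplication"   # test-to-reference ratio > 1
    if log2_ratio < -threshold:
        return "deletion"      # test-to-reference ratio < 1
    return "no change"         # ratio ~ 1

# Hypothetical probes: (probe id, test intensity, reference intensity)
probes = [("chr16_p11_a", 980, 500),   # ratio ~2   -> duplication
          ("chr7_q35_b",  260, 510),   # ratio ~0.5 -> deletion
          ("chr1_p36_c",  495, 505)]   # ratio ~1   -> no change

calls = {name: call_cnv(t, r) for name, t, r in probes}
```

In practice the per-probe calls would then be segmented along each chromosome, since a real CNV spans many consecutive probes rather than a single one.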
In addition to their traditional utilization in the detection of risk-conferring common polymorphisms, SNP arrays have the added advantage of being able to detect copy number neutral genetic variation such as uniparental disomy and long contiguous stretches of homozygosity (LCSH) that cannot be detected by aCGH [25, 26].

## 3. Genetic variation in ASD

With the advent of NGS techniques and increasing usage of CMA screening, the number of SNVs and CNVs that have been identified in ASD individuals has grown significantly. Based on a survey of recently published exome sequencing studies of ASD cohorts, it was estimated that the number of dosage-sensitive ASD susceptibility genes is approximately 370, with roughly a third of these genes having been identified [10]. However, even this number might be a conservative projection. As shown in Figure 2, the number of ASD susceptibility genes in the Human Gene Module of the autism genetic database AutDB [27] has increased from 284 genes in September 2011 to 369 genes in June 2012. A large number of newer susceptibility genes have been annotated from reports employing whole exome sequencing of ASD cases [2, 4, 7-10], illustrating the increasing usage of NGS techniques in the study of genetic variation in ASD. In addition to the identification of novel ASD susceptibility genes, NGS techniques have identified novel rare variants in previously identified ASD susceptibility genes. The number of ASD-associated CNV loci has also increased significantly, with the CNV module of AutDB expanding from 1034 CNV loci in September 2011 to 1173 loci in June 2012 (Figure 2). In this section we describe the genetic categories into which ASD susceptibility genes have been classified, as well as recent studies that have yielded invaluable insight into the functional profiles of ASD-associated genes and CNV loci.

### 3.1. Genetic categories of ASD susceptibility genes

The earliest ASD susceptibility genes were those harboring rare single gene variants associated with syndromes such as Fragile X syndrome and Rett syndrome. The discovery of single gene mutations/disruptions in two neuroligin genes, NLGN3 and NLGN4, in ASD siblings [28] initiated the search for additional ASD susceptibility genes in non-syndromic ASD cases. The continued identification of rare genetic variants associated with both syndromic and non-syndromic ASD, as well as of risk-conferring polymorphisms enriched in ASD populations compared to unaffected controls in genetic association studies, has led to significant increases in the number of ASD-linked genes. While the majority of ASD-associated genes have been linked to disease on the basis of genetic studies in human populations, a number of additional ASD-linked genes have been identified by alternate methodologies, such as gene expression studies in post-mortem brain tissue of ASD individuals. ASD susceptibility genes in the Human Gene Module of AutDB are divided into four distinct categories:

1. Rare. This category features genes implicated in rare monogenic forms of non-syndromic ASD. Rare allelic variants within this category include single nucleotide variants, small insertions and deletions, chromosomal rearrangements such as translocations and inversions, and monogenic submicroscopic deletions and duplications. Among the genes within this category are CACNA1H and SHANK1.

2. Syndromic. Syndromic genes were among the first genes for which rare genetic variants linked to autism were identified. In addition to well-characterized syndromic genes such as FMR1 (Fragile X syndrome), MECP2 (Rett syndrome), and CACNA1C (Timothy syndrome), genes such as CHD7 and SLC9A6 fall into the syndromic category.

3. Association.
This category includes genes in which risk-conferring common polymorphisms of small effect have been identified from genetic association studies in idiopathic ASD populations. Among the genes within this category are MET and MTHFR.

4. Functional. This category includes functional candidate genes that have not yet been experimentally linked to ASD by genetic studies. Among the genes in this category are BCL2 and PDE4B, whose inclusion is based on changes in gene expression in post-mortem brain tissue of ASD subjects.

As shown in Figure 3, while the number of both rare and common ASD-associated variants in the Human Gene module of AutDB has increased over the last four quarterly release dates, the number of rare variants has increased at a much greater rate than the number of common variants. The number of rare variants increased from 1141 in September 2011 to 1675 in June 2012, an increase of ~47%. In contrast, the number of common variants rose from 508 to 575 over the same span of time, an increase of only ~13%. This disparity between the addition of rare and common variants to AutDB is in part due to the increased usage of NGS and CMA and the subsequent identification of rare ASD-associated variants in large ASD cohorts. It should be noted that a given gene can fall under multiple genetic categories, depending on the affected population under investigation and the type of study. For example, both rare variants and risk-conferring common polymorphisms have been identified in the CNTNAP2 gene in ASD individuals across multiple studies [2, 29-31]. However, in addition to its role as an ASD susceptibility factor, recent studies suggest that rare variants in CNTNAP2 are responsible for two additional syndromes: cortical dysplasia-focal epilepsy syndrome [32] and Pitt-Hopkins-like syndrome 1 [33]. Therefore, based on the combined evidence from all of these aforementioned studies, CNTNAP2 is classified in AutDB as a syndromic gene, a rare gene, and an association gene.
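As a quick check, the relative increases can be recomputed directly from the reported AutDB counts:

```python
# Percent increase in AutDB variant counts between the September 2011
# and June 2012 releases, using the numbers cited in the text.

def percent_increase(old, new):
    return 100.0 * (new - old) / old

rare_growth = percent_increase(1141, 1675)    # rare variants   -> ~46.8%
common_growth = percent_increase(508, 575)    # common variants -> ~13.2%
```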
The classification of ASD-linked genes into genetic categories is a useful tool in assessing the strength of the evidence for the connection of a given gene with ASD. Genes within the rare and syndromic categories are generally considered to have the strongest link to ASD [34]. Due to the frequent lack of replication in their association with ASD from one study to the next, genes within the association category are considered to have a weaker link to ASD than genes within the rare and syndromic categories. Genes within the functional category have no direct documented connection to ASD and are therefore considered to be among the weakest ASD candidate genes.

### 3.2. Functional profiles of ASD-associated genes and CNV loci

The increasing number of ASD-associated genetic factors, as shown in Figure 2, has only added to the well-established genetic heterogeneity of ASD. In spite of the complexity caused by this genetic heterogeneity, bioinformatic analysis of ASD-linked genes and CNV loci has yielded valuable insight into the molecular interactions and cellular pathways preferentially targeted by genetic lesions in individuals with ASD. Not only can this information be potentially used to design therapeutic approaches targeting disrupted pathways, but it can also aid in assessing the clinical importance of newly-discovered ASD candidate genes and CNV loci in which pathogenic variants are identified by NGS and CMA, respectively. For example, a gene whose encoded gene product resides within a known ASD-associated cellular process or interacts with a known ASD-associated gene is a stronger candidate than a gene that fails to reside within known ASD-associated cellular processes or interact with known ASD-associated genes. Recent large-scale ASD genetic studies have used a systems biology approach to translate genetic information into functional profiles that shed light on how genetic variation in ASD may lead to disease onset and pathogenesis.
Rare CNVs identified in large ASD cohorts have been shown to be enriched for genes involved in cellular processes of relevance for ASD, including cellular proliferation, projection, and motility, and GTPase/Ras signaling [1], neuronal cell adhesion and ubiquitin-mediated degradation [22], glycobiology [35], axon growth and pathfinding [36], and synapse development, axon targeting, and neuron motility [37]. Gene datasets from genome-wide association studies in ASD populations were demonstrated to be enriched for Gene Ontology (GO) classifications for cellular processes including pyruvate metabolism, transcription factor activation, cell signaling and cell-cycle regulation [38]. A recent report describing gene pathway analysis using single nucleotide polymorphism (SNP) data from the Autism Genetics Research Exchange (AGRE) identified cellular pathways such as calcium signaling, long-term depression and potentiation, and phosphatidylinositol signaling that reached statistical significance in both Central European and Han Chinese populations [39]. More recently, whole exome sequencing studies in large ASD cohorts have demonstrated that proteins encoded by genes in which potentially disruptive de novo mutations were identified showed a higher degree of connectivity among themselves and to previously identified ASD genes based on protein-protein interaction network analysis [4, 8]. Another exome sequencing study in ASD individuals found that many of the genes in which potentially disruptive variants were identified associated with the Fragile X Mental Retardation Protein (FMRP), the encoded product of the syndromic ASD gene FMR1 [10]. Taken together, these functional maps suggest that specific cellular pathways and processes are preferentially targeted by genetic variation in ASD cases, and that association with the encoded products of well-characterized ASD-linked genes offers evidence for pathogenic relevance.
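Enrichment analyses of the kind cited above generally rest on a hypergeometric (one-sided Fisher's exact) test of overlap between a candidate gene list and a gene set such as a GO category or pathway. A minimal standard-library sketch, with entirely hypothetical gene counts:

```python
from math import comb

# Hypergeometric upper-tail p-value for gene-set over-representation.
# All numbers in the example are hypothetical, chosen only to illustrate
# the calculation used by enrichment tools.

def hypergeom_pvalue(total_genes, pathway_genes, hits_total, hits_in_pathway):
    """P(X >= hits_in_pathway) when drawing hits_total genes without
    replacement from total_genes, of which pathway_genes belong to the
    pathway of interest."""
    p = 0.0
    upper = min(hits_total, pathway_genes)
    for k in range(hits_in_pathway, upper + 1):
        p += (comb(pathway_genes, k)
              * comb(total_genes - pathway_genes, hits_total - k)
              / comb(total_genes, hits_total))
    return p

# Hypothetical example: 20,000 genes in the genome, a 200-gene synaptic
# pathway, and 100 candidate genes of which 8 fall inside the pathway
# (only ~1 would be expected by chance).
p = hypergeom_pvalue(20000, 200, 100, 8)
```

Production tools additionally correct such p-values for the thousands of gene sets tested simultaneously (e.g. Benjamini-Hochberg false discovery rate control).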
Knowledge of ASD-associated genes can also be used to identify novel ASD candidate genes. Following the construction of functional and expression profiles from a reference set of 84 rare and syndromic ASD-linked genes, we generated a predictive map of novel ASD candidate genes [40]. In total, 460 potential candidate genes were identified that overlapped both the functional profile and the brain expression profile of the initial reference set. The power of this predictive gene map was demonstrated by the capture of 18 pre-existing ASD-associated genes that were not included in the reference gene dataset, with the remaining 442 genes serving as novel ASD candidate genes. Since the publication of our predictive gene map, 12 of the novel ASD candidate genes identified in [40] have been added to AutDB, demonstrating the continued power of this analysis (manuscript in preparation).

## 4. Bioinformatics of ASD

With the rapid growth of genetic data obtained from ASD individuals, a critical need has emerged for databases specializing in the storage and assessment of these data. Here we highlight several of the ASD-related genetics databases that are available to researchers.

### 4.1. AutDB

Our autism database AutDB (http://autism.mindspec.org/autdb/Welcome.do) is a web-based, searchable database of ASD candidate genes identified in genetic association studies, genes linked to syndromic autism, and rare single gene mutations [27]. Evidence regarding ASD candidate genes is systematically extracted from peer-reviewed, primary scientific literature and manually curated by our researchers for inclusion in AutDB. To provide a high-resolution view of the various components linked to ASD, we developed detailed annotation rules based on the biology of each data type and generated a controlled vocabulary for data representation.
AutDB is widely used by individual laboratories in the ASD research community, as well as by consortiums such as the Simons Foundation, which licenses it as SFARI Gene. AutDB is designed with a systems biology approach, integrating genetic information within the original Human Gene module to corresponding data in the subsequent Animal Model, Protein Interaction (PIN) and Copy Number Variant (CNV) modules. The Animal Model module contains a comprehensive collection of mouse models linked to ASD [41]. While the Animal Model module initially contained only genetic mouse models of ASD, it has since been expanded to include induced mouse models of ASD in which a chemical or biological agent linked to ASD has been administered. As core behavioral features of ASD such as social interactions and communications can only be approximated in animal models, the annotation strategy for this module includes four broad areas: 1) core behavioral features of ASD; 2) ASD-related traits, such as seizures and circadian rhythms, that are heritable and more easily quantified in animal models; 3) neuroanatomical features; and 4) molecular profiles. To this end, we developed PhenoBase, a classification table for systematically annotating models with controlled vocabulary containing 16 major categories and >100 standardized phenotype terms. The PIN module of AutDB serves as a repository for all known protein interactions of ASD candidate genes, documenting six major types of direct interactions: 1) protein binding, 2) promoter binding, 3) RNA binding, 4) protein modification, 5) direct regulation, and 6) autoregulation. Its content is envisioned to have immediate application for network biology analysis of molecular pathways involved in ASD pathogenesis.
For the purposes of genetic evaluation of individuals with ASD, knowledge of the protein interactions of ASD-associated genes can potentially aid in the clinical assessment of novel ASD candidate genes based on their interactions, or lack thereof, with known ASD-linked genes.

### 4.2. Gene scoring module of SFARI gene

As previously mentioned, AutDB is licensed to the Simons Foundation as SFARI Gene. However, unlike AutDB, SFARI Gene includes a unique feature initiated by the Simons Foundation called the Gene Scoring module (https://gene.sfari.org/autdb/GS_Home.do). The Gene Scoring module is a web-based platform detailing the rank of ASD-associated genes in the SFARI Gene Human Gene module [42]. With the increase in the number of genes linked to ASD, a Gene Scoring initiative was launched to assess ASD candidate genes based on a set of standardized annotation rules. Following evaluation by an expert panel of advisors, the gene assessment results are integrated in the form of Gene Score Cards, which display the scores and supporting evidence for each ASD-linked gene in a graphical user interface. Recently, a community-wide annotation functionality was incorporated into the Gene Scoring module, allowing users to download the Gene Scoring dataset, score genes of their choice, and submit their scores to SFARI for possible inclusion.

### 4.3. DECIPHER

DECIPHER (Database of Chromosomal Imbalance and Phenotype in Humans Using Ensembl Resources) (http://decipher.sanger.ac.uk/) is an interactive web-based database that incorporates a suite of tools designed to aid in the interpretation of submicroscopic chromosomal deletions and duplications [43]. Genetic and phenotypic information is publicly available not only for individuals diagnosed with idiopathic ASD, but also for individuals diagnosed with a recognized microdeletion or microduplication syndrome in which a subset of affected individuals also develop ASD.

### 4.4. AutismKB

AutismKB (http://autismkb.cbi.pku.edu.cn/) is a web-based, searchable database hosted by the Center for Bioinformatics, Peking University [44]. AutismKB is an evidence-based knowledge resource for ASD genetics containing information on genes, copy number variants, and linkage regions associated with ASD. Analysis of the gene content in AutismKB is available to users in the form of GO term enrichment analysis using the DAVID functional annotation tool and pathway enrichment analysis. Much like the Gene Scoring module of SFARI Gene (see section 4.2), the genes within AutismKB are scored.

### 4.5. Autism Chromosome Rearrangement Database

The Autism Chromosome Rearrangement Database (http://projects.tcag.ca/autism/) is a web-based, searchable genetic database of chromosomal structural variation in ASD that is hosted by The Centre for Applied Genomics at the Hospital for Sick Children in Toronto, Canada [21]. The content of this database, which is derived both from published research articles and in-house experimental results, includes cytogenetic and microarray data from individuals with ASD.

### 4.6. Autism Genetic Database

The Autism Genetic Database (http://wren.bcf.ku.edu/) is a web-based, searchable genetic database developed by researchers at the University of Kansas [45]. In addition to ASD-associated genes and CNVs, this database also includes information on known non-coding RNAs and chemically-induced fragile sites in the human genome. Recent lines of evidence have placed non-coding RNAs under increased scrutiny with regard to their potential pathogenic role in ASD. A number of small nucleolar RNAs (snoRNAs) reside within the ASD-associated 15q11-q13 region.
A mouse model engineered to mimic the duplication of the 15q11-q13 region observed in ~1% of ASD cases exhibited overexpression of the snoRNA MBII52 (the mouse ortholog of the human snoRNA HBII52), which could potentially alter serotonergic signaling and contribute in part to the ASD-associated traits exhibited by these mice [46]. More recently, it was discovered that a non-coding RNA is transcribed from a gene-poor region of chromosome 5p14.1 identified in genome-wide association studies of ASD cohorts [47]. Expression of the non-coding RNA, designated MSNP1AS, was shown to be higher both in individuals carrying the ASD-associated T allele and in post-mortem brain tissue of individuals with ASD. Spontaneous breakage during DNA replication at rare chromosomal fragile sites may also play a role in the pathogenesis of neuropsychiatric disorders such as ASD. The chromosomal fragile site FRAXA has been implicated in fragile X syndrome, and other fragile sites have been identified that associate with ASD, such as FRA2B, FRA6A, and FRA13A [48].

## 5. Challenges of genetic evaluation in ASD

NGS and CMA have expanded the ability of clinical geneticists and researchers to identify potential genetic causes of ASD. However, there are many challenges still present in the field of genetic evaluation. A recent report in the American Journal of Medical Genetics found that many children with ASD fail to receive genetic evaluation, and that parents and medical professionals need to be better educated about the potential benefits of genetic evaluation [49].
Educating parents on genetic evaluation is especially critical in light of a recent survey of nine parents regarding their child’s participation in genetic research in ASD [50], in which parents valued having had their child enrolled for a variety of reasons, including the potential use of genetic results in tailoring intervention and in family planning, the establishment of connections with experts in the field of ASD, and networking with other families, among others. Even with the increased sensitivity of genetic evaluation techniques, an underlying genetic cause of ASD is still only identified in a minority (< 25%) of ASD cases [51]. One of the major challenges in the clinical interpretation of NGS and CMA lies in differentiating between pathogenic and benign genetic variants identified in ASD patients. The pathogenic relevance of the vast majority of ASD-linked genetic variants remains unknown; such variants are frequently classified as variants of unknown significance, or VOUS. While the identification of a genetic lesion in an existing ASD susceptibility gene or CNV locus is suggestive of a possible genetic cause of disease, variants in these genes and CNV loci have also been observed in seemingly unaffected individuals. Furthermore, it is important to note that, while technological advances have expanded the ability of clinical geneticists and researchers to identify these potential genetic causes of ASD, there is no genetic test available for the diagnosis of ASD. A recent report proposed a means of predicting a diagnosis of ASD based on the identification of candidate SNPs [39]. The accuracy of the predictive classifier was found to be 71.7% in individuals of Central European descent from validation datasets. However, the accuracy of the predictive classifier fell when tested in a Han Chinese cohort, a finding that stresses how genetic heterogeneity across populations complicates the use of such an approach. 
In addition, the overall accuracy of the predictive classifier is likely too low to serve as an effective diagnostic tool. A number of guidelines have already been proposed to aid clinicians and clinical geneticists in the interpretation and reporting of CNVs. With the increasing use of high resolution NGS technologies, similar guidelines will likely be proposed for the interpretation and reporting of single nucleotide variants (SNVs). Furthermore, tools and prioritization schema have also been developed to aid clinicians in the interpretation of genetic testing results. Here we discuss in greater detail the challenges in interpreting genetic screening results in ASD cases, the strategies that have been proposed for the interpretation and reporting of screening results, and the resources available to aid in that interpretation.

### 5.1. Challenges in the interpretation of ASD genetic screening results

#### 5.1.1. Technical limitations of NGS and CMA

As previously mentioned, as the size of the sequenced target increases, so does the potential number of false-positive and false-negative variants identified [11]. Such sequencing artifacts are particularly problematic for the detection of spontaneous, or de novo, variants, as false-positive variants would appear to be de novo in origin when they are observed in an offspring's genome but not in parental genomes. Furthermore, the source of DNA used in sequencing studies can introduce sequencing artifacts. DNA from lymphoblastoid cell lines derived from individuals to be genetically evaluated is a commonly used template for sequencing; however, the creation and culturing of these cell lines can introduce genetic changes that would appear as de novo variants when such cell lines are compared between parents and offspring. In order to remove or reduce the possibility of artifactual results, subsequent variant validation should be performed.
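At its core, calling a variant de novo reduces to set subtraction over trio variant calls. The sketch below uses made-up coordinates; real pipelines operate on VCF genotypes and must additionally filter on call quality, precisely because the sequencing artifacts discussed above masquerade as de novo events:

```python
# Naive de novo candidate detection in a parent-offspring trio.
# Variants are modeled as (chromosome, position, alternate allele) tuples
# with made-up coordinates; a real pipeline would parse VCF files and
# filter on genotype quality before accepting a call as de novo.

def candidate_de_novo(child_variants, mother_variants, father_variants):
    """Return variants observed in the child but in neither parent."""
    parental = set(mother_variants) | set(father_variants)
    return sorted(set(child_variants) - parental)

child  = {("chr3", 300, "G"), ("chr7", 100, "T"), ("chr22", 500, "A")}
mother = {("chr7", 100, "T")}    # transmitted maternally
father = {("chr3", 300, "G")}    # transmitted paternally

de_novo_candidates = candidate_de_novo(child, mother, father)
# Only the chr22 variant is absent from both parents.
```

Candidates surviving this subtraction would then be confirmed by the targeted resequencing described in the text.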
In the case of single gene variants identified by NGS, a more targeted sequencing approach limited to the gene or region of interest would confirm the variant previously identified. In the case of CNVs identified by NGS or CMA, a targeted detection method such as quantitative real-time PCR or FISH is frequently used to confirm their discovery.

#### 5.1.2. Genetic heterogeneity

While the genetic basis of many human diseases can be traced back to one or a few genes, the genetic basis of complex neuropsychiatric disorders such as ASD has proven to be far more complicated, with hundreds of genes and genomic loci associated with varying risks of disease. The recent utilization of NGS and CMA approaches in the genetic evaluation of ASD cases has led to the detection of genetic variation in both existing and novel susceptibility genes and genomic loci. However, the strength of evidence for many of these novel candidate genes or genomic loci is minimal, and some degree of replication in follow-up studies will be required to fully assess the relevance of many of these newly-identified variants.

#### 5.1.3. Incomplete penetrance and variable expressivity

One of the major challenges in identifying potentially causative genetic variation in ASD cases lies in the fact that a potentially disruptive variant in a gene or genomic locus may not always associate or segregate with disease. For example, a potentially pathogenic variant in a gene may not only be present in an ASD individual, but may also be present in seemingly unaffected family members. Similarly, the pathogenic variant may also be observed in seemingly unaffected individuals in the general population. This phenomenon, referred to as incomplete penetrance, complicates the interpretation of genetic evaluation. Alternatively, a genetic variant may result in a range of disease severity in affected individuals, a phenomenon known as variable expressivity.
For example, ~500 kb deletions and duplications at the 16p11.2 locus are among the most heavily studied ASD-associated CNVs. However, CNVs at this locus are also responsible for a range of other neurodevelopmental and neuropsychiatric disorders, such as schizophrenia. CNVs at the 16p11.2 locus can also be inherited from seemingly unaffected family members and have been observed in unaffected individuals in the general population. This lack of correlation between genotype and phenotype as it relates to ASD-associated genetic variation may be in part due to differences in gene-environment interactions between individuals carrying such variation. A recent report highlights some of the challenges inherent in the genetic evaluation of ASD individuals. A putative disruptive variant in the ASD-associated SHANK3 gene was identified in a boy with autism [52]. The variant, which was inherited from a healthy mother, was a small insertion predicted to result in a frameshift and premature stop. Based on this evidence, as well as the relatively high penetrance of SHANK3 mutations in ASD and other neuropsychiatric diseases, one could conclude that this variant in the SHANK3 gene was pathogenically relevant in this autistic male. However, follow-up studies revealed that this variant was unlikely to be present in the majority of SHANK3 transcripts due to alternative splicing events. Furthermore, this variant was observed in 4 out of 382 control individuals without neuropsychiatric conditions, a rate >1%. This report not only illustrates the necessity of determining the frequency of a given potentially pathogenic variant in the general population but also warns against relying too heavily on computational, or in silico, predictions of the effects of that variant on gene function.

### 5.2. Ethical considerations in the reporting of ASD genetic screening results

Informed consent, and the extent to which participants have been sufficiently informed as to the purpose of a research or clinical study, has long been an issue in the field of genetic evaluation. For example, the extent to which research findings will be released and made available is one that participants in genetic evaluation studies should be informed of beforehand. In some cases, the participants themselves may not be able to gain access to the findings of genetic evaluation. This issue has only been compounded with the rise of NGS and CMA and the subsequent explosion in the amount of genetic data generated by these techniques. The sheer volume of genetic information generated by NGS and CMA not only leads to the identification of potential genetic causes of a disease of interest, but also frequently leads to the detection of other variants that are not directly related to the disease under investigation but are related to other inherited human diseases. The extent to which these incidental findings should be reported is a subject of some controversy, particularly in those situations in which genetic predisposition to an adult-onset disease is discovered in a child being evaluated for genetic causes of childhood developmental disorders. One such situation was described in a recent news feature in Nature in which the family of a child who had undergone genetic testing for developmental disability had to be informed that the child carried a genetic predisposition to colon cancer after extensive debate between clinical geneticists and ethics reviewers as to the extent to which such genetic information should be reported [53]. The degree to which clinical geneticists should report incidental findings in research participants has been considered by numerous authors [54-57], but as of yet there is no consensus.
Many of these same ethical concerns must be considered in the reporting of genetic evaluation results in individuals with ASD. Another consideration lies in the use of genetic evaluation to determine the recurrence risk in the siblings of children with ASD and in family planning [50]. Given a recent estimate that the recurrence rate of ASD in siblings may be as high as ~20% [58], the identification of inherited variants that potentially impart susceptibility to ASD is of critical importance both in identifying at-risk siblings that have not yet begun to manifest symptoms of ASD and in making informed decisions with regards to family planning.

### 5.3. Strategies for ASD genetic screening interpretation and reporting

The American College of Medical Genetics released practice guidelines for the use of genetic screening techniques in the evaluation of individuals with ASD in 2008 [59]. In the years that have followed, additional practice guidelines and consensus statements discussing the use of CMA in the genetic evaluation of ASD cases have been published [25, 26]. With its increasing usage in the genetic evaluation of ASD cases, similar practice guidelines and consensus statements regarding NGS will likely be forthcoming, and strategies for the interpretation of NGS data in the evaluation of neurological diseases have recently been proposed [14]. In this section we highlight some of the factors to consider in the interpretation of genetic screening results in ASD cases.

#### 5.3.1. Variant inheritance and segregation with ASD

One of the key determinants in the interpretation of ASD genetic screening results is the mechanism of variant inheritance and how closely that variant segregates with ASD. Genetic variation can either arise de novo or be transmitted from one or both parents. There has been considerable interest in the ASD research community in the pathogenic relevance of de novo variants, especially within the context of sporadic ASD cases.
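Determining how closely an inherited variant segregates with disease, the key question just introduced, reduces in the simplest case to comparing carrier status with affected status across a pedigree. This is a deliberately simplified sketch with a hypothetical family; as discussed in section 5.1.3, incomplete penetrance and variable expressivity mean real ASD variants rarely segregate perfectly:

```python
# Simplified check of whether a variant co-segregates with disease in a
# family. Each member is a (carries_variant, is_affected) pair; both
# families below are hypothetical.

def segregates_perfectly(members):
    """True if every affected member carries the variant and no
    unaffected member does."""
    return all(carries == affected for carries, affected in members)

# Two affected carriers and two unaffected non-carriers:
# perfect segregation.
family_a = [(True, True), (True, True), (False, False), (False, False)]

# An unaffected carrier (e.g. a transmitting parent) breaks perfect
# segregation, as seen with incompletely penetrant ASD variants.
family_b = [(True, True), (True, False), (False, False)]
```

Formal linkage analysis replaces this binary check with likelihood-based statistics (e.g. LOD scores) that can accommodate imperfect penetrance.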
As they have been subjected to less stringent evolutionary selection, de novo variants tend to be more deleterious than inherited variants, making them excellent candidates for sporadically-occurring disease [11]. An increased rate of de novo CNVs in sporadic cases compared to familial cases has been reported [21, 60], and rare de novo CNVs at specific genomic loci were found to associate with ASD in sporadic cases from the Simons Simplex Collection [24]. Exome sequencing studies using ASD cohorts have reported an increased rate of de novo gene-disrupting events (i.e. nonsense, splice-site, and frameshift mutations) in affected children compared to their unaffected siblings [10]. Whereas ASD genetic research is increasingly focused on de novo genetic variation, it should be remembered that the genetic basis of ASD was first established by studies demonstrating the high heritability of the disease, a fact that illustrates the continued importance of identifying inherited genetic variation in ASD cases. A number of inherited single gene variants and CNVs that segregate with disease in ASD families have recently been described [9, 61-64]; these and other findings clearly demonstrate the importance of identifying inherited variants that closely segregate with disease in affected families. It should be noted that determining the extent of variant segregation in ASD families can be complicated by the phenotypic heterogeneity that a given variant can cause from one affected family member to another. Furthermore, a disease-causing variant may exclusively segregate with disease in males, even if the variant does not reside on the X chromosome, as is the case with a SHANK1 mutation identified in a four-generation ASD family [61]. A detailed family history and genetic evaluation of both affected and unaffected family members are essential in determining the significance of both de novo and inherited variants in ASD cases.

#### 5.3.2. Functional impact of variant

In addition to the mechanism of variant inheritance and variant segregation with disease, another important consideration in the interpretation of genetic screening results lies in the functional impact of the variant. In many cases, especially with the use of high-throughput screening technologies, variant function is predicted in silico. In the case of single gene mutations, variants that disrupt gene function, such as nonsense mutations, splice-site mutations, or frameshift mutations that introduce premature stop codons, are strong genetic candidates, especially if such gene-disrupting variants are identified in a known ASD susceptibility gene or a gene associated with an ASD-linked pathway. The interpretation of missense mutations is more complicated and requires assessment of evolutionary conservation using phyloP or Genomic Evolutionary Rate Profiling (GERP) conservation scores, as well as scoring of the functional impact using Grantham scores or PolyPhen-2. However, as previously mentioned [52], dependency on in silico predictions of variant function, even in well characterized ASD-linked genes, can lead to false conclusions. As such, experimental functional assays are essential to accurately determine the impact of a given variant on gene expression or on the function of the encoded gene product.

#### 5.3.3. Clinical correlations of the variant with ASD

Another consideration in the interpretation of genetic screening results in ASD cases is the degree of clinical correlation of a given variant with ASD. Hundreds of susceptibility genes and CNV loci linked with ASD have been identified and catalogued in online genetic databases such as AutDB, DECIPHER, and others. The identification of a novel, potentially pathogenic variant in one of these known susceptibility genes or CNV loci would be strong evidence for a causal role.
To a lesser extent, a novel variant in a gene in an ASD-associated pathway, or in a gene previously shown by gene expression studies to be differentially regulated in ASD tissue, would be a strong candidate. Another factor to consider is the frequency of a variant of interest in healthy control populations; the absence or significantly reduced frequency of the variant of interest in unaffected individuals would offer strong evidence for a causal role.

### 5.4. Resources for ASD genetic screening interpretation

A number of online resources are available to aid clinical geneticists in the interpretation of genetic screening results in ASD individuals. Many of these resources are aimed at differentiating between rare, potentially ASD-specific variants and benign variants observed in the general population. In this section we describe some of these resources in greater detail.

#### 5.4.1. Genetic variation in control populations

Differentiating between potentially pathogenic and benign genetic variants in ASD cases requires knowledge of the degree of genetic variation that resides within seemingly unaffected individuals in the general population. A number of online resources, several of which are hosted by the National Center for Biotechnology Information (NCBI) [65], have been developed to allow clinical geneticists to visualize genetic variation identified in the general population. The genetic variation curated in these databases can range from single nucleotide polymorphisms to chromosomal structural variation and has proven invaluable in assessing the potential pathogenic relevance of novel genetic variants.

#### 5.4.1.1. dbSNP (database of single nucleotide polymorphisms)

dbSNP (http://www.ncbi.nlm.nih.gov/snp) is a public-domain database hosted by NCBI collecting a range of polymorphic genetic variation, including single nucleotide polymorphisms (SNPs), small-scale multi-base deletions or insertions (also called deletion-insertion polymorphisms or DIPs), retroposable element insertions, and microsatellite repeat variations (also called short tandem repeats or STRs) [65].

#### 5.4.1.2. 1,000 Genomes Project

The 1,000 Genomes Project (http://www.1000genomes.org/) is a consortium employing high-throughput NGS techniques for the purpose of characterizing over 95% of genetic variants that are located in genomic regions accessible to sequencing and that occur at an allelic frequency of 1% or higher in each of five major population groups [66].

#### 5.4.1.3. dbVar (database of genomic structural variation)

dbVar (http://www.ncbi.nlm.nih.gov/dbvar/) is a searchable online database hosted by NCBI containing genomic structural variation, defined by the database as inversions, balanced translocations, and CNVs approximately 1 kb or larger in size, that has been observed in both case and control populations [65].

#### 5.4.1.4. Database of Genomic Variants

The Database of Genomic Variants (http://dgvbeta.tcag.ca/dgv/app/home?ref=NCBI36/hg18) is a curated online database hosted by the Centre for Applied Genomics that contains structural variation, defined by the developers of the database as genomic alterations involving segments of DNA larger than 50 bp, in control individuals [67]. Users can search the database for genetic variants such as CNVs, insertions, inversions, and regions of uniparental disomy, as well as download database contents.

#### 5.4.2. Genotype-phenotype association

The dbGaP public repository (http://www.ncbi.nlm.nih.gov/gap/) was created by the National Institutes of Health for the purpose of collecting individual-level genotype and phenotype data and the associations between them [68]. The studies collected in dbGaP include genome-wide association studies, sequencing and diagnostic assays, and associations between genotype and non-clinical traits. Users can browse association results, utilize the Phenotype-Genotype Integrator (PheGenI) to search for phenotypic traits linked to GWAS data, and download data.

## 6. Conclusion

The development of lower-cost, high-throughput, genome-wide genetic screening technologies has revolutionized the field of genetic evaluation and now provides clinical geneticists and researchers the opportunity to detect genetic variation in ASD individuals like never before. In doing so, the evidence for previously identified genetic susceptibility factors will expand, and novel ASD candidate genes and genomic loci will be identified, resulting in a better understanding of the genetic basis of ASD. However, precautions must be taken to ensure that genetic screening results are interpreted and reported properly.

## Acknowledgements

The authors would like to thank the other members of MindSpec, Inc. (Ajay Kumar, M.S., Idan Menashe, Ph.D., Wayne Pereanu, Ph.D., Rainier Rodriguez, and Sue Spence), as well as the Simons Foundation. AutDB is licensed to the Simons Foundation as SFARI Gene.

## How to cite and reference

Eric C. Larsen, Catherine Croft Swanwick and Sharmila Banerjee-Basu (March 6th 2013). Genetic Evaluation of Individuals with Autism Spectrum Disorders, Recent Advances in Autism Spectrum Disorders, Michael Fitzgerald, IntechOpen, DOI: 10.5772/53900.
https://msp.org/apde/2021/14-8/p08.xhtml
#### Vol. 14, No. 8, 2021

ISSN: 1948-206X (e-only); ISSN: 2157-5045 (print)

Small eigenvalues of the Witten Laplacian with Dirichlet boundary conditions: the case with critical points on the boundary

### Dorian Le Peutrec and Boris Nectoux

Vol. 14 (2021), No. 8, 2595–2651

##### Abstract

We give sharp asymptotic equivalents in the limit $h\to 0$ of the small eigenvalues of the Witten Laplacian, that is, the operator associated with the quadratic form
$$\psi \in H_0^1(\Omega) \;\mapsto\; h^2 \int_\Omega \bigl|\nabla\bigl(e^{\frac{1}{h}f}\psi\bigr)\bigr|^2 e^{-\frac{2}{h}f},$$
where $\overline{\Omega} = \Omega \cup \partial\Omega$ is an oriented $C^\infty$ compact and connected Riemannian manifold with nonempty boundary $\partial\Omega$ and $f:\overline{\Omega}\to\mathbb{R}$ is a $C^\infty$ Morse function. The function $f$ is allowed to admit critical points on $\partial\Omega$, which is the main novelty of this work in comparison with the existing literature.

##### Keywords

Witten Laplacian, overdamped Langevin dynamics, semiclassical analysis, metastability, spectral theory, Eyring–Kramers formulas

##### Mathematical Subject Classification 2010

Primary: 35P15, 35P20, 35Q82, 47F05
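For orientation, a standard computation (sketched here; it is not reproduced in the abstract itself) identifies this quadratic form with that of the usual semiclassical Witten Laplacian on functions: expanding $\nabla(e^{\frac{1}{h}f}\psi) = e^{\frac{1}{h}f}\,(\nabla\psi + h^{-1}\psi\,\nabla f)$ and integrating by parts, with the boundary term vanishing since $\psi \in H_0^1(\Omega)$, gives

```latex
h^{2}\int_{\Omega}\bigl|\nabla\bigl(e^{\frac{1}{h}f}\psi\bigr)\bigr|^{2}e^{-\frac{2}{h}f}
  \;=\; \int_{\Omega}\Bigl(h^{2}\,|\nabla\psi|^{2}
        + \bigl(|\nabla f|^{2} - h\,\Delta f\bigr)\,\psi^{2}\Bigr),
```

so the operator in question acts as $-h^{2}\Delta + |\nabla f|^{2} - h\,\Delta f$ on $H_0^1(\Omega)$, the familiar form in which Eyring–Kramers asymptotics (mentioned in the keywords) are usually stated.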
https://www.bostonharborangels.com/2021/11/10/omnilife-and-action-network-solidify-collaboration-to-drive-innovation/
OmniLife and ACTION Network Solidify Collaboration to Drive Innovation

OmniLife, a health technology communication and collaboration platform, has finalized its formal collaboration with the Advanced Cardiac Therapies Improving Outcomes Network, or ACTION. The organizations will work together to drive innovation around patient care and clinical decision making, specifically with the goal of improving outcomes for children with heart failure.
https://solvedlib.com/n/soive-fbsin-quot-x-cos-r-dx-04-asin-4-x-cb8-cos-13-c-13sin,6642898
# Solve: $\int 8\sin^{12}(x)\cos(x)\,dx$

###### Question:

Solve: $\int 8\sin^{12}(x)\cos(x)\,dx =$

A. Asin?4(x) + c
B. $\dfrac{8(-\cos(x))^{13}}{13} + c$
C. 13sin"3 (z)cos?(z) + c 13
D. $\dfrac{8\sin^{13}(x)}{13} + c$
E. $96\sin^{11}(x)\cos(x) + c$

#### Similar Solved Questions

##### Ch 16 HW, Problem 16.15

At each corner of a square of side there are point charges of magnitude Q, 2Q, 3Q, and 4Q. Part A: Determine the magnitude of the force on the charge 2Q.

##### Use the given matrices and vectors to find each solution below. If the operation is not a legal one, write "No Solution." Show your work for full credit, where applicable: A = -1 "-[ c-[ -4 =0J v = [1 -1 2. Find A+C. Find Iy. Find CA. e) Find BA.
##### You want to estimate the average difference in the number of complaints filed by women at work and the number of complaints filed by men at work. You took an SRS of 7 businesses and collected the following data for number of complaints in a year: men 12, women 15. Create a 90% confidence interval for the average difference in the number of complaints filed by women at work and the number of complaints filed by men at work (in a year). You must do this using Table C. If this is a 2-sample interval, be su...

##### Since the standard deviation decreases when angewen s expected that t h standard deviations from the mean Consider a poput of 8.000 ini h awing dow 33. In other words of the data bebe 53. For the samping itutions e means the p n and wanderde on samples from this population come e an Asano thumb on a 1...

##### Present two diabetes resources, national or local, that can be utilized by the provider or the patient...

##### QUESTION 1 (100 marks total): Evaluate the double integral T" dA over the region A: {0 <xs4 0sysW}. Sketch the region of integration and show the limits of integration. Show all your working; present your answer in exact form and then evaluate up to four significant figures.
##### Using induction, prove that each is a solution to the corresponding recurrence relation, where $c$ is a constant and $f(n)$ a function of $n$: $a_{n}=c^{n} a_{0}+\frac{c^{n}-1}{c-1}$, where $a_{n}=c a_{n-1}+1 \text{ (assume } c \neq 1)$.

##### 17 Heed Help? 2 8 Tetal IJ IOLALonaLYOn Enn n Vaue 1 1 1 nutend Ji 1 1 junot W Ii 34140 1 1 1 statuc 1 1 1 1 1 [ntolantlclent oumdian uDins ne population ohpindoo Mee means equal Jnrijuon 01 1 1

##### Question 6: Is the molecule below a carbohydrate? HO-C-H, H-C-OH, CH2OH

##### 26) What is the formula for the IQR? Above what number (called the fence) does an observation have to be to be a high outlier? Below what number does an observation have to be to be a low outlier? The National Basketball Association (NBA) determines its scoring champion every year by averaging players' points per game. Over the last 25 years, the number averaged (mean) by the NBA scoring champion has had the following 5-number summary: Min 26.8, Q1 28.7, Med 30.1, Q3 31.2, Max 35.4. Explain to a bright person who knows no...
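One of the exercises above asks for an induction proof that $a_n = c^n a_0 + \frac{c^n - 1}{c - 1}$ solves $a_n = c\,a_{n-1} + 1$ for $c \neq 1$. The closed form is easy to sanity-check numerically before attempting the induction (a quick check, not a proof):

```python
def a_closed(n, c, a0):
    """Closed form a_n = c^n * a0 + (c^n - 1)/(c - 1), valid for c != 1."""
    return c**n * a0 + (c**n - 1) / (c - 1)

def a_recursive(n, c, a0):
    """Directly iterate the recurrence a_n = c * a_{n-1} + 1."""
    a = a0
    for _ in range(n):
        a = c * a + 1
    return a

# Compare the two for a sample choice c = 3, a0 = 2 over n = 0..9.
checks = [abs(a_closed(n, 3, 2) - a_recursive(n, 3, 2)) < 1e-9 for n in range(10)]
```

Agreement for several $n$ does not prove the formula, but it catches algebra slips before you commit to the inductive step.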
##### Problem 9 (10 points): Suppose that X, Y are two independent identically distributed random variables with the density function f(x) = λ exp(−λx), for x > 0. Consider T- and find its cumulative distribution function and density function.

##### Question 1 (1 pt): All RR Lyrae stars are about 80 to 100 times brighter than the sun. Why is that fact useful for using them to measure distances to globular clusters?

##### Sheffield Corporation began operations on January 1, 2017. During its first 3 years of operations, Sheffield reported net income and declared dividends as follows:

| Year | Net income | Dividends declared |
|------|------------|--------------------|
| 2017 | $49,700    | $0                 |
| 2018 | $126,000   | $53,300            |
| 2019 | $168,200   | $53,100            |

The following information relates to 2020. Incom...

##### Bw Vrestlg } J dmnelhy) ~cyclopenlenn Wilh HplOfth In THF/vster dolkoed by trestrncnt vth Mul h Uha pasmue) lotmed ImhW NqulaltHnIeVUnidmWanvenanereneechenetell

##### In a vacuum, two particles have charges of q1 and q2, where q1 = +3.0 C. They are separated by a distance of 0.23 m, and particle 1 experiences an attractive force of 2.7 N. What is the value of q2, with its sign?

##### Required information. NOTE: This is a multi-part question; once an answer is submitted, you will be unable to return to this part. Work of 1850 J is done by stirring a perfectly insulated beaker containing water. The specific heat capacity of water is 4186 J/kg·°C. What is the heat added (or removed) for this process? (You must provide an answer before moving to the next part.)

##### Calculate the specific heat of a metal from the following data. A container made of the metal has a mass of $3.6\ \mathrm{kg}$ and contains $14\ \mathrm{kg}$ of water. A $1.8\ \mathrm{kg}$ piece of the metal initially at a temperature of $180^{\circ}\mathrm{C}$ is dropped into the water. The container and water initially have a temperature of $16.0^{\circ}\mathrm{C}$, and the final temperature of the entire (insulated) system is $18.0^{\circ}\mathrm{C}$.

##### Question: Predict the hybridization of the atom indicated below.
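The two-charge exercise above is a direct application of Coulomb's law, $F = k\,|q_1 q_2|/r^2$: solve for $|q_2|$ and attach the sign from the fact that the force is attractive. A worked sketch of that standard approach (not the textbook's own solution):

```python
# Coulomb's law: F = k * |q1 * q2| / r**2
k = 8.99e9   # Coulomb constant, N*m^2/C^2
q1 = 3.0     # C
r = 0.23     # m
F = 2.7      # N, attractive

# Solve F = k*|q1*q2|/r^2 for |q2|.
q2_magnitude = F * r**2 / (k * q1)

# An attractive force means the charges have opposite signs,
# so q2 is negative (roughly -5.3e-12 C, i.e. about -5.3 pC).
q2 = -q2_magnitude
```

The same three-line pattern (write the law, isolate the unknown, reason separately about sign) handles most of these point-charge problems.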
http://www.cjmr.org/EN/1005-3093/home.shtml
ISSN 1005-3093 CN 21-1328/TG Started in 1987

#### Featured Articles

Nano melamine cyanurate (NMC) was synthesized by a solvothermal method, and characterized by FTIR, XRD and SEM. The effects of solvents, surfactants, reaction temperature and time on the particle size of the product were investigated. NMC can be obtained only by using distilled water as a solvent, a. . .

Chinese Journal of Materials Research, 2014, Vol. 28 (6): 401-406. DOI: 10.11901/1005.3093.2013.857

#### Just Accepted

- Effect of Intermediate Cu Layer on Microstructure and Mechanical Properties of Welded Joints of TA1/X65 Composite Plate (2017-09-27)
- Effects of strain rate on microstructure evolution and mechanical property of 316LN austenitic stainless steel at cryogenic temperature (2017-09-27)
- Effects of cooling rate and Al content on microstructure and corrosion resistance of Zn-Al alloys containing trace Nd (2017-09-27)
- Effect of Additive on Synthesis of Metastable γ-Bi2O3 and Optical Properties (2017-09-27)
- Preparation and Gas Separation Properties of PIM-CO19 Based Thermally Induced Rigid Membranes (2017-09-27)

#### Current Issue: 20 July 2017, Volume 31 Issue 7

Preparation and Compressive Property of Single-crystal Titanium Made by Multi-stage Annealing Treatment

Xiguang DENG, Songxiao HUI, Wenjun YE, Xiaoyun SONG

Chinese Journal of Materials Research. 2017, 31 (7): 547-552. DOI: 10.11901/1005.3093.2016.510

Abstract: Poly-crystal Ti with grain size larger than 12 mm was prepared through multi-stage heat treatment.
In the first stage, long-time annealing at 860℃ facilitated grain growth of the original α phase, thereby reducing the total grain boundary area; meanwhile, a low heating rate of 0.1℃/min in the second stage and a low cooling rate of 0.1℃/min in the fourth stage were adopted so that the titanium passed slowly through the phase transformation point at 883℃, in order to restrain the number of nucleation events. The orientation of the hexagonal close-packed (HCP) unit cell was constructed from the Euler angles detected by electron backscatter diffraction (EBSD), and a single-crystal compressive specimen was prepared for mechanical testing and scanning electron microscope (SEM) observation. Slip bands of $\{11\bar{2}2\}\langle 11\bar{2}3\rangle$ type were determined, and the obvious influence of grain orientation on mechanical behavior was analyzed via the Schmid factor.
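The Schmid-factor analysis mentioned at the end of the abstract reduces to $m = \cos\varphi \cos\lambda$, where $\varphi$ is the angle between the loading axis and the slip-plane normal and $\lambda$ the angle between the loading axis and the slip direction. A generic sketch in Cartesian coordinates (the vectors below are illustrative, not the paper's data, which would require converting hexagonal Miller–Bravais indices to Cartesian form first):

```python
import math

def schmid_factor(load, plane_normal, slip_dir):
    """Return m = |cos(phi)| * |cos(lambda)| for the given loading axis,
    slip-plane normal, and slip direction (vectors need not be unit length)."""
    def cos_angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norms

    return abs(cos_angle(load, plane_normal)) * abs(cos_angle(load, slip_dir))

# A 45-degree single-slip geometry: plane normal and slip direction both at
# 45 degrees to the loading axis (and perpendicular to each other), giving
# the maximum possible Schmid factor m = 0.5.
m = schmid_factor(load=(0.0, 0.0, 1.0),
                  plane_normal=(1.0, 0.0, 1.0),
                  slip_dir=(-1.0, 0.0, 1.0))
```

Grains whose orientation gives a high $m$ on some slip system yield at lower applied stress, which is the link between EBSD-measured orientation and the observed mechanical behavior.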
https://covcel.com/ged-math-shortcuts-with-calculator/
# A new way to prep for the GED Math Test

### We've combined the power of the TI-30 XS calculator with math shortcuts and created a course that guarantees you pass the GED Math Test

Are you afraid of failing the GED Math Test? Do you feel overwhelmed by math lessons? That's understandable. Thousands of people don't get their GED diploma because they feel overwhelmed and discontinue learning. As a result, they don't pass the math test.

We Have A Solution

Our new GED Math Shortcuts course teaches you how to pass the math test by effectively using the TI-30 XS scientific calculator. You WILL pass the GED Math Test. Guaranteed. It's all about efficient results: our course is designed to help you pass the GED Math Test in only 18 study hours. So, within a week you can be ready to pass your GED Math Test.

## GED Math Shortcuts with a Scientific Calculator

### $99 for 6 months

• 57 Short Video Lessons in 9 Modules
• 200 Questions with Explanations
• Quizzes at the End of Every Module
• 2 Math Practice Tests, Similar to the Real Exam
• 2 GED Ready® Vouchers for Taking the Official Practice Test
• 6 Months of Access
• 72-Hour Money-Back Guarantee
• Content Developed by Industry-Leading Experts

"This course really works. Before, I tried to pass the math test 2 times and I failed twice; after this course I got 159 points." Zack B.

How Does It Work

This course teaches you how to use the TI-30 XS calculator to answer GED Math questions and pass the math test. The TI-30 XS is the only calculator that you can use during the GED Math Test. Most students learn only a few basic functions and don't know how to maximize the power of this scientific calculator. When you learn how to use this calculator, your self-esteem grows and you feel capable of overcoming the obstacles.

FAQs

Do I need a calculator to participate in this course?
Yes, you need a TI-30 XS calculator. We provide links to resources offering free access to the digital versions of the calculator.
You can also buy the handheld calculator at every Walmart, Best Buy, or on Amazon. It costs around $15.

Will you give me the scientific calculator?
We list the app and website that offer free access to the scientific calculator, the TI-30 XS, so you can start using the course right away. If you want a handheld device (a physical TI-30 XS calculator), you can buy it on Amazon.com.

What is included in the lesson?
Every lesson has a video and a short quiz. The video shows exactly how to use the TI-30 XS to solve the type of questions that are on the GED Math tests. The quiz under the video includes explanations with screenshots of the calculator keypress history, so you can always replicate the correct moves.

Can I bring this calculator to the test center?
Yes, you can take the handheld calculator with you to the test center. However, if you will be taking the GED Math test online, you will need to use the online version of the TI-30 XS calculator. It will be provided on-screen by GED Testing Service. The online version of the calculator provided during the test is the same as the software you can download as a trial.

Can I use a different calculator?
No, you need to use the TI-30 XS calculator. On the GED Math Test, you are allowed to use this calculator ONLY.

How long does it take to prepare for the GED Math Test?
That depends on how much time you will spend on learning. Some students are ready within a week; others need a few weeks.

Can I take the GED Math Test online?
Yes, you can take the math test online.

How many points can I get on the math test after this course?
You will definitely get more than 145 points; usually, students get 155-180 points.

If you choose to withdraw from the Math Shortcuts with a Calculator Course within 72 hours after enrollment, we will refund 100% of your fee.

## Student reviews

Math is so challenging for me, but Covcel has made the process easy.
I pick a lesson and complete it; if I want more practice, I can re-watch the lesson and study what I need to know. Diane D.

Being able to use the Math Shortcuts course has been a life saver! I can log in at any time and get right to work. It is so easy to use and understand. The quality of learning is exceptional. Leann

This math course is a real game-changer; it helped me to pass the math test without any problems, and I just want to say thank you for your help. Arch

Covcel's Shortcuts course has made it so much easier to attain my diploma. These calculator lessons and practice tests got me thoroughly prepared for the math exam, without taking too much time out of my normal daily routines. Now I can start preparing for university, and I have Covcel to thank for it. Dylan

Covcel Math Shortcuts has been one of the fastest and easiest ways of understanding numerous topics. I really like this way of learning with the videos and step-by-step techniques, while also being able to take down notes on the side as I go. I have recommended Covcel prep to a few people and I will gladly do it again. Delila
https://math.stackexchange.com/questions/116350/continuous-injective-map-f-mathbbr3-to-mathbbr?rq=1
# Continuous injective map $f:\mathbb{R}^3 \to \mathbb{R}$?

How would you show that there is no continuous injective map $f:\mathbb{R}^3 \to \mathbb{R}$? I tried the approach where if $a \in \mathbb{R}$ then $\mathbb{R} \setminus \{a\}$ is clearly not connected, so there can't exist such a continuous function, because otherwise $\mathbb{R} \setminus \{a\}$ would be connected?

• That seems like a good approach. What goes wrong? – Dylan Moreland Mar 4 '12 at 16:51
• I think you're assuming the existence of a homeomorphism. If one existed, then, since $R^3-\{a\}$ is connected, so is $R-\{f(a)\}$ (and, in general, if $f:X\to Y$ is a homeomorphism, the number of cut points is the same in $X$ and $Y$). The number of cut points is a topological invariant, but it is not preserved by continuity alone. – AQP Mar 4 '12 at 16:53
• I think that you are on the right track. At a glance, $\mathbb{R}$ does not have any nontrivial connected subspaces that remain connected with the removal of an arbitrary point. – user642796 Mar 4 '12 at 16:59
• @Arthur: that's a nice observation. I incorporated a version of it into my answer. – Pete L. Clark Mar 4 '12 at 17:24

I will present an answer which can be (in principle, at least) understood by anyone who knows single-variable calculus and the definition of a continuous function $f: \mathbb{R}^n \rightarrow \mathbb{R}$. Then I will explain how to shorten the argument a little by using topological language.

Step 1: Let $f: [0,1] \rightarrow \mathbb{R}$ be a continuous function with $f(0) = f(1)$. Then there exist $x,y \in [0,1)$ such that $x \neq y$ and $f(x) = f(y)$.

Proof: We may assume $f$ is nonconstant. By the Extreme Value Theorem it assumes a minimum value $m$ and a maximum value $M$ with $m < M$. Let $x_m, x_M$ be such that $f(x_m) = m$ and $f(x_M) = M$. Without loss of generality $x_m < x_M$. By the Intermediate Value Theorem, every value in $(m,M)$ is assumed on the interval $(x_m,x_M)$.
Moreover, because $f(1) = f(0)$, the function $g: [x_M,1+x_m] \rightarrow \mathbb{R}$ given by $x \mapsto f(x)$ for $x_M \leq x \leq 1$ and $x \mapsto f(x-1)$ for $1 \leq x \leq 1+x_m$ is continuous, with $g(x_M) = M$ and $g(1+x_m) = m$, so by the Intermediate Value Theorem it takes every value in $(m,M)$ on the interval $(x_M,1+x_m)$, so that $f$ takes every value in $(m,M)$ on $(x_M,1) \cup (0,x_m) \subset [0,1] \setminus [x_m,x_M]$. Thus $f$ takes every value in $(m,M)$ at least twice and is not injective on $[0,1)$.

Step 2: Let $n$ be an integer greater than one, and let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a continuous function. Then $g: [0,1] \rightarrow \mathbb{R}$ given by $g(t) = f(\cos(2\pi t),\sin(2\pi t),0,\ldots,0)$ is continuous with $g(0) = g(1)$, so by Step 1 there are $t_1, t_2$ with $0 \leq t_1 < t_2 < 1$ such that $g(t_1) = g(t_2)$. That is, $f(\cos(2\pi t_1),\sin(2\pi t_1),0,\ldots,0) = f(\cos(2\pi t_2),\sin(2\pi t_2),0,\ldots,0)$, so $f$ is not injective.

Step 3: A softer, more topological version of this is as follows: let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be continuous. Seeking a contradiction, we suppose it is injective. Let $S^{n-1} \subset \mathbb{R}^n$ be the unit sphere. Since $S^{n-1}$ is compact, the restriction of $f$ to it gives a homeomorphism onto its image, which is a compact, connected subset of $\mathbb{R}$, hence a closed bounded interval $[a,b]$. If $a = b$ then $f$ is constant, hence not injective. Otherwise, observe that if we remove any one of the uncountably many points of $S^{n-1}$ we get a space homeomorphic to $\mathbb{R}^{n-1}$, which is connected if $n \geq 2$. However, there are only two points in $[a,b]$ whose removal leaves a connected space: the two endpoints. Contradiction!

Take two lines through the origin, $\ell_1 = \{tb_1\}$ and $\ell_2 = \{tb_2\}$, where $b_1, b_2 \in \Bbb{R}^3$ are linearly independent. Then $g_i(t) = f(tb_i)$ are injective and continuous, and therefore strictly monotone.
Hence $f(\ell_1)$ and $f(\ell_2)$ are non-degenerate intervals, both containing $f(0)$, so $f(\ell_1)\cap f(\ell_2)$ cannot equal a single point; since the lines meet only at the origin, this contradicts injectivity.

Another approach: take $a,b \in \Bbb{R}^3$ such that $f(a) < f(b)$. By the Intermediate Value Theorem, the image of every bounded connected path between $a$ and $b$ must contain the interval $[f(a),f(b)]$; since two such paths can be chosen to meet only at $a$ and $b$, this contradicts injectivity.

Suppose such an $f$ exists. Then, since $f$ is continuous, $f(\Bbb R^3)$ is a connected subset of $\Bbb R$ and hence an interval $U$. Also, since $f$ is one-to-one, the following two properties hold:

1) $U$ is not a singleton point, and

2) for each $a\in\Bbb R^3$, we have $f\bigl(\, \Bbb R^3\setminus\{a\}\,\bigr) = f(\Bbb R^3) \setminus \{\,f(a)\,\} = U\setminus\{\,f(a)\,\}$.

Now, by 1), we may (and do) choose $b\in\Bbb R^3$ such that $f(b)\in\text{Int}(U)$, where $\text{Int}(U)$ is the interior of $U$. Then, by 2), $$f\bigl(\, \Bbb R^3\setminus\{\,b\,\}\,\bigr) = U \setminus \{\,f(b)\,\};$$ which is a contradiction, since $\Bbb R^3\setminus\{\,b\,\}$ is connected whilst $U \setminus \{\,f(b)\,\}$ is not.

Another version, which requires much less topology (only the Intermediate Value Theorem), is the following. Let $f \colon \mathbb{R}^n \to \mathbb{R}$ be continuous, with $n \geq 2$. Without loss of generality, we may assume $f(0,\ldots,0) = 0$. Consider the curve $\gamma_1 \colon [-1,1] \to \mathbb{R}^n;\ x \mapsto (x,0,\ldots,0)$, and let $g := f \circ \gamma_1$. If both $a_1 := g(-1)$ and $a_2 := g(1)$ have the same sign, proceed as follows. Without loss of generality (replacing $f$ by $-f$ if necessary), assume both are positive and $a_1 \leq a_2$. Since $g(0) = 0 < a_1 \leq a_2 = g(1)$, apply the Intermediate Value Theorem to find a $c \in [0,1]$ with $g(c) = a_1$. Then $g(-1) = g(c)$ with $c \ne -1$; hence $f(\gamma_1(-1)) = f(\gamma_1(c))$ (and $\gamma_1(-1) \ne \gamma_1(c)$). If, on the other hand, $a_1$ and $a_2$ have different signs, consider the curve $\gamma_2 \colon [-1,1] \to \mathbb{R}^n;\ x \mapsto (0,x,\ldots,0)$ and $h(x) = f(\gamma_2(x))$. At least two of $h(-1), h(1), g(-1)$ and $g(1)$ will have the same sign.
Apply the earlier reasoning to find two distinct points $c$ and $d$ with $f(c) = f(d)$. This extends to show that no continuous $f \colon U \to \mathbb{R}$ with $U$ open in $\mathbb{R}^n$ can be injective (for $n \geq 2$).

Suppose there exists a continuous injective map $f: \mathbb R^3 \to \mathbb R$. Then $f:\mathbb R^3 \to f(\mathbb R^3)$ is a continuous bijection, and $f(\mathbb R^3)$ is an infinite connected set, hence a non-singleton interval in $\mathbb R$, say $I := f(\mathbb R^3)$. But then for any three distinct $a,b,c \in \mathbb R^3$, the values $f(a),f(b),f(c)$ are three distinct elements of $I$, and $f:\mathbb R^3\setminus \{a,b,c\} \to I\setminus \{f(a),f(b),f(c)\}$ is a continuous bijection, so $I\setminus \{f(a),f(b),f(c)\}$ must be connected, i.e. an interval of $\mathbb R$. But for any non-singleton interval $I$ in $\mathbb R$ and any three distinct points $x,y,z \in I$, the set $I \setminus \{x,y,z\}$ cannot be an interval. Contradiction!

[NOTE: Let $I \subseteq \mathbb R$ be a non-singleton interval and $x,y,z \in I$ three distinct points, w.l.o.g. $x<y<z$. Since $I$ is an interval, $\dfrac {x+y}2, \dfrac {y+z}2 \in I$, and also $x<\dfrac{x+y}2<y<\dfrac {y+z}2<z$, so $\dfrac {x+y}2, \dfrac {y+z}2 \in I\setminus \{x,y,z\}$. If $I\setminus \{x,y,z\}$ were an interval, the fact that $\dfrac {x+y}2 <y< \dfrac {y+z}2$ would imply $y \in I\setminus \{x,y,z\}$, a contradiction. Thus $I\setminus \{x,y,z\}$ cannot be an interval.]
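The "restrict to a loop" idea from Steps 1 and 2 can be illustrated numerically. The sketch below is my own example, not taken from the answers: it uses an arbitrary sample function $f$ and a slightly different (but standard) route to two equal values, namely $h(t) = g(t) - g(t+\tfrac12)$ with $h(0) = -h(\tfrac12)$, so a bisection search finds antipodal points on the unit circle where $f$ agrees.

```python
import math

# An arbitrary sample continuous f: R^2 -> R (any continuous map would do).
def f(x, y):
    return x**3 - 2*y + math.sin(x * y)

# Restrict f to the unit circle: g(t) = f(cos 2*pi*t, sin 2*pi*t), so g(0) = g(1).
def g(t):
    return f(math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

# h(t) = g(t) - g(t + 1/2) satisfies h(0) = -h(1/2), so by the Intermediate
# Value Theorem h has a root t* in [0, 1/2]; then g(t*) = g(t* + 1/2), and the
# two circle points are antipodal, hence distinct.
def find_double_value():
    a, b = 0.0, 0.5
    ha = g(a) - g(a + 0.5)
    for _ in range(100):  # plain bisection, keeping a sign change in [a, b]
        m = (a + b) / 2
        hm = g(m) - g(m + 0.5)
        if ha * hm <= 0:
            b = m
        else:
            a, ha = m, hm
    return (a + b) / 2

t = find_double_value()
print(abs(g(t) - g(t + 0.5)))  # ~0: f takes the same value at two distinct points
```

For this particular $f$, $h(0) = f(1,0) - f(-1,0) = 2 > 0$, so the initial sign change required by the bisection is present; the same scheme works for any continuous $f$ with $h(0) \neq 0$ (and if $h(0) = 0$, the antipodal pair is found immediately).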
https://chemistry.stackexchange.com/questions/48222/on-stretching-a-rubberband-does-the-entropy-increase-or-decrease
# On stretching a rubber band, does the entropy increase or decrease?

It seems that on stretching a rubber band, entropy, or the degree of disorder, is likely to increase. Is this the correct answer? My text says that on stretching, the arrangement of particles becomes more 'ordered' and hence entropy 'decreases'. How?

Think of it this way: when an elastic polymer is in its relaxed state, its molecules are all tangled up with each other and have no particular direction to them, but when you apply a force, i.e. stretch the rubber band, you end up disentangling some of the polymer molecules. We are not stretching the actual polymer molecules, just changing the way they are organised. This disentanglement corresponds to a reduction in the number of microstates that the system occupies and thus a decrease in entropy. In simpler words, there are more ways for polymer chains to be jumbled up with each other (more microstates) than for them to be aligned.

More simply: if a chain were totally stretched out, it would have only one possible conformation. By coiling itself, it increases the number of conformations it can have, and therefore its entropy. A more rigorous treatment can be found if you read up on the ideal chain, which serves as a model for such systems.

Thus, assuming that no other energy exchange takes place, the tendency of the elastic band to shrink back to its original conformation is due to something called an entropic force, i.e. a tendency of the system to move to a configuration that maximises entropy.

Some details (Note: I am not quite confident with this material and might've made some mistakes; if that is the case, I am sure someone here will correct me.)

Consider the following model system: the chain is made of $n$ rigid segments of length $b$; thus, the contour length of the chain is $L = nb$. Also, assume that the chain has one fixed end and one free end, and denote the distance between the two by $r$.
Then one can derive that the probability of finding the chain ends a distance $r$ apart is given by the following Gaussian distribution: $$P(r,n)\,\mathrm dr = 4\pi r^2 \left(\frac{2\pi nb^2}{3}\right)^{-3/2}\exp\left(\frac{-3r^2}{2nb^2}\right)\mathrm dr$$ The classical entropy is approximately $S = k_\mathrm{B}\ln(P(r,n))$. From here, one can go on to show that for an ideal chain, maximising its entropy means pulling its ends closer together (i.e. reducing $r$ from a stretched value back toward the most probable one).

• Nice job of articulating the concept of configurational entropy as it applies to polymer molecules. Mar 20 '16 at 20:18
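The claim that coiled conformations vastly outnumber stretched ones can be checked with a quick Monte Carlo sketch of the freely jointed (ideal) chain. The code below is my own illustration, with arbitrary example values for $n$, $b$, and the trial count: sampling random conformations shows that the typical end-to-end distance is about $b\sqrt{n}$, far below the full extension $nb$, in line with $\langle r^2 \rangle = nb^2$ for the ideal chain.

```python
import math
import random

# One random conformation of a freely jointed chain: n rigid segments of
# length b, each pointing in a uniformly random 3D direction; returns the
# end-to-end distance r.
def end_to_end_distance(n, b=1.0):
    x = y = z = 0.0
    for _ in range(n):
        # uniform random direction on the unit sphere
        phi = random.uniform(0.0, 2 * math.pi)
        cos_theta = random.uniform(-1.0, 1.0)
        sin_theta = math.sqrt(1.0 - cos_theta**2)
        x += b * sin_theta * math.cos(phi)
        y += b * sin_theta * math.sin(phi)
        z += b * cos_theta
    return math.sqrt(x*x + y*y + z*z)

random.seed(0)
n, b, trials = 100, 1.0, 2000
r = [end_to_end_distance(n, b) for _ in range(trials)]
rms = math.sqrt(sum(ri * ri for ri in r) / trials)

# Theory: <r^2> = n b^2, so the rms end-to-end distance is b*sqrt(n) = 10,
# while the fully stretched chain would have r = n*b = 100.
print(rms)  # typically close to 10
```

In other words, almost all conformations sit at $r \sim b\sqrt{n}$; stretching the chain toward $r = nb$ forces it into a vanishingly small fraction of its microstates, which is exactly the entropy decrease described above.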