url       stringlengths  14     2.42k
text      stringlengths  100    1.02M
date      stringlengths  19     19
metadata  stringlengths  1.06k  1.1k
http://mathoverflow.net/questions/70371/greatest-common-divisor-of-a2n-1-and-b2n-1/70389
Greatest common divisor of a^{2^n}-1 and b^{2^n}-1 Let a and b be coprime integers. Do we know, expect, or unexpect that there are infinitely many primes p which divide $\gcd(a^{2^n} - 1, b^{2^n}-1)$ for some n? Certainly any Fermat prime will divide both if I let n get large enough, but one doesn't know whether there are infinitely many of those. - Add some hypothesis ($a,b$ multiplicatively independent?) to avoid cases like $b=a^2$. – Felipe Voloch Jul 14 '11 at 20:55 Also have a look at some recent papers of Corvaja and Zannier. – Felipe Voloch Jul 14 '11 at 20:58 @Felipe: If $b=a^2$, then the gcd is just $a^{2^n}-1$, so the problem becomes easier since Jordan is only asking about the support problem, i.e., the set of primes dividing at least one number in a sequence. And of course, in this easier case, Bang (before Zsigmondy) proved that all but finitely many terms in the sequence $a^N-1$ have a primitive prime divisor. So Support($a^{N_i}-1$) is infinite for any increasing sequence of integers $N_1,N_2,\dots$. – Joe Silverman Jul 14 '11 at 22:39 Drat, I was kind of hoping Joe would know the answer to this one. – JSE Jul 14 '11 at 22:43 Hah! My initial impression after thinking about it for a few minutes is that it looks very hard. Even $\gcd(a^n-1,b^n-1)$ is hard, e.g., it is not known to equal 1 infinitely often (assuming $a$ and $b$ mult. indep.). Of course, the support problem for $\gcd(a^n-1,b^n-1)$ is trivial by Fermat's little theorem. But $2^n$ is such a sparse sequence, I don't see where to begin. Here's a question: Is the support of $\gcd(a^{n^2}-1,b^{n^2}-1)$ infinite? Follows from "infinitely many primes of the form $n^2+1$", but do we have the tools to prove this unconditionally? – Joe Silverman Jul 14 '11 at 23:18 One can rewrite your problem as follows: For $p$ prime, $p\mid a^{2^n}-1$ for some $n$ is equivalent to $\mathrm{ord}_{\mathbb{F}_p^\times}(a)$ being a power of $2$. The probability for a random element of the multiplicative group $\mathbb{F}_p^\times$ to have order a power of $2$ is $\frac{2^n}{p-1}$, where $n$ is chosen maximal among the natural numbers $m$ with $2^m \mid p-1$. A naive (hopefully not too naive) heuristic for the expected number of primes dividing both $a^{2^n}-1$ and $b^{2^n}-1$ for some $n$ is, assuming that both conditions are independent, $$\sum_{n\in\mathbb{N}} \; \sum_{\substack{p\in\mathbb{P} \\ n \text{ maximal with } p \equiv 1 \bmod 2^n}} \left(\frac{2^n}{p-1}\right)^2 \approx \sum_{n\in\mathbb{N}} \sum_{q\in\mathbb{N}} \frac{1}{\log(q\cdot 2^n+1)\cdot q^2}.$$ The approximation uses the heuristic that the probability for a number $m$ to be prime is about $\frac{1}{\log m}$. As the latter sum diverges, one would expect that infinitely many primes divide your greatest common divisor for some $n$. - Boiling this line of thought down even further: we expect (do we?) that there are infinitely many primes of the form $p=3\cdot 2^n+1$, and the "random element" heuristic above predicts that $1/9$ of these primes will eventually divide both $a^{2^n}-1$ and $b^{2^n}-1$; this already gives infinitely many primes dividing these gcds. – Greg Martin Jul 22 '11 at 22:47 A comment on one of Joe's questions: Let $B$ be any real number. It is known unconditionally that there are infinitely many $m$ for which $\phi(m)$ is a square and for which the smallest prime factor of $m$ exceeds $B$. One can even take $m$ as a product of two primes here; see, e.g., article 4 from http://www.integers-ejcnt.org/vol11a.html or an arXiv preprint of Tristan Freiberg.
If we choose $B$ larger than $|a|$ and $|b|$, then $m \mid \gcd(a^{\phi(m)}-1, b^{\phi(m)}-1)$, and so there is a prime $> B$ in the support of $\gcd(a^{n^2}-1, b^{n^2}-1)$. - Well, I guess I've actually made things too hard here, since $\gcd(a^n-1, b^n-1) \mid \gcd(a^{n^2}-1, b^{n^2}-1)$; hence the support problem is trivial, again by Fermat's little theorem. I'll leave my answer up for now though. –  so-called friend Don Jul 17 '11 at 2:04 Good point, I guess that was as silly question. –  Joe Silverman Jul 17 '11 at 2:46 Just to get a feeling for what's going on here, I asked Maple for $\gcd(2^{2^n}-1,3^{2^n}-1)$ for $n=1,2,\dots,20$ and got 1 for $n=1$, 5 for $n=2,3$, $85=5\cdot17$ for $n=4,5,6,7$, $21845=5\cdot17\cdot257$ for $n=8,\dots,15$, $1431655765=5\cdot17\cdot257\cdot65537$ for $n=16$ to $n=19$, all pretty much as expected, then $19515599812384085=5\cdot17\cdot257\cdot65537\cdot13631489$ for $n=20$. The first few results are as expected from the question statement, as 5, 17, 257, and 65537 are Fermat primes. 13631489 is a factor of a Fermat number. - Of course, each term is divisible by the previous term. A slightly harder, but maybe more natural, question, would be the set of primes dividing some term in the sequence $\gcd(2^{2^n}+1,3^{2^n}+1)$. More generally, some subsequence of $\gcd(\Phi_N(2),\Phi_N(3))$, where~$\Phi_N$ is the cyclotomic polynomial. In this case, we're taking $N=2^n$ for $n=1,2,3,...$. –  Joe Silverman Jul 15 '11 at 0:59 It seems that $\gcd(2^{2^n}+1,3^{2^n}+1)$ is 5 for $n=1$ and 1 for $2\le n\le20$. –  Gerry Myerson Jul 15 '11 at 1:12 @Gerry: Do you want to conjecture it's 1 for all $n\ge2$? Here's an amusing example. Consider the sequence $A_n=\gcd(2^n+3^n+1,2^n+7^n+2)$. I checked that $A_n=1$ for all $n\le5000$. Further, the support of the sequence $(A_n)$ contains no primes smaller than 5000. Challenge: Prove that $A_n=1$ for infinitely many $n$. I have no idea how to begin to attack this, nor the easier(?) case of a function field analogue over $\mathbb{C}[T]$. For $\mathbb{C}[T]$, Ailon and Rudnick prove (under suitable hypotheses) that $\gcd(a(T)^n-1,b(T)^n-1)=1$ for lots of $n$. –  Joe Silverman Jul 15 '11 at 2:43 @Joe, I am far too timid to make a conjecture based on $2\le n\le20$. A bit closer to the ground, Ilan Vardi found $\gcd(n^7+19,(n+1)^7+19)=1$ for $n\lt8424432925592889329288197322308900672459420460792433$ but the equality fails for the next $n$. –  Gerry Myerson Jul 15 '11 at 5:58 @Gerry $\gcd(2^{2^n}+1,3^{2^n}+1)=1$ for $2 \le n \le 29$. Took about 30 minutes with ntl (up to 28 took 15 minutes in sage). $\gcd(2^{2^n}-1,3^{2^n}-1)=19515599812384085$ for $20 \le n \le 29$. –  joro Jul 15 '11 at 10:50
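The numerical experiments quoted in the comments (Maple, sage, ntl) are easy to reproduce. Here is a minimal Python sketch, using only the standard library and capping $n$ at 16 to keep the integers small (larger $n$ work too, just more slowly):

```python
from math import gcd

# g(n) = gcd(2^(2^n) - 1, 3^(2^n) - 1), computed by repeated squaring.
a, b = 2, 3                      # a^(2^0), b^(2^0)
for n in range(1, 17):
    a, b = a * a, b * b          # now a = 2^(2^n), b = 3^(2^n)
    print(n, gcd(a - 1, b - 1))
```

For $n \le 16$ the output matches the table in the answer above: the gcds are products of the Fermat primes 5, 17, 257, 65537, with the new factor 13631489 only appearing at $n = 20$.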
2015-01-27 23:20:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.970504879951477, "perplexity": 241.0258191377809}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122039674.71/warc/CC-MAIN-20150124175359-00108-ip-10-180-212-252.ec2.internal.warc.gz"}
http://www.gradesaver.com/black-beauty/q-and-a/who-were-the-dogs-chasing-why-were-they-doing-so--why-were-men-in-green-coats-following-them-299185
# Who were the dogs chasing? Why were they doing so? Why were men in green coats following them?
2018-01-23 00:31:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8462671637535095, "perplexity": 6186.612429356555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891546.92/warc/CC-MAIN-20180122232843-20180123012843-00326.warc.gz"}
https://www.revision.co.zw/experiment-current-voltage/
# Experiment: Current and Voltage ### ZIMSEC O Level Combined Science Notes: Experiment: Current and Voltage Aim: To show how current changes with voltage and to find the resistance of a resistor Materials: Three/four 1.5 V battery cells, ammeter, lamp, resistor, switch, leads with crocodile clips Method (diagram: Current and Voltage) 1. Set up the series circuit using one cell as shown in the diagram above 2. Measure the current shown on the ammeter when the switch is closed 3. Place another cell in series with the first one 4. Measure the current flowing through the circuit again 5. Repeat these steps with 3 and then 4 cells, measuring the current each time 6. Put the results in a table and calculate the resistance 7. Repeat the experiment using a lamp instead of the resistor 8. Use the data to plot a graph of voltage against current 9. The gradient of this graph is the resistance 10. The formula for finding the gradient is: $\text{Gradient} = \dfrac{\text{vertical change (V)}}{\text{horizontal change (I)}}$ Results and Observations • When the voltage increases, the current passing through the circuit also increases • Current varies in direct proportion with voltage • The resistance does not change when the voltage increases
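A minimal sketch of the analysis step in points 8–10, assuming hypothetical voltage and current readings (the notes themselves give no numbers): fit a straight line to voltage against current and read the resistance off the gradient.

```python
import numpy as np

# Hypothetical readings for one resistor: one (voltage, current) pair per
# number of cells used. These values are illustrative, not from the notes.
voltages = np.array([1.5, 3.0, 4.5, 6.0])      # V
currents = np.array([0.30, 0.61, 0.89, 1.21])  # A

# Gradient of the voltage-against-current graph = resistance (Ohm's law V = IR).
gradient, intercept = np.polyfit(currents, voltages, 1)
print(f"Estimated resistance: {gradient:.2f} ohms")
```

For an ohmic resistor the points lie close to a straight line through the origin, which is why the gradient, and hence the resistance, stays the same as the voltage increases.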
2018-12-10 17:31:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 1, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4829547703266144, "perplexity": 2911.620467055022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823382.1/warc/CC-MAIN-20181210170024-20181210191524-00473.warc.gz"}
https://en.wikipedia.org/wiki/Solar_rotation
# Solar rotation The solar rotation can be seen in the background Solar rotation varies with latitude because the Sun is composed of a gaseous plasma. The rate of rotation is observed to be fastest at the equator (latitude φ = 0 °), and to decrease as latitude increases. At the equator the solar rotation period is 24.47 days and almost 38 days at the poles. The differential rotation rate is usually described by the equation: ${\displaystyle \omega =A+B\,\sin ^{2}(\varphi )+C\,\sin ^{4}(\varphi )}$ where ω is the angular velocity in degrees per day, φ is the solar latitude and A, B, and C are constants. The values of A, B, and C differ depending on the techniques used to make the measurement, as well as the time period studied.[1] A current set of accepted average values[2] is: A= 14.713 ± 0.0491 °/d B= -2.396 ± 0.188 °/d C= -1.787 ± 0.253 °/d ## Sidereal rotation At the equator the solar rotation period is 24.47 days. This is called the sidereal rotation period, and should not be confused with the synodic rotation period of 26.24 days, which is the time for a fixed feature on the Sun to rotate to the same apparent position as viewed from Earth. The synodic period is longer because the Sun must rotate for a sidereal period plus an extra amount due to the orbital motion of the Earth around the Sun. Note that astrophysical literature does not typically use the equatorial rotation period, but instead often uses the definition of a Carrington rotation: a synodic rotation period of 27.2753 days (or a sidereal period of 25.38 days). This chosen period roughly corresponds to rotation at a latitude of 26 deg, which is consistent with the typical latitude of sunspots and corresponding periodic solar activity. When the Sun is viewed from the "north" (above the Earth's northern pole) solar rotation is counterclockwise. For a person standing on the North Pole, sunspots would appear to move from left to right across the face of the Sun. ### Bartels' Rotation Number Bartels' Rotation Number is a serial count that numbers the apparent rotations of the sun as viewed from Earth, and is used to track certain recurring or shifting patterns of solar activity. For this purpose, each rotation has a length of exactly 27 days, close to the synodic Carrington rotation rate. Julius Bartels arbitrarily assigned rotation one day one to 8 February 1832. The serial number serves as a kind of calendar to mark the recurrence periods of solar and geophysical parameters. ### Carrington rotation Five year video of Sun, one frame per Carrington period The Carrington rotation is a system for comparing locations on the Sun over a period of time, allowing the following of sunspot groups or reappearance of eruptions at a later time. Because the Solar rotation is variable with latitude, depth and time, any such system is necessarily arbitrary and only makes comparison meaningful over moderate periods of time. Solar rotation is arbitrarily taken to be 27.2753 days for the purpose of Carrington rotations. Each rotation of the Sun under this scheme is given a unique number called the Carrington Rotation Number, starting from November 9, 1853. (The Bartels Rotation Number[3] is a similar numbering scheme that uses a period of exactly 27 days and starts from February 8, 1832.) The heliographic longitude of a solar feature conventionally refers to its angular distance relative to the central meridian, i.e. that which the Sun-Earth line defines. 
The "Carrington longitude" of the same feature refers it to an arbitrary fixed reference point of an imagined rigid rotation, as defined originally by Carrington. Richard Christopher Carrington determined the solar rotation rate from low latitude sunspots in the 1850s and arrived at 25.38 days for the sidereal rotation period. Sidereal rotation is measured relative to the stars, but because the Earth is orbiting the Sun, we see this period as 27.2753 days. It is possible to construct a diagram with the longitude of sunspots horizontally and time vertically. The longitude is measured by the time of crossing the central meridian and based on the Carrington rotations. In each rotation, plotted under the preceding ones, most sunspots or other phenomena will reappear directly below the same phenomenon on the previous rotation. There may be slight drifts left or right over longer periods of time. The Bartels "musical diagram" or the Condegram spiral plot are other techniques for expressing the approximate 27-day periodicity of various phenomena originating at the solar surface. ## Using sunspots to measure rotation The rotation constants have been measured by measuring the motion of various features ("tracers") on the solar surface. The first and most widely used tracers are sunspots. Though sunspots had been observed since ancient times, it was only when the telescope came into use that they were observed to turn with the Sun, and thus the period of the solar rotation could be defined. The English scholar Thomas Harriot was probably the first to observe sunspots telescopically as evidenced by a drawing in his notebook dated December 8, 1610, and the first published observations (June 1611) entitled “De Maculis in Sole Observatis, et Apparente earum cum Sole Conversione Narratio” ("Narration on Spots Observed on the Sun and their Apparent Rotation with the Sun") were by Johannes Fabricius who had been systematically observing the spots for a few months and had noted also their movement across the solar disc. This can be considered the first observational evidence of the solar rotation. Christopher Scheiner (“Rosa Ursine sive solis”, book 4, part 2, 1630) was the first to measure the equatorial rotation rate of the Sun and noticed that the rotation at higher latitudes is slower, so he can be considered the discoverer of solar differential rotation. Each measurement gives a slightly different answer, yielding the above standard deviations (shown as +/-). St. John (1918) was perhaps the first to summarise the published solar rotation rates, and concluded that the differences in series measured in different years can hardly be attributed to personal observation or to local disturbances on the Sun, and are probably due to time variations in the rate of rotation, and Hubrecht (1915) was the first one to find that the two solar hemispheres rotate differently. A study of magnetograph data showed a synodic period in agreement with other studies of 26.24 days at the equator and almost 38 days at the poles.[4] ## Internal Solar Rotation Internal rotation in the Sun, showing differential rotation in the outer convective region and almost uniform rotation in the central radiative region. The transition between these regions is called the tachocline. Until the advent of helioseismology, the study of wave oscillations in the Sun, very little was known about the internal rotation of the Sun. 
The differential profile of the surface was thought to extend into the solar interior as rotating cylinders of constant angular momentum.[5] Through helioseismology this is now known not to be the case and the rotation profile of the Sun has been found. On the surface the Sun rotates slowly at the poles and quickly at the equator. This profile extends on roughly radial lines through the solar convection zone to the interior. At the tachocline the rotation abruptly changes to solid-body rotation in the solar radiation zone.[6] ## References 1. ^ Beck, J. (2000). "A comparison of differential rotation measurements". Solar Physics. 191: 47–70. Bibcode:2000SoPh..191...47B. doi:10.1023/A:1005226402796. 2. ^ Snodgrass, H.; Ulrich, R. (1990). "Rotation of Doppler features in the solar photosphere". Astrophysical Journal. 351: 309–316. Bibcode:1990ApJ...351..309S. doi:10.1086/168467. 3. ^ Bartels, J. (1934), "Twenty-Seven Day Recurrences in Terrestrial-Magnetic and Solar Activity, 1923-1933", Terrestrial Magnetism and Atmospheric Electricity, 39 (3): 201–202a, Bibcode:1934TeMAE..39..201B, doi:10.1029/TE039i003p00201 4. ^ 5. Astronomy and Astrophysics, vol. 233, no. 1, July 1990, p. 220-228. http://adsabs.harvard.edu/full/1990A%26A...233..220S 5. ^ Glatzmaier, G. A (1985). "Numerical simulations of stellar convective dynamos III. At the base of the convection zone". Solar Physics. 125: 1–12. Bibcode:1985GApFD..31..137G. doi:10.1080/03091928508219267. 6. ^ Christensen-Dalsgaard J. & Thompson, M.J. (2007). The Solar Tachocline:Observational results and issues concerning the tachocline. Cambridge University Press. pp. 53–86. • Cox, Arthur N., Ed. "Allen's Astrophysical Quantities", 4th Ed, Springer, 1999. • Javaraiah, J., 2003. Long-Term Variations in the Solar Differential Rotation. Solar Phys., 212 (1): 23-49. • St. John, C., 1918. The present condition of the problem of solar rotation, Publications of the Astronomical Society of the Pacific, V.30, No. 178, 318-325.
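As a quick illustration of the differential-rotation law quoted at the top of the article, here is a short sketch (Python, using the Snodgrass–Ulrich average constants given above) that evaluates ω(φ) and the corresponding sidereal period at a few latitudes:

```python
import math

# omega(phi) = A + B*sin^2(phi) + C*sin^4(phi), in degrees per day,
# with the average constants quoted in the article.
A, B, C = 14.713, -2.396, -1.787

def omega(phi_deg):
    s2 = math.sin(math.radians(phi_deg)) ** 2
    return A + B * s2 + C * s2 ** 2

for phi in (0, 15, 30, 45, 60, 75):
    w = omega(phi)
    print(f"latitude {phi:2d} deg: {w:6.3f} deg/day -> sidereal period {360 / w:5.2f} days")
```

At φ = 0 this reproduces the 24.47-day equatorial sidereal period; at high latitudes this particular fit gives periods in the low-to-mid 30s of days, somewhat shorter than the roughly 38-day polar value from sunspot and magnetograph studies, reflecting the spread between measurement techniques mentioned in the article.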
2017-01-25 00:53:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7079869508743286, "perplexity": 1494.9732137777048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00298-ip-10-171-10-70.ec2.internal.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=118&t=15897&p=40073
## Why can't a particle in a container have zero energy? $E_{n}=\frac{h^{2}n^{2}}{8mL^{2}}$ Yinhan_Liu_1D Posts: 51 Joined: Sat Sep 24, 2016 3:00 am ### Why can't a particle in a container have zero energy? Is it because that the zero-point energy is already E1? Navarro_Bree_1D Posts: 24 Joined: Wed Sep 21, 2016 3:00 pm ### Re: Why can't a particle in a container have zero energy? According to the website, "Fermilab Today," at the quantum scale, space never has zero energy because electrons have both particle-like and wave-like properties. Their constant movement can be measured in kinetic energy. From my understanding, I think a particle in a container cannot have zero energy due to its particle-like and wave-like functions. Source: https://www.fnal.gov/pub/today/archive/ ... dmore.html Katie 1E Posts: 21 Joined: Fri Sep 29, 2017 7:04 am Been upvoted: 2 times ### Re: Why can't a particle in a container have zero energy? I think it's also because the lowest possible energy level is n=1, which by default means that the electron has energy. Also, a particle with no energy means that it s completely motionless, which is impossible.
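A small numerical illustration of the formula in the thread title, $E_{n}=\frac{h^{2}n^{2}}{8mL^{2}}$: even the lowest level $n=1$ carries nonzero energy. The box length below is an assumed illustrative value, not something from the thread.

```python
# Zero-point energy of a particle in a 1-D box, E_n = h^2 n^2 / (8 m L^2).
h   = 6.62607015e-34    # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
L   = 1e-9              # assumed box length: 1 nm
eV  = 1.602176634e-19   # joules per electron-volt

E1 = h**2 / (8 * m_e * L**2)      # n = 1, the lowest allowed level
print(f"E_1 = {E1:.3e} J = {E1 / eV:.3f} eV")   # about 0.376 eV, not zero
```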
2021-03-07 03:08:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3905653655529022, "perplexity": 1376.851628888976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376006.87/warc/CC-MAIN-20210307013626-20210307043626-00488.warc.gz"}
http://physics.stackexchange.com/questions/5373/quantum-death-like-heat-death-possible
# Quantum death like heat death possible? Quantum decoherence is an irreversible process which is the result of interaction of the system with its environment. It prevents interference due to lack of coherence. Environment acts just like a heat bath. Now my question is, is the different branches of the wave function of the universe becoming gradually more and more decoherent so that in a far future no trace of interference can occur? In other words I would like to know whether there will be any quantum death (like heat death of thermodynamics) of the universe if one waits for a sufficiently long time? - ""Environment acts just like a heat bath"" I doubt that. A heat bath maybe enhances decoherence, but it does a lot more. And why is decoherence of "universe wave function" quantum death? –  Georg Feb 17 '11 at 16:41 Why would you think the universe will have a "wave function"? In my opinion the universe viewed quantum mechanically has a density matrix composed of all the zillion independent state functions of atoms, molecules, light, etc. It is thermodynamic really. –  anna v Feb 17 '11 at 16:50 @anna v: Of course there is a wave function of the universe as Hartle and Hawking showed us. Otherwise what do you mean by the subject called "quantum cosmology"? –  user1355 Feb 17 '11 at 17:00 @anna v: Are you not aware of this famous paper? link.aps.org/doi/10.1103/PhysRevD.28.2960 –  user1355 Feb 17 '11 at 17:02 Dear sb1, if you consider loss of coherence "quantum death", I assure you that 99.99999999999999999999999999999% of the quantum death is completed within a tiny fraction of a second - which may be much shorter than the Planck time for macroscopic objects. In principle, the coherence is always there if you could trace the environment, directly or indirectly, but it's totally inconsequential for physics as an empirical science. The decoherence occurs almost instantly. I still don't understand why you call it "death". It's just the appearance of the classical intuition from quantum mechanics. –  Luboš Motl Feb 17 '11 at 19:01 This question is a bit strange, and I tend to agree with Anna that this is related to thermodynamics. The entropy involved here is an entanglement entropy. Suppose you have system A and system B which form an entanglement. The entanglement entropy of system A is $$S_A~=~-Tr[\rho_A log\rho_A],$$ which equals the entanglement entropy of system B. If the two systems form a pure state then $S_{A+B}~=~0$. The entanglement entropy comes from the fact you have access to only one part of the density matrix $\rho~=~\rho_A\otimes\rho_B$. This plays a role in cosmology at large. During inflation the vacuum energy density was huge. The cosmological constant $\Lambda~\simeq~(8\pi G/c^4)\rho$, is very large and drives a rapid exponential expansion of space. There is some theoretical controversy here, but while the energy density of the vacuum was very large, the entropy was not that large. The entropy is a measure of the number, N, of degrees of freedom in a system that are coarse grained into a macrostate $S~=~k log(N)$. The other oddball factor is that while the temperature was high, the entropy was low due to the negative heat capacity of event horizons in spacetime. During inflation the event horizon was smaller than a proton, and the entropy is proportional to the area of the horizon $S~=~k A/4L_p^2$. 
The bang came about because the exponential expansion rapidly came to a halt, the cosmological constant dropped to a small value (the vacuum energy dropped enormously) and the cosmological horizon adjusted to a very large value. It is now out about $10^{10}$ light years. This means a relatively small number of degrees of freedom enter into complicated entanglements which are not accessible in a local region. The entanglement entropy increases, and these states appear in a highly thermalized form. This is the bang and fire of the big bang. It is a form of latent heat of fusion in a phase transition. The large vacuum energy $\rho~\simeq~10^{100}GeV^4$ crashed into about 10 GeV^4, and the energy gap assumed the form of a thermalized gas of particles. This was the initial generation of a huge amount of entropy in the early universe. Subsequently entropy is in the form of black holes, radiation and so forth. It is interesting to think we can understand this all from the perspective of quantum mechanics. Into the future the universe will end up as a de Sitter vacuum. In the question: de sitter cosmologic limit I indicated how the universe will over a vast period of time will decay from the de Sitter vacuum configuration with a small vacuum energy to a Minkowski spacetime. The horizon will retreat of to “infinity,” which means the entropy becomes infinite. It might be problematic to think of infinite entropy. There might be some sort of cut-off in this process. On the other hand this is just a measure of how the vacuum decays away to zero and there is no energy. The retreat of the cosmological horizon off to infinity is probably a measure of continued quantum entanglement process, which proceeds almost indefnately. - Thanks for the answer. Of course it is related with thermodynamics and entropy generation but what I really wanted to know is whether it (decoherence) means a complete absence of any interference in the long run. –  user1355 Feb 17 '11 at 17:24 In a measurement of a superposed pair of states the superposition or overlap is replaced by an entanglement. So what you ask could be answered in the affirmative. –  Lawrence B. Crowell Feb 17 '11 at 19:09
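A small numerical check of the entanglement-entropy formula quoted in the answer, $S_A = -\mathrm{Tr}[\rho_A \log \rho_A]$, for the simplest case of a two-qubit Bell pair; this is a sketch added for illustration, not part of the original answer.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): pure as a whole (S_{A+B} = 0),
# but each half taken alone has entanglement entropy log 2.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)   # indices (i, k, j, l) for subsystems A,B

rho_A = np.einsum('ikjk->ij', rho)             # partial trace over subsystem B

p = np.linalg.eigvalsh(rho_A)
S_A = -sum(x * np.log(x) for x in p if x > 1e-12)
print(S_A, np.log(2))                          # both ~0.693
```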
2014-07-23 05:01:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.701292872428894, "perplexity": 402.6806157613364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997874283.19/warc/CC-MAIN-20140722025754-00094-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/95031/vector-bundle-of-rank-2-and-degree-1-on-an-elliptic-curve
# Vector bundle of rank 2 and degree 1 on an elliptic curve I'm trying to prove the following problem in the Deformation Theory book by Hartshorne. Any normalized vector bundle $\mathcal E$ of rank 2 and degree 1 on an elliptic curve $\mathcal C$ can be written as a non-split extension $0 \to \mathcal{O}_C \to \mathcal{E} \to \mathcal{O}_C(p) \to 0$ for a uniquely determined point $p$ (up to isomorphism). It is easy to see that such data gives a unique non-split extension. But the converse direction is not easy to show. I thought the proof of the classification of vector bundles on $\mathbb{P}^1$ might help me, but I failed. How can I do this? I appreciate any help or references. - Since $\textrm{rank}(\mathcal{E})=2$ and $\deg(\mathcal{E})=1$, by Riemann-Roch we have $\chi(\mathcal{E})=1$, hence $h^0(\mathcal{E}) \geq 1$. This means that there is a non-zero map $\mathcal{O}_C \to \mathcal{E}$. If this map vanishes at some point, then taking the saturation of its image we get an exact sequence $$0 \to L_1 \to \mathcal{E} \to L_2 \to 0,$$ with $\deg L_1 = d \geq 1$ and $\deg L_2 = 1-d$. Therefore $$\deg (L_1^{-1} \otimes L_2)= 1-2d <0,$$ hence $H^1(L_2^{-1} \otimes L_1)=H^0(L_1^{-1} \otimes L_2)=0$ by Serre duality. This shows that $\mathcal{E}=L_1 \oplus L_2$, a contradiction. Therefore the map $\mathcal{O}_C \to \mathcal{E}$ is nowhere vanishing, so it gives a subbundle of $\mathcal{E}$ whose cokernel is a line bundle of degree $1$. This precisely means that there exists a point $p \in C$ such that one has a non-split short exact sequence $$0 \to \mathcal{O}_C \to \mathcal{E} \to \mathcal{O}_C(p) \to 0.$$ Thank you so much! It's a shame that I'd almost forgotten the general Riemann-Roch formula. By the way, I've just found out that the normalized condition prevents $\mathcal{E}$ from being a sum of invertible sheaves, so everything works just fine. – Choa Apr 24 '12 at 16:21
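For readers who want the first step spelled out, here is the standard Riemann–Roch computation behind $\chi(\mathcal{E})=1$ for a vector bundle on a curve of genus $g$, with $g=1$ here:

```latex
\chi(\mathcal{E}) \;=\; h^0(\mathcal{E}) - h^1(\mathcal{E})
\;=\; \deg(\mathcal{E}) + \operatorname{rank}(\mathcal{E})\,(1-g)
\;=\; 1 + 2\,(1-1) \;=\; 1,
\qquad\text{so } h^0(\mathcal{E}) \,\ge\, 1 .
```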
2016-05-07 01:01:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9670538306236267, "perplexity": 66.32542791780776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461864953696.93/warc/CC-MAIN-20160428173553-00142-ip-10-239-7-51.ec2.internal.warc.gz"}
http://stats.stackexchange.com/questions?sort=newest
# All Questions 5 views ### How to sample when you don't know the distribution I'm fairly new to statistics (a handful of beginner-level Uni courses) and was wondering about sampling from unknown distributions. Specifically, if you have no idea about the underlying distribution, ... 6 views ### Cannot intuitively grasp “Standard normal deviate” I cannot intuitively grasp the meaning of "Standard normal deviates". I think It would help if you provided me with either/all of the following: (i) real life examples of their application, (ii) an ... 5 views ### Tobit versus OLS There is a dependent variable which is measured in £ and can take the form of £0-£100,000. It is effectively the value of the payment made. If it takes the form of £0 it means a payment was not made ... 5 views ### Expected lifetime of a device with two parts each having spares? Consider a device with two parts : (1) and (2). Part (1) has 2 spares and part (2) has one spare. Lifetime of part (1) and its spares have iid exponential distribution with rate lambda. Lifetime of ... 8 views ### Estimating a probability distribution with a discrete and continuous part This is a question more for advice and a suggested starting point than anything else (though anything else is cool as well ) The data that I have is something like this - 1,000,000 data points of ... 4 views ### Significant autocorrelation in time series decomposition random component I'm very new to time series analysis. The data below represents about 8 years of aggregate daily visitors to some tourist attractions. I'm trying to examine the random component of some time series ... 3 views ### Correlation of latent variables: Sum-scores vs. SEM correlation I use a set of about 20 attitudinal items. My aim is to investigate if these items represent a number of dimensions and to what extent these dimensions are correlated. Exploratory factor analysis ... 15 views 16 views ### Some doubts about using time random effect I'm starting with lme4 and GLMM. Maybe this question can be basic for experimented researchers, but I'm still learning. I have a pooled data where every ... 6 views ### Error “initial value in 'vmmin' is not finite” when changing factors in ordinal logistic regression I've generated some fake data to give a simple example of ordinal logistic regression. My data set (raw data here) looks like this: ... 7 views ### test to be applied I have applied 3 treatments at 10 different concentrations of each on a bacteria to see if they affect its existence. my response variable is binary i.e. 1 for effective and 0 for ineffective. basic ... 9 views ### Statistical arbitrage using eigen portfolios I was trying to understand below paper https://www.math.nyu.edu/faculty/avellane/AvellanedaLeeStatArb071108.pdf Page 20 explains about "Entering a trade". I wan't to know clearly what it means to ... 3 views ### Repeated measures - sum of biomassproduction I am student in biology. For my examination I conducted a greenhouse experiment with gras. Actually I am not sure how to develop the correct model and unfortunately I haven't found an answer in the ... 10 views ### Impute missing data of a variable I'm currently working on spatial spillovers in agriculture at the municpal level with cross-section data. But, I do have missing values for the investment in capital at the municipalities though the ... 6 views ### SVM with pre-computed kernel and zero bias I have an optimization function, where I need to give my own kernel matrix and bias value is zero. 
The kernel matrix is calculated using the data but there is no specific formula for it. If I have a ... 52 views ### Machine Learning and Biostatistics I am interested in a few areas with biostatistics, and was reading the course catalog at a university that offers machine learning. I am taking topology now, and i think machine learning uses ... 19 views ### What is the meaning of the R factanal output? What does all this mean? I'm a factor analysis 'noob' and although I've read a book, it didn't tell me everything apparently. Since the chi square statistic is so high and the p-value so low, it ... 29 views ### Is there a way to add inequality constraints on the LASSO in R? I am trying to use LASSO for model selection, but I need my fitted values to remain non-negative. Is there a way to implement this simply in R? I've found that the penalized package allows for non ... 7 views ### How to test adjusted mean in R? I have got eight adjusted means for eight subgroups in a survey study, but I don't know how to test the differences of these eight adjusted means. Any suggestion is appreciated. Thanks! 10 views ### Benchmark datasets for testing multiple regression or multivariate regression model? I have a question as a newbie. I'm working on a tool using regression analysis( linear, multiple, multivariate) to derive a regression model. To verify the correctness of the tool, I'm trying to find ... 4 views ### Simple Trend Question in Excel [on hold] I am just using the simple trend function in Excel. I am confused by the bold value below for 2013. Why the increase? 11 Y values 11 X values The final two are the predictions. Y = 1118.69791 ... 15 views ### test interactions for multiple regression with many predictor variables I have a data set with around 25 predictor variables. If I am planning to build multi-regression model against this data set. What are the general approaches to test the interactions of these ... 13 views ### Why do the residuals from a factor analysis have mean zero? The model is given below: \begin{align} y_1 - \mu_1 &= \lambda_{11}f_1 + \lambda_{12}f_2 + \dots + \lambda_{1m}f_m + \epsilon_1 \\ y_2 - \mu_2 &= \lambda_{21}f_1 + \lambda_{22}f_2 + \dots + ... 32 views ### Given a chi-squared distribution, find $\Pr(Y \in \mu \pm 2\sigma)$ Given a chi-squared distribution of df = 8, which R command do I use in order to find $P(Y \in [\mu \pm 2\sigma])$? That is, the probability that $Y$ lies within 2 standard deviations of its mean? I ... 34 views 22 views ### Log probability vs product of probabilities According to this wikipedia article, one can represent the product of probabilities x⋅y as -log(x) - log(y) making the ... 4 views ### Wishart distribution in BUGS with prior on the scale matrix I am attempting to sample from a wishart in WinBUGS where the scale matrix has priors on the entries. dwish() in WinBUGS won't allow this, so I am attempting to ... 5 views ### Discriminant validity testing I am required to assess the discriminant validity of the constructs in my model. I have used the Fornell and Larcker (1982) method, however I now need to explore discriminant validity using the ... 12 views ### How to define samples in caret package? I am using the caret package and need to train a random forest, where only certain samples should be in the held-out set. I want to define the sampling for each tree in the random forest, for say 100 ... 
16 views ### Statistic confidence interval I have this statement: If a 95% confidence interval for the mean was computed as (25,50), then if several more samples were taken with the same sample size, then 95% of them would have a ... 14 views ### svd adds value before an elastic net model? I learned that SVD eliminates redundancies. If you use an elastic net model, is it still greedy as stepwise models in general? or the fact that it has penalization factors reduces the greedy ... 10 views ### How do I apply MDS analysis on my data set? Consider the following dataset (it is the emission probability matrix of a Hidden Markov Model): ... 26 views ### Probabilistic model Vs Weight based model Can someone please explain what is the difference between probabilistic model vs weight based model with respect to ML context. When to use one over the other? 9 views ### Troubles reporting transformed variables for log and sqrt into a general equation Good morning everybody, I see CrossValidated has really high level of questions and answer; I am just a student so I hope this question is not too basic... Suggestion of further readings available ... 39 views ### Free Throw Probability I was asked this in an interview today. Would you rather try to make 2/3 free throws or 4/6? Why? I reasoned that if my chance of making an individual shot was > 2/3, I would go for 4/6 because it ... 19 views ### Multi Output Neural Networks Up until know I only used neural networks to classify a single output, I set one output neuron for each class and check which neuron has the highest/lowest activation. What I am trying to do is to ... 15 views ### Expectation of squared non zero-mean data w.r.t. two distributions I have the following model: $\hat{d}_i=a_i+b_i+c$, where $a_i$ is a zero-mean Gaussian r.v., $b_i$ is a r.v of unknown distribution, and $c$ is a constant. I want to estimate ... 7 views ### Understanding the variance to mean power function in Poisson gamma models I have a biology background and try to understand what it means that the distribution of snps over the genome follows a Poisson gamma (PG) model. It is accepted that each chromosome contains Poisson ... 13 views ### Best way to graph probabilities of feature vectors I have a lot of feature vectors in the form of: v1=[x0, x1, x2, x3, x4] where x0, x1, and x2 can take binary values. either 0 or 1 x3 and x4 can take values from 0 up to 9 I have a lot of vectors ... 29 views ### References and Best practices for setting seeds in pseudo-Random Number Generation In this document, that concerns the "set seed" command, Stata people discuss issues related to the setting of seeds when generating pseudo-random numbers. A notable "don't" is "don't use serially ... 25 views ### Need help setting up multilevel logistic regression I am trying to see the effect of a certain intervention in schools. The outcome variable is binary. We have students within schools. Also students' age is a covariate (doesn't changed before and after ... 44 views ### which is correct way for regression line? I have a set of data (some Frequencies per month, Var1 is the month): ... 5 views ### Approximating cox model with time varying covariates using poisson How do you reformat a dataset in order to perform a cox regression with time-varying covariates as a poisson regression. I'm trying to run a survival analysis regression in python with time varying ... 
26 views ### Minimum variance for sum of three random variables I have been working on the following problem: Given you have VarX = 1, VarY = 4, and VarZ = 25, what is the minimum possible variance for the random variable W = X + Y + Z, or min Var(X+Y+Z)? My ... 14 views ### Survival using SPSS [on hold] I have a database of about 50 subjects. Each subject received a number of implants (2-7 per patient). Therefore, there is one row per subject. I have a column for number of implants. I have another ... 13 views ### Help finding critical on a hypothesis contrast I´ve tried finding the variance using the moment-generating function but apparently that is not the correct method for finding the critical. Can you help me with this? 33 views ### Could you please suggest a newbie about publishing Statistics related papers? First of all pardon me if this is not the right place to ask something like this. If it is not the right place then I'll request you to let me know where I can discuss such things related to academia. ... 13 views ### Help in finding the joint distribution There is a metric $H$ defined as = $\sum_{i=1, j \neq i}^{N} \min |u_i - u_j| * ..*|u_{i+d-1} - u_{j+d-1}|$ where $u$ is a multi dimensional vector of dimension $d$ and $u_i,u_j$ $\in \mathcal{R}^d$ ... As you know lasso is a popular variable selection method of the form of $(y-x\beta)'(y-X\beta)+\lambda \sum_i|\beta_i|$ the first is that it is possible to use optim() function in R to minimize ...
2014-10-24 12:41:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9033839106559753, "perplexity": 1225.8679442022844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645920.6/warc/CC-MAIN-20141024030045-00197-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.sunclipse.org/?m=201607
# A Frabjous, Albeit Delayed, Day David Mermin thanked me for finding a glitch in one of his papers. I can retire now, right? The matter concerns “Hidden variables and the two theorems of John Bell” [Reviews of Modern Physics 65, 3 (1993), pp. 803–15]. Specifically, we turn our attention to Figure 4, the famous “Mermin pentagram,” reproduced below for convenience. The caption to this figure reads as follows: Ten observables leading to a very economical proof of the Bell–KS theorem in a state space of eight or more dimensions. The observables are arranged in five groups of four, lying along the legs of a five-pointed star. Each observable is associated with two such groups. The observables within each of the five groups are mutually commuting, and the product of the three observables in each of the six groups is $+1$ except for the group of four along the horizontal line of the star, where the product is $-1$. In that last sentence, “three observables in each of the six groups” should instead read “four observables in each of the five groups” (in order to agree with the diagram, and to make sense). Glitches and goofs can happen to anybody. I’m embarrassingly prone to them myself. I also have the pesky kind of personality that is inclined to write in when I find them. This has led to a journal-article erratum once before, and now that I think about it, it provided the seeds for two papers of my own. As they say about Wolverine, being per-SNIKT-ety pays off! (Incidentally, it took two months for this latest erratum to appear. A sensible system could have done it in as many days, but that’s scientific publishing for you.) # Daria Makes A Deal, Chapter Eight Now and then, I see someone mocking the idea of fanfiction—typically, “Tumblr fanfic” in particular. And it’s understandable. I mean, when the canonical material rises to such heights as, um, Batman V Superman, and Tumblr can only offer Martha Kent fighting off time-travelers who come back to kill young Clark, well, is there even really a choice to make? With the “Captain America is Hydra” story arc, Marvel provides readers with the innovative and unprecedented story of Bad Guys Use Space Thing To Make Big Good Guy Bad. Seriously, for sheer inventiveness and entertainment value, how could Tumblr or AO3 even compete? *snerk* Yes, fanfiction did give the world Christian Grey. But it also gave us Will Graham and Hannibal Lecter having Christian Grey for dinner, which has to count for something. For previous installments of Daria Makes A Deal, see the chapter index. (For my research in quantum information theory, see my recap of recent publications.) Fair warning: I got the Granada TV Sherlock Holmes series for Christmas and have been watching a lot of that lately. CHAPTER EIGHT Daria noticed herself climbing a rope up towards a treehouse. “This is odd,” she said. “I shouldn’t have nearly the upper-body strength to be doing this so easily.” She took a good look at the knotted hemp rope. Daria tried to work her memory backwards, to see if it offered any clues about her current situation. She recalled the fracas in the hotel lobby, and then Tom and Saavik were looking at her as though she were unwell, and she was telling them that she was just tired. She remembered thinking that she could pass off any odd behavior as due to her recent discovery of her own apparent bisexuality. Which sounded plausible enough. 
And so she had begged off, pleading the ineffectuality of caffeine, to hide in the room where she had awoken from her dream— It had been only a dream, but that meant nothing at all. Continue reading Daria Makes A Deal, Chapter Eight
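Returning to the Mermin pentagram from the first post above: here is a quick numerical check of the corrected caption, i.e. five groups of four mutually commuting observables with product $+1$ on four of the lines and $-1$ on the horizontal one. The three-qubit Pauli labels below are the standard textbook assignment, not read off the figure in Mermin's paper, so treat this as an illustrative sketch.

```python
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
PAULI = {'X': X, 'Y': Y, '1': I2}

def op(label):                      # 'XY1' -> X (tensor) Y (tensor) I
    return reduce(np.kron, [PAULI[c] for c in label])

# Five lines of the pentagram; each observable appears on exactly two lines.
lines = [
    ['XXX', 'XYY', 'YXY', 'YYX'],   # the horizontal line: product should be -I
    ['XXX', 'X11', '1X1', '11X'],
    ['XYY', 'X11', '1Y1', '11Y'],
    ['YXY', 'Y11', '1X1', '11Y'],
    ['YYX', 'Y11', '1Y1', '11X'],
]

for group in lines:
    ops = [op(g) for g in group]
    # the four observables in each group mutually commute ...
    assert all(np.allclose(A @ B, B @ A) for A, B in combinations(ops, 2))
    # ... and their product is +I or -I
    prod = reduce(np.matmul, ops)
    sign = int(round(prod[0, 0].real))
    assert np.allclose(prod, sign * np.eye(8))
    print(group, '-> product =', sign)
```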
2017-03-30 10:55:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3437047004699707, "perplexity": 2772.337661747194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218193716.70/warc/CC-MAIN-20170322212953-00566-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.vtmarkets.com/analysis/aug-242020/
# Daily Market Analysis Market Focus U.S. markets drift slightly higher as investors assess economic reports and a resurgence of coronavirus cases in Europe. Economic data from most of the euro area came in worse than expected, with the exception of the U.K. Europe is grappling with a resurgence of pandemic infections, with little appetite among top officials to return to the stringent restrictions that helped control the spread earlier this year. The slowdown in the economic recovery lifts the dollar and U.S. Treasuries. Meanwhile, public attention is focused on two companies, Apple Inc. and Tesla Inc. After becoming the first $2 trillion company earlier this week, Apple Inc. continues to hit record intraday highs, solidifying its market capitalization at more than $2 trillion. At the same time, Tesla Inc. extends gains to hit a fresh record intraday high, rallying above $2,000 per share for the first time ever and bringing its year-to-date advance to more than 370%. Those performances are exceptionally impressive. The British economy is recovering faster than expected in August, according to strong economic reports, but it still has a long way to go after shrinking by a record 20% during the pandemic. At the same time, the U.K. Purchasing Managers Index indicates that business activity has jumped to a seven-year high, even as firms report declining business confidence and plans to shed jobs. Beyond that, the British market will focus on progress in talks for a trade deal with the European Union once the Brexit transition is completed. Main Pairs Movement The Canadian dollar falls slightly against the U.S. dollar on favorable U.S. economic data, and the pair loses traction amid profit-taking before the weekend. Canadian retail sales picked up in June and July as the economy began to reopen from pandemic lockdowns. Meanwhile, WTI crude is down 1.3% at around $42.28 per barrel, weighing on the Canadian dollar. The USDCAD pair is currently trading around 1.31915, little changed (about 0.04%) as of writing. The EURUSD pair drops further to around 1.1751, its lowest level in the past two weeks as of writing. The dollar is the top performer today after the release of U.S. PMI and home sales reports that both beat expectations. By contrast, the euro is fading as escalating COVID-19 cases and weak euro-area growth weigh on the common currency. As a result, investors expect further downside for the euro. Technical Analysis: EURUSD(H4) After challenging the resistance level at 1.1879, the EURUSD pair has turned to bearish momentum. From one perspective, the pair favors a short position because it has broken through the uptrend line; from another, the pair is consolidating between 1.1879 and 1.1708. If it eventually breaks through 1.1708, that would suggest a short position. The RSI indicator suggests that the pair is neither oversold nor overbought. That being said, it is recommended to wait and see the pair's further direction. Resistance: 1.1879, 1.1929 Support: 1.1708, 1.1425 GBPUSD(H4) Sterling fell to 1.30895 even as U.K. statistics continued to improve. Retail sales for July rose 3.6%, beating the 2% forecast. The PMI indicators were also strong, beating optimistic forecasts, with the Services and Composite readings above 60.
On the other hand, tensions in the Brexit negotiations have ratcheted up and weighed on the pound this week. No progress toward a deal is expected, and the market is pricing in the additional risk. Resistance: 1.327, 1.3338 Support: 1.3007, 1.2746, 1.2256 AUDUSD(H4) The AUDUSD pair failed in its test of the resistance at 0.7209, heading into a consolidation range between 0.7209 and 0.7114. The indicators suggest the trend will remain the same; for now, the pair is waiting for an attempt to break the lower limit at 0.7114. AUDUSD is expected to weaken given the strength of the dollar: with the greenback gathering strength after the release of the U.S. data, the pair is likely to lose traction. Resistance: 0.7209, 0.7236 Support: 0.7114
2022-10-06 14:24:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3289196491241455, "perplexity": 4448.8844610138085}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00046.warc.gz"}
https://www.hackmath.net/en/math-problem/889
# Tower

How many m² of copper plate are needed to replace the roof of a tower of conical shape with diameter 24 m, if the angle at the vertex of the axial section is 144°?

Correct result: S = 476 m²

#### Solution:

$r = 24/2 = 12 \ \text{m} \\ s = r / \sin(144^\circ / 2) = 12.62 \ \text{m} \\ S = \pi r s = \pi \cdot 12 \cdot 12.62 \approx 476 \ \text{m}^2$

(A short numerical check of this formula appears after the list of similar problems below.)

## Next similar math problems:

• How many: How many m2 of copper sheet is needed to replace the roof of a conical tower with a diameter of 13 meters and a height of 24 meters, if we count 8% of the material for bending and waste?
• Reflector: A circular reflector throws a light cone with a vertex angle of 49° and sits on a 33 m high tower. The axis of the light beam makes an angle of 30° with the axis of the tower. What is the maximum length of the illuminated horizontal plane?
• Angle of cone: The cone has a base diameter of 1.5 m. The angle at the main apex of the axial section is 86°. Calculate the volume of the cone.
• Tetrahedral pyramid: Determine the surface of a regular tetrahedral pyramid when its volume is V = 120 and the angle of the sidewall with the base plane is α = 42° 30´.
• Cone: Calculate the volume and surface area of the cone with base diameter d = 15 cm, where the side of the cone makes an angle of 52° with the base.
• Axial section of the cone: The axial section of the cone is an isosceles triangle in which the ratio of cone diameter to cone side is 2:3. Calculate its volume if you know its area is 314 cm square.
• Elevation angles: From the endpoints of a base 240 m long and inclined at an angle of 18° 15', the top of the mountain can be seen at elevation angles of 43° and 51°. How high is the mountain?
• Observation tower: From the observation tower at a height of 105 m above sea level, the ship is aimed at a depth angle of 1° 49´. How far is the ship from the base of the tower?
• House roof: The roof of the house has the shape of a regular quadrangular pyramid with a base edge of 17 m. How many m2 are needed to cover the roof if the roof pitch is 57° and we calculate 11% for waste, connections and overlapping of the roof area?
• Axial section: The axial section of the cone is an equilateral triangle with area 168 cm2. Calculate the volume of the cone.
• A spherical segment: A spherical section whose axial section has an angle of j = 120° at the center of the sphere is part of a sphere with a radius r = 10 cm. Calculate the cut surface.
• Resultant force: Calculate mathematically and graphically the resultant of three forces with a common centre if: F1 = 50 kN, α1 = 30°; F2 = 40 kN, α2 = 45°; F3 = 40 kN, α3 = 25°.
• Rotary cone: The volume of the cone of rotation is 472 cm3 and the angle between the side of the cone and the base is 70°. Calculate the lateral surface area of this cone.
• TV tower: Calculate the height of the television tower if an observer standing 430 m from the base of the tower sees the peak at an altitude angle of 23°.
• The cone: The lateral surface area of the cone is 4 cm2, the area of the base of the cone is 2 cm2. Determine the angle in degrees (deviation) of the cone side and the cone base plane.
(Cone side is the segment joining the vertex cone with any point of the base c • Area and two angles Calculate the size of all sides and internal angles of a triangle ABC, if it is given by area S = 501.9; and two internal angles α = 15°28' and β = 45°. • Cone A2V The surface of the cone in the plane is a circular arc with central angle of 126° and area 415 cm2. Calculate the volume of a cone.
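As referenced in the solution above, here is a quick numerical check of the cone-roof formula s = r / sin(α/2), S = πrs. This is just a sketch of the arithmetic; the function name is mine, not the site's.

```python
import math

def cone_roof_area(diameter_m, vertex_angle_deg):
    """Lateral surface of a cone from its base diameter and the apex angle
    of the axial section: s = r / sin(angle / 2), S = pi * r * s."""
    r = diameter_m / 2
    s = r / math.sin(math.radians(vertex_angle_deg) / 2)
    return math.pi * r * s

print(round(cone_roof_area(24, 144)))  # 476 (m^2), matching the stated result
```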
2021-01-18 19:10:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6529258489608765, "perplexity": 713.1192241628523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703515235.25/warc/CC-MAIN-20210118185230-20210118215230-00794.warc.gz"}
https://www.electro-tech-online.com/threads/microchip-hints-bug-fixes-and-work-arounds.108932/
Microchip hints, bug fixes, and work arounds.

3v0 Coop Build Coordinator Forum Supporter

I am starting this thread in the hope that we can make it a sticky. The goal is to post known solutions for bugs related to MPLAB and all PIC compilers.

Last edited:

3v0 Coop Build Coordinator Forum Supporter

MPLAB. Error - could not find file 'c018i.o'. The only sure way I know of fixing the C018i error is to specify the lib path from PROJECT>BUILD_OPTIONS_PROJECT: select the DIRECTORIES tab, choose "LIBRARY SEARCH PATH" from the "show directories for" drop-down box, and add the path C:\MCC18\lib

edeca Active Member

I recently had trouble debugging HI-TECH C code using MPLAB. The fix (and more details) are at HI-TECH Software Forums: Debugging using MPLAB. Only applies to HI-TECH C, but might be useful to someone else. I have copied that post here:

I believe I have now fixed this. To cause the problem:

1) Start a new project in MPLAB (using the wizard or not)
2) Add the source files (from the post above)
3) Rebuild

For me, this results in a build log like the first one in the attached file. Now to "fix" the problem:

1) Click Project -> Build Options -> Project
2) Toggle a random option off then on
3) Click "Apply" then "OK"
4) Rebuild

For me this results in the second log in the attached file. Notice how they are different:

Code:
// Works
Executing: "C:\Program Files\HI-TECH Software\PICC-18\PRO\9.63\bin\picc18.exe" --pass1 C:\MPLAB2\delay.c -q --chip=18F2321 -P --runtime=default,+clear,+init,-keep,-download,+stackwarn,-config,+clib,-plib --opt=default -D__DEBUG=1 --debugger=pickit2 -Blarge --double=24 --cp=16 -g --asmlist "--errformat=Error [%n] %f; %l.%c %s" "--msgformat=Advisory[%n] %s" "--warnformat=Warning [%n] %f; %l.%c %s"

// Doesn't work
Executing: "C:\Program Files\HI-TECH Software\PICC-18\PRO\9.63\bin\picc18.exe" --pass1 C:\MPLAB2\delay.c -q --chip=18F2321 -P --runtime=default --opt=default -D__DEBUG=1 -g --asmlist "--errformat=Error [%n] %f; %l.%c %s" "--msgformat=Advisory[%n] %s" "--warnformat=Warning [%n] %f; %l.%c %s"

The --runtime option is different, --debugger=pickit2 is added (auto detected) and a bunch of other options are there. I have tried this 3 times now with new projects and this is completely reproducible.

Last edited by a moderator:

3v0 Coop Build Coordinator Forum Supporter

C18 compiler shortcomings. The following was posted by Pommie in another thread.

I have today converted a project from BoostC to C18 and had a rather nasty bug. I finally tracked it down to one bit of code that wasn't working correctly. Here is that code in isolation,

Code:
#include <p18f1320.h>
#pragma config WDT = OFF, LVP = OFF, OSC = INTIO2

void main(){
    char count;
    unsigned int temp;

    OSCCON=0x60;        //Osc=4MHz
    count=0;
    temp=0b0000100000000000;
    while((temp&(1<<count))==0){
        count++;
    }
    while(1);
}

What this code should do is find the first non-zero bit in temp, and so count should contain 11 after it executes. Well, it doesn't, it contains 7. C18 works with bytes when it does 1<<count. What is even more peculiar is that temp=1<<11 will place zero in temp, not 0x0800 as you would expect. You can fix the above by doing while((temp&((int)1<<count))==0) or adding the -Oi command line option. So, why am I telling you this? Because it took me a long time to find it and I just had to get it off my chest. Plus, it may help others. BTW, the above worked fine in BoostC and it is the C18 compiler that doesn't comply with ISO. Edit: one good point about C18 is that it's free. Mike.

and

Sure, add it to that thread.
It's not really a bug but can cause some nasty bugs. Who would have thought that writing, Code: unsigned int temp=10*60; would not be the same as, Code: unsigned int temp=600; I always assumed that constants would be treated as 32 (or even 64) bits on the PC and cast to the appropriate size during assignment. Mike. Last edited: rsp New Member solution To product a C18 assembler listing (.lst) file, you need to run mp2cod.exe... Project > Build Options > Project custom build tab post build options mp2cod.exe yourfile.cof Credit for this solution goes to Mark at edaboard.com RufusVS New Member Mike. and Sure, add it to that thread. It's not really a bug but can cause some nasty bugs. Who would have thought that writing, Code: unsigned int temp=10*60; would not be the same as, Code: unsigned int temp=600; I always assumed that constants would be treated as 32 (or even 64) bits on the PC and cast to the appropriate size during assignment. Mike. Dang, that's bad! I'd have thought the same. Would this compile correctly? Code: unsigned int temp = (int)10 * (int) 60; not that it should be necessary. EN0 Member Hello, I just recently had a very frustrating problem with HI-TECH for the PIC18 series (Lite version). The I2C routines were giving me strange errors: Code: Executing: "C:\Program Files (x86)\HI-TECH Software\PICC-18\PRO\9.66\bin\picc18.exe" --pass1 "C:\Users\Austin Schaller\Documents\MPLAB Projects\MPLAB C Projects\I²C_18LF4520_HT.c" -q --chip=18F4520 -P --runtime=default --opt=default -D__DEBUG=1 -g --asmlist "--errformat=Error [%n] %f; %l.%c %s" "--msgformat=Advisory[%n] %s" "--warnformat=Warning [%n] %f; %l.%c %s" Executing: "C:\Program Files (x86)\HI-TECH Software\PICC-18\PRO\9.66\bin\picc18.exe" -oI²C_18LF4528_HT.cof -mI²C_18LF4528_HT.map --summary=default --output=default I²C_18LF4520_HT.p1 --chip=18F4520 -P --runtime=default --opt=default -D__DEBUG=1 -g --asmlist "--errformat=Error [%n] %f; %l.%c %s" "--msgformat=Advisory[%n] %s" "--warnformat=Warning [%n] %f; %l.%c %s" HI-TECH C PRO for the PIC18 MCU Family V9.66 this licence will expire on Sat, 15 Oct 2011 Error [499] ; 0. undefined symbol: _WriteI2C(I²C_18LF4528_HT.obj) ********** Build failed! ********** Here is the simple code: Code: /************************************************************************ * * Module: I²C_18LF4520_HT.C * Description: Code to determine I²C functionality with the HI-TECH * compiler. * Line length: 120 characters [only if the length is longer than 80 chars] * Functions: See Below * * 24 Aug 2011 Austin Schaller Created * ************************************************************************/ #include <htc.h> #include <stdio.h> #include "peripheral\i2c.h" __CONFIG(1, MCLRE_ON & CP0_OFF & BOREN_OFF & WDT_OFF & PWRT_ON & OSC_INTIO67); void main() { unsigned char data = 0xFF; OSCCON = 0x70; // OSC = 8 MHz StartI2C(); WriteI2C(data); StopI2C(); } The fix: As quoted from the peripheral libraries manual (section 1.2) In order to use the supplied Microchip-compatible peripheral library functions, the user must ensure the --runtime=+plib option is passed to the driver on the command line, or "Link in Peripheral Libraries" is selected in the "Runtime Options" section of the Project Build Options in MPLAB. The user need not include <plib.h> directly, as including <htc.h> will automatically include <plib.h> when the above option is used, or the macro _PLIB is defined. 
In other words, do the following: 1) Navigate to: Project -> Build Options -> Project -> Linker 2) Under "Runtime options" select "Link in Peripheral Library" I just hope none of you have to go through the pain that I did! Austin Last edited: DerStrom8 Super Moderator Ok, the other night I was having some trouble with a program. I kept getting the following error: Code: MPLINK 4.40, Linker Device Database Version 1.3 Copyright (c) 1998-2011 Microchip Technology Inc. Error - section '.udata_c018i.o' can not fit the section. Section '.udata_c018i.o' length=0x0000000a Errors : 1 Link step failed. After trying out a few different options, I figured out that my program was larger than the 256 byte RAM could handle. The way to fix this error (can be different from ".udata_c018i.o"--the fix is the same) is to cut down as much as you can on the code. Get rid of any extra arrays, especially, as they take up a lot of space. Also cut down on your ints as much as possible. This will hopefully allow your program to "fit". Der Strom Ian Rogers User Extraordinaire Forum Supporter Hey DS8 the other way is to Initialise a few variables... then they are compiled in the idata section as well as the udata section... Microchip didn't figure on folks like us declaring ALL of the variables un-initialised.... There is a couple of banks reserved for both.. DerStrom8 Super Moderator Hey DS8 the other way is to Initialise a few variables... then they are compiled in the idata section as well as the udata section... Microchip didn't figure on folks like us declaring ALL of the variables un-initialised.... There is a couple of banks reserved for both.. Hey Ian, thanks. Could you elaborate a bit more? I'm not sure I quite follow what you're saying.... Last edited: Ian Rogers User Extraordinaire Forum Supporter I had a program that run out of udata ( un-initialised data ) ie. Code: long var1,var2,var3; Changed ALL the variables to idata ( initialised data ) Code: long var1 = 0, var2 = 0, var3 = 0; That was it!!! Tons more memory. (well a bit anyway) DerStrom8 Super Moderator I had a program that run out of udata ( un-initialised data ) ie. Code: long var1,var2,var3; Changed ALL the variables to idata ( initialised data ) Code: long var1 = 0, var2 = 0, var3 = 0; That was it!!! Tons more memory. (well a bit anyway) That's what I thought you were saying, and I just couldn't believe it would save that much space Thanks for posting! Ian Rogers User Extraordinaire Forum Supporter I think the ram pages are 256 byte in size.. When I last had this problem, the compiler complained about the udata ( basically it can't allocate any more).. When I looked at the memory usage gauge there was loads of ram left.. That's when I did the investigation. There was about 170 bytes of idata free. Ta da!! 3v0 Coop Build Coordinator Forum Supporter TIMER0 confusion This is not a question but an observation. When you step through the code with a PICkit2 SFR TMR0H can not be written to. If you run it works. The chip in this example is the PIC18F330 but I it should exist in other PICs with the same TIMER0 hardware. You will not see it using MPLAB SIM. 
Code:
T0CONbits.TMR0ON = 1;           // start timer0
while(1)
{
    Nop();
    Nop();
    Nop();
    TMR0H = h++;                // if you step these 2 lines it does not work
    TMR0L = l++;
    result = TMR0L;
    result += ((unsigned int) TMR0H)<<8;
    Nop();                      // set BP here
    Nop();
    Nop();
    TMR0H = 0x00;               // if you step these 2 lines it does not work
    TMR0L = 0x00;
    result0 = TMR0L;
    result0 += ((unsigned int) TMR0H)<<8;
}
}

Last edited:

Pommie Well-Known Member

3v0, when you step and monitor the register it won't work. Try stepping without monitoring them and it should work. This is due to the fact that TMR0H is not a real register but a buffer register, and so after the write to TMR0H the update will latch the existing value of the real TIMER0H. See 11.4 in the data sheet for an explanation. Mike.

Pommie Well-Known Member

I realised you weren't asking a question, I was just explaining why it happened. Keeping it clear of questions is a good idea. Feel free to tidy up this thread by deleting this and the previous 2 posts. Mike.

Ian Rogers User Extraordinaire Forum Supporter

In the latest version of datasheet "DS40039E", on page 19 it lists the procedure to initialize PORTA as digital IO.... The register value to turn off the comparator modules is 07H, and not 05H as in the listing.

be80be Well-Known Member

I was going to post that 0x05 doesn't work on that chip. It should be 0x07. But I had to look to make sure: whoever wrote the data sheet may have written one that did use 0x05 at the same time. Can't trust them data sheets LOL

ClydeCrashKop Well-Known Member

This is from the PIC16(L)F1826/27 data sheet. They say just copy and paste:

;This code block configures the ADC
;for polling, Vdd and Vss references, Frc
;clock and AN0 input.
;Conversion start & polling for completion
; are included.
MOVLW   B'11110000'     ;Right justify, Frc clock
MOVWF   ADCON1          ;Vdd and Vss Vref
BANKSEL TRISA           ;
BSF     TRISA,0         ;Set RA0 to input
BANKSEL ANSEL           ;
BSF     ANSEL,0         ;Set RA0 to analog

On build it said ANSEL is undefined. That's because the 16F1827 has ADC on PORTB as well. So you need to use ANSELA and ANSELB. I found those in the 16F1827.inc file. I am also finding a snag using code for one chip on a different chip. If the compiler complains about things being undefined, check the .inc or .h files for that chip. Just lately I found these varieties and a couple more for the ADC.

GODONE = 1;   // pic16F876 start conversion
GO_DONE = 1;  // 16F88 start conversion
2019-10-17 15:05:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35551688075065613, "perplexity": 10258.448807789071}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675409.61/warc/CC-MAIN-20191017145741-20191017173241-00051.warc.gz"}
https://samarpanindia.co.in/railing/557-600feet-of-fence-an-enclosed-region-22500.html
# Search for: 600feet of fence an enclosed region 22500

#### A farmer has 600m of fencing to enclose a rectangular area ...

14/09/2009 · Now imagine that section of fence getting infinitesimal. This means that you're approaching your original answer: 22,500 square meters. So, to recap, the first domain, (0, 10000): the minimum approaches, but doesn't include, zero; this assumes three equal sections. Second domain, (0, 22,500): the maximum approaches, but doesn't include, 22,500. | help with a math question? Yahoo Answers 27/01/2012 | A farmer has 600 yards of fencing? Yahoo Answers 22/06/2010

#### A family wants to build a rectangular garden on one side ...

If 600 feet of fencing is available to use, then what is the area of the largest garden that could be built? (A) Define a function that relates the area enclosed by the fence in ft² in terms of its length ... A = 22500. The largest area the fence can enclose is 22500 ft².

#### Find Maximum Area Enclosed by a Fence - YouTube

Finding the maximum area is a common application in Algebra. Learn how to find the maximum area a rectangular fence can enclose.

#### Solved: 600 Feet Of Fence Is Used To Enclose An Area Along ...

Question: 600 feet of fence is used to enclose an area along the side of a building as shown to the right. Let x be the width and y be the height, as shown. Find the maximum area that can be enclosed in this manner as follows: a. Write the formula for the enclosed area, A, ...

#### SOLUTION: A rectangular field is to be enclosed with 600 m of ...

You can put this solution on YOUR website. A rectangular field is to be enclosed with 600 m of fencing. What dimensions will produce a maximum area? Area = length × width. Let y = the area, x = the width, and L = the length, so y = x·L. Now since the perimeter is 600: P = 2·length + 2·width, so 600 = 2L + 2x. Divide through by 2: 300 = L + x. Solve for L by subtracting x from both sides: 300 − x = L ...

#### Determine the maximum rectangular area that can be ...

Determine the maximum rectangular area that can be enclosed with 600 m of fence for a farm for herding sheep. Maximum Area of a Rectangle: We need to model the problem in terms of ...

#### A rectangular field is enclosed by 360 feet of fencing ...

A rectangular field is to be enclosed by a fence and divided into three lots by fences parallel to one of the sides. ... A square field has an area of 22,500 square feet. To the nearest foot, ... You have 600 feet of fencing to enclose a rectangular plot that borders on a river. | Area of a rectangular region: a farmer wishes to create ... | A rancher wants to fence in an area of 500000 square feet ...

#### Lesson A farmer planning to fence a rectangular area along ...

A farmer planning to fence a rectangular area along the river to enclose the maximal area. Problem 1: A farmer plans to fence a rectangular grazing area along a river with 300 yards of fence. What is the largest area he can enclose? Solution: Since one side is the river, the fenced perimeter of the rectangle will be L + 2W = 300. Hence, L = 300 − 2W.

#### 4. A farmer has 600 m of fencing with which to enclose a ...

4. A farmer has 600 m of fencing with which to enclose a rectangular pen adjacent to a long existing wall. He will use the wall for one side of the pen and the available fencing for the remaining three sides. What is the maximum area that can be enclosed in this way? 5. A rectangular box has a square base with edges at least 1 in. long. It has no top, and the total area of its five sides is ...

#### You have 600 feet of fencing to enclose a rectangular plot ...

With the river side unfenced, A = w(600 − 2w) = −2w² + 600w = −2(w − 150)² + 45000. Now that we've got that "completing the square" junk out of the way, we can easily factor it.

#### SOLUTION: I got part a and b of this question, need help ...

(b) Determine a function A that represents the total area of the enclosed region and give any restrictions for x. Since length and width cannot be negative, I put 600 − 3x ≥ 0 and solved that: x ≤ 200 and x ≥ 0. | Please help solve this: a farmer has 600 m of fence and wants to enclose a rectangular field beside a river. Determine the dimensions of the fenced field in which the maximum area is enclosed. Fencing is required on only three sides: those that aren't next to the river.

#### SOLVED: Enclosing the Most Area with a Fence

A farmer with 2000 meters of fencing wants to enclose a rectangular plot that borders on a straight highway. If the farmer does not fence the side along the highway, what is the largest area that can be enclosed?

#### You have 600 feet of fencing to enclose a rectangular plot ...

Get an answer to your question: You have 600 feet of fencing to enclose a rectangular plot that borders on a river. If you do not fence the side along the river, find the length and the width of the plot that will maximize the area. What is the largest area that can be enclosed?

#### A farmer has 160 feet of fencing to enclose 2 ... - Socratic

19/7/2016 · I'm assuming that the pig pens have identical dimensions. Let's assume that the pig pens need to be fenced in the way shown in the diagram above. Then the perimeter is given by 4x + 3y = 160, so 4x = 160 − 3y and x = 40 − (3/4)y. The area of a rectangle is given by A = L × W; however, here we have two rectangles put together, so the total area will be given by A = 2 × L × W.

#### Get Answer - A rectangular region is to be fenced using ...

A rectangular region is to be fenced using 4300 feet of fencing. If the rectangular region is to be separated into 5 regions by running 4 lines of fence parallel to two opposite sides, determine the dimensions of the region which maximizes the area of the region. Give the numerical values of the ...

#### math question: a fence must be built to enclose a ...

10/20/2009 · x² − 22500 = 0, so (x + 150)(x − 150) = 0. Thus x = 150. Consequently, y = 300. Your least expensive fence is 300 ft north-south and 150 ft the other side. Its cost is …

#### PDF Max-Min Problems

6. A rancher intends to fence off a rectangular region along a river (which serves as a natural boundary requiring no fence). If the enclosed area is to be 1800 square yards, what is the least amount of fence needed? Answer: 120 yards of fence ...

#### PDF 1.1. We need to enclose a field with a fence. We have 500 ...

1.1. We need to enclose a field with a fence. We have 500 feet of fencing material and a building is on one side of the field which will not need any fencing. Determine the dimensions of the field that will enclose the largest area.

#### Find the maximum area enclosed by 80 m fence with one side ...

GlobalMathInstitute MPM2D GCSE. Playlist to understand completing squares: www.youtube.com/playlist?list=PLJ-ma5dJyAqqZvr5RoLE8xETW gEvB 9l

#### A family wants to build a rectangular garden on one side ...

Correct answers: 1 question: A family wants to build a rectangular garden on one side of a barn. If 600 feet of fencing is available to use, then what is the area of the largest garden that could be built? (A) Define a function that relates the area enclosed by the fence ...

#### calculus - A fencing area question. - Mathematics Stack ...

That suggests that the side of the fence parallel to the barn should be between 20 ft and 60 ft. However, if the side parallel to the barn exceeds 40 ft, the pen cannot be ...
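The snippets above mix two closely related optimizations: fencing only three sides (against a river or barn) versus fencing all four sides. A small sketch separating the two cases; the closed-form optima below follow from elementary calculus and are not taken from any of the quoted sites.

```python
def max_area_three_sides(fence):
    """Rectangle against a river/barn: only three sides fenced.
    A(w) = w * (fence - 2w) is maximized at w = fence / 4."""
    w = fence / 4
    length = fence - 2 * w
    return w, length, w * length

def max_area_four_sides(fence):
    """All four sides fenced: the optimum is a square of side fence / 4."""
    s = fence / 4
    return s, s, s * s

print(max_area_three_sides(600))  # (150.0, 300.0, 45000.0) square feet
print(max_area_four_sides(600))   # (150.0, 150.0, 22500.0) -- the 22,500 in the page title
```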
2021-06-18 15:15:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4657959043979645, "perplexity": 1014.5148492895319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487637721.34/warc/CC-MAIN-20210618134943-20210618164943-00525.warc.gz"}
https://www.gradesaver.com/textbooks/math/trigonometry/CLONE-68cac39a-c5ec-4c26-8565-a44738e90952/chapter-2-acute-angles-and-right-triangles-section-2-3-finding-trigonometric-function-values-using-a-calculator-2-3-exercises-page-68/76a
# Chapter 2 - Acute Angles and Right Triangles - Section 2.3 Finding Trigonometric Function Values Using a Calculator - 2.3 Exercises - Page 68: 76a

When $\theta$ is small ($\approx 0^{\circ}$), then $\sin\theta\approx\tan\theta\approx\frac{\pi\theta}{180^{\circ}}$.

#### Work Step by Step

The table of $\sin\theta$, $\tan\theta$ and $\frac{\pi\theta}{180^{\circ}}$ is given for angles from $0^{\circ}$ to $4^{\circ}$ in steps of $0.5^{\circ}$. There the values of $\sin\theta$, $\tan\theta$ and $\frac{\pi\theta}{180^{\circ}}$ are all increasing; they are exactly equal only when the angle equals $0^{\circ}$, and approximately equal for the other small angles in the table.
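A quick numerical reproduction of the table described in the step above; it prints sin θ, tan θ and πθ/180 for θ from 0° to 4° in steps of 0.5°.

```python
import math

print(f"{'theta':>6} {'sin':>10} {'tan':>10} {'pi*theta/180':>14}")
for i in range(9):                     # 0.0, 0.5, ..., 4.0 degrees
    theta = 0.5 * i
    rad = math.radians(theta)          # this is exactly pi * theta / 180
    print(f"{theta:6.1f} {math.sin(rad):10.6f} {math.tan(rad):10.6f} {rad:14.6f}")
# The three columns agree to several decimal places near 0 and slowly drift apart as theta grows.
```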
2019-08-20 03:49:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5194090604782104, "perplexity": 720.5494870172595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315222.14/warc/CC-MAIN-20190820024110-20190820050110-00191.warc.gz"}
http://mathhelpforum.com/calculus/49211-solved-complex-exponential-proof-print.html
[SOLVED] Complex exponential proof

• Sep 15th 2008, 03:40 PM
gidget

[SOLVED] Complex exponential proof

I need the proof for (e^z1)(e^z2)=e^(z1+z2) using the power series expansion for e^z=1+z+(z^2)/2! + (z^3)/3! +.... I've tried it a couple of times but can't seem to get it. Thanks for any help!!

• Sep 15th 2008, 03:46 PM
icemanfan

$e^{z_1}e^{z_2} = e^{z_1+z_2}$ is a result of one of the rules of exponents. It requires no proof, at least not if I'm teaching.

• Sep 15th 2008, 03:49 PM
gidget

Yeah, I've proved it using exponent laws no problem, but our prof said to do it the other way and wants it proved using the power series.

• Sep 15th 2008, 04:49 PM
ThePerfectHacker

Quote:
Originally Posted by icemanfan
$e^{z_1}e^{z_2} = e^{z_1+z_2}$ is a result of one of the rules of exponents. It requires no proof, at least not if I'm teaching.

Not really. The exponent rule $a^ba^c=a^{b+c}$ works for $a,b,c>0$. But you cannot use this rule here because $z_1,z_2$ are complex numbers. Thus, what does it mean to raise to a complex exponent?

Quote:
Originally Posted by gidget
I need the proof for (e^z1)(e^z2)=e^(z1+z2) using the power series expansion for e^z=1+z+(z^2)/2! + (z^3)/3! +.... I've tried it a couple of times but can't seem to get it. Thanks for any help!!

Use the Cauchy product formula,
$e^ae^b = \left( \sum_{n=0}^{\infty} \frac{a^n}{n!} \right) \left( \sum_{n=0}^{\infty}\frac{b^n}{n!} \right) = \sum_{n=0}^{\infty} c_n$
where
$c_n = \sum_{k=0}^n \frac{a^k}{k!}\cdot \frac{b^{n-k}}{(n-k)!} = \frac{1}{n!} \sum_{k=0}^n {n\choose k}a^kb^{n-k} = \frac{(a+b)^n}{n!}$
Therefore,
$e^ae^b = \sum_{n=0}^{\infty} \frac{(a+b)^n}{n!} = e^{a+b}$.

• Sep 15th 2008, 05:25 PM
Soroban

Hello, gidget!

Quote:
I need the proof for: $e^a\!\cdot\!e^b\;=\;e^{a+b}$ using the power series expansion: $e^z\:=\:1+z+\frac{z^2}{2!} + \frac{z^3}{3!} +\cdots$

We have:
$e^a \;=\; 1 + a + \frac{a^2}{2!} + \frac{a^3}{3!} + \frac{a^4}{4!} + \cdots$
$e^b \;=\; 1 + b + \frac{b^2}{2!} + \frac{b^3}{3!} + \frac{b^4}{4!} + \cdots$

Multiply, grouping the products by the power of $b$:
$e^a\!\cdot\!e^b \;=\; \left(1 + a + \frac{a^2}{2!} + \frac{a^3}{3!} + \frac{a^4}{4!} + \cdots\right)$
$\qquad +\; b\left(1 + a + \frac{a^2}{2!} + \frac{a^3}{3!} + \cdots\right)$
$\qquad +\; \frac{b^2}{2!}\left(1 + a + \frac{a^2}{2!} + \cdots\right)$
$\qquad +\; \frac{b^3}{3!}\left(1 + a + \cdots\right)$
$\qquad +\; \frac{b^4}{4!}\left(1 + \cdots\right) + \cdots$

Add down the columns:
$e^a\!\cdot\!e^b \;=\; 1 + (a + b) + \frac{a^2+2ab + b^2}{2!} + \frac{a^3+3a^2b+3ab^2 + b^3}{3!} + \frac{a^4+4a^3b + 6a^2b^2 + 4ab^3 + b^4}{4!} + \cdots$
$e^a\!\cdot\!e^b \;=\; 1 + (a+b) + \frac{(a+b)^2}{2!} + \frac{(a+b)^3}{3!} + \frac{(a+b)^4}{4!} + \cdots \;=\; e^{a+b}$
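A small numeric sanity check of the Cauchy-product argument above, using truncated power series with complex inputs; the truncation order and the test values are arbitrary choices for illustration.

```python
import cmath

def exp_series(z, terms=40):
    """Partial sum of the power series 1 + z + z^2/2! + z^3/3! + ... for complex z."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # turns z^n / n! into z^(n+1) / (n+1)!
    return total

z1, z2 = 0.3 + 1.2j, -0.7 + 0.4j
lhs = exp_series(z1) * exp_series(z2)
rhs = exp_series(z1 + z2)
print(abs(lhs - rhs))                 # ~1e-16: the two sides agree to machine precision
print(abs(lhs - cmath.exp(z1 + z2)))  # also matches the built-in complex exponential
```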
2017-06-25 23:44:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797930121421814, "perplexity": 651.3287056937698}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320593.91/warc/CC-MAIN-20170625221343-20170626001343-00338.warc.gz"}
https://cds.ismrm.org/protected/17MProceedings/PDFfiles/1979.html
### 1979 Elegant method to quantify chemical exchange processes for pH CEST imaging Steffen Goerke1, Johannes Windschuh1, Moritz Zaiss1,2, Jan-Eric Meissner1, Mark E Ladd1, and Peter Bachert1 1Division of Medical Physics in Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany, 2Department of High-field Magnetic Resonance, Max-Planck-Institute for Biological Cybernetics, Tübingen, Germany ### Synopsis A novel concentration-independent approach is presented to determine the pH-dependence of exchange rates employing a single CEST image of a set of model solutions at different pH. Not only the comparatively short acquisition time, but also the robustness against variations in relaxation parameters makes this modality an elegant way to determine exchange rates in vitro. The calibrated functions are required for accurate pH mapping in vivo using CEST, as well as for design of exogenous CEST contrast agents. ### Purpose The concentration-independent determination of the exchange rate ksw is a well-known issue that was nicely solved by Dixon et al.1 and extended to pulsed pre-saturation by Meissner et al.2 Consequently, in vivo pH-mapping via CEST is possible presuming the pH-dependence of the underlying exchange process is known.3 However, precise calibration of the pH-dependence is time consuming and moreover requires assumptions for the transversal relaxation rate R2s of the chemically exchanging protons. In this study, a remarkably robust method is presented which addresses these problems. It is demonstrated that the pH-value where the CEST signal reaches its maximum (pHmax) fully characterizes the exchange process. Determination of pHmax can be realized by acquisition of just one single CEST image (Fig. 1). ### Theory The isolated CEST signal calculated by the apparent exchange dependent relaxation (AREX)4 can be described as followed2: $$AREX = c_{1}\cdot DC\cdot f\cdot k_{sw}\frac{(γB_{1})^{2}}{(γB_{1})^{2}+c_{2}^{2}\cdot k_{sw}(k_{sw}+R_{2s})}$$ with the relative proton fraction f and the pulsed saturation parameters: mean amplitude B1, duty cycle DC, and the form factors c1,2 considering the pulse shape. AREX as a function of ksw reaches a maximum at $k_{sw,max}=\frac{γB_{1}}{c_{2}}$ (Fig. 2a) independent of concentration and R2s (Fig. 2b). Assuming a base-catalyzed exchange process: $k_{sw}=k_{b}\cdot 10^{pH–pK_{w}}=k_{c}\cdot 10^{pH}$, AREX (eq. 1) can be transformed into a function of pH (Fig. 2c,d). The exchange process and thus also the transformation is fully characterized by the pre-exponential factor $k_{c}=k_{b}\cdot 10^{–pK_{w}}$. ### Materials and Methods CEST image data of Rerich et al.5 was used for evaluation. Model solutions containing 50 mM creatine at different pH ranging from 6.3 to 7.6 were measured at 37 °C. $AREX(Δω)=\frac{1}{T_{1}}\cdot (\frac{1}{Z}–\frac{1}{Z_{ref}})$ at frequency offset Δω was calculated using the other side of the Z-spectrum as the reference Zref. CEST imaging was performed on a 7 T whole body MR tomograph (MAGNETOM 7T, Siemens Healthineers, Germany). Pre-saturation was achieved by a train of 50 Gaussian-shaped RF pulses (c2=0.6171, tpulse=100ms, DC=50%, tsat=10s) with a mean amplitude B1 ranging from 1.2 to 3.1 µT. AREX was corrected for B0- and B1-inhomogeneities. ### Results An analytical form of the function AREX(pH) was derived. The expression for the position of its maximum pHmax (eq. Fig. 2c) allows direct calculation of kc and hence full quantification of the exchange process. Remarkably, pHmax is independent of R2s (Fig. 
2d) leading to a unique accuracy for the determined exchange rates. In addition, the half-width of the symmetric resonances AREX(pH) is nearly constant under variations of B1 (Fig. 2c) and R2s (Fig. 2d), which facilitates a robust fitting of the function and consequently robust determination of pHmax. Experimental AREX values of creatine as a function of pH (Fig. 3a) agree well with theoretical expectations (Fig. 2c). For full quantification of the exchange process, acquisition of data at one B1 is sufficient. However, to demonstrate the robustness of the presented method, AREX values at several B1 were evaluated. The calculated values kc agree very well, with a mean value of (70.7 ± 0.9) µHz. In a comparison to the reference value determined by the Ω-plot method1,2 kc = (66.5 ± 6.0) µHz the error was reduced approximately by an order of magnitude. ### Discussion The presented method is a powerful tool to robustly quantify exchange rates as a function of pH with a unique accuracy. It was already shown by Woessner et al. that the maximal CEST signal yields insight into the exchange rate.6 We were able to extend this insight by showing that a full characterization of the exchange process is possible by acquisition of just one AREX image at one specific B1. This allows a high throughput quantification of samples and therefore e.g. to investigate the exchange processes under different molecular environments. In contrast, the concentration-independent Ω-plot method1,2 requires a series of AREX images at several B1. In this study, the method was verified under the assumption of a dominant base-catalyzed exchange, which is correct for the CEST signals appearing in vivo at an intermediate B1 around 1 µT. Nonetheless, the theory is also extendable to acid-catalyzed exchange processes. Finally, the method was used to establish calibration functions for amide (Δω = 3.5 ppm) and guanidinium (Δω = 2.0 ppm) protons in vivo. Investigation of homogenized pig brain tissue (data not shown) led to kc = 1.54 and 85.1 µHz for amide and guanidinium protons, respectively. Corresponding exchange rates under physiological conditions (pH 7.1 and 37 °C) are 19.4 and 1071 Hz, respectively. ### Conclusion In this study, a robust method is presented to precisely determine the pH-dependence of exchange rates. The calibrated functions will improve the accuracy of in vivo pH imaging using CEST. ### Acknowledgements We cordially thank Eugenia Rerich from the hospital in Nürnberg, Germany for providing the data of the creatine samples. ### References 1. Dixon WT, Ren J, Lubag AJM, et al. A Concentration-Independent Method to Measure Exchange Rates in PARACEST Agents. Magn Reson Med 2010;63:625-632. 2. Meissner J-E, Goerke S, Rerich E, et al. Quantitative pulsed CEST-MRI using Ω-plots. NMR Biomed 2015;28(10):1196-1208. 3. Sun PZ. Xiao G, Zhou IY, et al. A method for accurate pH mapping with chemical exchange saturation transfer (CEST) MRI. Contrast Media Mol Imaging 2016;11(3):195-202. 4. Zaiss M, Xu J, Goerke S, et al. Inverse Z-spectrum analysis for spillover-, MT-, and T1-corrected steady-state pulsed CEST-MRI – application to pH-weighted MRI of acute stroke. NMR Biomed. 2014;27(3):240-252. 5. Rerich E, Zaiss M, Korzowski A, et al. Relaxation-compensated CEST-MRI at 7 T for mapping of creatine content and pH – preliminary application in human muscle tissue in vivo. NMR Biomed. 2015;28(11):1402-1412. 6. Woessner DE, Zhang S, Merritt ME, et al. 
Numerical Solution of the Bloch Equations Provides Insights Into the Optimum Design of PARACEST Agents for MRI. Magn Reson Med. 2005;53:790-799.

### Figures

Determination of pHmax in a multi-pH phantom directly enables calculation of the pre-exponential factor kc, which fully characterizes the pH-dependence of the exchange rate ksw. The saturation amplitude B1 has to be chosen such that variations in the AREX contrast due to labeling are covered sufficiently.

Simulations of the AREX signal (eq. 1) as a function of the exchange rate ksw for varying saturation amplitudes B1 (a) and transversal relaxation rates R2s (b). AREX reaches a maximum, whose position ksw,max is independent of concentration and R2s. Transformation of the ksw-axis into pH-values leads to the formation of symmetric resonances of defined half-widths (c,d).

(a) AREX signal of creatine guanidinium protons at Δω = 1.9 ppm as a function of pH. The position of the maximum pHmax was determined by a fit using equation 1. (b) For each B1, one value kc can be calculated. As a reference, kc was additionally determined by the Ω-plot method1,2 (black lines).

Proc. Intl. Soc. Mag. Reson. Med. 25 (2017) 1979
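To make the optimum condition k_sw,max = γB1/c2 from the Theory section concrete, here is a rough numerical sketch using the form factor c2 = 0.6171 and the B1 range quoted in the Methods. The proton gyromagnetic ratio is an assumed standard value, not a number taken from this abstract.

```python
import math

GAMMA_H = 2 * math.pi * 42.577e6   # proton gyromagnetic ratio, rad s^-1 T^-1 (assumed value)
c2 = 0.6171                        # form factor for the Gaussian pulse train quoted above

for b1_uT in (1.2, 2.0, 3.1):      # mean saturation amplitudes from the Methods section
    b1 = b1_uT * 1e-6              # convert microtesla to tesla
    k_max = GAMMA_H * b1 / c2      # exchange rate (s^-1) at which AREX peaks, per eq. 1
    print(f"B1 = {b1_uT:.1f} uT  ->  k_sw,max ~ {k_max:7.0f} s^-1")
```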
2021-06-12 16:39:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7314634323120117, "perplexity": 6590.62991564739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586239.2/warc/CC-MAIN-20210612162957-20210612192957-00628.warc.gz"}
https://www.tug.org/pipermail/texhax/2010-June/015140.html
# [texhax] Graphics from R into Latex

Lars Madsen daleif at imf.au.dk
Fri Jun 4 15:37:52 CEST 2010

Henrik Aldberg wrote:
> Hi,
>
> I have produced a graph in R which I have saved as a PDF file. When
> I inspect the PDF file it looks good. But when I insert it into Latex the
> axis labels and the header (and all other text and numbers) are gone.
>
> I am using Texworks on Windows and have the following in my preamble
>
> \usepackage[pdftex]{graphicx,color}
>
> To include the PDF file I write
>
> \includegraphics{filename}
>
> I have done the same thing on a Mac and it worked fine.
>

could we see the PDF?

--
/daleif
2022-08-10 14:20:14
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9373787045478821, "perplexity": 3198.6266060334838}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571190.0/warc/CC-MAIN-20220810131127-20220810161127-00250.warc.gz"}
https://gmatclub.com/forum/if-3-x-10-which-of-the-following-must-be-true-248146.html
# If 3^x>10, which of the following must be true?

Math Revolution GMAT Instructor (28 Aug 2017, 23:57)

If 3^x>10, which of the following must be true?

I. x>2
II. x>3
III. x>4

A. I only
B. II only
C. III only
D. I and II only
E. I, II, and III

Director (29 Aug 2017, 00:14)

$$3^x>10$$
$$3^2 = 9$$
$$3^3 = 27 >10$$
Therefore $$x$$ must be greater than $$2$$: $$x>2$$, i.e. I only. Answer (A).

Intern (29 Aug 2017, 00:35)

Same concept, so why not II and III?

SC Moderator (29 Aug 2017, 18:00)

habdo, I agree with you. I toggled between answers A and E. I picked A on a gamble. Nothing more. MathRevolution, thank you for posting the question. It seems to me that by definition, if x > 2, it must also be greater than 3 and 4 unless there is an upper limit restriction, and here there is not. I understand that Option I is the minimum condition which satisfies the inequality. That is, x must be greater than 2 for $$3^{x}$$ to be greater than 10. But as I understand the word "must," in logic and in math, "must" includes the transitive cases: if 3 > 2 and 2 > x, then 3 > x. I am not sure how one could argue that the third statement "must not" be true. Am I missing something?

Math Revolution GMAT Instructor (30 Aug 2017, 01:27)

=> 3^x > 10 > 3^2. Thus x > 2. Ans: A

Target Test Prep Representative (07 Sep 2017, 07:04)

Since 3^2 = 9 and 3^3 = 27, if 3^x = 10, then x must be some number between 2 and 3. So if 3^x > 10, then x must be greater than 2. However, x may not need to be greater than 3 (or 4) to hold the inequality 3^x > 10. For example, if x = 2.5, 3^2.5 = 3^2 x 3^0.5 = 9√3 > 10.

Intern (10 Oct 2017, 00:54)

I'm mixed up with this one.

Intern (13 Oct 2017, 09:24)

As the question does not mention that x is an integer, take x = 2.01: then 3^2.01 ≈ 9.09, which is not greater than 10. Hence x>2 alone cannot be the right answer. Also, the question asks which of these must be true. Any number with x>3 and x>4 will always be true, so why not select these options, as they supply the right numbers?
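A quick numerical check of the boundary discussed in this thread; the exact threshold is log base 3 of 10.

```python
import math

threshold = math.log(10, 3)   # ~2.0959: 3^x > 10 exactly when x > log_3(10)
print(threshold)
print(3 ** 2.5 > 10)          # True  -- x = 2.5 already satisfies the inequality
print(3 ** 2.01 > 10)         # False -- so "x > 2" is necessary but not sufficient
```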
2018-05-22 04:03:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6432322859764099, "perplexity": 6962.795019993137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864624.7/warc/CC-MAIN-20180522034402-20180522054402-00231.warc.gz"}
https://brilliant.org/discussions/thread/king-arthurs-knights/
# King Arthur's Knights

King Arthur sat at the Round Table on three successive evenings with his knights—Beleobus, Caradoc, Driam, Eric, Floll, and Galahad—but on no occasion did any person have as his neighbour one who had before sat next to him. On the first evening they sat in alphabetical order round the table. But afterwards King Arthur arranged the two next sittings so that he might have Beleobus as near to him as possible and Galahad as far away from him as could be managed. How did he seat the knights to the best advantage, remembering the rule that no knight may have the same neighbour twice?

4 years, 8 months ago

Sort by:

both the times - 4 years, 8 months ago

Are you asking for the seating arrangement for both the times or only once?? - 4 years, 8 months ago
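For readers who want to experiment, here is a brute-force sketch of the puzzle. The seat-distance measure and the scoring rule for "as near as possible / as far as possible" are my own modelling assumptions, not part of the original puzzle statement.

```python
from itertools import permutations

people = ["Arthur", "Beleobus", "Caradoc", "Driam", "Eric", "Floll", "Galahad"]

def neighbour_pairs(seating):
    """Unordered neighbour pairs around a circular table."""
    n = len(seating)
    return {frozenset((seating[i], seating[(i + 1) % n])) for i in range(n)}

def ring_distance(seating, a, b):
    """Seats between a and b, measured the short way round."""
    i, j = seating.index(a), seating.index(b)
    d = abs(i - j)
    return min(d, len(seating) - d)

first = tuple(people)                       # evening 1: alphabetical order
used = neighbour_pairs(first)

# Arthur stays in seat 0; enumerate the other six for evenings 2 and 3.
candidates = [("Arthur",) + p for p in permutations(people[1:])]
valid2 = [s for s in candidates if not (neighbour_pairs(s) & used)]

best = None
for s2 in valid2:
    used2 = used | neighbour_pairs(s2)
    for s3 in candidates:
        if neighbour_pairs(s3) & used2:
            continue
        # small score = Beleobus close to Arthur, Galahad far from him
        score = (ring_distance(s2, "Arthur", "Beleobus")
                 + ring_distance(s3, "Arthur", "Beleobus")
                 - ring_distance(s2, "Arthur", "Galahad")
                 - ring_distance(s3, "Arthur", "Galahad"))
        if best is None or score < best[0]:
            best = (score, s2, s3)

print(best[1])   # one optimal seating for the second evening
print(best[2])   # and a compatible third evening
```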
2018-01-23 02:14:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9894710183143616, "perplexity": 11270.958425120409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891705.93/warc/CC-MAIN-20180123012644-20180123032644-00108.warc.gz"}
https://www.lessonplanet.com/teachers/fractions-746020-math-4th-5th
# Fractions

In this equivalent fraction completion activity, students use the complete fraction plus the numerator or denominator of the incomplete fraction to find the missing number. Students solve 10 problems.
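The underlying skill is cross-multiplication; a tiny illustration (mine, not part of the resource):

```python
from fractions import Fraction

def missing_numerator(complete, denominator):
    """Given a complete fraction and the denominator of an equivalent one, return the missing numerator."""
    return Fraction(complete) * denominator

print(missing_numerator(Fraction(3, 4), 12))   # 3/4 = ?/12  ->  9
```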
2017-05-30 09:52:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8852759003639221, "perplexity": 3566.8611997132516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463614620.98/warc/CC-MAIN-20170530085905-20170530105905-00290.warc.gz"}
https://mathoverflow.net/questions/414194/bounded-linear-operator-on-a-normed-space-with-bounded-inverse-and-dense-range/414450
# Bounded linear operator on a normed space with bounded inverse and dense range

Does there exist such an operator $T$ (bounded and 1-1 on a normed space $X$, its range $R_{T}$ is dense in $X$, and $T^{-1}: R_{T}\to X$ is bounded) which is not surjective? In other words, does there exist a normed space isometrically isomorphic to a proper dense subspace of itself?

I know that if $X$ is a Banach space, then $R_{T}$ must equal $X$, because $R_{T}$ is closed in $X$. But what if $X$ is just a normed space?

• So a special case would be a normed space $X$ isometric to a dense proper subspace of itself. Jan 19 at 8:27
• Take a surjective isometry $T$ on a Banach space $Y$, a dense (non-closed) subspace $Y_0\subset Y$ such that $T(Y_0)=Y_0$, and $y\in Y\setminus Y_0$. Put $X\subset Y$ to be the linear span of $\{ T^n y: n\geq 0 \}\cup Y_0$. It's easy to arrange $y\notin T(X)$. Jan 19 at 8:44
• @NarutakaOZAWA: Is it that easy? For instance, if $T$ is the identity map your argument does not work. Jan 20 at 21:48
2022-05-23 06:28:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9168108105659485, "perplexity": 105.47124881674368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662555558.23/warc/CC-MAIN-20220523041156-20220523071156-00672.warc.gz"}
https://socratic.org/questions/how-do-you-test-the-series-sigma-2-n-n-n-n-from-n-1-oo-by-the-ratio-test#538962
# How do you test the series $\sum_{n=1}^\infty \frac{2^n\,n!}{n^n}$ by the ratio test?

Jan 21, 2018

The series converges. See below.

#### Explanation:

We have:

$\sum_{n=1}^\infty \frac{2^n\,n!}{n^n}$

The ratio test tells us the series will converge if:

$\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| < 1$

In this case:

$a_n = \frac{2^n\,n!}{n^n}$

So:

$\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = \lim_{n\to\infty}\frac{\left|\dfrac{2^{n+1}(n+1)!}{(n+1)^{n+1}}\right|}{\left|\dfrac{2^n\,n!}{n^n}\right|} = \lim_{n\to\infty}\left|\frac{n^n\,2^{n+1}(n+1)!}{(n+1)^{n+1}\,2^n\,n!}\right|$

We can do a bit of cancelling here. Note that:

$\frac{2^{n+1}}{2^n} = 2^{n+1-n} = 2$

and

$\frac{(n+1)!}{n!} = \frac{(n+1)\times n\times\cdots\times 2\times 1}{n\times\cdots\times 2\times 1} = n+1$

So we can simplify the limit a bit:

$\lim_{n\to\infty}\left|\frac{2\,n^n(n+1)}{(n+1)^{n+1}}\right|$

We can now cancel the $n+1$ on the top against one factor of $n+1$ on the bottom:

$\lim_{n\to\infty}\left|\frac{2\,n^n(n+1)}{(n+1)^{n+1}}\right| = \lim_{n\to\infty}\left|\frac{2\,n^n}{(n+1)^n}\right|$

For integers $n>0$ this is always real and positive, so there is no need for the absolute value:

$= 2\lim_{n\to\infty}\frac{n^n}{(n+1)^n}$

To evaluate this limit, consider:

$L = \lim_{n\to\infty}\frac{n^n}{(n+1)^n}$

So:

$\ln(L) = \ln\left(\lim_{n\to\infty}\frac{n^n}{(n+1)^n}\right) = \lim_{n\to\infty}\ln\left(\left(\frac{n}{n+1}\right)^n\right) = \lim_{n\to\infty} n\ln\left(\frac{n}{n+1}\right) = \lim_{n\to\infty} -n\ln\left(\frac{n+1}{n}\right)$

$= \lim_{n\to\infty} -n\ln\left(1+\frac{1}{n}\right) = -\lim_{n\to\infty}\frac{\ln\left(1+\frac{1}{n}\right)}{\frac{1}{n}} \to \frac{0}{0}$

So use l'Hôpital's rule:

$\frac{d}{dn}\ln\left(1+\frac{1}{n}\right) = \frac{1}{1+\frac{1}{n}}\cdot\left(-\frac{1}{n^2}\right), \qquad \frac{d}{dn}\left(\frac{1}{n}\right) = -\frac{1}{n^2}$

So the limit now becomes:

$-\lim_{n\to\infty}\frac{\ln\left(1+\frac{1}{n}\right)}{\frac{1}{n}} = -\lim_{n\to\infty}\frac{\frac{1}{1+\frac{1}{n}}\cdot\left(-\frac{1}{n^2}\right)}{-\frac{1}{n^2}}$

The factor of $-\frac{1}{n^2}$ cancels to give:

$-\lim_{n\to\infty}\frac{1}{1+\frac{1}{n}} = -\frac{1}{1+0} = -1$

So $\ln(L) = -1$, and it follows that:

$L = \lim_{n\to\infty}\frac{n^n}{(n+1)^n} = e^{-1} = \frac{1}{e}$

Hence:

$2\lim_{n\to\infty}\frac{n^n}{(n+1)^n} = \frac{2}{e}$

Finally, it follows that:

$\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = \frac{2}{e} < 1$

So by the ratio test the series converges.
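As a quick numerical sanity check (my addition, not part of the original answer), the ratio $a_{n+1}/a_n$ can be watched converging to $2/e \approx 0.736$:

```python
from math import factorial, e

def a(n):
    """Terms of the series: 2^n * n! / n^n."""
    return 2**n * factorial(n) / n**n

for n in (5, 20, 80):
    print(n, a(n + 1) / a(n))   # approaches 2/e

print("2/e =", 2 / e)           # ≈ 0.7358
```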
2021-10-27 14:12:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 29, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9645043015480042, "perplexity": 1384.7622118744473}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588153.7/warc/CC-MAIN-20211027115745-20211027145745-00213.warc.gz"}
https://math.stackexchange.com/questions/1849738/a-closed-form-lower-bound-approximating-p-n-m-s-nzn-left-sum-k-0s-f
# A Closed Form Lower Bound Approximating $p_{n,m,s} = n![z^n]\left(\sum_{k=0}^s\frac{z^k}{k!}\right)^m$ Here, I found $p_{n,m,s} = n![z^n]\left(\sum_{k=0}^s\frac{z^k}{k!}\right)^m = \sum\limits_{\substack{k_1 + \cdots + k_m=n\\0\leq k_i \leq s}} \frac{n!}{k_1!\cdots k_m!}$ as the number of ways to distribute $n$ distinct objects into $m$ distinct bins, where each bin has capacity $s$. I'm looking for a closed form lower bound that approximates $p_{n,m,s}$ for $s \leq n \ll m$ and does not involve a generating function. Note that $p_{n,m,s}$ can be broken into $$p_{n,m,s} = p_{n,m,s}^{1} + p_{n,m,s}^{\geq 2},$$ where $p_{n,m,s}^1$ is the number of ways to distribute $n$ balls into $m$ bins such that each bin has at most 1 ball and $p_{n,m,s}^{\geq 2}$ is the number of ways to distribute these balls so that at least 1 bin has at least 2 balls. Trivially, $p_{n,m,s} \geq p_{n,m,s}^{1}$. Note that $p_{n,m,s}^1 = m (m-1) (m-2) \cdots (m-n+1) = (m)_n$. Hence $$p_{n,m,s} \geq (m)_n.$$ Depending on the nature of $n \ll m$, the main contribution to $p_{n,m,s}$ will come from $p_{n,m,s}^1$. Note that $$p_{n,m,s}^{\geq 2} \leq {m \choose 1} {n \choose 2} (m)^{n-2},$$ where we choose a bin to have at least 2 balls, then choose two balls in this bin; finally, distribute the remaining $n-2$ balls among any bins. Note that if $n^2/m \to 0$ (as $m \to \infty$), then $$\frac{{m \choose 1} {n \choose 2} (m)^{n-2}}{(m)_n} \leq \frac{n^2}{m} \frac{m^n}{(m)_n} \leq \frac{n^2}{m} \left( \frac{m}{m-n} \right)^n = \frac{n^2}{m} \left( 1 + \frac{n}{m-n}\right)^n \leq \frac{n^2}{m} e^{n^2/(m-n)} \to 0.$$ In this case, $$\frac{p_{n,m,s}}{p_{n,m,s}^1} \to 1.$$ So this lower bound is tight if $n^2/m \to 0$. • Your answer provides a very interesting insight for the limiting case $n^2/m \rightarrow 0$ (as $m \rightarrow \infty$). However, bounding $p_{n,m,s}$ by $p_{n,m,1}$ might not be that tight for finite $m$. Do you think there is a way to derive a non-trivial bound dependent on $s$? – tmp Jul 5 '16 at 17:09 • @tmp what do you mean by $n \ll m$? How much larger is $m$ than $n$? – D Poole Jul 5 '16 at 17:25 • There is no clear direct connection between $n$ and $m$. You may assume $n \sim \sqrt{m}$. Sorry that my use of $\ll$ is unprecise. – tmp Jul 6 '16 at 9:03 • @tmp I ask because you can get better lower bounds based on how large $n$ is compared to $m$. For instance, if $n \sim \sqrt{m}$, then you can show that $p_{n, m, s}^{\geq 3}/p_{n, m, s} \to 0$ as $m \to \infty$. So a very close lower bound on $p_{n, m, s}$ is those assignments with at most 2 balls per bin. For a fixed $s$, unless $n$ is on the order of at least $m^{1 - 1/s}$, those assignments with at least one bin having $s$ balls will be negligible compared to those without. – D Poole Jul 6 '16 at 12:31 • I guess I see your point. Do you mean (in analogy to your answer) to show $p_{n,m,s}^{\geq 3}/p_{n,m,2} \rightarrow 0$ as $m \rightarrow \infty$? How would I explicitly write the denominator in that case? It should be more complicated than $(m)_n$. Are suggesting to argue that $p_{n,m,s}^{\geq3}/p_{n,m,2} < p_{n,m,s}^{\geq3}/p_{n,m,1}$? Why is $m^{1-1/s}$ critical? – tmp Jul 6 '16 at 14:15
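Not part of the original thread, but for small parameters the exact count and the $(m)_n$ lower bound can be compared directly with a short dynamic-programming sketch (the function names are mine):

```python
from math import comb, prod

def p(n, m, s):
    """Ways to place n labelled balls into m labelled bins, each of capacity s,
    i.e. n! [z^n] (sum_{k=0}^s z^k/k!)^m."""
    dp = [1] + [0] * n          # dp[j]: ways to place j chosen balls in the bins processed so far
    for _ in range(m):
        new = [0] * (n + 1)
        for j in range(n + 1):
            if dp[j]:
                for k in range(min(s, n - j) + 1):   # balls put in the next bin
                    new[j + k] += dp[j] * comb(n - j, k)
        dp = new
    return dp[n]

def falling(m, n):
    """(m)_n = m (m-1) ... (m-n+1), the 'at most one ball per bin' count."""
    return prod(m - i for i in range(n))

n, s = 6, 3
for m in (50, 200, 1000):
    print(m, p(n, m, s), falling(m, n), falling(m, n) / p(n, m, s))
```

As $m$ grows with $n$ fixed, the ratio printed in the last column tends to 1, in line with the asymptotic argument above.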
2019-07-22 09:59:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9646673202514648, "perplexity": 123.63359126447996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527907.70/warc/CC-MAIN-20190722092824-20190722114824-00210.warc.gz"}
https://encyclopediaofmath.org/index.php?title=Linear_connection&direction=next&oldid=17268
Linear connection

A linear connection on a differentiable manifold $M$ is a differential-geometric structure on $M$ associated with an affine connection on $M$. For every affine connection a parallel displacement of vectors is defined, which makes it possible to define for every curve $L(x_0, x_1)$ in $M$ a linear mapping of tangent spaces $T_{x_1}(M) \rightarrow T_{x_0}(M)$. In this sense an affine connection determines a linear connection on $M$, to which all concepts and constructions can be transferred which only depend on the displacement of vectors and, more generally, of tensors. A linear connection on $M$ is a connection in the principal bundle $B(M)$ of frames in the tangent spaces $T_x(M)$, $x \in M$, and is defined in one of the following three equivalent ways:

1) by a connection object $\Gamma_{jk}^i$, satisfying the following transformation law on intersections of domains of local charts:

$$\bar{\Gamma}_{jk}^i = \frac{\partial \bar{x}^i}{\partial x^r} \frac{\partial x^s}{\partial \bar{x}^j} \frac{\partial x^t}{\partial \bar{x}^k} \Gamma_{st}^r + \frac{\partial^2 x^r}{\partial \bar{x}^j \partial \bar{x}^k} \frac{\partial \bar{x}^i}{\partial x^r};$$

2) by a matrix of $1$-forms $\omega_j^i$ on the principal frame bundle $B(M)$, such that the $2$-forms

$$d\omega_j^i + \omega_k^i \wedge \omega_j^k = \Omega_j^i$$

in each local coordinate system can be expressed in the form

$$\Omega_j^i = \frac{1}{2} R_{jkl}^i \, dx^k \wedge dx^l;$$

3) by the bilinear operator $\nabla$ of covariant differentiation, which associates with two vector fields $X, Y$ on $M$ a third vector field $\nabla_Y X$ and has the properties

$$\nabla_Y(fX) = (Yf)X + f\nabla_Y X, \qquad \nabla_{fY} X = f\nabla_Y X,$$

where $f$ is a smooth function on $M$.

Every linear connection on $M$ uniquely determines an affine connection on $M$ canonically associated with it. It is determined by the involute of any curve $L(x_0, x_1)$ in $M$. To obtain this involute one must first define $n = \dim M$ linearly independent parallel vector fields $X_1, \dots, X_n$ along $L$, then expand the tangent vector field to $L$ in terms of them,

$$\dot{x}(t) = \mu^i(t) X_i(t),$$

and finally find in $T_{x_0}(M)$ the solution $x(t)$ of the differential equation

$$\dot{x}(t) = \mu^i(t) X_i(0)$$

with initial value $x(0) = 0$. At an arbitrary point $x_t$ of $L$ an affine mapping of tangent affine spaces

$$(A_n)_{x_t} \rightarrow (A_n)_{x_0}$$

is now defined by a mapping of frames

$$\{x_t, X_i(t)\} \rightarrow \{y_t, X_i(0)\},$$

where $\overrightarrow{x_0 y_t} = x(t)$. A linear connection is often identified with the affine connection canonically associated with it, by using the one-to-one correspondence between them.
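The article is purely definitional, but the parallel displacement it is built on is easy to illustrate numerically. The sketch below (my own example, not from the article) parallel-transports a tangent vector around a circle of latitude on the unit 2-sphere by integrating $\dot{V}^i + \Gamma^i_{jk}\,\dot{x}^j V^k = 0$ with the Christoffel symbols of the round metric:

```python
import numpy as np

# Unit 2-sphere, coordinates (theta, phi), metric ds^2 = d theta^2 + sin^2(theta) d phi^2.
# Nonzero Christoffel symbols:
#   Gamma^theta_{phi phi} = -sin(theta) cos(theta)
#   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta) / sin(theta)
# Curve: the latitude circle theta = theta0, phi = t, with t running from 0 to 2*pi.

theta0 = np.pi / 3
V = np.array([1.0, 0.0])           # components (V^theta, V^phi) of the transported vector

steps = 200_000
dt = 2 * np.pi / steps
for _ in range(steps):             # explicit Euler integration of the transport equation
    dV_theta = np.sin(theta0) * np.cos(theta0) * V[1]
    dV_phi = -(np.cos(theta0) / np.sin(theta0)) * V[0]
    V = V + dt * np.array([dV_theta, dV_phi])

print(V)   # for theta0 = pi/3 the vector returns reversed: a rotation by 2*pi*cos(theta0) = pi
```

The net rotation by $2\pi\cos\theta_0$ after one loop is the classical holonomy of the round sphere; the parallel displacement around this closed curve is not the identity.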
A linear connection on a vector bundle is a differential-geometric structure on a differentiable vector bundle $\pi: X \rightarrow B$ which associates with every piecewise-smooth curve $L$ in $B$ beginning at $x_0$ and ending at $x_1$ a linear isomorphism of the fibres $\pi^{-1}(x_0)$ and $\pi^{-1}(x_1)$ as vector spaces, called parallel displacement along $L$. A linear connection is determined by a horizontal distribution on the principal bundle $P$ of frames in the fibres of the given vector bundle. Analytically, a linear connection is specified by a matrix of $1$-forms $\omega_\alpha^\beta$ on $P$, where $\alpha, \beta = 1, \dots, k$ and $k$ denotes the dimension of the fibres, such that the $2$-forms

$$d\omega_\alpha^\beta + \omega_\alpha^\gamma \wedge \omega_\gamma^\beta = \Omega_\alpha^\beta$$

are semi-basic, that is, in every local coordinate system $(x^i)$ on $B$ they can be expressed in the form

$$\Omega_\alpha^\beta = \frac{1}{2} R_{\alpha ij}^\beta \, dx^i \wedge dx^j.$$

The horizontal distribution is determined, moreover, by the differential system $\omega_\alpha^\beta = 0$ on $P$. The $2$-forms $\Omega_\alpha^\beta$ are called curvature forms. According to the holonomy theorem they determine the holonomy group of the linear connection.

A linear connection in a fibre bundle $E$ is a connection under which the tangent vectors of horizontal curves beginning at a given point $y$ of $E$ form a vector subspace $\Delta_y$ of $T_y(E)$; the linear connection is determined by the horizontal distribution $\Delta: y \mapsto \Delta_y$.

References

[1] A. Lichnerowicz, "Global theory of connections and holonomy groups", Noordhoff (1976) (Translated from French)
[2] S. Kobayashi, K. Nomizu, "Foundations of differential geometry", 1, Interscience (1963)
2021-10-28 12:11:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8314594626426697, "perplexity": 182.53665728065127}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00254.warc.gz"}
https://puzzling.stackexchange.com/questions/61132/scheduling-meetings
# Scheduling Meetings I came across this problem in real life and thought it could be made into an interesting puzzle. I will enjoy seeing how my eventual solution could be improved! Here's the situation. • There are 290 themes that need to be discussed. • Each theme requires discussion by some subset of 22 people. • We need to schedule an efficient number of meetings so that: • All themes get discussed once • All the people who are impacted by the theme are in the discussion. It would, of course, be possible to have 290 meetings with just the subset of people who are needed. On the other extreme, we could get all 22 people together and have one giant meeting. The themes will be quite quick to discuss (say 10 minutes), but we don't want to waste people's time sitting in meetings where many of the themes do not impact them. So the plan will be to have a number of meetings with different subsets of the people and cover all the themes. Part of the puzzle is to think about what efficient means here in real life terms. So if you have some ideas to improve the scoring then you can include that and I may update the scoring system in response. There's a qualitative aspect to this where neither of the two extremes above is satisfactory. But for the sake of scoring, here's how we will do it: 1. Convening a meeting wastes 15 minutes of time per participant 2. Being in a meeting wastes 10 minutes per theme for each participant who is not involved with the theme 3. A meeting wastes 5 minutes per participant every 10 themes (coffee break) So we want to minimize the waste. By my reckoning: • One big meeting of everyone wastes 55,180 minutes • 290 separate meetings wastes 13,620 minutes • The solution I found wastes 9,530 minutes I calculated this with a python script. With one big meeting we have: • 21 participants are required (I should have realized that person R doesn't meet anyone) and we are going to go through all 290 themes. • The meeting wastes 15 mins * 21 people = 315 mins to convene (scoring rule 1) • The first theme (1516) only engages 4 people (B, M, P, and T). So the other 17 people are wasting ten minutes each. So add 10 mins * 17 people = 170 mins to the waste (scoring rule 2) • This will have to be repeated for the other 289 ideas. This is easy to calculate. There are 908 people themes (sum of the 1s in the below database), so it's (21 people * 290 themes)-908 involved people = 5182 uninvolved people. So total waste from scoring rule 2 is 51,820 • After 10 themes, 5 mins * 21 people are wasted, so over the course of the whole meeting there will be 29 such breaks which will waste 5 min * 21 people * 29 breaks = 3045 mins (scoring rule 3) • So the total waste is 55,180mins If we have 290 meetings with just the right people: • It wastes 908 * 15 mins = 13,620 mins (scoring rule 1) to convene the meetings • No time is wasted in the meetings (scoring rule 2) • No meetings have more than 10 themes (scoring rule 3) • Total waste: 13,620 mins The themes are listed below along with the participants. 
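Before the data, here is a minimal sketch of the waste score defined above (mine, not from the original post; the data structures are assumptions, and the coffee-break count follows the convention of the one-big-meeting calculation, one break per full block of 10 themes):

```python
def waste_minutes(meetings, themes):
    """meetings: list of (participants, theme_ids); themes: theme id -> set of involved people."""
    total = 0
    for participants, theme_ids in meetings:
        total += 15 * len(participants)                           # rule 1: convening
        for t in theme_ids:
            total += 10 * len(participants - themes[t])           # rule 2: uninvolved listeners
        total += 5 * len(participants) * (len(theme_ids) // 10)   # rule 3: coffee breaks
    return total

# Toy example with the two extremes (4 people, 3 themes):
themes = {1: {"A", "B"}, 2: {"B", "C"}, 3: {"C", "D"}}
everyone = {"A", "B", "C", "D"}
one_big_meeting = [(everyone, [1, 2, 3])]
separate_meetings = [(themes[t], [t]) for t in themes]
print(waste_minutes(one_big_meeting, themes))    # 120
print(waste_minutes(separate_meetings, themes))  # 90
```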
And in a .CSV format: Theme,A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V 1516,,1,,,,,,,,,,,1,,,1,,,,1,, 2339,,1,,,,1,,,,,,1,,,,,,,1,,1, 4134,1,1,,,,1,,1,,,,,,1,,,,,,,, 1567,,,,,,,,,,,1,1,,,,,,,1,1,1, 1526,1,,,,,,1,,,,,,1,,,1,,,,,,1 3718,1,,,,1,,1,,,,,,,,,,,,1,,,1 2791,1,,,,1,1,1,,,,,1,,,,,,,,,, 574,,,,,,1,1,1,,1,,1,,,,,,,,,, 1627,1,1,,1,1,,,,,,1,,,,,,,,,1,, 3246,,,,,,,,,,,1,1,1,,1,1,,,,1,, 3120,1,1,1,,,,,,,,1,,,,,,,,1,,1, 3893,,,1,1,1,,,1,,,1,1,,,,1,,,,,, 3265,,,1,,,,,1,,,1,1,1,,,1,,,,1,, 363,1,1,1,,,1,,,,,1,,,,,,,,1,,1, 1709,1,,,,,,1,1,,,,,1,,,,,,1,,1,1 3500,,,1,1,1,,1,1,,,1,,,,,1,,,,,, 3125,,,1,,,,1,1,,,1,1,1,,1,,,,,1,, 3428,1,1,1,,,,1,1,,,,,1,,1,,,,1,1,, 244,1,1,,,1,,,,,,1,1,1,,,1,,,1,1,, 2448,1,1,1,,,,,1,,,1,1,1,,,1,,,,1,, 3434,,,,,,,1,,,,,,,,,,,,,,1,1 1240,,1,,1,,,,,,,,,,,1,,,,,,, 476,,,,,,,,,,,,1,,,,,,,,1,1,1 3025,1,,,,,,,,,,1,,1,,,,,,,,1, 442,,1,,,,1,,,,,,,1,,,,,,1,,, 3400,1,1,,,,,,,,,1,1,,,,,,,,,1, 371,1,,,,,,1,1,,,,,,,1,,,,,,,1 1939,1,,,,,,1,1,,,,,1,,,,,,,,,1 4241,,,,,,,,,,,,,,,,,1,,1,,, 971,,,,,,,,,,,,,,,,,1,,,,1, 1117,,,,,,,,,,1,,,1,,,,,,,,, 258,,,,,,,,,,1,,,,1,,,,,,,, 611,,,,,,,1,,,,,1,,,,,,,,,,1 1703,,,,1,,,1,,,,1,,,,,,,,,,, 2136,,,,1,1,,,1,,,,,,,,,,,,,, 3307,,,,,,,,,,,1,,,,1,1,,,,,, 879,,1,,,,1,,,,,,,,,,,,,1,,1, 3404,,,,,,,1,,,,1,1,,,,,,,,,,1 1737,1,,1,,1,,,,1,,,,,,,,,,,,, 1736,1,,1,,1,,,,1,,,,,,,,,,,,, 1768,1,,,,1,,1,,,,,,1,,,,,,,,, 1447,,,,,,,,,,1,1,1,,,1,,,,,,, 3843,1,,,1,1,,1,,,,,,1,,,,,,,,, 3890,,,,1,1,,,,,,,,1,,,,,,,,, 3435,,,,1,1,,,,,,,,,,,1,,,,,, 956,,,,1,,,,,,,,1,1,1,,,,,,,, 1491,,,,1,1,1,1,,,,,1,1,1,,1,,,,,, 2216,,1,,,1,,,,,,,,,,,,,,,1,, 500,1,,,,1,,,,,,,,1,,,,,,,,, 962,1,,1,,1,,,1,,,1,,,,,,,,,,, 3308,1,1,1,,1,,,1,,,1,1,,,1,,,,,1,, 3218,1,1,1,,1,,,1,,,1,1,1,,1,,,,,1,, 4245,,,1,,,,,1,,,,,,,,,,,,,, 1759,,1,,,,,,,,,,,1,,,,,,,,1, 3999,1,,,,,,,,,,,,,,1,,,,,,1, 3934,,,,,1,,,,,,,,,,,,,,1,,1, 3624,,,,,,,,,,,1,1,,,1,,,,,1,, 2376,1,1,,,,,,,,,,,,,1,,,,,,1, 2866,,,,,1,,,,,,,,,,,,,,1,1,1, 33,,,,1,1,,,,,,1,1,,,,,,,,,, 3432,1,1,,,,,,,,,,1,1,,,,,,,1,, 985,1,,1,1,1,,,,,,1,,,,,,,,,,, 3176,1,,,1,1,,,,,,1,,,,,1,,,,,, 647,,,1,1,1,,,,,,1,1,,,,1,,,,,, 497,,,1,1,1,,,,,,1,1,,,,1,,,,,, 818,1,,1,1,1,,,,,,1,1,,,,1,,,,,, 2731,1,1,1,,,,,,,,1,1,1,,1,,,,,1,, 3374,1,1,1,,,,,1,,,1,1,1,,1,,,,,1,, 3429,1,1,,,1,,,,,,,,1,,1,,,,1,1,1,1 2218,1,1,1,,,,,1,,,1,1,1,,1,,,,1,1,, 3841,,,,,,,,,,,,,1,,,,,,,,, 1694,,,,,,,1,1,,,,,,,,,,,,,, 3342,,,,,,,1,1,,,,,,,,,,,,,, 3966,,,,,,,1,1,,,,,,,,,,,,,, 2670,,,,,,,1,1,,,,,,,,,,,,,, 3655,,,,,,,1,1,,,,,,,,,,,,,, 3967,,,,,,,1,1,,,,,,,,,,,,,, 92,1,,,,,,,1,,,,,,,,,,,,,, 3192,1,,,,,,,1,,,,,,,,,,,,,, 2514,,,,1,,,,,,,,,,,,1,,,,,, 137,,,,1,,,,,,,,,,,,1,,,,,, 138,,,,1,,,,,,,,,,,,1,,,,,, 147,,,,1,,,,,,,,,,,,1,,,,,, 3223,,,1,,,,,,,,,,,,,1,,,,,, 3443,,,1,,,,,,,,,,,,,1,,,,,, 996,,,1,1,,,,,,,,,,,,,,,,,, 3469,,,1,1,,,,,,,,,,,,,,,,,, 1087,1,,,,,,,,,,,,,,,,,,1,,, 915,1,,,,,,,,,,,,,,,,,,1,,, 2798,1,,,,,,,,,,,,,,,,,,1,,, 2369,1,1,,,,,,,,,,,,,,,,,,,, 2805,1,,,,,1,,,,,,,,,,,,,,,, 1661,1,,,,,1,,,,,,,,,,,,,,,, 2785,1,,,,,1,,,,,,,,,,,,,,,, 3001,1,,,,,,,,,,,,1,,,,,,,,, 3727,1,,,,,,,,,,,,1,,,,,,,,, 2103,,1,,,,1,,,,,,,,,,,,,,,, 2368,,1,,,,1,,,,,,,,,,,,,,,, 3831,,,,,,1,,,,,,,,,,,,,,,1, 2533,,,,1,1,,,,,,,,,,,,,,,,, 2449,,,,1,1,,,,,,,,,,,,,,,,, 2202,,,,,1,,,,,,1,,,,,,,,,,, 1730,,,,,1,,,,1,,,,,,,,,,,,, 1731,,,,1,,,,,1,,,,,,,,,,,,, 1738,,,,1,,,,,1,,,,,,,,,,,,, 1125,,,1,,1,,,,,,,,,,,,,,,,, 2236,,,,1,,,,1,,,,,,,,,,,,,, 2889,,,,1,,,,1,,,,,,,,,,,,,, 3659,,,,1,,,,1,,,,,,,,,,,,,, 3636,,,,,,,,1,,,,,,,,1,,,,,, 3657,,,,,,,,1,,,,,,,,1,,,,,, 718,,,,,,,,,,,1,,,,,1,,,,,, 1701,,,,,1,,1,,,,,,,,,,,,,,, 
1697,,,,,1,,1,,,,,,,,,,,,,,, 1691,,,,,,,1,,,,,,1,,,,,,,,, 3163,,,,,,,1,,,,,,1,,,,,,,,, 3224,,,,,,,1,,,,,,,1,,,,,,,, 3272,,,,,,,1,,,,,,,1,,,,,,,, 3319,,,,,,,,,,,,,1,1,,,,,,,, 576,,,,,,,,1,,,,,1,,,,,,,,, 1900,,,,,,,,,,,,,,1,,,1,,,,, 2863,,,,,,,,,,,,,,1,,,1,,,,, 3851,,,,,,,,,,,,,,1,,,1,,,,, 3961,,,,,,,,,,,,,,,,,,,1,,1, 2446,,,,,,,1,,,,,,,,,,,,,,1, 3204,,,,,,,1,,,,,,,,,,,,,,1, 3302,,,,,,,,,,,,,,1,,1,,,,,, 3376,1,1,,,,,,,,,,,,,,,,,1,,, 285,1,1,,,1,,,,,,,,,,,,,,,,, 116,1,,,,1,,,,,,,,,,,,,,1,,, 2433,1,1,,,,,,,,,,,,,,,,,,,1, 307,1,1,,,,,,,,,,,,,,,,,,,1, 2447,1,1,,,,,,,,,,,,,,,,,,,1, 2462,1,1,,,,,,,,,,,,,,,,,,,1, 2480,1,1,,,,,,,,,,,,,,,,,,,1, 2491,1,1,,,,,,,,,,,,,,,,,,,1, 1922,,1,,,,1,,,,,,,,,,,,,,,1, 1298,1,,,,1,,,,,,1,,,,,,,,,,, 1640,,,,1,1,,,,,,1,,,,,,,,,,, 2773,,,,1,1,,,,,,1,,,,,,,,,,, 3041,,,,1,1,,,,,,1,,,,,,,,,,, 2993,1,,,1,1,,,,,,,,,,,,,,,,, 1192,,,1,1,1,,,,,,,,,,,,,,,,, 1734,,,1,1,,,,,1,,,,,,,,,,,,, 1741,,,1,1,,,,,1,,,,,,,,,,,,, 105,,,1,1,,,,,1,,,,,,,,,,,,, 216,,,,1,,,,1,,,1,,,,,,,,,,, 2638,,,,1,,,,1,,,1,,,,,,,,,,, 1880,,,,,,,1,,,,,,1,1,,,,,,,, 1903,,,,,,,1,1,,,,,,,,,1,,,,, 2390,,,,,,,1,1,,,,,,,,,1,,,,, 2913,,,,,,,,,,,,,1,1,,,1,,,,, 3119,1,,,,,,,,,,,1,,,,,,,,,1, 836,,,,,,,,,,,,1,,,,,,,1,,1, 2378,,,,,1,,,,,,1,1,,,,,,,,,, 2199,1,1,,,1,,,,,,,,,,,,,,1,,, 4242,,,,,,,1,1,,,,,,1,,,1,,,,, 2767,1,1,,,,,,,,,,1,,,,,,,,,1, 3014,1,1,,,,,1,,,,,,,,,,,,,,1, 477,1,,,,1,,,,,,1,,,,,1,,,,,, 1350,,,,,,,,,,,1,1,,1,,1,,,,,, 2047,,,,,,,,,,,1,1,,1,,1,,,,,, 3503,,,,,,,,,,,1,1,,1,,1,,,,,, 4207,,,,,,,,,,,1,1,,1,,1,,,,,, 303,1,1,,,,,,,,,,1,,,,,,,1,,1, 318,,,,,1,,,,,,1,1,,1,,1,,,,,, 475,,,,,,,,,,,,1,,,,,,,,,, 2600,,,,,,,,,,,,1,,,,,,,,,, 4159,,,,,,,,,,,,1,,,,,,,,,, 4195,,,,,,,,,,,,1,,,,,,,,,, 2201,,,,,1,,,,,,,1,,,,,,,,,, 454,,,,,1,,,,,,,1,,,,,,,,,, 1025,,,,,1,,,,,,,1,,,,,,,,,, 1302,,,,,1,,,,,,,1,,,,,,,,,, 4346,,,,,1,,,,,,,1,,,,,,,,,, 485,,,,,,,,,,,1,1,,,,,,,,,, 589,,,,,,,,,,,1,1,,,,,,,,,, 681,,,,,,,,,,,1,1,,,,,,,,,, 2513,,,,,,,,,,,1,1,,,,,,,,,, 316,,,,,,,,,,,1,1,,,,,,,,,, 3066,,,,,,,,,,,1,1,,,,,,,,,, 2792,1,,,,,,,,,,,1,,,,,,,,,, 3191,1,,,,,,,,,,,1,,,,,,,,,, 3506,1,,,,,,,,,,1,,,,,,,,,,, 1245,,,,,,,,,,1,1,,,,,,,,,,, 1121,,,1,,,,,,,,,1,,,,,,,,,, 3219,,1,,,,,,,,,,1,,,,,,,,,1, 284,,,,,,,,,,1,1,1,,,,,,,,,, 295,,,,,,,,,,1,1,1,,,,,,,,,, 1242,,,,,,,,,,1,1,1,,,,,,,,,, 1439,,,,,,,,,,1,1,1,,,,,,,,,, 721,,,,,,,,,,,1,1,,,,1,,,,,, 1160,,,1,,,,,,,,1,,,,,1,,,,,, 289,,,1,,,,,,,,1,,,,,1,,,,,, 3466,,,1,,,,,,,,1,,,,,1,,,,,, 1128,,,,1,,,,,,,1,1,,,,,,,,,, 2763,,1,,,,,,,,,1,1,,,,,,,,,1, 3095,,1,,,,,,,,,1,1,,,,,,,,,1, 3096,,1,,,,,,,,,1,1,,,,,,,,,1, 2946,,,1,1,,,,,,,1,,,,,1,,,,,, 2947,,,1,1,,,,,,,1,,,,,1,,,,,, 2949,,,1,1,,,,,,,1,,,,,1,,,,,, 1693,,,,,,,1,,,,,1,,,,,,,,,, 1878,,,,,,,1,,,,,1,,,,,,,,,, 3704,,,,,,,1,,,,,1,,,,,,,,,, 3830,,,,,,,1,,,,,1,,,,,,,,,, 4243,,,,,,,1,,,,,1,,,,,,,,,, 1294,,,,,,,,,,1,,1,,,,,,,,,, 2741,,,,,,,,,,1,,1,,,,,,,,,, 2855,,,,,,,,,,1,,1,,,,,,,,,, 3142,,,,,,,,,,1,,1,,,,,,,,,, 3809,,,,,,,,,,1,,1,,,,,,,,,, 359,1,,,,,,,,,,,,,,,,,,,,,1 1030,1,,,,,,,,,,,,,,,,,,1,,,1 1717,1,,,,,,,,,,,,,,,,,,1,,,1 3000,1,,,,,,1,,,,,,,,,,,,,,,1 2062,1,,,,,,1,,,,,,,,,,,,,,,1 3359,1,,,,,,1,,,,,,,,,,,,1,,,1 746,1,,,,,,1,,,,,,,,,,,,1,,,1 1379,1,,,,,,1,,,,,,,,,,,,1,,,1 3360,1,,,,,,1,,,,,,,,,,,,1,,,1 4036,1,,,,,,1,,,,,,,,,,,,1,,,1 1474,,,,,1,,,,,,,,,,,,,,,,, 730,1,,,,,,,,,,,,,,,,,,,,, 141,1,,,,,,,,,,,,,,,,,,,,, 1542,1,,,,,,,,,,,,,,,,,,,,, 1662,1,,,,,,,,,,,,,,,,,,,,, 1747,1,,,,,,,,,,,,,,,,,,,,, 2171,1,,,,,,,,,,,,,,,,,,,,, 115,1,,,,1,,,,,,,,,,,,,,,,, 3129,1,,,,1,,,,,,,,,,,,,,,,, 3274,1,,,,1,,,,,,,,,,,,,,,,, 1368,1,,,,1,,,,,,,,,,,,,,,,, 
58,,,,1,,,1,,,,,,,,,,,,,,, 35,,,,1,,,1,,,,,,,,,,,,,,, 38,,,,1,,,1,,,,,,,,,,,,,,, 39,,,,1,,,1,,,,,,,,,,,,,,, 47,,,,1,,,1,,,,,,,,,,,,,,, 49,,,,1,,,1,,,,,,,,,,,,,,, 62,,,,1,,,1,,,,,,,,,,,,,,, 68,,,,1,,,1,,,,,,,,,,,,,,, 73,,,,1,,,1,,,,,,,,,,,,,,, 91,,,,1,,,1,,,,,,,,,,,,,,, 93,,,,1,,,1,,,,,,,,,,,,,,, 3169,,,,1,,,1,,,,,,,,,,,,,,, 631,,,,,,,1,,,,,,,,,,,,,,,1 1710,,,,,,,,1,,,,,,,,,,,,,,1 2312,,,,,,,1,,,,,,,,,,,,,,,1 2313,,,,,,,1,,,,,,,,,,,,,,,1 4052,,,,,,,1,,,,,,,,,,,,,,,1 2980,,,,,,,,1,,,,,,,,,,,,,,1 2982,,,,,,,,1,,,,,,,,,,,,,,1 4305,,,,,,,,1,,,,,,,,,,,,,,1 761,,,,,,,1,1,,,,,,,,,,,,,,1 824,,,,,,,1,1,,,,,,,,,,,,,,1 1532,,,,,,,1,1,,,,,,,,,,,,,,1 1942,,,,,,,1,1,,,,,,,,,,,,,,1 3239,,,,,,,,1,,,,,,,,,,,1,,,1 3877,1,,,,,,1,1,,,,,,,,,,,,,,1 791,1,,,,,,,1,,,,,,,,,,,1,,,1 832,1,,,,,,,1,,,,,,,,,,,1,,,1 1976,1,,,,,,1,1,,,,,,,,,,,,,,1 2067,1,,,,,,,1,,,,,,,,,,,1,,,1 3231,1,,,,,,1,1,,,,,,,,,,,,,,1 3338,1,,,,,,1,1,,,,,,,,,,,,,,1 770,1,,,,,,1,1,,,,,,,,,,,1,,,1 792,1,,,,,,1,1,,,,,,,,,,,1,,,1 71,1,,,,,,1,1,,,,,,,,,,,1,,,1 844,1,,,,,,1,1,,,,,,,,,,,1,,,1 1530,1,,,,,,1,1,,,,,,,,,,,1,,,1 2060,1,,,,,,1,1,,,,,,,,,,,1,,,1 3394,1,,,,,,1,1,,,,,,,,,,,1,,,1 1050,,,,,,,,,,,1,,,,,,,,,,, 3591,,,,,,,,,,,1,,,,,,,,,,, 2536,,,1,,,,,,,,1,,,,,,,,,,, 3020,,,1,,,,,,,,1,,,,,,,,,,, 3182,,,1,,,,,,,,1,,,,,,,,,,, 3183,,,1,,,,,,,,1,,,,,,,,,,, 3553,,,1,,,,,,,,1,,,,,,,,,,, 3564,,,1,,,,,,,,1,,,,,,,,,,, 3629,,,1,,,,,,,,1,,,,,,,,,,, 4082,,,1,,,,,,,,1,,,,,,,,,,, 3,,,1,,,,,,,,1,,,,,,,,,,, 676,,,1,,,,,,,,1,,,,,,,,,,, 698,,,1,,,,,,,,1,,,,,,,,,,, 909,,,1,,,,,,,,1,,,,,,,,,,, 1926,,,1,,,,,,,,1,,,,,,,,,,, 1499,,,1,,,,,,,,1,,,,,,,,,,, 2912,,,1,,,,,,,,1,,,,,,,,,,, 3453,,,1,,,,,,,,1,,,,,,,,,,, • @Oray, I've added a detailed calculation. Hopefully, it is a little clearer. The essence of the problem is not complicated, but it's a little difficult to explain clearly. Let me know if it's clear or not. This problem would suit you! – Dr Xorile Feb 27 '18 at 18:13 • I think it is clear to me, i just want to understand why there is theme with only one person involved.. – Untitpoi Feb 28 '18 at 9:02 • @Untitpoi Example: Theme 1050 and 3591. Right? – LeppyR64 Feb 28 '18 at 11:59 • There are a few with just one person. They just need to be scheduled with at least that one person. It doesn't make much sense in real life, but the puzzle still works. – Dr Xorile Feb 28 '18 at 20:06 • This almost sounds like something you could throw on code-golf as well. It'd be interesting to see what those folks come up with. – tfitzger Mar 9 '18 at 15:25
2021-06-15 07:45:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3209593594074249, "perplexity": 1359.1858231664412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487617599.15/warc/CC-MAIN-20210615053457-20210615083457-00481.warc.gz"}
https://socratic.org/questions/what-is-the-molar-volume-of-200-0-grams-of-carbon-monoxide
# What is the molar volume of 200.0 grams of carbon monoxide?

Dec 28, 2014

Molar volume problems must contain some information about pressure and temperature; that is the only way a value for a substance's molar volume can be determined. Here's how that works:

Starting from the ideal gas law, $PV = nRT$, let's try to find an expression for molar volume:

$PV = nRT \to \frac{PV}{n} = RT \to \frac{V}{n} = \frac{RT}{P} = V_{molar}$

So a gas' molar volume depends on temperature and pressure. Let's assume that we are at 273.15 K and 1.00 atm, and we want to determine what volume 1 mole of a gas occupies:

$\frac{V}{1\ \text{mole}} = \frac{0.082\ \frac{L \cdot atm}{mol \cdot K} \cdot 273.15\ K}{1.00\ atm} = 22.4\ \frac{L}{mol} = V_{molar}$

This represents the volume 1 mole of any ideal gas occupies at STP - Standard Temperature and Pressure (273.15 K, 1.00 atm).

Let's try to determine the volume occupied by $200.0\ g$ of $CO$ at STP:

$\frac{V}{n} = 22.4\ \frac{L}{mol} \to V = n \cdot 22.4\ \frac{L}{mol}$ - this means that the more moles of a substance you have, the bigger its volume will be at STP.

Since we have $200.0\ g \cdot \frac{1\ \text{mole}\ CO}{28.0\ g} = 7.14$ moles, the volume occupied will be

$V = 7.14\ \text{moles} \cdot 22.4\ \frac{L}{mol} = 160.0\ L$
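A quick check of the arithmetic (my addition, not part of the original answer):

```python
R = 0.082    # L*atm/(mol*K), as used above
T = 273.15   # K
P = 1.00     # atm

molar_volume = R * T / P          # ≈ 22.4 L/mol at STP
moles_CO = 200.0 / 28.0           # molar mass of CO ≈ 28.0 g/mol
volume = moles_CO * molar_volume

print(molar_volume, moles_CO, volume)   # ≈ 22.4 L/mol, 7.14 mol, 160 L
```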
2021-06-20 19:58:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7962084412574768, "perplexity": 564.8275475142378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488253106.51/warc/CC-MAIN-20210620175043-20210620205043-00358.warc.gz"}
https://www.readkong.com/page/determination-of-photonuclear-reaction-cross-sections-on-8653995
# Determination of Photonuclear Reaction Cross-Sections on stable p-shell Nuclei by Using Deep Neural Networks - arXiv Determination of Photonuclear Reaction Cross-Sections on stable p-shell Nuclei by Using Deep Neural Networks Serkan Akkoyun1,3,*, Hüseyin Kaya1, Abdulkadir Şeker2,3, Saliha Yeşilyurt2,3 1 Department of Physics, Faculty of Science, Sivas Cumhuriyet University, Sivas, Turkey 2 Department of Computer Engineering, Faculty of Engineering, Sivas Cumhuriyet University, Sivas, Turkey 3 Artificial Intelligence Systems and Data Science Application and Research Center, Sivas Cumhuriyet University, Sivas, 58140, Turkey Abstract The photonuclear reactions which is induced by high-energetic photon are one of the important type of reactions in the nuclear structure studies. In this reaction, a target material is bombarded by photons with the energies in the range of gamma-ray energy scale and the photons can statistically be absorbed by a nucleus in the target material. In order to get rid of the excess energies of the excited target nuclei, it can first emit protons, neutrons, alphas and light particles according to the separation energy thresholds. After this emitting process, generally an unstable nucleus can be formed. By the investigation of this products forming after photonuclear reactions, nuclear structure information can be obtained. In the present work, (γ, n) photonuclear reaction cross-sections on stable p-shell nuclei have been estimated by using neural network method. The main purpose of this study is to find neural network structures that give the best estimations on the cross-sections and to compare them with each other and available literature data. According to the results, the method is convenient for this Keywords: Photonuclear reaction, cross-section, p-shell nuclei, neural network 1. INTRODUCTION In the nuclear structure studies, reactions induced by photons are one of the important tools. In these types of nuclear reactions, the target nuclei are bombarded by high-energetic photons and the photons can statistically be absorbed by a nucleus in the target material. Because of a nuclear process can be observed in the reaction, these are called as photonuclear reaction (Strauch, 1953). The excited nucleus emits proton, neutron, alpha and light particles first to get rid of excess energy. In the case of neutron emission after absorbing photons, the reaction is called as photo-neutron (γ, n) reaction. The character of the photo-neutron reaction is purely electromagnetic. Therefore, it can be used for understanding nucleon-nucleon interaction, collective motion of the nuclear matter and nuclear state excitation mechanisms. In about 15- 30 MeV energy region, photonuclear reaction cross-sections are large and stable nuclei may be transmuted to short-lived or stable ones by using these reactions. The experimental studies on these reactions have begun in 1934 (Chadwick & Goldhaber, 1934) but there is still lack of existing data. Therefore, systematic studies of photonuclear reactions on different nuclei are needed (Serkan Akkoyun, Bayram, Dulger, vd., 2016). The cross-section values for photo-neutron reactions for different isotopes and different energies are determined either experimentally or theoretically (Ishkhanov & Orlin, 2009; Utsuno vd., 2015). One of the most used theoretical codes for this purpose is TALYS computer code. TENDL database (Koning vd., 2019) is based on this code and other sources such as ENDF. 
The code is a system for the analysis and prediction of nuclear reactions. The basic objective behind its construction is the simulation of nuclear reactions that involve neutrons, photons, protons, deuterons, tritons, 3He and alpha particles, in the 1 keV-200 MeV energy range and for target nuclides of mass 12 and heavier. To achieve this, a suite of nuclear reaction models is implemented in a single code system. One of the easiest ways to produce radioactive isotopes is the photo-neutron (γ, n) reaction. 8Be, 9B, 11C, 13N and 15O can be generated by photo-neutron reactions performed on the stable isotopes 9Be, 10B, 12C, 14N and 16O. Therefore, information about the cross-sections of these reactions on p-shell nuclei at different energies is very important. In the literature, there are no complete data covering all photon energies for these isotopes. In the present study, neural network methods have been employed to predict (γ, n) reaction cross sections at energies from the reaction threshold up to 200 MeV on stable or long-lived p-shell isotopes. The available cross-section data are taken from the TENDL-2019 library [6]. The method generates its own outputs as close as possible to the desired values. One advantage of the method is that it does not require any assumed relationship between the input and output variables. Another advantage is that, in the case of missing data, it can fill in the missing values thanks to its learning ability. Therefore, one can confidently estimate cross-section values for target and energy combinations which are not available in the literature. Recently, neural networks have been used in many fields of nuclear physics. Among them, the studies performed by our group include developing nuclear mass systematics (Tuncay Bayram vd., 2014), obtaining fission barrier heights (Serkan Akkoyun, 2020), obtaining nuclear charge radii (S. Akkoyun vd., 2013), estimation of beta decay energies (Serkan Akkoyun vd., 2014), approximation to the cross sections of the Z boson (Serkan Akkoyun & Kara, 2013; Kara vd., 2014), determination of gamma-ray angular distributions (Yildiz vd., 2018), adjustment of relativistic mean field model parameters (T. Bayram vd., 2018), neutron-gamma discrimination (S. Akkoyun, 2013; Yildiz & Akkoyun, 2013) and estimation of radiation yields for electrons in absorbers (Serkan Akkoyun, Bayram, & Yildiz, 2016).

2. MATERIAL and METHODS

NN (neural network) methods are very powerful mathematical tools for almost all problems; they are modelled on the functionality of the brain and nervous system (Haykin, 1998). They are composed of layers classified into three main groups: input, hidden and output. In each layer there are artificial neuron cells which process the data. Because of the layered structure, this particular type of NN is called a layered NN. In a layered feed-forward NN, the neurons in a layer are connected only to the neurons in the next layer by adaptive synaptic weights, and data flows in the forward direction. The input neurons receive the input data, which are the independent variables of the problem. The received data are then transmitted to the hidden layer neurons, multiplied by the corresponding weights of the connections. All data entering a hidden neuron are summed by a chosen summation function to obtain the net value inside the neuron. The net value is then activated by an appropriate activation function.
The hidden neuron activation function can be theoretically any well-behaved nonlinear function. In this study, tanh (tangent hyperbolic) or ReLU (rectified linear unit) functions have been used for the activations. The advantage of ReLU is its unsaturated gradient, which greatly speeds up the convergence of stochastic gradient landing compared to tanh functions. In the last hidden layer, the data is transmitted to the output layer neurons and NN outputs have been obtained for the dependent variables of the problem. In Fig.1, we have shown the (50-50-50-20) NN structure which is one of the used structures in this study for the determinations of the reaction cross-sections for p-shell stable nuclei. The other used NN structures have been given in Section 3. The inputs were proton number (Z), neutron number (N) of the target nuclei and photon energy (E) impinging upon the target. Only stable or very-long living isotopes have been considered as target nuclei which are 7Li, 9Be, 10 Be (1.51x106 years), 10 B, 11 B, 12 C, 13 C, 14C (5700 years), 14N, 15N and 16O isotopes. The desired output was photo-neutron reaction cross- section for these different isotopes. Figure 1. ANN with (50-50-50-20) structure for the prediction of photo-neutron cross sections for p- shell stable target nuclei The main goal of the method is the determination of the final weight values between neurons by starting random initial values. The NN with best weights gives the NN outputs as close as to the desired values of the problem. In the training stage, NN is trained for the determination of the final best weights by given input and output data values. By the appropriate modifications of the weights, NN modifies its weights until an acceptable error level between NN and desired outputs. The error function was mean square error (MSE) in this study. In the test stage, another dataset of the problem is given to NN and the results are predicted by using the final weights obtained in training process. If the predictions of the test data are well, the ANN is considered to have learned the relationship between input and output data. In this work, Python programming language for the neural network calculations were used. Python programming language contains fast and practical libraries such as pandas, numpy, keras, etc. The data for (γ, n) reaction cross-sections in the literature are studied from threshold energy values to 200 MeV. Total 537 cross-section data has been used for the calculations for p-shell nuclei. All data was divided into three separate sets for training (80%) and test (20%) stages in the calculations. The whole data were obtained from TENDL-2019 reaction cross-section database [6]. The deep sequential neural network model consisting of sequential layers has been used. Each layer added to the deep network is fully connected. In the training stage of NN, the adam optimization algorithm (Kingma & Ba, 2017), which is often preferred in deep learning studies, has been used for optimization. 3. RESULTS and DISCUSSION Although there are cross-section values available in the literature, the data do not cover all energy values for target materials. Besides, it is important to have cross-section information for each desired energy values of the photons to be sent on the target materials. Neural network (NN) methods are suitable and easy way for this task. 
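The paper does not list its code, but a minimal sketch of the kind of model it describes (a fully connected Keras sequential network with (50-50-50-20) hidden layers, ReLU activations, MSE loss and the adam optimizer, taking Z, N and E as inputs) might look as follows; the file and column names and the hyperparameters not quoted in the text are my assumptions:

```python
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical input file with columns Z, N, E (MeV) and xs (cross-section, mb),
# e.g. extracted from the TENDL-2019 tables.
data = pd.read_csv("p_shell_gamma_n.csv")
X = data[["Z", "N", "E"]].to_numpy(dtype="float32")
y = data["xs"].to_numpy(dtype="float32")

rng = np.random.default_rng(0)                 # 80% / 20% train-test split
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train, test = idx[:split], idx[split:]

model = keras.Sequential([
    keras.Input(shape=(3,)),
    layers.Dense(50, activation="relu"),
    layers.Dense(50, activation="relu"),
    layers.Dense(50, activation="relu"),
    layers.Dense(20, activation="relu"),
    layers.Dense(1),                           # predicted cross-section (mb)
])
model.compile(optimizer="adam", loss="mse")    # mean square error, adam optimizer

model.fit(X[train], y[train], epochs=500, batch_size=32, verbose=0)
print("test MSE:", model.evaluate(X[test], y[test], verbose=0))
```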
In the calculations of the present study, the NN method has been employed for the determination of cross-sections whose inputs are the atomic number (Z), the neutron number (N) of the target material and the energy (E) of the incoming photons. Different numbers of hidden layers and neurons have been used, giving the optimal results for their hidden layer configuration classes. These are one hidden layer with 20 neurons, three hidden layers with (3-8-8) configuration, three hidden layers with (50-20-10) configuration, four hidden layers with (50-50-20-20) and five hidden layers with (50-50-20-20-10) configuration, respectively. That is to say, 20 neurons gave preferable results among the one-hidden-layer structures. For each structure, both ReLU and tanh activation functions have been used separately for comparison of the results.

After the determination of the final weights in the training, the NN has first been used on the training dataset. According to the results, the best estimation on the training dataset has been obtained for the (50-50-50-20) structure with an MSE (mean square error) value of 0.021 mb. The maximum deviation (MD) from the literature data for this NN structure is 0.734 mb, for 10Be at 20 MeV photon energy. In these calculations, the ReLU activation function has been used. The corresponding MSE and MD values on the training dataset for the tanh activation function are 0.025 mb and 0.867 mb; the MD has been observed for 14C at 19 MeV photon energy. The MSE value from the ReLU activation function is slightly better than the tanh result on the training dataset. The estimations of the other NN structures are shown in Table 1. For the ReLU function, the MD have been observed between 1.510 and 9.912 mb, for 13C at 18 MeV, 9Be at 19 MeV, 10Be at 19 MeV, 9Be at 24 MeV and 9Be at 17 MeV for the NN structures (20), (3-8-8), (50-20-10), (50-50-20-20) and (50-50-20-20-10). For the tanh function, the MD have been observed between 1.336 and 9.248 mb, for 10Be at 20 MeV, 11B at 18 MeV, 14C at 17 MeV, 14C at 15 MeV and 10Be at 20 MeV for the NN structures (20), (3-8-8), (50-20-10), (50-50-20-20) and (50-50-20-20-10), respectively.

Table 1. Different structure neural network results for the estimations of cross-sections

| Hidden neuron number | Activation function | Training MSE (mb) | Training MD (mb) | Test MSE (mb) | Test MD (mb) |
|---|---|---|---|---|---|
| 20 | ReLU | 4.473 | 9.771 | 3.555 | 7.227 |
| 3-8-8 | ReLU | 4.767 | 9.912 | 7.563 | 9.984 |
| 50-20-10 | ReLU | 0.840 | 6.107 | 1.099 | 5.925 |
| 50-50-20-20 | ReLU | 0.123 | 2.689 | 2.377 | 7.481 |
| 50-50-50-20 | ReLU | 0.021 | 0.734 | 0.168 | 1.654 |
| 50-50-20-20-10 | ReLU | 0.040 | 1.510 | 1.078 | 7.504 |
| 20 | tanh | 2.688 | 9.248 | 6.005 | 9.973 |
| 3-8-8 | tanh | 3.099 | 9.037 | 3.830 | 9.530 |
| 50-20-10 | tanh | 0.140 | 3.631 | 0.260 | 2.003 |
| 50-50-20-20 | tanh | 0.116 | 2.366 | 0.656 | 6.313 |
| 50-50-50-20 | tanh | 0.025 | 0.867 | 0.258 | 3.271 |
| 50-50-20-20-10 | tanh | 0.024 | 1.336 | 0.325 | 3.174 |

To see the generalization capability of the constructed NN, it has been tested on the test dataset. According to the results, the best predictions on the test dataset have been obtained for the same NN structure, with an MSE value of 0.168 mb. The MD from the literature data for this NN structure is 1.654 mb, for 15N at 22 MeV photon energy. The corresponding MSE and MD values on the test dataset for the tanh activation function are 0.258 mb and 3.271 mb; the MD has been observed for 13C at 15 MeV photon energy. The MSE value from the ReLU activation function is about a factor of 1.5 better than the tanh result on the test dataset.
The predictions of other NN structures have also been shown in Table 1. For ReLU function, the MD have been observed between 5.925 and 9.984 mb for 14C at 26 MeV, 15N at 16 MeV, 14 C at 19 MeV, 10Be at 22 MeV and 14C at 17 MeV for the NN structure of (20), (3-8-8), (50- 20-10), (50-50-20-20) and (50-50-20-20-10). For tanh function, MD have been observed between 2.003 and 9.973 mb for 9Be at 20 MeV, 7Li at 22 MeV, 14 N at 16 MeV, 14 C at 18 MeV and 9Be at 22 MeV for the NN structure of (20), (3-8-8), (50-20-10), (50-50-20-20) and (50-50-20-20-10). In Figure 2, we have given the best NN predictions of (50-50-50-20) structure with ReLU activation function on the training dataset in comparison with the available literature data. Although the data is highly non-linear, ANN estimations are in harmony with the literature data. The peaks belong to 7Li, 9Be, 10 Be, 10 B, 11 B, 12 C, 13 C, 14 C, 14 N, 15 N and 16 O isotopes. The largest cross-section has been obtained for 14C isotopes with its maximum value of 33.5 mb at 17 MeV energy value. Its literature value is 33.1 mb. The smallest cross-section has been seen for 12C isotopes. The maximum of the cross-section for this isotope is 2.03 mb at 22 MeV whereas the literature value is 2.00 mb. The maximum cross-section values are 10.23 mb at 22 MeV for 7Li, 14.66 mb at 20 MeV for 9 Be, 26.70 mb at 19 MeV for 10Be, 9.00 mb at 19 MeV for 10B, 12.33 mb at 18 MeV for 11B, 2.03 mb at 22 MeV for 12C, 17.17 mb at 18 MeV for 13C, 33.45 mb at 17 MeV for 14C, 3.28 mb at 17 MeV for 14N, 16.96 mb at 17 MeV for 15N and 0.96 mb at 17 MeV for 16O. Whereas the literature values are 10.72, 14.99, 27.33, 9.05, 12.47, 2.00, 16.95, 33.10, 2.96, 16.93 and 0.96, respectively. The cross-sections get their maximums for the nuclei between 17-22 MeV in the investigated energy range from threshold energies to 200 MeV. The reaction thresholds are 8, 2, 7, 9, 12, 19, 5, 9, 11, 11 and 16 MeV for 7Li, 9Be, 10Be, 10B, 11B, 12C, 13C, 14C, 14N, 15 N and 16O isotopes, respectively. Figure 2. Literature (TENDL) data and best NN estimations with (50-50-50-20) structure on photo- neutron reaction cross-section on stable p-shell nuclei (top) and differences between them (bottom) In Figures 3-7, we have given the differences between the NN predictions and the literature values on relevant cross-section data. These have been shown for both training and test datasets separately for either ReLU or tanh activation functions. Figure 3. Difference between literature (TENDL) data and NN (20) estimations on test (top) and train (bottom) datasets with ReLU (left) and tanh (right) functions For the 20 neurons in one hidden layer NN structure, the estimations on the training data for tanh activation function are better than the ReLU results. Namely, the training of the NN has been performed better for tanh, whereas the test of the NN is slightly worst (Figure 3). However, it is not appropriate to use this NN structure since the estimates are spread around 10 mb. For the (3-8-8) hidden layer configuration of NN, tanh activation function gives slightly better results on both train and test datasets (Figure 4). But since the estimates still reach around 10 mb, this structure is also not suitable for use. Figure 4. 
Difference between literature (TENDL) data and NN (3-8-8) estimations on test (top) and train (bottom) datasets with ReLU (left) and tanh (right) functions For the (50-20-10) hidden layer configuration of NN which is larger in terms of neuron numbers, the estimations for tanh activation function are better than the ReLU results. The results are 6 and 4 factors better for train and test datasets, respectively (Figure 5). The deviations for predictions on test datasets are between -2 and 2 mb indicate that the larger structures become convenient for the problem. For the (50-50-20-20) hidden layer configuration of NN, the estimations for tanh activation function are slightly better than the ReLU results on the train dataset. Furthermore, the predictions on the test datasets with tanh function are 3.6 factors better (Figure 6). Still, the NN structure should be improved for the good estimations on the cross-section data especially for ReLU. Figure 5. Difference between literature (TENDL) data and NN (50-20-10) estimations on test (top) and train (bottom) datasets with ReLU (left) and tanh (right) functions For the (50-50-50-20) hidden layer configuration of NN, the estimations for ReLU activation function are somewhat better than the tanh results on both train and test datasets. The results are 6 and 4 factors better for train and test datasets, respectively (Figure 7). It is clear in the figure that the predictions are concentrated between -1 and 1 mb. The best results have been obtained by using this NN structure. Figure 6. Difference between literature (TENDL) data and NN (50-50-20-20) estimations on test (top) and train (bottom) datasets with ReLU (left) and tanh (right) functions Lastly, we have tried to larger hidden layer number structure with the (50-50-20-20-10) configuration. For this NN, the training has been performed better by using ReLU activation function than tanh. The estimations on train dataset are 6 factors better than the estimations by using tanh. Whereas for the predictions on test dataset, tanh gives 3.3 factors better results than those of ReLU (Figure 8). Using more than four hidden layers causes results to get worse again. Figure 7. Difference between literature (TENDL) data and NN (50-50-50-20) estimations on test (top) and train (bottom) datasets with ReLU (left) and tanh (right) functions Figure 8. Difference between literature (TENDL) data and NN (50-50-20-20-10) estimations on test (top) and train (bottom) datasets with ReLU (left) and tanh (right) functions 4. CONCLUSIONS In this work, (γ, n) photo-neutron reaction cross-sections for the stable or long-lived isotopes in p-shell have been predicted by using neural network (NN) methods with the different hidden layer and neuron combinations in the threshold to 200 MeV energy range. The results have been compared with each other and the available literature data. The data for the applications of the methods have been borrowed from TENDL-2019 nuclear data library. According to the results, the predictions for the cross-sections are very close to the available literature data. Therefore, one can use the NN methods for the obtaining of photo-neutron reaction cross-sections whose values are not available in the literature. In detail, the increase in the number of hidden layers used and the number of hidden neurons generally improves the results. The obtained better results have generally been come from the activation function of tanh. 
But the present problem, (50-50-50-20) hidden layer configuration in four hidden layer with ReLU function have given the best results. The use of four hidden layers (deep neural network) with many neurons is more suitable for the obtaining of photo-neutron reaction cross-sections on p-shell nuclei. References Akkoyun, S. (2013). Time-of-flight discrimination between gamma-rays and neutrons by using artificial neural networks. Annals of Nuclear Energy, 55, 297-301. https://doi.org/10.1016/j.anucene.2013.01.006 Akkoyun, S., Bayram, T., Kara, S. O., & Sinan, A. (2013). An artificial neural network application on nuclear charge radii. Journal of Physics G: Nuclear and Particle Physics, 40(5), 055106. https://doi.org/10.1088/0954-3899/40/5/055106 Akkoyun, Serkan. (2020). Estimation of fusion reaction cross-sections by artificial neural networks. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 462, 51-54. https://doi.org/10.1016/j.nimb.2019.11.014 Akkoyun, Serkan, Bayram, T., Dulger, F., Đapo, H., & Boztosun, I. (2016). Energy level and half-life determinations from photonuclear reaction on Ga target. International Journal of Modern Physics E, 25(08), 1650045. https://doi.org/10.1142/S0218301316500452 Akkoyun, Serkan, Bayram, T., & Turker, T. (2014). Estimations of beta-decay energies through the nuclidic chart by using neural network. Radiation Physics and Chemistry, Akkoyun, Serkan, Bayram, T., & Yildiz, N. (2016). Estimations of Radiation Yields for Electrons in Various Absorbing Materials. Cumhuriyet Üniversitesi Fen-Edebiyat Fakültesi Fen Bilimleri Dergisi, 37, 59-65. Akkoyun, Serkan, & Kara, S. O. (2013). An approximation to the cross sections of Zlboson production at CLIC by using neural networks. Central European Journal of Physics, 11(3), 345-349. https://doi.org/10.2478/s11534-012-0168-y Bayram, T., Akkoyun, S., & Şentürk, Ş. (2018). Adjustment of Non-linear Interaction Parameters for Relativistic Mean Field Approach by Using Artificial Neural Networks. Physics of Atomic Nuclei, 81(3), 288-295. https://doi.org/10.1134/S1063778818030043 Bayram, Tuncay, Akkoyun, S., & Kara, S. O. (2014). A study on ground-state energies of nuclei by using neural networks. Annals of Nuclear Energy, 63, 172-175. https://doi.org/10.1016/j.anucene.2013.07.039 Chadwick, J., & Goldhaber, M. (1934). A Nuclear Photo-effect: Disintegration of the Diplon by -Rays. Nature, 134(3381), 237-238. https://doi.org/10.1038/134237a0 Haykin, S. (1998). Neural Networks: A Comprehensive Foundation (2 edition). Prentice Hall. Ishkhanov, B. S., & Orlin, V. N. (2009). Description of cross sections for photonuclear reactions in the energy range between 7 and 140 MeV. Physics of Atomic Nuclei, 72(3), 410-424. https://doi.org/10.1134/S1063778809030041 Kara, S. O., Akkoyun, S., & Bayram, T. (2014). Probing for leptophilic gauge boson Zl at ILC with $\sqrt{s} = 1~{\rm TeV}$ by using ANN. International Journal of Modern Physics A, 29(30), 1450171. https://doi.org/10.1142/S0217751X14501711 Kingma, D. P., & Ba, J. (2017). Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs]. http://arxiv.org/abs/1412.6980 Koning, A. J., Rochman, D., Sublet, J.-Ch., Dzysiuk, N., Fleming, M., & van der Marck, S. (2019). TENDL: Complete Nuclear Data Library for Innovative Nuclear Science and Technology. Nuclear Data Sheets, 155, 1-55. https://doi.org/10.1016/j.nds.2019.01.002 Strauch, K. (1953). Recent Studies of Photonuclear Reactions. Annual Review of Nuclear Science, 2(1), 105-128. 
https://doi.org/10.1146/annurev.ns.02.120153.000541 Utsuno, Y., Shimizu, N., Otsuka, T., Ebata, S., & Honma, M. (2015). Photonuclear reactions of calcium isotopes calculated with the nuclear shell model. Progress in Nuclear Energy, 82, 102-106. https://doi.org/10.1016/j.pnucene.2014.07.036 Yildiz, N., & Akkoyun, S. (2013). Neural network consistent empirical physical formula construction for neutron–gamma discrimination in gamma ray tracking. Annals of Nuclear Energy, 51, 10-17. https://doi.org/10.1016/j.anucene.2012.07.042 Yildiz, N., Akkoyun, S., & Kaya, H. (2018). Consistent Empirical Physical Formula Construction for Gamma Ray Angular Distribution Coefficients by Layered Feedforward Neural Network. Cumhuriyet Science Journal, 39(4), 928-933. https://doi.org/10.17776/csj.476733
2020-10-31 01:24:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5390886068344116, "perplexity": 4168.871792602984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107912593.62/warc/CC-MAIN-20201031002758-20201031032758-00100.warc.gz"}
http://math.stackexchange.com/questions/335376/finding-the-inverse-of-a-function/335380
# Finding the inverse of a function

In my notes, I have an example of finding the inverse to a function defined as follows: $$f:\{x\in\mathbb{R}\mid x\neq 0\}\to\{x\in\mathbb{R}\mid x\neq 2\}, f(x)\mapsto\frac{2x-1}{x}$$ The prof went on to prove that the function was bijective before finding the inverse. By solving for x, he got the range: $$x=\frac{1}{2-y}=\{x\in\mathbb{R}\mid x\neq 2\}$$ which matches the codomain above. Now, he found the inverse by swapping the x and y and then solving for y again, and then he got $$y=\frac{1}{2-x}$$ So... my question is: Is it necessarily true (or are there cases that prove otherwise) that the inverse of a function (if it exists) takes the same form as the range, but with the x and y variables swapped? -

The inverse of a function is a function. The range of a function is a set. So the inverse of a function isn't even the same kind of thing as the range, and it doesn't make sense to ask whether it has the same form. –  Gerry Myerson Mar 20 '13 at 1:47

I'd write either $f(x)=\dfrac{2x-1}{x}$ or $x\overset{f}{\mapsto}\dfrac{2x-1}{x}$, but never $f(x)\mapsto\dfrac{2x-1}{x}$. –  Michael Hardy Mar 20 '13 at 1:51

If a function's inverse exists, then the range of $f$ is the domain of $f^{-1}$, and vice-versa: ...because if a function's inverse exists, the function is then bijective: a one-to-one and onto function. Then the domain of the function is the "image" of its inverse, and the range is the image of the domain.

NOTE: I'm not clear what you mean by the inverse function having the same form as the range. If you mean to ask whether the domain of the inverse function is the range of the function, then yes, that's true, and that's what I'm addressing above. Otherwise, please clarify.

The procedure your professor used is a good tool for finding both the inverse function, if it exists, and for defining the inverse of the image of a function. In your case, if you are asking whether a function and its inverse (if it exists) have the same "form", you need to be clear about what you mean. If "same form" means that both your $f(x)$ and $f^{-1}(x)$ are represented as quotients of polynomials, then know that this will not always be the case. If we want to find the inverse of the image of $f(x) = x^2$, then $$y = x^2 \iff \pm\sqrt y = x \implies f^{-1}(x) = \pm\sqrt x$$ Here, we have $f(x) = x^2,\,$ and $\,f^{-1}(x) = \pm\sqrt x$, which hardly appear to be of the same "form" as far as functions go. Or consider the functions $f: \mathbb R \to \mathbb R^+$: $$f(x) = e^x, \;\; f^{-1}(x) = \ln x$$ Would these functions be considered of the same "form"? Feel free to give further clarification if your question is other than what's been addressed. -

The point is, what your prof got, $x=\cfrac{1}{2-y}$, is not the range. It is the inverse function, although formally, it is better written as $f^{-1}(y)=\cfrac{1}{2-y}$. Then, it matches the other result, which is $f^{-1}(x)=\cfrac{1}{2-x}$. The choice of notation for the variable, i.e. the $x$ in $f(x)$ or the $y$ in $f(y)$, does not matter. -
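For readers who want to double-check this kind of computation, here is a small sketch (an illustration only, not part of the original thread) that solves $y=\frac{2x-1}{x}$ for $x$ with SymPy and verifies the composition:

```python
# Quick check of the inverse with SymPy (illustrative sketch).
import sympy as sp

x, y = sp.symbols("x y")
inv = sp.solve(sp.Eq(y, (2*x - 1)/x), x)[0]     # equals 1/(2 - y) after rewriting
print(sp.simplify(inv - 1/(2 - y)))             # prints 0, so inv == 1/(2 - y)

f = (2*x - 1)/x
print(sp.simplify(f.subs(x, 1/(2 - x))))        # prints x: f(f^{-1}(x)) = x
```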
2015-08-31 16:01:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8803806304931641, "perplexity": 168.45102837829674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066266.26/warc/CC-MAIN-20150827025426-00295-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.vasp.at/wiki/index.php/LSELFENERGY
# LSELFENERGY

If LSELFENERGY=.TRUE., the frequency-dependent self-energy $\langle \psi_{n\mathbf{k}} | \Sigma(\omega) | \psi_{n\mathbf{k}} \rangle$ is evaluated. The evaluation of QP shifts is bypassed in this case.
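As a rough illustration only (not taken from the VASP manual), an INCAR fragment that switches this evaluation on during a GW-type run might look as follows; the ALGO flavour and the NOMEGA value here are arbitrary assumptions chosen for the example.

```
ALGO = GW0
NOMEGA = 64
LSELFENERGY = .TRUE.
```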
2020-04-05 23:51:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.254660427570343, "perplexity": 3042.150961291021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371611051.77/warc/CC-MAIN-20200405213008-20200406003508-00299.warc.gz"}
https://motls.blogspot.com/2020/03/california-under-curfew-truly-crappy.html?m=1
## Friday, March 20, 2020 ### California under curfew: a truly crappy apocalyptic movie A few hours ago, a wannabe Hitler, the governor of California, has issued the self-evidently unconstitutional "order" (he emphasized it wasn't just a "request") to 40 million Californians – inhabitants of the world's most admired state of a comparable size, at least in modern technologies – to stay home for a month. A positive detail is that many of the locals have "online jobs", anyway. But that's not the case of many other states that will probably emulate this insanity soon. Increasingly unhinged psychopaths are clearly increasingly taking over everything that is left from the civilization. This guy has also ordered 56% of Californians (such a nice precision!) to contract the virus in 8 weeks. Clearly, the actual number isn't known even plus minus two orders of magnitude (and greatly depends on details of the definition of "infected", anyway). To claim he can predict two significant figures just proves that he is a complete hack. Affluenflammation is about the pathological obsession with looking for medical problems. The original song was Californication. This remake is mainly against anti-vaxxers but you know what: When the vaccine is available, most of the hysterical people won't want the vaccination, anyway. In 2-3 days, it may be upgraded to "martial law", he explicitly said. So much for the claims that Americans are so exceptional that they would never allow tyranny to arise. There's nothing qualitatively special about Americans and Californians are probably more extremely far left in average than the Europeans. A bunch of people who are terrified of a flu-like virus seem more than enough to establish tyranny in California. This wannabe Hitler will almost certainly try to keep the power indefinitely and cancel all future elections. There exists an understandable reason: his life will become indefensible at the moment when he loses his very expensive guards because the elimination of individuals such as himself was pretty much the original purpose of the Second Amendment according to a very large number of Americans. Martians attack. An asteroid is going to hit the Earth. Electricity stops working. Well, all these filmmakers and novelists have missed the actual original event that would ignite the ultimate black swan drowning in steroids and pepper sauce. The actual event igniting the black swan in the pepper sauce is: A Chinese grandma coughs and dies with a fever. This is the kind of an event that kickstarted this whole insanity. Something special was found about a coronavirus she and other people who got infected from her possessed and everyone focused on this type of an otherwise common family of viruses. Why haven't the filmmakers thought of this plot? Because it would sound utterly implausible that a variation of flu would lead to the declaration of curfews all over the world. An intelligent viewer would be insulted by such a boring and implausible plot. Why wouldn't they enforce the curfew just right away, without any virus? And in reality, someone would stop the lunatic from placing California under curfew because of a cold virus, right? Even a not so intelligent viewer would be insulted! Meanwhile, in the real world, that's exactly what happens. Hours of constant brainwashing by apocalyptic hype and the reporting of every infection or death is clearly enough to make billions of people completely lose their mind and common sense. 
Italy is the most affected country (surely according to the percentage of the infected population), having recorded some 3500 fatalities. The official Italian government figures say something interesting: 99.2% of Those Who Died From Virus Had Other Illness, Italy Says

Again, less than 1% of the Italian casualties, about 30 people as of now (the figure will get obsolete), had no pre-existing disease. Most of these 30 people were still extremely old. The rest are just victims of a flu-like infection through which Mother Nature decided to veto our idea that "we may keep all the arbitrarily old people with arbitrarily many diseases alive for an arbitrarily long time". We just can't, She stated. It is Her, not you, a fudging whining radical leftist, who defines the rules of the game about the most rudimentary things. Bioengineering will change the rules in humans' favor again but medical advances don't happen by decrees or through panic. Jojo left his home in Tucson, AZ, for some Californian grass. "Get back," they told him. Jojo can't get back, fudging Beatles, because there's a curfew in California now, haven't you heard about it? Otherwise the real full-blown deaths caused by Covid-19 may be a dozen in Italy; some sources say that only two deaths in Italy are safely attributed to this virus. I think that several percent of Italians have already gone through the infection. The real death count eliminating the questionable deaths that could have other reasons could be just comparable to 1,000 in Italy in the future worst case scenario, perhaps still below flu. CEBM, the evidence-based medicine section of Oxford University, estimated that the fatality rate is 0.125% and about 1% for people above 70 (thanks, JP). If those figures were approximately right, then it's obvious that the hysteria has already caused greater damage than the virus ever could, even in the worst case scenario. By the way, if you don't know the name of University of Oxford, maybe you know Stanford where a doctor also suggests that these events might be "evidence fiasco of the century". (Well, yes, John Ioannidis is the guy who wrote that most published papers are wrong. I surely think that this simple statement is true for the body of recent publications.) Eagles also failed to mention that you can't check out from Hotel California. So many classic songs have become so ludicrous now. Another example: If you're going to San Francisco now, you must especially think about having flowers in your hair. ;-) We don't know whether the virus was completely natural, genetically engineered, or something in between. Some articles present evidence that it was natural enough, perhaps selected by natural selection as a rare representative of similar coronaviruses that may bind to the human biology (to ACE2); see Nature medicine 3 days ago, thanks to Oscar. I understand the statement and the basic proposed logic but I am not equipped enough to verify the reasoning that led to the conclusion. At some level, it really doesn't matter at all whether the virus emerged completely naturally. Even without human assistance, such new (or re-appeared) viruses have always been possible. The dynamics of its propagation doesn't really depend on its "lab origin" at all. One of the far-fetched theories about the genesis of this hysteria that I thought about was the following: China just decided to shut down the West.
So it released a good enough coronavirus, enforced a lockdown on a province for a month to show the West what it should be doing, and then it just waited for the mindless West to do the same (and holy cow, I've encountered lots of mindless would-be smart imbeciles automatically saying "we need to do everything that the Chinese comrades do") but across all their territories and for much longer periods of time in the process of committing societal suicide. It is far-fetched but there is some evidence that it could make sense. First, it seems implausible that China has really eliminated the virus completely. Italy's lockdown looks "comparable" but they didn't even achieve taming of the trend, let alone reduction by orders of magnitude. An alternative explanation is that China simply stopped looking at this flu-like virus and returned to business as usual, which is possible because the virus doesn't really do much beyond what flu-like diseases normally do, and it may be rather useless to distinguish it from flu at all. Another piece of evidence supporting the far-fetched theory was the following news in the official Chinese media:

Western states’ ‘surrender’ to COVID-19 shocks Chinese

It's shocking that the U.K. and Sweden decided not to shut their countries down completely yet. (Incidentally, there don't seem to be any visibly negative consequences of these two countries' decisions.) The great leaders of China may have decided that the whole West must commit suicide, as shown by the role models in the Hubei province, so how is it possible that Sweden and the U.K. resist? Whether China wanted it and planned it or not doesn't really matter. The main problem is that the West is so unbelievably degenerated that it would swallow this trick. And it totally did. Maybe China didn't plan it but it seems likely that the countries that will not have committed economic suicide will take over the world. And it may be just O(1) year away, maybe even earlier. It will really be their natural duty, the duty of the new bosses of Planet Earth. Surely when the Western civilization turns into 2 billion unproductive animals who are hysterically hiding in basements and piling their debt, China or someone else may start to treat us as animals and recycle our otherwise unused territory and other resources. OK, great future Chinese overlords. Do what is needed. I have mixed feelings about the continued, very different continuation of the intelligent life on Earth. Please, try to preserve some whites for the sake of diversity. Don't forget that most of the 1.4 billion Chinese have really admired the Western nations and found us attractive. Produce lots of replicas of the Czech Karlštejn Castle, like in the Chinese commercial above (from the Kosmo TV series about the Czech astronauts). The world will be more boring if and when it becomes one monstrous China and it's your task, beloved Chinese Westophiles and Czechophiles (who are sadly separated from this blog by the Great Firewall of China) to make it less bad. Some 50 million of you, the Chinese, must become honorary Czechs. You must actually learn the Czech language, sing the Czech songs, read and write Czech books, and so on. You badly need to improve yourselves in string theory. Xiè xie for your understanding, comrades. ;-) This theme about the Chinese takeover of America and the West isn't new. Recall that the 2010 version of the story was all about the unsustainable debt. Well, maybe the Chinese takeover ultimately will include the debt.
If you imagine that the West remains closed for business for two years or more, China, the main creditor, will encounter the first inabilities to pay and it will just gradually take over Western companies and governments. It's amazing if the "missing piece of the mosaic" of this scenario was a VIP cold.
2021-12-07 06:19:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23263666033744812, "perplexity": 2766.9264961127224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363336.93/warc/CC-MAIN-20211207045002-20211207075002-00276.warc.gz"}
https://math.stackexchange.com/questions/2857580/finding-tension-in-a-freely-sliding-ring-on-a-wire
# Finding tension in a freely sliding ring on a wire

A heavy small ring of weight $W$ is free to slide on a smooth wire of radius $a$, fixed in a vertical plane. It is attached by a string of length $l$, where $$2a > l > a\sqrt{2}$$ to a point on the wire in a horizontal line with the centre. Find the tension in the string.

Approach:

1. Here, if A is the point where the string is attached to the wire and P is the equilibrium position of the string, I get the tension as $$\dfrac{W(l^2-2a^2)}{a\sqrt{4a^2-l^2}}$$

2. Here, if A is the point where the string is attached to the wire and P is the equilibrium position of the string, I get the tension as $$\dfrac{- W(l^2-2a^2)}{a\sqrt{4a^2-l^2}}$$

Clearly, the 2nd approach is wrong, as the magnitude of tension can't be negative. But why is it wrong? Why isn't this diagram possible? I have verified that with the given restriction on $l$, the 2nd diagram should very well be possible. Can anyone point out where am I going wrong? Thanks!

• Move to Physics? – md2perpe Jul 20 at 14:45
• I had it under Maths topic. So posted here. Can you direct to the link? Thanks! – user1611542 Jul 20 at 18:21
• Direct to what link? – md2perpe Jul 20 at 19:40
• I thought you were mentioning about some link. I am sorry. – user1611542 Jul 22 at 17:04
• Did you use the same local coordinate system for $P$ in both cases? If so, your answer may be correct because the two tensions are in the opposite directions. – John Douma Jul 22 at 18:55

Considering first the first position: by geometric considerations, the angle $\angle PAB = \alpha$ is such that $$2 a \cos\alpha = l$$ Now calling $$\vec R = r(\cos(2\alpha),\sin(2\alpha))\\ \vec W = w(0,-1)\\ \vec T =- t(\cos\alpha,\sin\alpha)$$ in equilibrium we have $$\vec R + \vec W + \vec T = 0$$ or $$\left\{ \begin{array}{rcl} r \cos (2 \alpha )-t \cos (\alpha )& = & 0 \\ -w-t \sin (\alpha )+r \sin (2 \alpha )& = & 0 \\ 2 a \cos (\alpha )& = & l \\ \end{array} \right.$$ and solving for $r,t,\alpha$ we obtain $$\left[ \begin{array}{ccc} t & r & \alpha \\ \frac{\left(2 a^2-l^2\right) w}{a \sqrt{4 a^2-l^2}} & -\frac{l w}{\sqrt{4 a^2-l^2}} & \tan ^{-1}\left(\frac{l}{a},-\frac{\sqrt{4 a^2-l^2}}{a}\right) \\ \frac{\left(l^2-2 a^2\right) w}{a \sqrt{4 a^2-l^2}} & \frac{l w}{\sqrt{4 a^2-l^2}} & \tan ^{-1}\left(\frac{l}{a},\frac{\sqrt{4 a^2-l^2}}{a}\right) \\ \end{array} \right]$$ One of them is discarded. In the second position, the string cannot remain taut.
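As a quick numerical sanity check of the answer above (an illustration added here, not part of the original thread), one can solve the three equilibrium equations for arbitrary values $a=1$, $l=1.5$, $w=1$ that satisfy $2a>l>a\sqrt{2}$ and compare with the closed form:

```python
# Numerical check (illustrative): solve the equilibrium equations for r, t, alpha
# and compare t with W(l^2 - 2a^2) / (a*sqrt(4a^2 - l^2)).
import numpy as np
from scipy.optimize import fsolve

a, l, w = 1.0, 1.5, 1.0                       # arbitrary values with 2a > l > a*sqrt(2)

def equations(v):
    r, t, alpha = v
    return [r*np.cos(2*alpha) - t*np.cos(alpha),
            -w - t*np.sin(alpha) + r*np.sin(2*alpha),
            2*a*np.cos(alpha) - l]

r, t, alpha = fsolve(equations, x0=[1.0, 0.5, 0.7])
print(t)                                                  # ~0.189
print((l**2 - 2*a**2)*w / (a*np.sqrt(4*a**2 - l**2)))     # ~0.189, matches
```

Both values come out to roughly 0.189, consistent with the positive root $\dfrac{W(l^2-2a^2)}{a\sqrt{4a^2-l^2}}$.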
2018-10-21 03:16:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6576590538024902, "perplexity": 728.0640701251255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513686.3/warc/CC-MAIN-20181021031444-20181021052944-00211.warc.gz"}
http://physics.stackexchange.com/questions/71788/basic-question-on-bra-ket-notation
# Basic question on bra-ket notation

Which of the following corresponds to $\psi(x)$, a wavefunction written in the position basis: $x| \psi\rangle$ or $\langle x| \psi\rangle$? If it is the second expression (which my textbook asserts), what is the meaning of the first expression? -

physics.stackexchange.com/q/65794 –  Wildcat Jul 21 '13 at 16:30

I should have also asked how to interpret $\int\langle \psi*|x|\psi\rangle dx$ –  Noah Jul 21 '13 at 20:42

@Noah Can you define $\langle\psi *|$? –  Will Jul 21 '13 at 21:00

I think you might mean $\langle \psi |\hat{x}| \psi \rangle = \int dx \langle \psi | x \rangle x \langle x|\psi \rangle = \int dx~ \psi^*(x) ~x~\psi(x)$. This is just the expected value of $x$ for state $\psi$. –  Will Jul 21 '13 at 21:20

Yes. My mistake. –  Noah Jul 22 '13 at 1:12

It is the second, $\psi(x) = \langle x|\psi\rangle$, which is correct. The first, if $x$ is the position operator, is just the position operator acting on the state $|\psi\rangle$. The abstract state $|\psi\rangle$ can be expanded in any basis, using a completion relation: $$|\psi\rangle = \underbrace{\sum_i |i\rangle \langle i|}_{1~=~\text{identity}}|\psi\rangle$$ (where the sum can mean sum or integral, depending on the situation) using some complete basis $\{|i\rangle\}$. An example of this is the position basis, $\{|x\rangle\}$, from which we have $$|\psi\rangle = \int dx~ |x\rangle \langle x|\psi\rangle$$ This shows us that the wavefunction $\psi(x) = \langle x|\psi\rangle$ is the coefficient from the expansion of the state $|\psi\rangle$ in the position basis $|x\rangle$, at position $x$.

Edit to respond to a question asked by Noah in response to Matt's answer: "But doesn't an operator acting on the state project the state onto the eigenstates of the operator and is that the same as representing the state in a new basis?"

As I have shown above, the "projection" you are talking about is given by applying the identity in terms of the position basis $\mathbf{1=\int dx~|x\rangle\langle x|}$. Let's see what happens when we apply $\hat{x}$ instead: Just like state vectors can be expanded in a complete basis, so too can operators. In general, if we take our complete basis $\{|i\rangle\}$ we can write the operator $\hat{A}$ in terms of matrix elements $$\hat{A} = \sum_{i,j} |i\rangle\langle i |\hat{A}|j\rangle\langle j |$$ where the matrix elements are $A_{ij} = \langle i |\hat{A}|j\rangle$. For the case of the position operator $\hat{x}$ and using the position basis $\{|x\rangle\}$ $$\hat{x} = \int dx~dy~ |x\rangle\langle x |\hat{x}|y\rangle\langle y |\\ = \int dx~ x~|x\rangle\langle x |$$ that is, it is diagonal in the position basis (obviously). Using this we see $$\hat{x}|\psi\rangle = \int dx~ x~|x\rangle\langle x |\psi\rangle$$ So we see that application of $\hat{x}$ can be thought of as a kind of projection, weighted by $x$. So we see that it is not the same as representing the state in the $x$ basis, which is actually $$|\psi\rangle = \int dx~ |x\rangle \langle x|\psi\rangle$$ as above. -

$\psi(x)\equiv\left\langle{x}\,\middle|\,\psi\right\rangle$ is the correct notation. $x\left|\psi\right\rangle$ means that the position operator is acting on the state. People sometimes put a hat on operators to remove ambiguity: $\hat{x}\left|\psi\right\rangle$. -

The expression $x|\psi\rangle$ could also mean a real number $x$ multiplying the state $|\psi\rangle$.
–  joshphysics Jul 21 '13 at 16:28 But doesn't an operator acting on the state project the state onto the eigenstates of the operator and is that the same as representing the state in a new basis? –  Noah Jul 21 '13 at 17:26 An operator just performs an operation on a specific state. What basis the state is in is irrelevant and is generally selected for convenience. The projection onto a state occurs during the application of the "bra". –  Mebert Jul 21 '13 at 17:43 @Noah - see my edit in my answer. I try to explain this in more detail. –  Will Jul 21 '13 at 18:12
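The following is a small numerical illustration (a sketch, not from the original thread) of the distinction discussed above: on a discretized position grid, the array of components plays the role of $\langle x|\psi\rangle$, applying $\hat{x}$ multiplies each component by $x$, and $\langle\psi|\hat{x}|\psi\rangle$ reduces to $\int \psi^*(x)\, x\, \psi(x)\, dx$.

```python
# Illustrative sketch: a normalized Gaussian wave packet on a grid. The array
# psi plays the role of <x|psi>; applying x-hat multiplies componentwise by x.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-(x - 1.0)**2 / 2.0)                # Gaussian centred at x = 1
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize so <psi|psi> = 1

x_psi = x * psi                                  # the state x-hat |psi> in this basis
exp_x = np.sum(np.conj(psi) * x_psi) * dx        # <psi|x-hat|psi> = int psi* x psi dx
print(exp_x)                                     # ~1.0, the centre of the packet
```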
2015-07-29 02:47:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9234416484832764, "perplexity": 172.81121208980213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042985647.51/warc/CC-MAIN-20150728002305-00153-ip-10-236-191-2.ec2.internal.warc.gz"}
https://icml.cc/Conferences/2020/ScheduleMultitrack?event=5930
Poster

Naive Exploration is Optimal for Online LQR

Max Simchowitz · Dylan Foster

Thu Jul 16 12:00 PM -- 12:45 PM & Thu Jul 16 11:00 PM -- 11:45 PM (PDT)

We consider the problem of online adaptive control of the linear quadratic regulator, where the true system parameters are unknown. We prove new upper and lower bounds demonstrating that the optimal regret scales as $\tilde{\Theta}(\sqrt{d_{\mathbf{u}}^2 d_{\mathbf{x}} T})$, where $T$ is the number of time steps, $d_{\mathbf{u}}$ is the dimension of the input space, and $d_{\mathbf{x}}$ is the dimension of the system state. Notably, our lower bounds rule out the possibility of a $\mathrm{poly}(\log{}T)$-regret algorithm, which had been conjectured due to the apparent strong convexity of the problem. Our upper bound is attained by a simple variant of certainty equivalent control, where the learner selects control inputs according to the optimal controller for their estimate of the system while injecting exploratory random noise. While this approach was shown to achieve $\sqrt{T}$ regret by Mania et al. (2019), we show that if the learner continually refines their estimates of the system matrices, the method attains optimal dimension dependence as well. Central to our upper and lower bounds is a new approach for controlling perturbations of Riccati equations called the self-bounding ODE method, which we use to derive suboptimality bounds for the certainty equivalent controller synthesized from estimated system dynamics. This in turn enables regret upper bounds which hold for any stabilizable instance and scale with natural control-theoretic quantities.
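To make the certainty-equivalence idea concrete, here is a minimal sketch (an illustration under simplifying assumptions, not the authors' algorithm or code): fit $(A,B)$ by least squares from observed transitions, then play the LQR controller computed for the estimate while injecting exploration noise. The true system, horizon, and noise scales below are arbitrary choices.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def ce_gain(A_hat, B_hat, Q, R):
    # LQR gain synthesized for the *estimated* dynamics (certainty equivalence).
    P = solve_discrete_are(A_hat, B_hat, Q, R)
    return -np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])     # true (unknown to the learner) dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

# Warm-up phase: excite the system with random inputs and record transitions.
x, data = np.zeros(2), []
for t in range(200):
    u = rng.standard_normal(1)
    x_next = A @ x + B @ u + 0.01 * rng.standard_normal(2)
    data.append((x, u, x_next))
    x = x_next

Z = np.array([np.concatenate([xi, ui]) for xi, ui, _ in data])
Y = np.array([xn for _, _, xn in data])
theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)   # fit [A_hat B_hat] by least squares
A_hat, B_hat = theta.T[:, :2], theta.T[:, 2:]
K = ce_gain(A_hat, B_hat, Q, R)
# Online phase (the paper's setting): keep playing u_t = K x_t plus decaying noise
# while continually refitting (A_hat, B_hat) on the growing dataset and updating K.
```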
2020-12-05 09:27:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8023214340209961, "perplexity": 419.55969974899295}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747323.98/warc/CC-MAIN-20201205074417-20201205104417-00073.warc.gz"}
https://uniontestprep.com/comptia-a-exam/study-guide/902-software-troubleshooting/pages/1
# Page 1

902 Software Troubleshooting Study Guide for the CompTIA® A+ exam

## How to Prepare for the Software Troubleshooting Questions on the CompTIA A+ 902 Test

### General Information

While the CompTIA A+ exam content frequently changes, this outline contains the latest information we have on software troubleshooting. Continue to seek the most up-to-date guidelines as you prepare for the 902 test. Note that all of the questions on the CompTIA A+ exam that pertain to software troubleshooting will contain a scenario to which you are asked to react.

### PC Operating System Problems

Operating system issues typically have a negative overall effect on the operation of the system. This section will cover some of the most common operating system problems and how to identify and troubleshoot them. In some cases, questions will contain scenarios and ask for your response.

#### Symptoms

The only way to know how to address an operating system issue is to recognize particular symptoms. Some of the most common symptoms are covered here. Be prepared to address each of these occurrences.

• proprietary crash screens (BSOD/pinwheel): When the system comes to an immediate halt, it will display the Blue Screen of Death (BSOD) and an error message on the screen. A BSOD that occurs during the initial boot sequence could be caused by bad hardware, drivers, and/or bad applications. Apple systems will display a pinwheel or a spinning ball indicating an issue. Look for applications that try to access a resource that is not available.

• failure to boot: There are numerous reasons a system may fail to boot, depending on the error message displayed. If the system successfully passes the Power-On Self-Test (POST), then locks up, chances are that either the hard drive or something in the operating system is corrupted. If the system does not pass the POST, suspect a hardware problem.

• improper shutdown: If you boot a system and it enters the Windows Error Recovery screen, that often indicates the system was not shut down correctly. You may elect to start Windows normally if this is an isolated problem. If it continues, you should launch startup repair either in the Windows Error Recovery screen, or by pressing F8 during system reboot to enter the Advanced Boot Options menu.

• spontaneous shutdown/restart: A spontaneous system shutdown can be caused by hardware or software problems. Poorly written programs or driver issues may cause a system lockup or system restart. Programs that make extreme demands on the processor or video (such as games) can cause the system to overheat. Thermal intermittent problems can be caused by memory or inadequate power supplies.

• device fails to start/detected: During the system boot sequence, there are numerous devices that are expected to start. If a device does not start correctly, check the Device Manager, paying particular attention to driver issues. The Event Viewer may also display driver-related errors.

• missing DLL message: A dynamic link library is a piece of computer code that can be shared by numerous applications to save time when writing code. DLLs are written for a specific library and programmers require the correct library. Windows System File Checker (SFC) can be used to locate and replace missing DLLs.

• "Services" fails to start: During system boot-up, there are numerous services that are expected to start. If a service does not start correctly, check the Device Manager, paying particular attention to driver issues. Also, check to see if you can start the service manually. If the service is associated with an application you installed, you may want to reinstall that application. Refer to the Windows Services utility for controlling services.

• compatibility error: Applications are written for the current release of Windows. There are many older applications that will not run on the latest version of Windows. Built into the compatibility tab of an executable program, there is an option that allows the program to run in an earlier version of Windows. This is often used to run older games and applications on newer platforms.

• slow system performance: If the system appears to be running slower than normal, the Task Manager will give a detailed listing of CPU, memory, and network utilization. Looking for applications using too much of the system resources allows you to target that application. Check for free space on the hard drive, and/or run Disk Defragmenter.

• boots to Safe Mode: Safe Mode boots the system with only the drivers absolutely necessary to boot the system. If you suspect problems with drivers, or need to modify system settings that are otherwise unavailable due to booting issues, Safe Mode can help. To enter Safe Mode, repeatedly press F8 during initial boot.

• file fails to open: File types are related to specific applications. A ".docx" file is a Microsoft Word document. If a file has had its association changed in the Default Programs applet, users may not be able to open the file.

• missing NTLDR: If a system fails to boot and you are presented with a message that says missing NT loader (NTLDR), it indicates that critical system files are corrupted or missing. Use the startup repair disk for the appropriate version of Windows.

• missing boot configuration data: In Windows, missing boot configuration data would prevent the system from properly booting. To address this, use the Startup Repair option in the Windows Recovery Environment to repair the boot configuration database (BCD).

• missing operating system: A message stating missing operating system can be addressed by booting with your distribution DVD, selecting Repair your computer, and then Startup Repair.

• missing graphical interface: A missing graphical user interface is most likely caused by either a driver issue or a corrupted system file. Boot into Safe Mode and run System File Checker (SFC) to verify all the operating system files.

• missing GRUB/LILO: Grand Unified Bootloader (GRUB) and Linux Loader (LILO) are Linux boot loader files. Missing boot loader files can occur if you set up your system to dual boot with Linux and Windows, since Windows will overwrite these files. To prevent this, always load Windows first. With a live Linux CD, you should be able to restore GRUB or LILO.

• kernel panic: A kernel panic occurs with Linux and Mac OS whenever there is an unrecoverable system error and all system functions halt. With a kernel panic there is often an error message that should be helpful when troubleshooting the problem. A kernel panic serves basically the same function as the Blue Screen of Death in Windows.

• graphical interface fails to load: If the GUI fails to load, your only choices are to restore from backup or rebuild from the installation media.

• multiple monitor misalignment/orientation: When using dual monitors, to align the actions of the mouse (so that, as you exit monitor 1 to the right, you enter monitor 2 from the left), enter the screen resolution screen and drag the screens to properly orient them.
#### Tools

The tools listed here are used to assist in troubleshooting the problems you would encounter repairing a PC. Be certain you are familiar with all of their functions.

• BIOS/UEFI: Many of the newer BIOS implementations have specialized hardware testing capabilities built into them. The newer UEFI BIOS even allows connecting to the Internet to download drivers and is, in itself, a fully functional operating system.

• SFC: Any of the operating system files can become corrupted for no apparent reason. This is why System File Checker (SFC) is available to run a complete scan of the operating system files.

• Logs: There are a number of log files created by Windows to track system performance. Most are contained in the Event Viewer, outlining security issues and other system events. To verify the boot process and events that occur during system boot, Windows maintains the "ntbtlog.txt" file. In Linux, there are numerous log files contained in the "/var/log" directory. Mac OS X maintains logs under Utilities in "Console.app".

• system recovery options: For operating system problems that cannot be addressed while the operating system is running, use the Windows 7 Command Prompt from the System Recovery Options. For Windows 8 and 8.1, choose Other Options > Troubleshooting > Advanced Options > Command Prompt.

• repair disks: For additional tools necessary for startup problems, you need to create a system repair disk that provides tools and recovery options for Windows 7, 8, and 8.1. Recovery disks are created with the original distribution DVD or from Windows Backup and Restore.

• pre-installation environments: When using a repair DVD, you are in a Windows pre-installation environment that provides minimal features, such as a GUI. This minimal environment bypasses many of the drivers that may have caused problems initially.

• MSCONFIG: MSCONFIG provides a number of options for booting the system, allows you to enable/disable services, and aids in configuring startup applications.

• DEFRAG: Disks can become fragmented as files are created, deleted, and modified over time. Defrag realigns all the file fragments into contiguous files on the drive. This not only speeds up disk access, but also eliminates wear on the drive. Keep in mind that solid-state drives should never be defragmented.

• REGSVR32: To register DLLs in Windows, use the REGSVR32 utility. The Microsoft Register Server allows you to register and unregister DLLs on the operating system.

• REGEDIT: To edit the system registry, use the REGEDIT command.

• Event Viewer: To see what is happening at any time, use the Event Viewer. The Event Viewer displays information about running applications and security data. Warning messages and critical issues will be labeled there as well.

• Safe Mode: There are a number of troubleshooting tools that are available even before the operating system is loaded. Safe Mode allows the system to be booted with minimal drivers loaded, allowing you to address issues before the operating system loads.

• Command Prompt: Safe Mode with Networking loads a basic VGA display mode and adds network support to assist in error recovery. If you're having problems loading Safe Mode, try Safe Mode with Command Prompt, which does not load the Windows Explorer GUI.

• uninstall/reinstall/repair: In some extreme cases, it may be easier to simply uninstall and reload the operating system. With Windows 8 and 8.1, there is an option allowing you to refresh the operating system, which allows you to maintain your personal files.
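To make a few of the tools above concrete, here are example command lines (illustrative only; run them from an elevated Command Prompt, and note that "example.dll" is just a placeholder name):

```
sfc /scannow          Scan and repair protected system files
defrag C:             Defragment the C: drive (never on solid-state drives)
regsvr32 example.dll  Register a DLL (add /u to unregister)
msconfig              Open the System Configuration utility
regedit               Open the Registry Editor
```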
2018-03-21 14:52:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1936485916376114, "perplexity": 3006.556102416171}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647660.83/warc/CC-MAIN-20180321141313-20180321161313-00458.warc.gz"}
http://sachinashanbhag.blogspot.com/2012/05/
## Wednesday, May 30, 2012

### Non-Uniform Quadrature Points in Octave/Matlab

Suppose you are given a bunch of points in the form of a vector x (between "a" and "b") and an accompanying density function p(x)>0. For concreteness, consider

x = [0:0.1:1.0]';
px = 0.3 - (x-0.5).^2;

The top subfigure depicts the density function p(x), and the cumulative density function C(x). The bottom subfigure depicts the quadrature points "z" (circles), and the lines mark the end of the intervals, using N=11.

This density function is shown in the picture above. We want to create a N*1 vector "z" of quadrature points that are distributed according to the density p(x). We also want a vector "dz" corresponding to the width of strip belonging to a particular quadrature point. As an additional constraint, we assume that we want z(1) = a, and z(N) = b to match the end-points. The program attached at the end is able to do this relatively efficiently. In fact, I used the following command to generate the picture above:

[z dz] = GridDensity(x,px,11,1);

GNU Octave Code:

```octave
%
%  PROGRAM:
%       Takes in a PDF or density function, and spits out a bunch of points in
%       accordance with the PDF
%
%  INPUT:
%       x  = vector of points. It need *not* be equispaced
%       px = vector of same size as x: probability distribution or
%            density function. It need not be normalized but has to be positive.
%       N  = Number of points >= 3. The end points of "x" are included
%            necessarily
%       Pt = Optional argument. If present, then some plotting.
%  OUTPUT:
%       z  = Points distributed according to the density
%       hz = width of the "intervals" - useful to apportion domain to points
%            if you are doing quadrature with the results, for example.
%
%  (*) Sachin Shanbhag, March 5, 2012
%
function [z h] = GridDensity(x,px,N,Pt)

  npts = 100;                              % can potentially change
  xi   = linspace(min(x),max(x),npts)';    % reinterpolate on equi-spaced axis
  pint = interp1(x,px,xi,'spline');        % smoothen using splines
  ci   = cumtrapz(xi,pint);

  pint = pint/ci(npts);
  ci   = ci/ci(npts);                      % normalize ci

  alfa = 1/(N-1);                          % alfa/2 + (N-1)*alfa + alfa/2
  zij  = zeros(N,1);                       % quadrature interval end marker
  z    = zeros(N,1);                       % quadrature point

  z(1)  = min(x);
  z(N)  = max(x);

  %
  % ci(Z_j,j+1) = (j - 0.5) * alfa
  %
  beta  = [0.5:1:N-1.5]'*alfa;
  zij   = [z(1); interp1(ci,xi,beta,'spline'); z(N)];
  h     = diff(zij);
  clear beta;

  %
  % Quadrature points are not the centroids, but rather the center of masses
  %
  beta     = [1:1:N-2]'*alfa;
  z(2:N-1) = interp1(ci,xi,beta,'spline');

  %
  % Some plotting if required
  %
  if(nargin>3)

    subplot(2,1,1)
    plot(xi,ci,'b-','LineWidth',2,xi,pint,'r-','LineWidth',2);
    axis([min(xi),max(xi)]);
    legend('CDF','PDF')
    title('Visualization')
    xlabel('x');
    ylabel('CDF(x)/PDF(x)')

    subplot(2,1,2)
    plot(z,0.5*ones(size(z)),'ro');
    axis([min(xi),max(xi)]);
    xlabel('z');
    hold on;
    for i = 2:N
      X = [zij(i); zij(i)];
      Y = [0; 1];
      plot(X,Y,'b')
    endfor
    hold off;

  endif

endfunction
```

## Friday, May 25, 2012

### MathJax: LaTeX on Blogger - finally!

I am easily excited when I find yet another place where I can use LaTeX syntax to typeset mathematics (Google Docs, OpenOffice, again, presentations using Beamer, Inkscape). Finally, it seems, it is not incredibly clunky to write math in Blogger. In fact, it is as convenient as writing it in a native LaTeX document.
The latest avataar of MathJax, while not really new, is new to me. Here are a few links which describe how to go about empowering your Blogger account. The easiest permanent fix is to add some script in your Blogger template file, as described here. If you don't like messing with your template, you can take the approach outlined here. Basically, you add some lines to each post in the "html" view, which has math in it. Here is a mandatory test: $\frac{df}{dx} = \frac{1}{2} \left(\frac{ab \textrm{ sech}^2(b \sqrt{x})}{x} - \frac{a \tanh(b \sqrt{x})}{x^{3/2}} \right)$ Beautiful! ## Thursday, May 24, 2012 In spite of how close it looked like in the media (thanks in part to the FUD raised by paid shills), Oracle essentially had to walk away with peanuts - if that. As usual Groklaw was among the best places to follow the trial. There were many fascinating parts. One of my favorites was the exchange between Judge Alsup (who could program) and the lead counsel for Oracle, Boies, on "rangeCheck". You could not write better Hollywood screenplay: Alsup: I have done, and still do, a significant amount of programming in other languages. I've written blocks of code like rangeCheck a hundred times before. I could do it, you could do it. The idea that someone would copy that when they could do it themselves just as fast, it was an accident. There's no way you could say that was speeding them along to the marketplace. You're one of the best lawyers in America, how could you even make that kind of argument? Boies: ... I want to come back to rangeCheck. Alsup: rangeCheck! All it does is make sure the numbers you're inputting are within a range, and gives them some sort of exceptional treatment. That witness, when he said a high school student could do it-- Boies: I'm not an expert on Java -- this is my second case on Java, but I'm not an expert, and I probably couldn't program that in six months. Priceless. Of course, Oracle being Oracle, they will probably appeal. Here's what Linus Torvalds has to say (full disclosure: on his Google+ account) Prediction: instead of Oracle coming out and admitting they were morons about their idiotic suit against Android, they'll come out posturing and talk about how they'll be vindicated, and pay lawyers to take it to the next level of idiocy. Sometimes I really wish I wasn't always right. It's a curse, I tell you. ## Monday, May 21, 2012 ### Some interesting cartoons at Spiked Math 1. Epsilon-Delta: reminded me of times long gone 2. My big fat Greek letter 3.  Hilarious IQ test Check out the other stuff there too! ## Thursday, May 17, 2012 ### The Checklist Manifesto I've been a big fan of checklists since as long as I can remember. So much so that a lot of friends and family mock my obsession with them. In high-school, I had an exam-day checklist (set alarm, carry extra pens, make sure of the time-table etc.). I have about five different types of travel checklists, depending on whether I am flying, driving, going camping, or to a conference etc. I have checklists for writing proposals, and papers and on and on. You get the idea! So it was with much delight that I read Atul Gawande's article in the New Yorker called The Checklist a few years ago, and his book called "The Checklist Manifesto" which fleshes out some of the major themes from his article, last month. While it was "preaching to the choir", I thought some parts were very interesting. I particularly enjoyed the chapter called "Hero in the age of checklists". Here is a link to a recent talk at TED by Atul Gawande. 
## Monday, May 14, 2012 ### Commencement Speech: David Foster Wallace A really nice, and somewhat offbeat, commencement speech by the late David Foster Wallace. I remember enjoying his book "Inifinite Jest". YouTube also has the actual speech: ## Wednesday, May 9, 2012 ### Shitty Economics Juvenile but entertaining! (via Barry Ritzholtz) ## Monday, May 7, 2012 1. Interesting "Martin Gardner" puzzle if you have not tried it before 2. Three puzzles in this post by Tanya Khovanova 3. My wife told me about a "lateral" puzzle she heard in her office. Complete the series: |||, |CC, C, |C, ? ## Thursday, May 3, 2012 I noticed a curious "coincidence" today. I received an innocuous looking deal on vacuum cleaner bags from Amazon in my email. It was for the exact make and model of vacuum cleaner that I have owned for about two years now. Nothing out of the ordinary here. Except for two things. One, I was just about to run out of bags. I remember, because I made a mental note last weekend. Two, I did not buy the vacuum cleaner from Amazon! I scoured the Trash folder of my email to see if there were prior deals like this, and if I just happened to notice it this time.  Nothing! Nada! I recalled this story that appeared in the NYT article "How companies learn your secrets" earlier this year: ... a man walked into a Target outside Minneapolis and demanded to see the manager. He was clutching coupons that had been sent to his daughter, and he was angry, according to an employee who participated in the conversation. “My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?” The manager didn’t have any idea what the man was talking about. He looked at the mailer. Sure enough, it was addressed to the man’s daughter and contained advertisements for maternity clothing, nursery furniture and pictures of smiling infants. The manager apologized and then called a few days later to apologize again. On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.” If you haven't read the article before (it got a lot of secondary press) you should! ## Wednesday, May 2, 2012 ### User-defined Colors in Grace Last week, I was working on a manuscript with a collaborators. As we approached the final draft, we decided to make the "colors" in our figures consistent. Unfortunately, we used different programs to make our graphs, and "red" in program X is not the same shade of "red" in program Y. Grace (the program I use) offers a choice of 16 standard colors, from the usual pull down menu. However, it allows for endless fine-tuning. Once you save your graph (as graph.agr), open it up in a text editor. The color-definitions are clustered in lines that look like: @map color 0 to (255, 255, 255), "white" @map color 1 to (0, 0, 0), "black" @map color 2 to (255, 0, 0), "red" Let us say, we were not happy with the shade of grey available (it does seem a little too light). Look for the line: @map color 7 to (220, 220, 220), "grey" and change it to something like: @map color 7 to (100, 100, 100), "grey" You can tweak all the other color definitions to suit your taste or requirements.
2018-03-25 05:15:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4444827437400818, "perplexity": 4826.294331965574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651820.82/warc/CC-MAIN-20180325044627-20180325064627-00778.warc.gz"}
https://www.greencarcongress.com/2005/08/acee_posts_esti.html
## ACEEE Posts Estimates of Hybrid Tax Credits

##### 15 August 2005

The American Council for an Energy-Efficient Economy (ACEEE) has posted its estimates of tax credits for 31 hybrid and diesel cars and light trucks that fall under tax credit provisions of the new federal energy bill. These are best-guess estimates, based on a combination of preliminary 2006 model year data, 2005 model year data, and manufacturer announcements, and are intended only to give a sense of the magnitude of the upcoming credits, which will be available starting January 1, 2006.

Estimated credits for hybrid vehicles range from $250 to $3,150 (the maximum possible under the provision is $3,400), with the Toyota Prius projected to receive the highest credit. The 2wd Toyota Highlander hybrid and the Ford Escape hybrid are tied for second place, with a credit of $2,600 each, followed by the 4wd Highlander and the Rx400h in third with a $2,100 credit. The credit amount is largely determined by a vehicle's city fuel economy relative to the average for its weight class, but vehicles that save at least 1,200 gallons of fuel over their lifetime relative to the class average gain additional credits. Vehicles must also meet moderately stringent tailpipe emissions requirements to qualify.

No diesel vehicle will achieve credits at the outset, because automakers have yet to produce vehicles clean enough to meet those emissions requirements. This situation may begin to change in model year 2007, because ultra-low-sulfur diesel fuel will become widely available in late 2006, facilitating emissions reduction technologies for new diesel models. Credits for a manufacturer's vehicles are phased out once 60,000 of them have received credits. In addition, the program favors heavier vehicles through the structure of the fuel savings credit and a more lenient emission requirement for vehicles over 6,000 pounds.

Resources:

### Comments

I'm not so sure I like the idea of giving a tax credit to a person who buys a vehicle getting 17 MPG. Sure, if they purchase it instead of one that gets 15 MPG, the policy has been effective. But, what happens when they purchase this bigger truck that gets 17 MPG instead of a smaller one that gets 19 MPG, with the help of the tax writeoff. Now, we're giving tax breaks that are encouraging folks to buy fuel-inefficient vehicles. That doesn't seem so smart. The system works fine if people (a) choose the class of car, and then (b) use price as a way to help choose the model. But, when people are willing to compare different classes and only some models within each class, the subsidy can result in people buying less fuel efficient cars, and take a tax rebate! How often does this scenario happen? I have no idea. Still, if I were supreme despot, there'd be no subsidy for adjusted city MPG under 25 or 30, that's for sure. My sense is that if the target was 25, you'd see some serious engineering to get those pickups up to 25 MPG and get a huge tax credit, instead of settling for 17, 20, 22, etc.

Hello, would this tax incentive also include a USED 2005 hybrid that is under a certain mileage?

DOES ANYONE KNOW IF THE NEW LAW REQUIRES A 2006 MODEL PURCHASE OR CAN WE BUY A 2005 AFTER JAN 1 AND CLAIM THE DEDUCTION?? I'VE TRIED THE IRS BUT THEY ARE USELESS...IMAGINE THAT. THANKS FOR ANY HELP.

The comments to this entry are closed.
2022-11-27 15:00:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1734897792339325, "perplexity": 2459.609316545012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710409.16/warc/CC-MAIN-20221127141808-20221127171808-00708.warc.gz"}
http://abruzzoemozione.it/maig/what-are-3-types-of-ratios.html
A ratio is a relationship between two values: a comparison of the size of one number to the size of another. The ratio of 2 to 5 is written 2:5, with a colon between the two quantities being compared; the same comparison can also be expressed as the fraction 2/5 or with the word "to." Combining more than two quantities gives a continued ratio, such as 4:8:12 for "4 to 8 to 12."

Multiplying or dividing every term of a ratio by the same nonzero number produces an equivalent ratio with the same proportions. This is also how ratios are simplified: since 6 and 9 are both divisible by 3, the ratio 6:9 reduces to 2:3. The same idea scales quantities up or down. A recipe scaled by a factor of 1.5 simply has every ingredient multiplied by 1.5, and a quantity shared "in the ratio 3:5" can be written as 3x and 5x, where x is a common factor. Ratios can be part-to-part or part-to-whole: in a basket of 6 lemons and 8 oranges, the ratio of lemons to oranges is 6:8 (or 3:4), while the ratio of oranges to all the fruit is 8:14 (or 4:7).

A rate is a ratio between quantities measured in different units, and a unit rate is a rate whose second term is 1. If Jake types 10 words in 5 seconds, his unit rate is 2 words per second; other common unit rates are miles per hour and cost per item. A proportion is a statement that two ratios are equal, and proportions are the basic tool for problems that scale two numbers up or down in relation to each other. Trigonometric ratios relate the sides and angles of a right-angled triangle, and the golden ratio (also called the golden section or divine proportion) is an irrational number, like pi and e, that turns up repeatedly in art and in the proportions of the human body.

Beyond mathematics, the word "ratio" appears in many applied settings, several of which are surveyed below.
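Equivalent ratios, simplification, and scaling all come down to multiplying or dividing every term by the same number, so they are easy to check mechanically. Here is a minimal Python sketch of those operations; the helper names are my own, not from any particular library.

```python
from functools import reduce
from math import gcd

def simplify(*terms):
    """Reduce a ratio by dividing every term by the greatest common divisor."""
    g = reduce(gcd, terms)
    return tuple(t // g for t in terms)

def equivalent(r1, r2):
    """Two ratios are equivalent if they simplify to the same terms."""
    return simplify(*r1) == simplify(*r2)

def scale(ratio, factor):
    """Multiply every term by the same factor to get an equivalent ratio."""
    return tuple(t * factor for t in ratio)

print(simplify(6, 9))              # (2, 3)
print(simplify(4, 8, 12))          # (1, 2, 3) -- a continued ratio reduces the same way
print(equivalent((2, 3), (6, 9)))  # True
print(scale((2, 3), 5))            # (10, 15)
```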
Financial ratios are built from numerical values taken from the financial statements (the income statement, the balance sheet, and the statement of cash flows) and are a valuable, easy way to interpret the numbers found there. They are usually grouped into a handful of broad categories: liquidity ratios, leverage (solvency) ratios, activity or efficiency ratios, profitability ratios, and market value ratios.

Liquidity ratios measure a company's ability to meet its short-term obligations with its short-term assets. The current ratio is calculated by simply dividing current assets by current liabilities. The quick ratio (also called the liquid or acid-test ratio) is more stringent because it does not count inventory among the assets. Solvency or leverage ratios deal instead with long-term obligations: the debt ratio compares total debt to total assets, so a firm with assets of $1,000,000, $150,000 in short-term debts, and $300,000 in long-term debts has a debt ratio of $450,000 / $1,000,000, or 45%. For individuals, lenders look at the analogous debt-to-income ratio; someone with a $1,500 monthly mortgage, a $200 car payment, and $300 a month in credit cards and other bills has $2,000 of monthly debt to weigh against monthly income. Activity (efficiency) ratios show how efficiently the business operates; the accounts receivable turnover ratio, for example, measures how effective a company's credit policies are. Profitability ratios include the gross profit margin, net profit margin, return on total assets, and return on equity. Market value ratios relate accounting figures to the share price; the most common example is the price-to-earnings (P/E) ratio.

One ratio by itself may not give the full picture unless viewed as part of a whole: ratios are typically compared over a number of accounting periods and against other businesses in the same industry.
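The balance-sheet ratios above are simple quotients, so a short sketch makes the definitions concrete. The following Python snippet reuses the $450,000 / $1,000,000 debt-ratio example from the text; the current-asset, inventory, and current-liability figures are made-up numbers for illustration.

```python
def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

def quick_ratio(current_assets, inventory, current_liabilities):
    # The quick (acid-test) ratio excludes inventory from current assets.
    return (current_assets - inventory) / current_liabilities

def debt_ratio(total_debt, total_assets):
    return total_debt / total_assets

# Debt-ratio example from the text: $150,000 short-term + $300,000 long-term debt
# against $1,000,000 of assets.
print(f"debt ratio   : {debt_ratio(150_000 + 300_000, 1_000_000):.0%}")  # 45%

# Illustrative (assumed) working-capital figures.
print(f"current ratio: {current_ratio(500_000, 250_000):.2f}")           # 2.00
print(f"quick ratio  : {quick_ratio(500_000, 150_000, 250_000):.2f}")    # 1.40
```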
"Ratio" also names one of the four scales of measurement introduced by Stanley Smith Stevens in the 1940s: nominal, ordinal, interval, and ratio. A ratio-scale variable has a meaningful zero point, which is the only difference between a ratio variable and an interval variable. Because of that true zero, ratio data can be multiplied and divided: not only is the difference between 1 and 2 the same as the difference between 3 and 4, but 4 really is twice as much as 2. Height, mass, and distance are examples of ratio variables.

Many everyday and technical quantities are themselves ratios. A gear ratio can be expressed as the number of gear teeth divided by the number of pinion teeth, and for a compound train the speed ratio is i = (z2/z1) x (z4/z3) = (n1/n2) x (n3/n4); the higher a drive-axle ratio, the easier it is for the drive axle to turn the wheels (a short numerical sketch of this formula follows below). Aspect ratio is the ratio of an image's width to its height; 4:3 was the aspect ratio of 35 mm film in the silent era and of traditional television screens. Mix ratios appear in two-stroke fuel (gas-to-oil pre-mix ratios such as 20:1), in nominal concrete mixes (1:2:4 for M15), in solder alloys (60/40 and the eutectic 63/37 tin-to-lead blends), and in cooking (roughly 1 1/2 to 1 3/4 cups of water per cup of uncooked rice). Engineers also speak of an engine's compression ratio and of current-transformer ratios such as 100:5; demographers track age dependency ratios, and schools report pupil-to-teacher ratios. In genetics, a monohybrid cross gives the classic 3:1 ratio of dominant to recessive phenotypes, and a dihybrid cross gives the 9:3:3:1 ratio. Medicine and statistics have their own ratios as well: the insulin-to-carb ratio tells a person with diabetes how much fast-acting insulin is needed for the carbohydrate consumed, the ventilation-perfusion ratio compares alveolar ventilation with cardiac output, cholesterol panels report LDL/HDL and total-cholesterol/HDL ratios, and epidemiologists work with odds ratios and prevalence ratios. Linguists use the type-token ratio (the number of different words divided by the total number of words) as a measure of vocabulary variety, the digit ratio 2D:4D compares the lengths of the index and ring fingers, and in behavioral psychology fixed-ratio and variable-ratio reinforcement schedules reward a behavior after a set or a varying number of responses, as when a teacher calls on a learner every third time he raises his hand.

Finally, sharing a quantity in a given ratio is just scaling again. Splitting £350,000 between Liv and Laura in the ratio 3:4 means each part is worth £50,000, so 3:4 = (3 x 50,000):(4 x 50,000) = 150,000:200,000; Liv receives £150,000 and Laura receives £200,000. The judgment involved in interpreting any of these ratios can be improved by experience and the use of analytical tools.
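The compound gear-train formula above is just a product of tooth-count ratios, so it is easy to sanity-check numerically. A minimal Python sketch follows; the tooth counts (chosen so each stage is 3:1, as in the example mentioned earlier) and the 3000 rpm input speed are assumed example values, not figures from the text.

```python
def compound_speed_ratio(z1, z2, z3, z4):
    """Speed ratio i = (z2/z1) * (z4/z3) for a two-stage compound gear train,
    where gear 1 drives gear 2 and gear 3 (on gear 2's shaft) drives gear 4."""
    return (z2 / z1) * (z4 / z3)

# Assumed tooth counts: a 20-tooth pinion driving a 60-tooth gear, twice (3:1 per stage).
i = compound_speed_ratio(20, 60, 20, 60)
print(f"overall speed ratio: {i:.1f}:1")    # 9.0:1 -- two 3:1 stages multiply

# Output shaft speed for an assumed 3000 rpm input: n_out = n_in / i.
n_in = 3000
print(f"output speed: {n_in / i:.0f} rpm")  # 333 rpm
```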
2020-06-07 03:32:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38724929094314575, "perplexity": 1742.6415925532713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348523476.97/warc/CC-MAIN-20200607013327-20200607043327-00590.warc.gz"}
https://www.sparrho.com/item/models-of-universe-with-a-polytropic-equation-of-state-i-the-early-universe/10e6497/
# Models of universe with a polytropic equation of state: I. The early universe

Research paper by Pierre-Henri Chavanis. Indexed on: 28 Feb '14. Published on: 28 Feb '14. Published in: The European Physical Journal Plus.

#### Abstract

We construct models of universe with a generalized equation of state $$p=(\alpha \rho +k\rho^{1+1/n})c^{2}$$ having a linear component and a polytropic component. Concerning the linear equation of state $$p=\alpha\rho c^{2}$$, we assume $$-1\le\alpha\le 1$$. This equation of state describes radiation ($$\alpha=1/3$$) or pressureless matter ($$\alpha = 0$$). Concerning the polytropic equation of state $$p=k\rho^{1+1/n}c^{2}$$, we remain very general, allowing the polytropic constant k and the polytropic index n to have arbitrary values. In this paper, we consider positive indices n > 0. In that case, the polytropic component dominates the linear component in the early universe, where the density is high. For $$\alpha = 1/3$$, n = 1 and $$k=-4/(3\rho_{P})$$, where $$\rho_{P}=5.16\times 10^{99}$$ g/m³ is the Planck density, we obtain a model of early universe describing the transition from the vacuum energy era to the radiation era. The universe exists at any time in the past and there is no primordial singularity. However, for t < 0, its size is less than the Planck length $$l_{P}=1.62\times 10^{-35}$$ m. In this model, the universe undergoes an inflationary expansion with the Planck density $$\rho_{P}=5.16\times 10^{99}$$ g/m³ (vacuum energy) that brings it from the Planck size $$l_{P}=1.62\times 10^{-35}$$ m at t = 0 to a size $$a_{1}=2.61\times 10^{-6}$$ m at $$t_{1}=1.25\times 10^{-42}$$ s (corresponding to about 23.3 Planck times $$t_{P}=5.39\times 10^{-44}$$ s). For $$\alpha = 1/3$$, n = 1 and $$k=4/(3\rho_{P})$$, we obtain a model of early universe with a new form of primordial singularity: the universe starts at t = 0 with an infinite density and a finite radius a = a1. Actually, this universe becomes physical at a time $$t_{i}=8.32\times 10^{-45}$$ s from which the velocity of sound is less than the speed of light. When $$a\gg a_{1}$$, the universe enters the radiation era and evolves as in the standard model. We describe the transition from the vacuum energy era to the radiation era by analogy with a second-order phase transition where the Planck constant ℏ plays the role of finite-size effects (the standard Big Bang theory is recovered for ℏ = 0).
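For the early-universe case quoted above ($$\alpha = 1/3$$, n = 1, $$k=-4/(3\rho_{P})$$), the equation of state reduces to $$p=(\rho/3)(1-4\rho/\rho_{P})c^{2}$$, so the dimensionless ratio $$w=p/(\rho c^{2})$$ goes from -1 (vacuum-energy-like) at the Planck density to 1/3 (radiation-like) at low density. The following Python sketch simply evaluates that ratio to illustrate the transition; it is an illustration written here, not code from the paper.

```python
# Equation of state p = (alpha*rho + k*rho**(1+1/n)) * c**2 from the abstract,
# specialized to alpha = 1/3, n = 1, k = -4/(3*rho_P).
rho_P = 5.16e99  # Planck density in g/m^3 (value quoted in the abstract)

def w(rho, alpha=1/3, n=1, rho_P=rho_P):
    """Dimensionless equation-of-state parameter w = p / (rho * c^2)."""
    k = -4 / (3 * rho_P)
    return alpha + k * rho ** (1 / n)

for rho in (rho_P, 0.5 * rho_P, 1e-10 * rho_P):
    print(f"rho/rho_P = {rho / rho_P:.2e}  ->  w = {w(rho):+.3f}")
# rho = rho_P       ->  w = -1.000  (vacuum-energy-like)
# rho = 0.5 rho_P   ->  w = -0.333
# rho << rho_P      ->  w = +0.333  (radiation-like)
```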
2020-09-24 21:01:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7075154185295105, "perplexity": 406.7616515463728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00054.warc.gz"}
http://math.stackexchange.com/questions/8988/maximum-term-of-a-b-n
# Maximum term of (a + b)^n

I would like a proof of the fact below. Given nonzero real numbers a and b and a positive integer n, the position p of the maximum term (in absolute value) of the expansion of (a+b)^n, written according to decreasing powers of a, is given by:

p = 1 + integer part of [|b|(n+1)/(|a|+|b|)]

When |b|(n+1)/(|a|+|b|) is itself an integer, there are two maximal terms: those of order p and p-1. Here |a| and |b| denote the absolute values of a and b, respectively.

Paulo Argolo, Rio de Janeiro, Brazil

- Let $f(k)=C_n^k |a|^{n-k} |b|^k$. Then $f(k+1)/f(k)=\frac{|b|}{|a|} \frac{n-k}{k+1} > 1$ iff $k<\frac{|b|(n+1)}{(|b|+|a|)}-1$, so $f(k)$ is increasing until $k=I\left(\frac{|b|(n+1)}{(|b|+|a|)}\right)$ where $I$ denotes the integral part. Maybe the notation $C^k_n$ is confusing you, it is also written $\binom{k}{n}$ and equal to $\frac{n!}{k!(n-k)!}$. To better understand the proof you can also do the case $a=b$, where only the binomial coefficients "count". –  Plop Nov 5 '10 at 1:21
- $\binom{n}{k}$, not $\binom{k}{n}$. –  Hans Lundmark Nov 5 '10 at 7:50
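The closed form is easy to sanity-check numerically. The sketch below (illustrative, not from the thread) compares a brute-force search for the largest |term| with the formula p = 1 + ⌊|b|(n+1)/(|a|+|b|)⌋, using 1-based positions in decreasing powers of a.

```python
from math import comb, floor
import random

def max_term_position(a, b, n):
    """Brute force: 1-based position of the largest |C(n,k) a^(n-k) b^k|."""
    terms = [abs(comb(n, k) * a**(n - k) * b**k) for k in range(n + 1)]
    return terms.index(max(terms)) + 1   # position p corresponds to exponent k = p - 1 on b

def predicted_position(a, b, n):
    return 1 + floor(abs(b) * (n + 1) / (abs(a) + abs(b)))

random.seed(0)
for _ in range(1000):
    a = random.choice([-1, 1]) * random.randint(1, 9)
    b = random.choice([-1, 1]) * random.randint(1, 9)
    n = random.randint(1, 40)
    p_brute, p_formula = max_term_position(a, b, n), predicted_position(a, b, n)
    # When |b|(n+1)/(|a|+|b|) is an integer, terms p-1 and p tie and brute force
    # reports the earlier one, so either answer is acceptable.
    assert p_formula in (p_brute, p_brute + 1), (a, b, n)
```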
2014-03-11 15:59:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9623229503631592, "perplexity": 615.2868917083114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011220528/warc/CC-MAIN-20140305092020-00075-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/24249-ivp-print.html
# IVP

• December 5th 2007, 06:22 PM blurain
Help with an IVP
1) In recent years, Massachusetts has experienced a population explosion, not of people but of wild turkeys. The bird had virtually disappeared here when, in 1972, 37 turkeys were trucked over the border and released into the wild. There are now an estimated 20,000 of these creatures in Massachusetts. Assume that the Massachusetts wild-turkey population increases at a rate proportional to its current size.
a) Write the initial value problem (differential equation plus initial condition) that models this situation. The differential equation should contain one unspecified constant.
b) Write the solution function for that initial value problem. In doing this, you will need to determine the value of the unspecified constant.
I am puzzled as to how I should go about solving this problem. Please, any help will be much appreciated. -M
• December 5th 2007, 11:52 PM badgerigar
a) $P_0 = 37$, $\frac {dP}{dt} = kP$; when t = 35, P = 20000.
Now you can have a shot at b)
• December 6th 2007, 11:18 AM blurain
Does this mean that the solution of dP/dt = kP has the form P(t) = Ae^{kt}? Then P(0) = 37 gives A = 37, so P(t) = 37e^{kt}. Using P(35) = 20000: 20000 = 37e^{35k}, so 20000/37 = e^{35k}, ln(20000/37) = 35k, and k = (1/35)ln(20000/37) ≈ 0.1797877. Hence P(t) = 37e^{0.1797877 t}. Is this correct?
• December 6th 2007, 03:33 PM badgerigar
Looks good to me. Well done
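As a quick check of the working in the thread, the short script below (illustrative only) recomputes k from the two data points and confirms that the resulting P(t) reproduces the 1972 and 2007 populations.

```python
from math import log, exp

P0, P35, T = 37, 20000, 35          # 37 birds released in 1972, ~20,000 thirty-five years later

k = log(P35 / P0) / T               # growth constant from P(T) = P0 * exp(k*T)
P = lambda t: P0 * exp(k * t)

print(round(k, 7))                  # ~0.1797877, matching the thread
print(P(0), round(P(35)))           # 37.0 and 20000
```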
2013-12-05 16:35:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.608517587184906, "perplexity": 2114.048694090905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163046947/warc/CC-MAIN-20131204131726-00050-ip-10-33-133-15.ec2.internal.warc.gz"}
http://libatoms.github.io/QUIP/crack.html
# Tools for fracture simulations

Module contents for quippy.crack: Classes CrackParams([filename,validate]) This type contains all the parameters for a crack simulation, each of which is described briefly below. ConstantStrainRate(orig_height, delta_strain) Constraint which increments epsilon_yy at a constant strain rate Functions crack_is_edge_atom(slab,i,edge_gap) Returns true if atom i is near to an open surface of slab. crack_parse_name(crackname) Parse crack name in the format (ijk)[lmn], with negative numbers denoted by a trailing b (short for bar), e.g. (111)[11b0] crack_hybrid_calc_args(…) crack_g_to_k(g,e,v,[mode]) Convert from energy release rate $$G$$ to stress intensity factor $$K$$. Units: G (J/m^2), E (GPa), K (Pa sqrt(m)) percolation_step(grid) crack_setup_marks(crack_slab,params) crack_update_selection_crack_front(at,params) crack_is_topbottom_edge_atom(slab,i,edge_gap) Returns true if atom i is near the top/bottom surface of slab. Top/bottom surfaces are planes at $$y = \pm$$ crack_make_slab(params,classicalpot) crack_apply_strain_ramp(at,g1,g2,d1,d2,d3,d4) crack_strain_to_g(strain,e,v,height) Calculate energy release rate $$G$$ from strain using $$G = \frac{1}{2} \frac{E}{1-\nu^2} \epsilon^2 h$$ from the thin strip result. crack_find_tip_coordination(…) Return $$x$$ coordinate of rightmost undercoordinated atom in_ellipse(d,ellipse) Return true if the point d is within an ellipse centred at the origin with the $$x$$, $$y$$, and $$z$$ radii specified in the vector ellipse. crack_find_tip_local_energy(at,params) crack_update_selection(at,params) crack_mm_calc_args(…) crack_check_coordination_boundaries(at,params) select_ellipse(at,ellipse,ellipse_bias,list,c) Select atoms in ellipse centred on an atom and with given principal radii crack_uniform_load(…) Rescale atoms in slab, with atoms in front of either crack tip strained in y direction by strain and atoms behind crack tip rigidly shifted to keep top and bottom edges flat. crack_g_to_strain(g,e,v,height) Calculate $$\epsilon$$ from $$G$$, inverse of the above formula. crack_k_to_g(k,e,v,[mode]) Convert from stress intensity factor $$K$$ to energy release rate $$G$$. Units: G (J/m^2), E (GPa), K (Pa sqrt(m)) crack_update_connect(at,params) Update the connectivity of a crack slab, using the hysteretic version of calc_connect() so we can use relative cutoffs. crack_find_surface_atoms(at) crack_find_tip(at,params) crack_calc_load_field(…) crack_apply_load_increment(at,[g_increment]) Increase the load by adding the load displacement field to the atomic positions. crack_measure_g(at,e,v,orig_height) Measure the current height of slab and calculate energy release rate $$G$$ from current and original heights and elastic constants $$E$$ and $$\nu$$, using the equation $$G = \frac{1}{2} \frac{E}{1-\nu^2} \frac{(h - h_0)^2}{h_0}$$, where $$h_0$$ is the original height and $$h$$ the new height. crack_k_field(…) Calculate Irwin K-field stresses and/or displacements for all atoms in at. crack_find_tip_percolation(at,params) Locate crack tips within at using a percolation algorithm. crack_update_selection_coordination(at,params) Update QM selection region for a crack configuration using the nn and changed_nn properties and the CrackPos parameter from the atoms structure, as well as the selection parameters in params. 
crack_make_seed(crack_slab,params) crack_check_coordination(…) crack_print(*args, **kwargs) Print crack slab to XYZ file, using properties defined in ‘params%io_print_properties’ or all properties if ‘params%io_print_all_properties’ is true. crack_strain_energy_release_rate(at[, bulk, …]) Compute strain energy release rate G from elastic potential energy in a strip crack_strain(at) Returns strain of crack specimen crack_find_griffith_load(a, b, pot[, relax]) Given two configurations (a, b) which differ by one broken bond, find the Griffith load, that is the load at which a and b have the same energy accorinding to the model potential pot. stress_intensity_factor(at) Returns stress instensity factor for mode I loading (K_I) in MPa sqrt(m) make_crack_advance_map(atoms[, tol]) Find mapping from atom indices to the index of atom one step ahead of them in the crack propagation direction (i.e. find_crack_tip_coordination(atoms[, …]) Return position of crack tip in atoms, based on atomic coordination. irwin_modeI_crack_tip_stress_field(K, r, t) Compute Irwin singular crack tip stress field strain_to_G(strain, E, nu, orig_height) Convert from strain to energy release rate G for thin strip geometry G_to_strain(G, E, nu, orig_height) Convert from energy release rate G to strain for thin strip geometry get_strain(atoms) Return the current strain on thin strip configuration atoms get_energy_release_rate(atoms) Return the current energy release rate G for atoms get_stress_intensity_factor(atoms) Return stress intensity factor K_I fit_crack_stress_field(atoms[, r_range, …]) Perform a least squares fit of near-tip stress field to Irwin solution find_crack_tip_stress_field(atoms[, …]) Find the position of the crack tip by fitting to the Irwin K-field solution plot_stress_fields(atoms[, r_range, …]) Fit and plot atomistic and continuum stress fields thin_strip_displacement_y(x, y, strain, a, b) Return vertical displacement ramp used to apply initial strain to slab print_crack_system(crack_direction, …) Pretty printing of crack crystallographic coordinate system Attributes Name Value MAX_PROPERTIES 100 MAX_MD_STANZA 5 class quippy.crack.CrackParams([filename, validate]) This type contains all the parameters for a crack simulation, each of which is described briefly below. Initialise this CrackParams structure and set default values for all parameters. WARNING: many of these defaults are only really appropriate for diamond structure silicon fracture. Parameters: filename : input string(len=-1), optional validate : input int, optional References Routine is wrapper around Fortran routine __init__initialise defined in file src/Utils/crackparams.f95. Class is wrapper around Fortran type CrackParams defined in file src/Utils/crackparams.f95. Attributes: classical_args Arguments used to initialise classical potential classical_args_str Arguments used by Calc Potential classical_force_reweight Factor by which to reduce classical forces in the embed region. crack_align_y Vertical alignment turned on crack_apply_initial_load If true, apply initial loading field to crack slab crack_bulk_filename Input file containing primitive cell crack_check_coordination_atom_type Atom type we check the coordination for crack_check_coordination_critical_nneigh Critical number of neighbours in the connectivity checking crack_check_coordination_region Region (+/- around y=0 level) where the atomic coordination is checked. 
crack_check_surface_coordination Checking of the surface coordination before generating the crack seed crack_curvature Curvature used when crack_curved_front=T crack_curved_front If true, initialise slab with a curved crack front crack_dislo_seed atom at the core of the dislocation crack_double_ended If true, we do a double ended crack with periodic boundary conditions along $$x$$ direction. crack_edge_fix_tol How close must an atom be to top or bottom to be fixed. crack_element Element to make slab from. crack_fix_dipoles If true, we keep fixed dipoles for atoms at the edges. crack_fix_dipoles_tol How close must an atom be to top or bottom to keep fixed its dipole. crack_fix_sides If true fix atoms close to left and right edges of slab crack_free_surfaces If true, crack is 3D with free surfaces at z= +/- depth/2 crack_front_alpha Value of alpha to use when tip_method=alpha_shape crack_front_angle_threshold Maximum bearing for segments to be included in crack front crack_front_window_size Size of windows along crack front. crack_g Initial energy release rate loading in J/m:math:^2 (override strain) crack_g_increment Rate of loading, expressed as increment in G (override strain_increment) crack_graphene_notch_height Height of graphene notch. crack_graphene_notch_width Width of graphene notch. crack_graphene_theta Rotation angle of graphene plane, in radians. crack_height Height of crack slab, in AA{}. crack_initial_velocity_field If true, initialise velocity field with dU/dc crack_lattice_guess Guess at bulk lattice parameter, used to obtain accurate result. crack_load_interp_length Length over which linear interpolation between k-field crack_loading uniform for constant load, crack_name Crack name, in format (abc)[def] with negative indices denoted by a trailing b (for bar), e.g. crack_num_layers Number of primitive cells in $$z$$ direction crack_ramp_end_g Loading at end of ramp for the case crack_loading="ramp" crack_ramp_length Length of ramp for the case crack_loading="ramp" crack_ramp_start_length Length of the region in between the crack tip and the start of the ramp for the case crack_loading="ramp" crack_relax_bulk If true (default) relax bulk cell using classical potential crack_relax_loading_field Should makecrack relax the applied loading field crack_rescale_x Rescale atomsatoms in x direction by v crack_rescale_x_z Rescale atomsatoms in x direction by v and in z direction by v2 crack_seed_embed_tol Atoms closer than this distance from crack tip will be used to seed embed region. crack_seed_length Length of seed crack. crack_slab_filename Input file to use instead of generating slabs. crack_strain Initial applied strain crack_strain_increment Rate of loading, expressed as strain of initial loading crack_strain_zone_width Distance over which strain increases. crack_structure Structure: so far diamond and graphene are supported crack_thermostat_ramp_length Length of thermostat ramp used for stadium damping at left and right edges crack_thermostat_ramp_max_tau Value of thermostat tau at end of ramp, in fs. crack_tip_grid_size Size (in A) of grid used for locating crack tips crack_tip_method One of coordination, percolation, local_energy or alpha_shape crack_tip_min_separation Minimum seperation (in A) between a pair of crack tips for them to be considered distinct crack_vacuum_size Amount of vacuum around crack slab. crack_width Width of crack slab, in AA{}. 
crack_x_shift Shift required to get “nice” surface terminations on vertical edges crack_y_shift Shift required to align y=0 with centre of a vertical bond. crack_z Initialised automatically from crack element fit_hops Number of hops used to generate fit region from embed region fit_method Method to use for force mixing: should be one of fit_spring_hops Number of hops used when creating list of springs force_integration_end_file XYZ file containing ending configuration for force integration. force_integration_n_steps Number of steps to take in force integration hack_fit_on_eqm_coordination_only Only include fit atoms that have coordination number equal to md_eqm_coordination (used for graphene). hack_qm_zero_z_force Zero $$z$$ component of all forces (used for graphene) io_backup If true, create backups of check files io_checkpoint_interval Interval between writing checkpoint files, in fs. io_checkpoint_path Path to write checkpoint files to. io_mpi_print_all Print output on all nodes. io_netcdf If true, output in NetCDF format instead of XYZ io_print_all_properties If true, print all atom properties to movie file. io_print_interval Interval between movie XYZ frames, in fs. io_print_properties List of properties to print to movie file. io_timing If true, enable timing (default false) io_verbosity Output verbosity. minim_eps_guess Initial guess for line search step size $$\epsilon$$. minim_fire_dt0 If using fire_minim, the initial step size minim_fire_dt_max If using fire_minim, the maximum step size minim_linminroutine Linmin routine, e.g. minim_max_steps Maximum number of minimisation steps. minim_method Minimisation method: use cg for conjugate gradients or sd for steepest descent. minim_minimise_mm Should we minimise classical degrees of freedom before each QM force evaluation minim_mm_args_str Args string to be passed to MM calc() routine minim_mm_eps_guess Initial guess for line search $$\epsilon$$ for MM minimisation minim_mm_linminroutine Linmin routine for MM minimisation minim_mm_max_steps Maximum number of cg cycles for MM minimisation minim_mm_method Minim method for MM minimisation, e.g. minim_mm_tol Target force tolerance for MM minimisation minim_print_output Number of steps between XYZ confgurations printed minim_tol Target force tolerance - geometry optimisation is considered to be qm_args Arguments used to initialise QM potential qm_args_str Arguments used by QM potential qm_buffer_hops Number of bond hops used for buffer region qm_calc_force_error Do a full QM calculation at each stage in extrap and interp to measure force error qm_clusters Should we carve clusters? Default true. qm_cp2k Enable CP2K mode. qm_even_electrons Discard a hydrogen if necessary to give an overall non-spin-polarised cluster qm_extra_args_str Extra arguments passed to ForceMixing potential qm_force_periodic Force clusters to be periodic in $$z$$ direction. qm_hysteretic_buffer If true, manage the buffer region hysteritcally qm_hysteretic_buffer_inner_radius Inner radius used for hystertic buffer region qm_hysteretic_buffer_nneighb_only Should hysteretic buffer be formed by nearest neighbor hopping? qm_hysteretic_buffer_outer_radius Outer radius used for hystertic buffer region qm_hysteretic_connect Enable hysteretic connectivity qm_hysteretic_connect_cluster_radius Radius other which to keep track of hysteretic connectivity info. qm_hysteretic_connect_inner_factor Inner bond factor. qm_hysteretic_connect_outer_factor Outer bond factor. qm_little_clusters One big cluster or lots of little ones? 
qm_randomise_buffer Randomise positions of outer layer of buffer atoms slightly to avoid systematic errors. qm_rescale_r If true, rescale space in QM cluster to match QM lattice constant qm_terminate Terminate clusters with hydrogen atoms qm_transition_hops Number of transition hops used for buffer region qm_vacuum_size Amount of vacuum surrounding cluster in non-periodic directions ($$x$$ and $$y$$ at least). quasi_static_tip_move_tol How far cracktip must advance before we consider fracture to have occurred. selection_cutoff_plane Only atoms within this distance from crack tip are candidates for QM selection. selection_directionality Require good directionality of spring space spanning for atoms in embed region. selection_edge_tol Size of region at edges of crack slab which is ignored for selection purposes. selection_ellipse Principal radii of selection ellipse along $$x$$, $$y$$ and $$z$$ in AA{}. selection_ellipse_bias Shift applied to ellipses, expressed as fraction of ellipse radius in $$x$$ direction. selection_ellipse_buffer Difference in size between inner and outer selection ellipses, i.e. selection_max_qm_atoms Maximum number of QM atoms to select selection_method One of static, coordination, crack_front selection_update_interval intervals between QM selection updates, defaults to 0.0_dp meaning every step simulation_classical Perform a purely classical simulation simulation_force_initial_load_step Force a load step at beginning of simulation simulation_initial_state Initial state. simulation_seed Random number seed. simulation_task Task to perform: md, minim, etc. Methods any_per_atom_tau(*args, **kwargs) print_([file]) Print out this CrackParams structure read_xml(*args, **kwargs) Wrapper around Fortran interface read_xml containing multiple routines: any_per_atom_tau(*args, **kwargs) Parameters: ret_crackparams_any_per_atom_tau : int References Routine is wrapper around Fortran routine crackparams_any_per_atom_tau defined in file src/Utils/crackparams.f95. print_([file]) Print out this CrackParams structure Parameters: file : InOutput object, optional References Routine is wrapper around Fortran routine print_ defined in file src/Utils/crackparams.f95. read_xml(*args, **kwargs) Wrapper around Fortran interface read_xml containing multiple routines: read_xml(xmlfile[, validate, error]) Parameters: xmlfile (InOutput object) – validate (input int, optional) – error (in/output rank-0 array(int,'i'), optional) – Routine is wrapper around Fortran routine crackparams_read_xml defined in file src/Utils/crackparams.f95. read_xml(filename[, validate, error]) Read crack parameters from xmlfile into this CrackParams object. First we reset to default values by calling initialise(this). Parameters: filename (input string(len=-1)) – validate (input int, optional) – error (in/output rank-0 array(int,'i'), optional) – Routine is wrapper around Fortran routine crackparams_read_xml_filename defined in file src/Utils/crackparams.f95. classical_args Arguments used to initialise classical potential classical_args_str Arguments used by Calc Potential classical_force_reweight Factor by which to reduce classical forces in the embed region. Default is unity. 
crack_align_y Vertical alignment turned on crack_apply_initial_load If true, apply initial loading field to crack slab crack_bulk_filename Input file containing primitive cell crack_check_coordination_atom_type Atom type we check the coordination for crack_check_coordination_critical_nneigh Critical number of neighbours in the connectivity checking crack_check_coordination_region Region (+/- around y=0 level) where the atomic coordination is checked. crack_check_surface_coordination Checking of the surface coordination before generating the crack seed crack_curvature Curvature used when crack_curved_front=T crack_curved_front If true, initialise slab with a curved crack front crack_dislo_seed atom at the core of the dislocation crack_double_ended If true, we do a double ended crack with periodic boundary conditions along $$x$$ direction. crack_edge_fix_tol How close must an atom be to top or bottom to be fixed. Unit:~AA{}. crack_element Element to make slab from. Supported so far: Si, C, SiC, SiO crack_fix_dipoles If true, we keep fixed dipoles for atoms at the edges. crack_fix_dipoles_tol How close must an atom be to top or bottom to keep fixed its dipole. Unit:~AA{}. crack_fix_sides If true fix atoms close to left and right edges of slab crack_free_surfaces If true, crack is 3D with free surfaces at z= +/- depth/2 crack_front_alpha Value of alpha to use when tip_method=alpha_shape crack_front_angle_threshold Maximum bearing for segments to be included in crack front crack_front_window_size Size of windows along crack front. Should be roughly equal to lattice periodicity in this direction. crack_g crack_g_increment crack_graphene_notch_height Height of graphene notch. Unit:~AA{}. crack_graphene_notch_width Width of graphene notch. Unit:~AA{}. crack_graphene_theta Rotation angle of graphene plane, in radians. crack_height Height of crack slab, in AA{}. crack_initial_velocity_field If true, initialise velocity field with dU/dc crack_lattice_guess Guess at bulk lattice parameter, used to obtain accurate result. Unit:~AA{}. crack_load_interp_length Length over which linear interpolation between k-field and uniform strain field is carried out crack_loading uniform for constant load, ramp for linearly decreasing load along $$x$$, kfield for Irwin plane strain K-field, interp_kfield_uniform to linearly interpolate between k-field (at crack tip) and uniform at distance crack_load_interp_length reduce_uniform for reducing load crack_name Crack name, in format (abc)[def] with negative indices denoted by a trailing b (for bar), e.g. (111)[11b0]. crack_num_layers Number of primitive cells in $$z$$ direction crack_ramp_end_g Loading at end of ramp for the case crack_loading="ramp" crack_ramp_length Length of ramp for the case crack_loading="ramp" crack_ramp_start_length Length of the region in between the crack tip and the start of the ramp for the case crack_loading="ramp" crack_relax_bulk If true (default) relax bulk cell using classical potential crack_relax_loading_field Should makecrack relax the applied loading field crack_rescale_x Rescale atomsatoms in x direction by v crack_rescale_x_z Rescale atomsatoms in x direction by v and in z direction by v2 crack_seed_embed_tol Atoms closer than this distance from crack tip will be used to seed embed region. Unit:~AA{}. crack_seed_length Length of seed crack. Unit:~AA{}. crack_slab_filename Input file to use instead of generating slabs. 
crack_strain Initial applied strain crack_strain_increment crack_strain_zone_width Distance over which strain increases. Unit:~AA{}. crack_structure Structure: so far diamond and graphene are supported crack_thermostat_ramp_length Length of thermostat ramp used for stadium damping at left and right edges crack_thermostat_ramp_max_tau Value of thermostat tau at end of ramp, in fs. crack_tip_grid_size Size (in A) of grid used for locating crack tips crack_tip_method One of coordination, percolation, local_energy or alpha_shape crack_tip_min_separation Minimum seperation (in A) between a pair of crack tips for them to be considered distinct crack_vacuum_size Amount of vacuum around crack slab. Unit:~AA{}. crack_width Width of crack slab, in AA{}. crack_x_shift Shift required to get “nice” surface terminations on vertical edges crack_y_shift Shift required to align y=0 with centre of a vertical bond. This value is only used for unknown values of crack_name. Unit:~AA{}. crack_z Initialised automatically from crack element fit_hops Number of hops used to generate fit region from embed region fit_method Method to use for force mixing: should be one of begin{itemize} item lotf_adj_pot_svd — LOTF using SVD to optimised the Adj Pot item lotf_adj_pot_minim — LOTF using conjugate gradients to optimise the Adj Pot item lotf_adj_pot_sw — LOTF using old style SW Adj Pot item conserve_momentum — divide the total force on QM region over the fit atoms to conserve momentum item force_mixing — force mixing with details depending on values of buffer_hops, transtion_hops and weight_interpolation item force_mixing_abrupt — simply use QM forces on QM atoms and MM forces on MM atoms (shorthand for method=force_mixing buffer_hops=0 transition_hops=0) item force_mixing_smooth — use QM forces in QM region, MM forces in MM region and linearly interpolate in buffer region (shorthand for method=force_mixing weight_interpolation=hop_ramp) item force_mixing_super_smooth — as above, but weight forces on each atom by distance from centre of mass of core region (shorthand for method=force_mixing weight_interpolation=distance_ramp) end{itemize} fit_spring_hops Number of hops used when creating list of springs force_integration_end_file XYZ file containing ending configuration for force integration. force_integration_n_steps Number of steps to take in force integration hack_fit_on_eqm_coordination_only Only include fit atoms that have coordination number equal to md_eqm_coordination (used for graphene). hack_qm_zero_z_force Zero $$z$$ component of all forces (used for graphene) io_backup If true, create backups of check files io_checkpoint_interval Interval between writing checkpoint files, in fs. io_checkpoint_path Path to write checkpoint files to. Set this to local scratch space to avoid doing lots of I/O to a network drive. Default is current directory. io_mpi_print_all Print output on all nodes. Useful for debugging. Default .false. io_netcdf If true, output in NetCDF format instead of XYZ io_print_all_properties If true, print all atom properties to movie file. This will generate large files but is useful for debugging. io_print_interval Interval between movie XYZ frames, in fs. io_print_properties List of properties to print to movie file. io_timing If true, enable timing (default false) io_verbosity Output verbosity. In XML file, this should be specified as one of ERROR, SILENT, NORMAL, VERBOSE, NERD or ANAL minim_eps_guess Initial guess for line search step size $$\epsilon$$. 
minim_fire_dt0 If using fire_minim, the initial step size minim_fire_dt_max If using fire_minim, the maximum step size minim_linminroutine Linmin routine, e.g. FAST_LINMIN for classical potentials with total energy, or LINMIN_DERIV when doing a LOTF hybrid simulation and only forces are available. minim_max_steps Maximum number of minimisation steps. minim_method Minimisation method: use cg for conjugate gradients or sd for steepest descent. See minim() in libAtoms/minimisation.f95 for details. minim_minimise_mm Should we minimise classical degrees of freedom before each QM force evaluation minim_mm_args_str Args string to be passed to MM calc() routine minim_mm_eps_guess Initial guess for line search $$\epsilon$$ for MM minimisation minim_mm_linminroutine Linmin routine for MM minimisation minim_mm_max_steps Maximum number of cg cycles for MM minimisation minim_mm_method Minim method for MM minimisation, e.g. cg for conjugate gradients minim_mm_tol Target force tolerance for MM minimisation minim_print_output Number of steps between XYZ confgurations printed minim_tol Target force tolerance - geometry optimisation is considered to be converged when $$|\mathbf{f}|^2 <$$ tol qm_args Arguments used to initialise QM potential qm_args_str Arguments used by QM potential qm_buffer_hops Number of bond hops used for buffer region qm_calc_force_error Do a full QM calculation at each stage in extrap and interp to measure force error qm_clusters Should we carve clusters? Default true. qm_cp2k Enable CP2K mode. Default false. qm_even_electrons Discard a hydrogen if necessary to give an overall non-spin-polarised cluster qm_extra_args_str Extra arguments passed to ForceMixing potential qm_force_periodic Force clusters to be periodic in $$z$$ direction. qm_hysteretic_buffer If true, manage the buffer region hysteritcally qm_hysteretic_buffer_inner_radius Inner radius used for hystertic buffer region qm_hysteretic_buffer_nneighb_only Should hysteretic buffer be formed by nearest neighbor hopping? qm_hysteretic_buffer_outer_radius Outer radius used for hystertic buffer region qm_hysteretic_connect Enable hysteretic connectivity qm_hysteretic_connect_cluster_radius Radius other which to keep track of hysteretic connectivity info. Default 10.0 A. qm_hysteretic_connect_inner_factor Inner bond factor. Default 1.2 qm_hysteretic_connect_outer_factor Outer bond factor. Default 1.5 qm_little_clusters One big cluster or lots of little ones? qm_randomise_buffer Randomise positions of outer layer of buffer atoms slightly to avoid systematic errors. qm_rescale_r If true, rescale space in QM cluster to match QM lattice constant qm_terminate Terminate clusters with hydrogen atoms qm_transition_hops Number of transition hops used for buffer region qm_vacuum_size Amount of vacuum surrounding cluster in non-periodic directions ($$x$$ and $$y$$ at least). Unit:~AA{}. quasi_static_tip_move_tol How far cracktip must advance before we consider fracture to have occurred. selection_cutoff_plane Only atoms within this distance from crack tip are candidates for QM selection. Unit: AA{}. selection_directionality Require good directionality of spring space spanning for atoms in embed region. selection_edge_tol Size of region at edges of crack slab which is ignored for selection purposes. selection_ellipse Principal radii of selection ellipse along $$x$$, $$y$$ and $$z$$ in AA{}. selection_ellipse_bias Shift applied to ellipses, expressed as fraction of ellipse radius in $$x$$ direction. 
selection_ellipse_buffer Difference in size between inner and outer selection ellipses, i.e. amount of hysteresis. selection_max_qm_atoms Maximum number of QM atoms to select selection_method One of static, coordination, crack_front selection_update_interval intervals between QM selection updates, defaults to 0.0_dp meaning every step simulation_classical Perform a purely classical simulation simulation_force_initial_load_step Force a load step at beginning of simulation simulation_initial_state Initial state. Overrides value read from input atoms structure simulation_seed Random number seed. Use zero for a random seed, or a particular value to repeat a previous run. simulation_task Task to perform: md, minim, etc. class quippy.crack.ConstantStrainRate(orig_height, delta_strain, mask=None)[source] Constraint which increments epsilon_yy at a constant strain rate Rescaling is applied only to atoms where mask is True (default is all atoms) Methods apply_strain(atoms[, rigid_constraints]) Applies a constant strain to the system. apply_strain(atoms, rigid_constraints=False)[source] Applies a constant strain to the system. Parameters: atoms : ASE.atoms or quippy.Atoms Atomic configuration. rigid_constraints : boolean Apply (or not apply) strain to every atom. i.e. allow constrainted atoms to move during strain application quippy.crack.crack_is_edge_atom(slab, i, edge_gap) Returns true if atom i is near to an open surface of slab. Open surfaces are planes at $$x = \pm$$ and $$y = \pm$$. “Near to” means within edge_gap of the surface. Parameters: slab : Atoms object i : input int edge_gap : input float ret_crack_is_edge_atom : int References Routine is wrapper around Fortran routine crack_is_edge_atom defined in file src/Utils/cracktools.f95. quippy.crack.crack_parse_name(crackname) Parse crack name in the format is (ijk)[lmn], with negative numbers denoted by a trailing b (short for bar), e.g. (111)[11b0] Axes of crack slab returned as $$3\times3$$ matrix with columns $$\mathbf{x}$$,:math:mathbf{y},:math:mathbf{z}. Parameters: crackname : input string(len=-1) axes : rank-2 array(‘d’) with bounds (3,3) References Routine is wrapper around Fortran routine crack_parse_name defined in file src/Utils/cracktools.f95. quippy.crack.crack_hybrid_calc_args(qm_args_str, extra_qm_args, mm_args_str, extra_mm_args, extra_args_str) Parameters: qm_args_str : input string(len=-1) extra_qm_args : input string(len=-1) mm_args_str : input string(len=-1) extra_mm_args : input string(len=-1) extra_args_str : input string(len=-1) ret_crack_hybrid_calc_args : string(len=1024) References Routine is wrapper around Fortran routine crack_hybrid_calc_args defined in file src/Utils/cracktools.f95. quippy.crack.crack_g_to_k(g, e, v[, mode]) Convert from energy release rate $$G$$ to stress intensity factor $$K$$ Units: G (J/m:math:^2), E (GPa), K (Pa sqrt(m)) Parameters: g : input float e : input float v : input float mode : input string(len=-1), optional ret_k : float References Routine is wrapper around Fortran routine crack_g_to_k defined in file src/Utils/cracktools.f95. quippy.crack.percolation_step(grid) Parameters: grid : in/output rank-3 array(‘i’) with bounds (qp_n0,qp_n1,qp_n2) ret_percolation_step : int References Routine is wrapper around Fortran routine percolation_step defined in file src/Utils/cracktools.f95. 
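crack_parse_name above works on names like (111)[11b0], where a trailing b marks a barred (negative) Miller index. As a small illustration of that string convention only — this is not the quippy wrapper, and it does not build the 3×3 axes matrix — a parser might look like the sketch below (single-digit indices assumed):

```python
import re

def parse_miller_triple(s):
    """Parse a run of Miller indices like '11b0' -> (1, -1, 0).
    A digit followed by 'b' (bar) is negative; single-digit indices assumed."""
    tokens = re.findall(r'(\d)(b?)', s)
    return tuple(-int(d) if bar else int(d) for d, bar in tokens)

def parse_crack_name(name):
    """Split a crack name '(ijk)[lmn]' into its two index triples."""
    m = re.fullmatch(r'\((.+?)\)\[(.+?)\]', name)
    if m is None:
        raise ValueError(f"not in (ijk)[lmn] format: {name!r}")
    return parse_miller_triple(m.group(1)), parse_miller_triple(m.group(2))

print(parse_crack_name('(111)[11b0]'))   # ((1, 1, 1), (1, -1, 0))
```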
quippy.crack.crack_setup_marks(crack_slab, params) Parameters: crack_slab : Atoms object params : CrackParams object References Routine is wrapper around Fortran routine crack_setup_marks defined in file src/Utils/cracktools.f95.

quippy.crack.crack_update_selection_crack_front(at, params) Parameters: at : Atoms object params : CrackParams object References Routine is wrapper around Fortran routine crack_update_selection_crack_front defined in file src/Utils/cracktools.f95.

quippy.crack.crack_is_topbottom_edge_atom(slab, i, edge_gap) Returns true if atom i is near the top/bottom surface of slab. Top/bottom surfaces are planes at $$y = \pm$$ Parameters: slab : Atoms object i : input int edge_gap : input float ret_crack_is_topbottom_edge_atom : int References Routine is wrapper around Fortran routine crack_is_topbottom_edge_atom defined in file src/Utils/cracktools.f95.

quippy.crack.crack_make_slab(params, classicalpot) Parameters: params : CrackParams object classicalpot : Potential object crack_slab : Atoms object width : float height : float e : float v : float v2 : float bulk : Atoms object References Routine is wrapper around Fortran routine crack_make_slab defined in file src/Utils/cracktools.f95.

quippy.crack.crack_apply_strain_ramp(at, g1, g2, d1, d2, d3, d4) Parameters: at : Atoms object g1 : input float g2 : input float d1 : input float d2 : input float d3 : input float d4 : input float References Routine is wrapper around Fortran routine crack_apply_strain_ramp defined in file src/Utils/cracktools.f95.

quippy.crack.crack_strain_to_g(strain, e, v, height) Calculate energy release rate $$G$$ from strain using $$G = \frac{1}{2} \frac{E}{1-\nu^2} \epsilon^2 h$$ from the thin strip result. Quantities are: strain, $$\epsilon$$, dimensionless ratio $$\frac{\Delta y}{y}$$; E, $$E$$, Young's modulus, GPa; v, $$\nu$$, Poisson ratio, dimensionless; height, $$h$$, in AA ($$10^{-10}$$ m); G, energy release rate, J/m^2. Parameters: strain : input float e : input float v : input float height : input float ret_crack_strain_to_g : float References Routine is wrapper around Fortran routine crack_strain_to_g defined in file src/Utils/cracktools.f95.

quippy.crack.crack_find_tip_coordination(at, params[, n_tip_atoms, tip_indices]) Return $$x$$ coordinate of rightmost undercoordinated atom Parameters: at : Atoms object params : CrackParams object n_tip_atoms : in/output rank-0 array(int,'i'), optional tip_indices : in/output rank-1 array('i') with bounds (qp_n0), optional ret_crack_pos : rank-1 array('d') with bounds (2) References Routine is wrapper around Fortran routine crack_find_tip_coordination defined in file src/Utils/cracktools.f95.

quippy.crack.in_ellipse(d, ellipse) Return true if the point d is within an ellipse centred at the origin with the $$x$$, $$y$$, and $$z$$ radii specified in the vector ellipse. Parameters: d : input rank-1 array('d') with bounds (3) ellipse : input rank-1 array('d') with bounds (3) ret_in_ellipse : int References Routine is wrapper around Fortran routine in_ellipse defined in file src/Utils/cracktools.f95.

quippy.crack.crack_find_tip_local_energy(at, params) Parameters: at : Atoms object params : CrackParams object References Routine is wrapper around Fortran routine crack_find_tip_local_energy defined in file src/Utils/cracktools.f95.

quippy.crack.crack_update_selection(at, params) Parameters: at : Atoms object params : CrackParams object References Routine is wrapper around Fortran routine crack_update_selection defined in file src/Utils/cracktools.f95. 
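The unit conventions documented for crack_strain_to_g above (strain dimensionless, E in GPa, height in Å, G in J/m^2) are easy to get wrong, so a standalone reimplementation of the thin-strip formula is shown below. This is a hedged sketch of the formula as documented, with explicit SI conversions; it is not the quippy/Fortran wrapper itself.

```python
def thin_strip_G(strain, E_GPa, nu, height_A):
    """Thin-strip energy release rate  G = (1/2) * E/(1 - nu^2) * strain^2 * h.

    Inputs follow the conventions above: strain dimensionless, E in GPa,
    height in Angstrom; result in J/m^2.  Standalone sketch, not
    quippy.crack.crack_strain_to_g."""
    E_Pa = E_GPa * 1e9          # GPa -> Pa
    h_m = height_A * 1e-10      # Angstrom -> m
    return 0.5 * E_Pa / (1.0 - nu**2) * strain**2 * h_m

def thin_strip_strain(G, E_GPa, nu, height_A):
    """Inverse relation: strain needed to reach a given G (J/m^2)."""
    E_Pa = E_GPa * 1e9
    h_m = height_A * 1e-10
    return (2.0 * G * (1.0 - nu**2) / (E_Pa * h_m)) ** 0.5

# Example with illustrative numbers: a 150 A tall strip, E = 150 GPa, nu = 0.2
G = thin_strip_G(strain=0.01, E_GPa=150.0, nu=0.2, height_A=150.0)
print(G)                                        # ~0.117 J/m^2
print(thin_strip_strain(G, 150.0, 0.2, 150.0))  # recovers 0.01
```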
quippy.crack.crack_mm_calc_args(mm_args_str, extra_mm_args, extra_args_str) Parameters: mm_args_str : input string(len=-1) extra_mm_args : input string(len=-1) extra_args_str : input string(len=-1) ret_crack_mm_calc_args : string(len=1024) References Routine is wrapper around Fortran routine crack_mm_calc_args defined in file src/Utils/cracktools.f95. quippy.crack.crack_check_coordination_boundaries(at, params) Parameters: at : Atoms object params : CrackParams object References Routine is wrapper around Fortran routine crack_check_coordination_boundaries defined in file src/Utils/cracktools.f95. quippy.crack.select_ellipse(at, ellipse, ellipse_bias, list, c) Select atoms in ellipse centred on an atom and with given principal radii Parameters: at : Atoms object ellipse : input rank-1 array(‘d’) with bounds (3) Principal radii of ellipse in $$x$$, $$y$$ and $$z$$ directions ellipse_bias : input rank-1 array(‘d’) with bounds (3) Shift ellipse, positive values forward list : Table object On exit contains indexes of selected atoms, which are also reachable by nearest neighbour bond hopping starting from c c : input int Ellipse is centred around atom c. References Routine is wrapper around Fortran routine select_ellipse defined in file src/Utils/cracktools.f95. quippy.crack.crack_uniform_load(at, params, l_crack_pos, r_crack_pos, zone_width, n0, n1[, eps, g, apply_load]) Rescale atoms in slab, with atoms in front of either crack tip strained in y direction by strain and atoms behind crack tip rigidly shifted to keep top and bottom edges flat. A transition zone is created in between with linearly varying strain to avoid creation of defects. -------------------------------------- | | | | | | | | |___| | | | | | | | | | | |___| | | | | | | | | | 1 | 2 | 3 | 4 | 5 | -------------------------------------- :: ====== =========================================== ====== ====== =========================================== ====== 1 x < l_crack_pos - zone_width G 2 l_crack_pos - zone_width <= x < l_crack_pos G - 0 3 l_crack_pos < x < r_crack_pos 0 4 r_crack_pos < x <= r_crack_pos + zone_width 0 - G 5 x r_crack_pos + zone_width G ====== =========================================== ====== Parameters: at : Atoms object params : CrackParams object l_crack_pos : input float r_crack_pos : input float zone_width : input float eps : input float, optional g : input float, optional apply_load : input int, optional n0 : input int shape(qp_disp,0) n1 : input int shape(qp_disp,1) disp : rank-2 array(‘d’) with bounds (qp_n0,qp_n1) References Routine is wrapper around Fortran routine crack_uniform_load defined in file src/Utils/cracktools.f95. quippy.crack.crack_g_to_strain(g, e, v, height) Calculate $$epsilon$$ from $$G$$, inverse of above formula. Units are as the same as crack_strain_to_g Parameters: g : input float e : input float v : input float height : input float ret_crack_g_to_strain : float References Routine is wrapper around Fortran routine crack_g_to_strain defined in file src/Utils/cracktools.f95. quippy.crack.crack_k_to_g(k, e, v[, mode]) Convert from stress intensity factor $$K$$ to energy release rate $$G$$ Units: G (J/m:math:^2), E (GPa), K (Pa sqrt(m)) Parameters: k : input float e : input float v : input float mode : input string(len=-1), optional ret_g : float References Routine is wrapper around Fortran routine crack_k_to_g defined in file src/Utils/cracktools.f95. 
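For the crack_g_to_k / crack_k_to_g conversions above, the underlying physics is the textbook Irwin relation K = sqrt(E' G), with E' = E/(1 − ν²) in plane strain and E' = E in plane stress. The sketch below implements that relation with the unit conventions stated in the docs (G in J/m^2, E in GPa, K in Pa·sqrt(m)); it is a standalone illustration, and the exact handling of the optional 'mode' argument in the quippy wrappers should be checked against the library itself.

```python
from math import sqrt

def g_to_k(G, E_GPa, nu, plane_strain=True):
    """Textbook Irwin relation K = sqrt(E' * G).
    G in J/m^2, E in GPa, K returned in Pa*sqrt(m).  Illustrative sketch only."""
    E_Pa = E_GPa * 1e9
    E_eff = E_Pa / (1.0 - nu**2) if plane_strain else E_Pa
    return sqrt(E_eff * G)

def k_to_g(K, E_GPa, nu, plane_strain=True):
    """Inverse relation G = K^2 / E'."""
    E_Pa = E_GPa * 1e9
    E_eff = E_Pa / (1.0 - nu**2) if plane_strain else E_Pa
    return K**2 / E_eff

K = g_to_k(G=1.0, E_GPa=150.0, nu=0.2)     # ~0.395 MPa*sqrt(m) for G = 1 J/m^2
print(K / 1e6, k_to_g(K, 150.0, 0.2))      # round-trips back to G = 1.0
```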
quippy.crack.crack_update_connect(at, params) Update the connectivity of a crack slab, using the hysteretic version of calc_connect() so we can use relative cutoffs. calc_connect is only called if necessary (i.e. if the maximal atomic displacement is bigger than params.md(params.md_stanza)%recalc_connect_factor*params.md(params.md_stanza)%crust). The nn and changed_nn properties are updated each call, with the (cheaper) nearest neighbour calc_connect always being performed. Parameters: at : Atoms object params : CrackParams object References Routine is wrapper around Fortran routine crack_update_connect defined in file src/Utils/cracktools.f95.

quippy.crack.crack_find_surface_atoms(at) Parameters: at : Atoms object References Routine is wrapper around Fortran routine crack_find_surface_atoms defined in file src/Utils/cracktools.f95.

quippy.crack.crack_find_tip(at, params) Parameters: at : Atoms object params : CrackParams object crack_tips : Table object References Routine is wrapper around Fortran routine crack_find_tip defined in file src/Utils/cracktools.f95.

quippy.crack.crack_calc_load_field(crack_slab, params, classicalpot, load_method, overwrite_pos, mpi) Parameters: crack_slab : Atoms object params : CrackParams object classicalpot : Potential object load_method : input string(len=-1) overwrite_pos : input int mpi : MPI_context object References Routine is wrapper around Fortran routine crack_calc_load_field defined in file src/Utils/cracktools.f95.

quippy.crack.crack_apply_load_increment(at[, g_increment]) Increase the load by adding the load displacement field to the atomic positions. The routine recalculates the loading G and stores it in the atom parameter dictionary. Parameters: at : Atoms object g_increment : input float, optional References Routine is wrapper around Fortran routine crack_apply_load_increment defined in file src/Utils/cracktools.f95.

quippy.crack.crack_measure_g(at, e, v, orig_height) Measure the current height of the slab and calculate the energy release rate $$G$$ from the current and original heights and elastic constants $$E$$ and $$\nu$$, using the equation $$G = \frac{1}{2} \frac{E}{1-\nu^2} \frac{(h - h_0)^2}{h_0}$$, where $$h_0$$ is the original height and $$h$$ the new height. Otherwise, symbols and units are the same as in crack_strain_to_g. Parameters: at : Atoms object e : input float v : input float orig_height : input float ret_g : float References Routine is wrapper around Fortran routine crack_measure_g defined in file src/Utils/cracktools.f95.

quippy.crack.crack_k_field(at, k[, mode, sig, disp, do_sig, do_disp]) Calculate Irwin K-field stresses and/or displacements for all atoms in at. Atomic positions should be the original undistorted bulk crystal positions. YoungsModulus and PoissonRatio_yx parameters are extracted from at, along with CrackPos to specify the location of the crack tip. If neither sig nor disp are present, then properties are added to at if do_disp or do_sig are true. Stress is in 6 component Voigt notation: $$1=xx, 2=yy, 3=zz, 4=yz, 5=zx$$ and $$6=xy$$, and displacement is a Cartesian vector $$(u_x,u_y,u_z)$$. Parameters: at : Atoms object k : input float mode : input string(len=-1), optional sig : in/output rank-2 array('d') with bounds (qp_n0,qp_n1), optional disp : in/output rank-2 array('d') with bounds (qp_n2,qp_n3), optional do_sig : input int, optional do_disp : input int, optional References Routine is wrapper around Fortran routine crack_k_field defined in file src/Utils/cracktools.f95.
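crack_k_field above evaluates the Irwin mode-I near-tip field for every atom. For reference, here is a textbook-style numpy sketch of the in-plane singular field as a function of (r, θ); it shares the physics but not the conventions or signature of the quippy routine (which works in Voigt notation on an Atoms object), so treat it as an illustration only.

```python
import numpy as np

def modeI_near_tip_stress(K, r, theta, nu=0.5, plane_strain=True):
    """Textbook Irwin/Williams mode-I near-tip field (in-plane components).

    K and r in consistent units (e.g. Pa*sqrt(m) and m); theta measured
    anticlockwise from the crack line ahead of the tip.  Returns an array of
    shape r.shape + (3, 3).  Standalone sketch, not quippy.crack.crack_k_field."""
    r = np.asarray(r, dtype=float)
    theta = np.asarray(theta, dtype=float)
    amp = K / np.sqrt(2.0 * np.pi * r)
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    s3, c3 = np.sin(3.0 * theta / 2.0), np.cos(3.0 * theta / 2.0)
    sig = np.zeros(r.shape + (3, 3))
    sig[..., 0, 0] = amp * c * (1.0 - s * s3)            # sigma_xx
    sig[..., 1, 1] = amp * c * (1.0 + s * s3)            # sigma_yy
    sig[..., 0, 1] = sig[..., 1, 0] = amp * c * s * c3   # sigma_xy
    if plane_strain:
        sig[..., 2, 2] = nu * (sig[..., 0, 0] + sig[..., 1, 1])
    return sig

# Directly ahead of the tip (theta = 0) the field reduces to K / sqrt(2*pi*r):
sig = modeI_near_tip_stress(K=1e6, r=np.array([1e-3]), theta=np.array([0.0]))
print(sig[0, 1, 1], 1e6 / np.sqrt(2 * np.pi * 1e-3))     # both ~1.26e7 Pa
```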
quippy.crack.crack_find_tip_percolation(at, params) Locate crack tips within at using a percolation algorithm. A grid with cells of side params.crack_tip_grid_size is initialised and populated with 1s in cells containing atoms and 0s where there are no atoms. The percolation is then seeded in the void at (0,0,0) for a double-ended crack or (-OrigWidth/2, 0, 0) for a single-ended crack, and then spreads between connected cells like a forest fire. A filter is used to remove local minima closer than params.crack_tip_min_separation cells from one another. The result is a Table with realsize=3 containing the coordinates of the crack tips detected. If a through-going crack is detected the result table will have size zero. Parameters: at : Atoms object params : CrackParams object crack_tips : Table object References Routine is wrapper around Fortran routine crack_find_tip_percolation defined in file src/Utils/cracktools.f95. quippy.crack.crack_update_selection_coordination(at, params) Update QM selection region for a crack configuration using the nn and changed_nn properties and the CrackPos parameter from the atoms structure, as well as the selection parameters in params. If update_embed is true then the embed region is updated, otherwise we simply recompute the fit region from the embed region. The value of num_directionality returned can be passed to adjustable_potential_init. Parameters: at : Atoms object params : CrackParams object References Routine is wrapper around Fortran routine crack_update_selection_coordination defined in file src/Utils/cracktools.f95. quippy.crack.crack_make_seed(crack_slab, params) Parameters: crack_slab : Atoms object params : CrackParams object References Routine is wrapper around Fortran routine crack_make_seed defined in file src/Utils/cracktools.f95. quippy.crack.crack_check_coordination(at, params, j[, y, x_boundaries, neigh_removed, at_for_connectivity]) Parameters: at : Atoms object params : CrackParams object j : input int y : in/output rank-0 array(float,’d’), optional x_boundaries : input int, optional neigh_removed : in/output rank-1 array(‘i’) with bounds (qp_n0), optional at_for_connectivity : Atoms object, optional References Routine is wrapper around Fortran routine crack_check_coordination defined in file src/Utils/cracktools.f95. quippy.crack.crack_print(*args, **kwargs) Print crack slab to XYZ file, using properties defined in ‘params%io_print_properties’ or all properties if ‘params%io_print_all_properties’ is true. Routine is wrapper around Fortran interface crack_print containing multiple routines: quippy.crack.crack_print(at, cio, params) Parameters: at (Atoms object) – cio (CInOutput object) – params (CrackParams object) – Routine is wrapper around Fortran routine crack_print_cio defined in file src/Utils/cracktools.f95. quippy.crack.crack_print(at, filename, params) Parameters: at (Atoms object) – filename (input string(len=-1)) – params (CrackParams object) – Routine is wrapper around Fortran routine crack_print_filename defined in file src/Utils/cracktools.f95. 
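The percolation search described for crack_find_tip_percolation above amounts to a forest-fire (flood-fill) spread through the empty cells of an occupancy grid seeded in the crack void. The sketch below illustrates only that spreading step on a toy 2D grid; the actual routine works on a 3D grid of side crack_tip_grid_size and additionally filters local minima by crack_tip_min_separation before returning a Table of tip coordinates.

```python
from collections import deque
import numpy as np

def flood_fill_void(occupancy, seed):
    """Forest-fire spread through empty cells of a 0/1 occupancy grid.

    Returns, for each reachable empty cell, its step distance from the seed
    (-1 for unreached or occupied cells).  Illustrative sketch of the spreading
    step only, not the QUIP crack-tip detection algorithm."""
    dist = np.full(occupancy.shape, -1, dtype=int)
    dist[seed] = 0
    queue = deque([seed])
    while queue:
        cell = queue.popleft()
        for axis in range(occupancy.ndim):
            for step in (-1, 1):
                nbr = list(cell)
                nbr[axis] += step
                nbr = tuple(nbr)
                inside = all(0 <= c < n for c, n in zip(nbr, occupancy.shape))
                if inside and occupancy[nbr] == 0 and dist[nbr] == -1:
                    dist[nbr] = dist[cell] + 1
                    queue.append(nbr)
    return dist

# Toy example: a horizontal slot (the "crack") in an otherwise filled grid,
# open at the left edge where the fill is seeded.
grid = np.ones((5, 9), dtype=int)
grid[2, :5] = 0
print(flood_fill_void(grid, (2, 0)))   # distances 0..4 along the slot, -1 elsewhere
```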
quippy.crack.crack_strain_energy_release_rate(at, bulk=None, f_min=0.8, f_max=0.9, stem=None, avg_pos=False)[source] Compute strain energy release rate G from elastic potential energy in a strip quippy.crack.crack_strain(at)[source] Returns strain of crack specimen quippy.crack.crack_find_griffith_load(a, b, pot, relax=False)[source] Given two configurations (a, b) which differ by one broken bond, find the Griffith load, that is the load at which a and b have the same energy accorinding to the model potential pot. Returns (strain, G, a_rescaled, b_rescaled). quippy.crack.stress_intensity_factor(at)[source] Returns stress instensity factor for mode I loading (K_I) in MPa sqrt(m) quippy.crack.make_crack_advance_map(atoms, tol=0.001)[source] Find mapping from atom indices to the index of atom one step ahead of them in the crack propagation direction (i.e. along +x). Requires ‘LatticeConstant’, ‘CleavagePlane’, and ‘CrackFront’ to be available in atoms.info dictionary. Returns integer array of shape (len(atoms),), and also adds a new array ‘advance_map’ into the Atoms object. quippy.crack.find_crack_tip_coordination(atoms, edge_tol=10.0, strip_height=30.0, nneightol=1.3)[source] Return position of crack tip in atoms, based on atomic coordination. If atoms does not contain an advance_map property, then make_crack_advance_map() is called to generate the map. Parameters: atoms : :class:~.Atoms’ object The Atoms object containing the crack slab. edge_tol : float Distance from edge of system within which to exclude undercoodinated atoms. strip_height : float Height of strip along centre of slab in which to look for the track. nneightol : float Nearest neighbour tolerance, as a fraction of sum of covalent radii of atomic species. crack_pos : array x, y, and z coordinates of the crack tip. Also set in CrackPos in atoms.info dictionary. tip_atoms : array Indices of atoms near the tip Also set in crack_tip property. quippy.crack.irwin_modeI_crack_tip_stress_field(K, r, t, xy_only=True, nu=0.5, stress_state='plane strain')[source] Compute Irwin singular crack tip stress field Parameters: K : float Mode I stress intensity factor. Units should match units of r. r : array_like Radial distances from crack tip. Can be a multidimensional array to evaluate stress field on a grid. t : array_like Angles from horzontal line y=0 ahead of crack tip, measured anticlockwise. Should have same shape as r. xy_only : bool If True (default) only xx, yy, xy and yx components will be set. nu : float Poisson ratio. Used only when xy_only=False, to determine zz stresses stress_state : str One of”plane stress” or “plane strain”. Used if xyz_only=False to determine zz stresses. sigma : array with shape r.shape + (3,3) quippy.crack.strain_to_G(strain, E, nu, orig_height)[source] Convert from strain to energy release rate G for thin strip geometry Parameters: strain : float Dimensionless ratio (current_height - orig_height)/orig_height E : float Young’s modulus relevant for a pull in y direction sigma_yy/eps_yy nu : float Poission ratio -eps_yy/eps_xx orig_height : float Unstrained height of slab G : float Energy release rate in units consistent with input (i.e. 
in eV/A**2 if eV/A/fs units used) quippy.crack.G_to_strain(G, E, nu, orig_height)[source] Convert from energy release rate G to strain for thin strip geometry Parameters: G : float Energy release rate in units consistent with E and orig_height E : float Young’s modulus relevant for a pull in y direction sigma_yy/eps_yy nu : float Poission ratio -eps_yy/eps_xx orig_height : float Unstrained height of slab strain : float Dimensionless ratio (current_height - orig_height)/orig_height quippy.crack.get_strain(atoms)[source] Return the current strain on thin strip configuration atoms Requires unstrained height of slab to be stored as OrigHeight key in atoms.info dictionary. Also updates value stored in atoms.info. quippy.crack.get_energy_release_rate(atoms)[source] Return the current energy release rate G for atoms Also updates value stored in atoms.info dictionary. quippy.crack.get_stress_intensity_factor(atoms)[source] Return stress intensity factor K_I Also updates value stored in atoms.info dictionary. quippy.crack.fit_crack_stress_field(atoms, r_range=(0.0, 50.0), initial_params=None, fix_params=None, sigma=None, avg_sigma=None, avg_decay=0.005, calc=None, verbose=False)[source] Perform a least squares fit of near-tip stress field to Irwin solution Stresses on the atoms are fit to the Irwin K-field singular crack tip solution, allowingthe crack position, stress intensity factor and far-field stress components to vary during the fit. Parameters: atoms : Atoms object Crack system. For the initial fit, the following keys are used from the info dictionary: YoungsModulus PossionRatio_yx G — current energy release rate strain — current applied strain CrackPos — initial guess for crack tip position The initial guesses for the stress intensity factor K are far-field stress sigma0 are computed from YoungsModulus, PoissonRatio_yx, G and strain, assuming plane strain in thin strip boundary conditions. On exit, new K, sigma0 and CrackPos entries are set in the info dictionary. These values are then used as starting guesses for subsequent fits. r_range : sequence of two floats, optional If present, restrict the stress fit to an annular region r_range[0] <= r < r_range[1], centred on the previous crack position (from the CrackPos entry in atoms.info). If r_range is None, fit is carried out for all atoms. initial_params : dict Names and initial values of parameters. Missing initial values are guessed from Atoms object. fix_params : dict Names and values of parameters to fix during the fit, e.g. {y0: 0.0} to constrain the fit to the line y=0 sigma : None or array with shape (len(atoms), 3, 3) Explicitly provide the per-atom stresses. Avoids calling Atoms’ calculators get_stresses() method. avg_sigma : None or array with shape (len(atoms), 3, 3) If present, use this array to accumulate the time-averaged stress field. Useful when processing a trajectory. avg_decay : real Factor by which average stress is attenuated at each step. Should be set to dt/tau where dt is MD time-step and tau is a characteristic averaging time. calc : Calculator object, optional If present, override the calculator used to compute stresses on the atoms. Default is atoms.get_calculator. To use the atom resolved stress tensor pass an instance of the AtomResolvedStressField class. verbose : bool, optional If set to True, print additional information about the fit. params : dict with keys [K, x0, y0, sxx0, syy0, sxy0] Fitted parameters, in a form suitable for passin IrwinStressField constructor. 
These are the stress intensity factor K, the centre of the stress field (x0, y0), and the far field contribution to the stress (sxx0, syy0, sxy0). quippy.crack.find_crack_tip_stress_field(atoms, r_range=None, initial_params=None, fix_params=None, sigma=None, avg_sigma=None, avg_decay=0.005, calc=None)[source] Find the position of the crack tip by fitting to the Irwin K-field solution Fit is carried out using fit_crack_stress_field(), and parameters have the same meaning as there. quippy.crack.plot_stress_fields(atoms, r_range=None, initial_params=None, fix_params=None, sigma=None, avg_sigma=None, avg_decay=0.005, calc=None)[source] Fit and plot atomistic and continuum stress fields Firstly a fit to the Irwin K-field solution is carried out using fit_crack_stress_field(), and parameters have the same meaning as for that function. Then plots of the $$\sigma_{xx}$$, $$\sigma_{yy}$$, $$\sigma_{xy}$$ fields are produced for atomistic and continuum cases, and for the residual error after fitting. quippy.crack.thin_strip_displacement_y(x, y, strain, a, b)[source] Return vertical displacement ramp used to apply initial strain to slab Strain is increased from 0 to strain over distance $$a <= x <= b$$. Region $$x < a$$ is rigidly shifted up/down by strain*height/2. Here is an example of how to use this function on an artificial 2D square atomic lattice. The positions are plotted before (left) and after (right) applying the displacement, and the horizontal and vertical lines show the strain (red), a (green) and b (blue) parameters. import matplotlib.pyplot as plt import numpy as np w = 1; h = 1; strain = 0.1; a = -0.5; b = 0.0 x = np.linspace(-w, w, 20) y = np.linspace(-h, h, 20) X, Y = np.meshgrid(x, y) u_y = thin_strip_displacement_y(X, Y, strain, a, b) for i, disp in enumerate([0, u_y]): plt.subplot(1,2,i+1) plt.scatter(X, Y + disp, c='k', s=5) for y in [-h, h]: plt.axhline(y, color='r', linewidth=2, linestyle='dashed') plt.axhline(y*(1+strain), color='r', linewidth=2) for x, c in zip([a, b], ['g', 'b']): plt.axvline(x, color=c, linewidth=2) Parameters: x : array y : array Atomic positions in unstrained slab, centered on origin x=0,y=0 strain : float Far field strain to apply a : float x coordinate for beginning of strain ramp b : float x coordinate for end of strain ramp quippy.crack.print_crack_system`(crack_direction, cleavage_plane, crack_front)[source] Pretty printing of crack crystallographic coordinate system Specified by Miller indices for crack_direction (x), cleavage_plane (y) and crack_front (z), each of which should be a sequence of three floats
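The Irwin mode-I field and the strain ↔ G conversions documented above both have textbook closed forms; the sketch below (plain NumPy, not the quippy implementation itself) writes them out so the parameter conventions are easier to follow. The in-plane stress components are the standard Westergaard/Irwin expressions, and the thin-strip relation G = E·ε²·h / (2(1 − ν²)) is the usual plane-strain Griffith estimate assumed here; argument names mirror the docstrings above, and the function names are mine.

```python
import numpy as np

def irwin_modeI_stresses(K, r, t):
    """Standard Irwin mode-I in-plane crack tip stresses (xx, yy, xy).

    K : stress intensity factor, r : radial distance(s) from the tip,
    t : angle(s) from the crack plane, measured anticlockwise, same shape as r.
    Returns an array of shape r.shape + (3, 3) with only the in-plane
    components filled in (cf. xy_only=True above)."""
    r, t = np.asarray(r, float), np.asarray(t, float)
    amp = K / np.sqrt(2.0 * np.pi * r)
    c, s = np.cos(t / 2.0), np.sin(t / 2.0)
    c3, s3 = np.cos(3.0 * t / 2.0), np.sin(3.0 * t / 2.0)
    sigma = np.zeros(r.shape + (3, 3))
    sigma[..., 0, 0] = amp * c * (1.0 - s * s3)              # sigma_xx
    sigma[..., 1, 1] = amp * c * (1.0 + s * s3)              # sigma_yy
    sigma[..., 0, 1] = sigma[..., 1, 0] = amp * c * s * c3   # sigma_xy
    return sigma

def strain_to_G_sketch(strain, E, nu, orig_height):
    """Assumed thin-strip relation: elastic energy stored per unit area of
    strip far ahead of the tip, under plane strain."""
    return 0.5 * E * strain**2 * orig_height / (1.0 - nu**2)

def G_to_strain_sketch(G, E, nu, orig_height):
    """Inverse of the thin-strip relation above."""
    return np.sqrt(2.0 * G * (1.0 - nu**2) / (E * orig_height))

# round-trip consistency check with arbitrary (made-up) numbers
eps = 1e-3
G = strain_to_G_sketch(eps, E=100.0, nu=0.25, orig_height=300.0)
assert np.isclose(G_to_strain_sketch(G, 100.0, 0.25, 300.0), eps)
```

For plane strain the out-of-plane component would follow as sigma_zz = nu * (sigma_xx + sigma_yy), which is presumably what the nu and stress_state arguments of irwin_modeI_crack_tip_stress_field control when xy_only=False.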
2019-02-21 14:39:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32287317514419556, "perplexity": 12146.441648366012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247504790.66/warc/CC-MAIN-20190221132217-20190221154217-00261.warc.gz"}
https://www.love2d.org/forums/viewtopic.php?f=5&t=81098&hilit=particle&sid=3ce31a9aae4d401ac49e950dbe57aeb1
## APE (Another Particle Editor) for LÖVE2D cval Citizen Posts: 58 Joined: Sun Apr 20, 2014 2:15 pm Location: Ukraine ### APE (Another Particle Editor) for LÖVE2D Hi everyone! I would like to share a particle system editor with you (and to say hi to the community!), which also has somewhat framework-ish minimalistic interface system, which, i hope, someone will be able to use if one will want to. Features: - runtime images reloading (.love file needs to be unpacked somewhere), so you dont have to restart the code after creating a new texture - code generation (it will put your emitter to clipboard so you can then paste it into your awesome project) - another graphical user interface framework! (i've tried to comment the code and how to use it, do ask if interested!) Usage: Just launch the code like you usually do. Every numerical value (spin element with +/- sign) is changed with mouse wheel, pressing left shift, left ctrl or left alt increases or decreases value change step for you to get desired values quicker. Clicking on texture list will change particle emitter texture, and "Reload list" button reloads textures from "particles" directory and will load any new files if any. After checking "Use quads" you will be able to add, remove and change system's quads, there will also be a guide drawn on a texture to see which portion of an atlas you have as quad's viewport. Changing spread, direction and area spread will also show guide shapes to clearly see what is going on with the emitter. To the bottom left there is a page controller which you can use to create another emitter with its separate control interface. Clicking on "Code" button will generate emitter code and put it into cliboard (a word of warning to everyone who holds important stuff there) for you to paste it somewhere else. Hope you will find it usefull! PS: list elements arent showing yet that they are scrollable if they have more elements than they can display. Of course, element locations are of my taste, as well as the way this tool works, but i find it somewhat "robust" if you just open a thing and with a few moves create something you can already use rightaway. Creating an explosion particle example video screenshot ape.png (181.78 KiB) Viewed 4578 times Edit: changed video link to appropriate one Edit2: Git repository https://github.com/mkdxdx/APE Attachments ape.love executable Last edited by cval on Sun Nov 22, 2015 11:57 am, edited 2 times in total. bobbyjones Party member Posts: 664 Joined: Sat Apr 26, 2014 7:46 pm ### Re: APE (Another Particle Editor) for LÖVE2D Is the code on github? And with what license? cval Citizen Posts: 58 Joined: Sun Apr 20, 2014 2:15 pm Location: Ukraine ### Re: APE (Another Particle Editor) for LÖVE2D Included Git link in first post, license is MIT. I'm actually pretty new to this, as well as to licensing my code bobbyjones Party member Posts: 664 Joined: Sat Apr 26, 2014 7:46 pm ### Re: APE (Another Particle Editor) for LÖVE2D I watched the video, it seems really cool, and i tried using it and well I don't have a mouse. And it is dependent on the scroll wheel I assume. Which I dont have. cval Citizen Posts: 58 Joined: Sun Apr 20, 2014 2:15 pm Location: Ukraine ### Re: APE (Another Particle Editor) for LÖVE2D Updated some code. Now spin edits (those which were only changeable with scrollwheel) are changing values by clicking or holding mouse button. Left half of element decreases and right half increases value, step modifier also counts. 
skydash Prole Posts: 1 Joined: Mon Jan 04, 2016 9:41 pm ### Re: APE (Another Particle Editor) for LÖVE2D Hello, I am a games assets maker. And I am a newbie for LÖVE2D. I assume I got error because SetStencil was removed in LÖVE 0.10.0. Do you have a plan to update it ? cval Citizen Posts: 58 Joined: Sun Apr 20, 2014 2:15 pm Location: Ukraine ### Re: APE (Another Particle Editor) for LÖVE2D skydash wrote: Do you have a plan to update it ? Been away from workstation for a while. I've updated UI code to work with 0.10, i hope i've corrected everything, so please tell me if i missed anything there! Updated on git (first post) and attached new .love file. Attachments ape.love murks Party member Posts: 182 Joined: Tue Jun 03, 2014 4:18 pm ### Re: APE (Another Particle Editor) for LÖVE2D Thanks, I just tried it. A couple of gui elements seem to be broken, in the wrong place or not reacting but for the most part it works. Some of the math is also broken, some numbers can not be reset to their initial value. No idea what the code button is supposed to do, it does nothing. Here I tried to do clouds, you can see some of the glitches. Attachments smell_cloud.png (137.34 KiB) Viewed 3565 times Muzz Citizen Posts: 54 Joined: Sun Jun 28, 2015 1:24 pm ### Re: APE (Another Particle Editor) for LÖVE2D - runtime images reloading (.love file needs to be unpacked somewhere), so you dont have to restart the code after creating a new texture It's possible to use folders outside of the love file which i do for colour constructor, it requires a little bit of ffi code, but for editors where you want user editable stuff, it' great. Code: Select all function changeDirectory() local ffi = require("ffi") ffi.cdef[[ int PHYSFS_mount(const char *newDir, const char *mountPoint, int appendToPath); ]]; ffi.cdef[[ int PHYSFS_setWriteDir(const char *newDir); ]] local liblove = 0 docsdir = " " liblove = ffi.os == "Windows" and ffi.load("love") or ffi.C docsdir = love.filesystem.getSourceBaseDirectory() liblove.PHYSFS_setWriteDir(docsdir) liblove.PHYSFS_mount(docsdir, nil, 0) end murks Party member Posts: 182 Joined: Tue Jun 03, 2014 4:18 pm ### Re: APE (Another Particle Editor) for LÖVE2D Oh, maybe also a feature request: - Add the possibility to enter numbers. Pushing the buttons until you get what you want is OK, but if you know what you want, just entering the number is a lot easier. - In the 'Sizes' section I would prefer to edit the sizes directly rather than use the button next to 'Size'. ### Who is online Users browsing this forum: No registered users and 5 guests
2018-02-19 02:26:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.244737446308136, "perplexity": 6202.403849562717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812306.16/warc/CC-MAIN-20180219012716-20180219032716-00680.warc.gz"}
https://electronics.stackexchange.com/questions/38896/how-to-easily-find-or-create-parts-for-eagle-schematic-board-layout/38897
# How to easily find or create parts for Eagle schematic/board layout For Eagle CAD software, during schematic or board layout, how can I search for parts/footprints already created by other people out there, to make my life easier? And if I'm still unable to find what I want, how can I create my own parts? (Note: This question is intended as a reference for future readers, hence I am both asking the question as well as providing my own answer below based on the things I know. Perhaps others can chime in as well.) For any serious work, you won't want to get parts made by someone else because they won't adhere to your conventions. I always make my own parts, which is really not that difficult. I have certain requirements for parts, like attributes for automatic BOM generation, and text at particular sizes and and layers for the silkscreen, the assembly drawing, etc. Others aren't likely to make parts just the way I want them, and to inspect and vet someone else's parts would take at least as long as just making my own in the first place. When you do this for business and your reputation depends on it, you have to be picky. However, hobbyists can be more lax. Others are welcome to use my parts and a bunch of other Eagle-related utilities I have developed over the years. Go to my downloads page and install the Eagle Tools release. This contains a bunch of libraries with parts, but also various ULPs, scripts, and host programs I use around Eagle. For example, there is a whole system for genering the BOM from the schematic and board, and then creating the labels for the kit. Start with the CSV_BOM documentation file in the DOC directory and follow the cookie crumbs. To give you some idea of how the BOM generation system works, here is most of the EAGLE_ATTR documentation file: This document describes the Embed Inc conventions for using optional attributes in Eagle, which were first made available in version 5. In previous versions a part could only have a few fixed attributes built into Eagle, such as VALUE and NAME. In version 5 these fixed attributes still exist but arbitrary additional attributes can be created by the user. This document specifies certain attributes that are expected by parts of the Embed Inc system, mostly to aid in automatic bill of materials (BOM) generation. The process of generating a BOM from a eagle board or schematic is desribed in the CSV_BOM program documentation file. The Eagle optional attributes that have special meaning within the Embed Inc system are: MANUF Manufacturer:partnum; manufacturer:partnum; ... The PARTNUM fields and their leading colons may be omitted, but is a bad idea unless only a single manufacturer is listed. PARTNUM Generic part number or part number within single manufacturer. SUPPLIER Supplier:partnum; supplier:partnum; ... The PARTNUM fields and their leading colons may be omitted, but is a bad idea unless only a single supplier is listed. BOM Whether this part should be included on the BOM. Some "parts" are only features on the board, like pogo pin pads for example. These should not be listed on the BOM because they do not need to be bought and will not be installed. Supported values are: YES - Include this part in the BOM. This is the default if the part has a package. NO - Do not include this part in the BOM. This is the default if the part does not have a package. VALSTAT Indicates how the VALUE attribute is used. The choices are: VAL - Normal part value, like the resistance of a resistor. 
The part value will be listed on the BOM and used to distinguish different parts. For example, a 10K ohm resistor is a different part than a 330 ohm resistor. PARTNUM - The part number. The value field will be shown in the BOM and used to distinguish different parts, like VAL. However, the part number field will be set to VALUE unless the part number is otherwise explicitly set. VALSTAT PARTNUM is for generic library devices where the value field is used to show some or all of the part number on the schematic. For example, the library might contain a generic 14 pin opamp device, and the value set to LM324 to show the type of opamp on the schematic. In this example, VALUE is only set to the generic part number without package type, temperature grade, etc. In this case the PARTNUM attribute should be used to specify the exact part number, but VALSTAT should still be set to PARTNUM. LABEL - Label intended for the silkscreen. The value field will not be transferred to the BOM and will not be used to differentiate parts. This might be used, for example, to label a LED on the board. Different LEDs might be labeled "Power" and "Error", but they are the same physical part and should be listed on the same BOM entry. SUBST Sets the substutions allowed field for the part on the BOM. Valid values are "YES" and "NO". The default is YES if SUBST does not exist or is empty. DESC Explicit description string for the BOM. By default, the BOM description is derived from the library name and the device name within that library. If the DESC attribute is present and not empty, its contents will override that default. DVAL Detailed part value. If present and not empty, this field overrides the part value string on the BOM and will be used to differentiate parts. DVAL is always assumed to be the true part value, so is not effected by VALSTAT. The purpose of DVAL is to provide more information than reasonable to show on the schematic. Generally the standard VALUE attribute will be shown on the schematic with DVAL shown on the BOM. (1) Finding existing Eagle parts already created by other people out there: I recommend the following four sources ( aside from Googling "partname Eagle" ;-) ): A WORD OF CAUTION (courtesy of user @Grant)... When using others' libraries or parts, first compare it to the datasheet, and/or print it out on paper for comparison to actual part. There are some untested and/or incorrectly dimensions footprints out there. (2) Creating your own parts: It is not that hard at all to make Eagle parts for most things; frankly, if you are able to construct a schematic and a layout, making parts yourself will be hardly a step beyond. I have four pointers: • For learning part creation, I suggest you start with these three tutorials; the creator spent the effort to make them very beginner-friendly: Tutorial #12, Tutorial #13, and Tutorial #14 on this Eagle tutorial-page. • Start learning with simple examples such as a resistor, a DIP part, or even an SOIC-8 part to understand how it works; the clarity of understanding will then readily carry over to more complex parts. • If the part has a footprint that is a common one (such as SOIC-8), just copy an existing part's footprint. • Follow the manufacturer-recommended layout: Nearly all parts' datasheets prescribe dimensions for a recommended footprints/layout for the part; if you follow those precisely, life will be easier and you'll have a part ready in no time. 
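Going back to the MANUF/SUPPLIER attribute convention quoted above ("name:partnum; name:partnum; ..."): a BOM script can split such a value mechanically. The helper below is only a hypothetical illustration of parsing that format — it is not part of the Embed Inc tools, and the part numbers in the example are arbitrary; Python is used just for brevity.

```python
def parse_vendor_attr(value):
    """Split an Eagle MANUF/SUPPLIER attribute of the form
    'name:partnum; name:partnum; ...' into (name, partnum) pairs.
    A missing part number is returned as None."""
    pairs = []
    for entry in value.split(';'):
        entry = entry.strip()
        if not entry:
            continue
        name, _, partnum = entry.partition(':')
        pairs.append((name.strip(), partnum.strip() or None))
    return pairs

print(parse_vendor_attr("Microchip:PIC18F2550-I/SO; OnSemi:MC33063ADG"))
# [('Microchip', 'PIC18F2550-I/SO'), ('OnSemi', 'MC33063ADG')]
```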
• One thing I'll warn about using random people's eagle libraries - be sure to compare it to the datasheet, or print it out on paper and compare to the actual part before you get your board made. There are some out there that haven't been tested on an actual PCB and have incorrect footprints or don't have the correct clearances marked. – Grant Aug 27 '12 at 12:58 • @Grant: Your pointer has been added to the answer above. – boardbite Aug 27 '12 at 13:29 • @boardbite It looks like eSawDust.com is no more. That's unfortunate, because it worked really well for me. – Nick Alexeev Aug 29 '14 at 21:52 I built a crawler to help with this problem. I totally agree you shouldn't use parts found on the public internet without careful inspection, but I find it saves time to start with something that someone else has built, and I often find they are more meticulous than I am so I have a better starting point. You can search for and download parts that my crawler has found here: http://www.schematicpal.com No charge, just give feedback at the feedback link if you have any problems. -Jim (this isn't necessarily an answer but it's too big for a comment, IMO) When I first started using Eagle, I quickly came to the conclusion that the libraries are old and not reliable. I took a good chunk of time and revamped a lot of what I cared about most.. which is basic resistors and capacitors. Creating the parts is easy... most of the work you need to do is in creating accurate packages and attributing parts properly. Here is my secret weapon, though: Mentor Graphic's LP Wizard This bad boy has saved me so much damn time drawing accurate packages for basic SMD footprints. Here's the skinny on why I love this tool so much: The footprints it gives you are based on IPC-7351 or the appropriate JEDEC standard While going with a manufacturer's recommended SMD land pattern is usually preferable in my eyes, for things like passive SMDs, this is great because it's a source of truth. If I want to create packages for 0402 through 1206, and I use this tool for all the dimensions, I know I'm going to have consistent scaling of things like pad spacing, courtyards, etc. One part won't have drastically different features and come out looking weird on the actual board. Anyone who has ever taken a look at the stock Eagle libraries can attest that there isn't much consistency. Using the tool, which in turn is based on these standards, is a great way to build a standardized library of parts. For basic footprints, you get different sizing versions to tweak for space/reliability I believe this is inherent to the standard, but for basic passive SMD footprints like your 0402, 0603, 0805, etc, LP Wizard will give you the option to switch between Least, Nominal and Most versions. These tweak the actual pad sizing to yield you a smaller package or a bigger package. A bigger package might be preferable to ensure bigger solder fillets for increased reliability while smaller pads might be better for creating a super dense board. Either way, these are footprints that have been tested and agreed upon to serve well in their intended application. To me, that's a big time saver and awesome.
2020-10-20 00:28:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2905661463737488, "perplexity": 2033.9727167227747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107867463.6/warc/CC-MAIN-20201019232613-20201020022613-00054.warc.gz"}
https://datascience.stackexchange.com/questions/71688/when-to-use-k-medoids-over-k-means-and-vice-versa
# When to use k-medoids over k-means and vice versa? I had someone ask me about k-medoids at work and don't know about the performance of this algorithm over other clustering algorithms (namely k-means, as it is most similar to it). In this case, it was recommended for use on taxonomic data (i.e. bacterial/viral species/strains), but I do not know why this is better. The time complexity of k-medoids is $$O(k * (n-k)^2)$$. 1. Is the time complexity of a comparable k-means algorithm the same? 2. When would you use one or the other? 3. What qualities does one look for to use k-medoids? 4. What are the differences in the output? • k-means relies, as its name implies, on computing the mean of multiple data points. Therefore, you should not use it when averaging different datapoints does not make sense. An example of such a scenario is time series. – noe Apr 3 '20 at 17:56 # 1) Time complexity of k-means As explained in this post: k-means is an NP-hard problem. However, running a fixed number $$t$$ of iterations of the standard algorithm takes only $$O(t*k*n*d)$$, for $$n$$ (d-dimensional) points, where $$k$$ is the number of centroids (or clusters). This is what practical implementations do (often with random restarts between the iterations). # 2) When would you use one over the other? As mentioned in this Wikipedia article, k-medoids is less sensitive to outliers and noise because of the function it minimizes. It is more robust to noise and outliers as compared to k-means because it minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances. Also, k-medoids can use a variety of similarity measures, whereas k-means is limited to (squared) Euclidean distance. Excellent explanation [here](https://stats.stackexchange.com/a/81496/279276). # 3) What qualities does one look for to use k-medoids? I would recommend using it whenever Euclidean distance does not make sense in your data. If Euclidean distance does not make sense (e.g. unrelated categorical variables: "has wings", "# of legs"), minimizing the sum of squared Euclidean distances probably won't either. # 4) What are the differences in the output The main difference is that medoids (the equivalent of centroids in k-means) belong to the data points. You will never have a medoid that is somewhere between points. Instead, it will be superimposed on an existing point. This post shows it clearly. It makes sense, especially for a categorical feature (# of legs), not to have a cluster center at 3.347 legs. Hope this helps.
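To make the "medoids are actual data points" point concrete, here is a small self-contained NumPy sketch of the alternating (Voronoi-iteration) k-medoids heuristic — a simplified stand-in for PAM, not a production implementation. It takes an arbitrary precomputed dissimilarity matrix, which is exactly the flexibility discussed in 2) above.

```python
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    """Alternating k-medoids on a precomputed dissimilarity matrix D (n x n).
    Returns (medoid_indices, labels); medoids are always indices of real points."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)      # assign each point to its nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                # the new medoid is the member minimising total dissimilarity within the cluster
                new_medoids[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# toy example with Manhattan dissimilarities (k-means could not use these directly)
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]])
D = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=-1)
print(k_medoids(D, k=2))
```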
2021-09-25 18:46:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6333215832710266, "perplexity": 858.8974715410791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057733.53/warc/CC-MAIN-20210925172649-20210925202649-00464.warc.gz"}
https://codereview.stackexchange.com/questions/270307/hackerrank-array-manipulation-challenge-using-microsoft-concurrent-extensions
# HackerRank "Array Manipulation" challenge (using Microsoft concurrent extensions) I've been practising my coding skills because I have an interview coming up. The HackerRank challenge has 16 test cases; the code passes 9 of them and the other 7 time out. If you go to HackerRank Problems Data Structures you might be able to find Array Manipulation under Hard problems. I can't seem to provide a direct link. Using Microsoft specific parallel extensions in C++ I have gotten execution time for test case 4 down from hours down to 89 seconds when built for release. Test case 4 is one of the test cases that times out on Hacker Rank. I might be able to squeeze another couple of seconds out by using parallel processing in the merge function as well. Things I don't like about my solution: • It is totally brute force • I can't seem to run parallel on HackerRank; maybe I need to try OpenMP. # Program Challenge Statement Starting with a 1-indexed array of zeros and a list of operations, for each operation add a value to each the array element between two given indices, inclusive. Once all operations have been performed, return the maximum value in the array. Example Queries are interpreted as follows: a b k 1 5 3 4 8 7 6 9 1 Add the values of k between the indices a and b inclusive: index-> 1 2 3 4 5 6 7 8 9 10 [0,0,0, 0, 0,0,0,0,0, 0] [3,3,3, 3, 3,0,0,0,0, 0] [3,3,3,10,10,7,7,7,0, 0] [3,3,3,10,10,8,8,8,1, 0] The largest value is 10 after all operations are performed. Function Description Complete the function arrayManipulation. arrayManipulation has the following parameters: • int n - the number of elements in the array • int queries[q][3] - a two dimensional array of queries where each queries[i] contains three integers, a, b, and k. Returns • int - the maximum value in the resultant array Constraints - 3 < n < 10⁷ - 1 < m < 2 * 10⁵ - 1 < a < b < n - 0 < k < 10⁹ Sample Input 5 3 1 2 100 2 5 100 3 4 100 Sample Output 200 # Environment Dell Precision 7740 Processor Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz 2.59 GHz Installed RAM 64.0 GB (63.8 GB usable) System type 64-bit operating system, x64-based processor Edition Windows 10 Pro Version 20H2 OS build 19042.1348 Visual Studio 2019. # Test Cases 5 3 1 2 100 2 5 100 3 4 100 10 3 1 5 3 4 8 7 6 9 1 10 4 2 6 8 3 5 7 1 8 1 5 9 15 ## Test Case 4 This is the first 5 lines only, the complete test case is 100,002 lines. You can download the complete test case from my GitHub repository. 10000000 100000 1400906 9889280 90378 6581237 9872072 87106 4386373 9779851 52422 This test case really shows the scope of the problem, 100,000 outer loops, some inner loops with more the 7 million executions. 
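For reference, a direct brute-force transcription of the statement above in Python (far too slow for the large test cases, as the timings below confirm, but handy for checking small inputs):

```python
def array_manipulation_bruteforce(n, queries):
    """Direct implementation of the problem statement: O(m * n) in the worst case."""
    arr = [0] * (n + 1)          # 1-indexed; index 0 is unused
    for a, b, k in queries:
        for i in range(a, b + 1):
            arr[i] += k
    return max(arr)

print(array_manipulation_bruteforce(5, [(1, 2, 100), (2, 5, 100), (3, 4, 100)]))  # 200
print(array_manipulation_bruteforce(10, [(1, 5, 3), (4, 8, 7), (6, 9, 1)]))       # 10
```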
## Test Output PS C:\Users\PaulC\Documents\ProjectsNfwsi\CodeReview\HackerRankArrayManip\Release> HackerRankArrayManip.exe How many test cases do you want to run?4 Test Case 1 Test Case 1 File read time 0.271 milliseconds Test Case 1 result is 200 Execution time 2.0378 milliseconds Test Case 2 Test Case 2 File read time 0.1541 milliseconds Test Case 2 result is 10 Execution time 0.0132 milliseconds Test Case 3 Test Case 3 File read time 0.1068 milliseconds Test Case 3 result is 31 Execution time 0.0985 milliseconds Test Case 4 Test Case 4 File read time 77.3068 milliseconds Test Case 4 result is 2497169732 Execution time 89.1439 Seconds PS C:\Users\PaulC\Documents\ProjectsNfwsi\CodeReview\HackerRankArrayManip\Release> # C++ Source File #include <PPL.h> #include<algorithm> #include <chrono> #include<execution> #include <fstream> #include <iostream> #include <mutex> #include <string> #include <vector> constexpr int MAX_TEST_CASE = 4; /* * Infrastructure to replace HackerRank input functions */ std::vector<int> convertInputLineToIntVector(std::string query_string) { constexpr int query_size = 3; std::vector<int> query; std::string::iterator intStart = query_string.begin(); std::string::iterator intEnd; for (int i = 0; i < query_size; i++) { intEnd = std::find(intStart, query_string.end(), ' '); int pos = intEnd - query_string.begin(); std::string tempInt(intStart, intEnd); query.push_back(stoi(tempInt)); if (intEnd < query_string.end()) { intStart = query_string.begin() + pos + 1; } } return query; } std::vector<std::vector<int>> getIntVectors(std::ifstream* inFile) { std::vector<std::vector<int>> inputVector; std::string string_vector_count; getline(*inFile, string_vector_count); int strings_count = stoi(string_vector_count); for (int i = 0; i < strings_count; i++) { std::string string_item; getline(*inFile, string_item); inputVector.push_back(convertInputLineToIntVector(string_item)); } return inputVector; } int getInputLines(std::string inputFileName, int &vectorSize, std::vector<std::vector<int>>& queries) { std::string string_count_size; std::ifstream inFile(inputFileName); if (!inFile.is_open()) { std::cerr << "Can't open " << inputFileName << " for input.\n"; std::cout << "Can't open " << inputFileName << " for input.\n"; return EXIT_FAILURE; } getline(inFile, string_count_size); vectorSize = stoi(string_count_size); queries = getIntVectors(&inFile); return EXIT_SUCCESS; } void getTestCountAndFirstTestCase(int& testCount, int& firstTestCase) { do { std::cout << "How many test cases do you want to run?"; std::cin >> testCount; if (testCount < 0 || testCount > MAX_TEST_CASE) { std::cerr << "The number of test cases must be greater > 0 and less than " << " " << MAX_TEST_CASE << "\n"; } } while (testCount < 0 || testCount > MAX_TEST_CASE); if (testCount < MAX_TEST_CASE) { bool hasErrors = true; do { std::cout << "What test case file do you want to start with?"; std::cin >> firstTestCase; if (firstTestCase < 0 || firstTestCase > MAX_TEST_CASE) { std::cerr << "The first test cases must be greater > 0 and less than " << " " << MAX_TEST_CASE << "\n"; hasErrors = true; } else { hasErrors = false; } if (!hasErrors && testCount + firstTestCase > MAX_TEST_CASE) { std::cerr << "The first test cases and the test count must be less than or equal to " << MAX_TEST_CASE << "\n"; hasErrors = true; } } while (hasErrors); } else { firstTestCase = 1; } } /* * Begin HackerRank Solution */ constexpr int IDX_FIRST_LOCATION = 0; constexpr int IDX_LAST_LOCATION = 1; unsigned long 
mergeAndFindMax(std::vector<unsigned long> maxValues, std::vector<std::vector<unsigned long>> calculatedValues, const size_t executionCount) { unsigned long maximumValue = 0; for (size_t i = 0; i < MAX_THREADS; i++) { std::vector<unsigned long>::iterator cvi = calculatedValues[i].begin(); std::vector<unsigned long>::iterator cvEnd = calculatedValues[i].end(); std::vector<unsigned long>::iterator mvi = maxValues.begin(); for ( ; mvi < maxValues.end() && cvi < cvEnd; mvi++, cvi++) { *mvi += *cvi; if (*mvi > maximumValue) { maximumValue = *mvi; } } if (i > executionCount) { break; } } return maximumValue; } unsigned long arrayManipulation(const int n, const std::vector<std::vector<int>> queries) { std::vector<unsigned long> maximumValues(n, 0); std::mutex m; for_each(calculatedValues.begin(), calculatedValues.end(), [maximumValues](std::vector<unsigned long>& cvi) {cvi = maximumValues; }); int executionCount = 0; Concurrency::parallel_for_each(queries.begin(), queries.end(), [&m, &calculatedValues, &executionCount](std::vector<int> query) { std::lock_guard<std::mutex> guard(m); size_t startLoc = query[IDX_FIRST_LOCATION]; size_t endLoc = query[IDX_LAST_LOCATION]; for_each(calculatedValues[executionCount % MAX_THREADS].begin() + (startLoc - 1), executionCount++; }); return mergeAndFindMax(maximumValues, calculatedValues, executionCount); } int executeAndTimeTestCases(int testCaseCount, int firstTestCase) { using std::chrono::high_resolution_clock; using std::chrono::duration_cast; using std::chrono::duration; using std::chrono::milliseconds; for (int i = 0; i < testCaseCount; i++) { std::string testFileName = "TestCase" + std::to_string(firstTestCase) + ".txt"; int n = 0; std::vector<std::vector<int>> queries; std::cout << "Test Case " << firstTestCase << "\n"; int exitStatus = getInputLines(testFileName, n, queries); if (exitStatus != EXIT_SUCCESS) { return exitStatus; } std::cout << "Test Case " << firstTestCase << " File read time " << msReadTime.count() << " milliseconds\n"; auto executionStartTime = high_resolution_clock::now(); unsigned long result = arrayManipulation(n, queries); auto executionEndTime = high_resolution_clock::now(); duration<double, std::milli> msExecution = executionEndTime - executionStartTime; if (msExecution.count() > 1000.0) { std::cout << "Test Case " << firstTestCase << " result is " << result << " Execution time " << msExecution.count() / 1000.0 << " Seconds\n\n"; } else { std::cout << "Test Case " << firstTestCase << " result is " << result << " Execution time " << msExecution.count() << " milliseconds\n\n"; } firstTestCase++; } return EXIT_SUCCESS; } int main() { int testCaseCount = 0; int firstTestCase = 0; getTestCountAndFirstTestCase(testCaseCount, firstTestCase); return executeAndTimeTestCases(testCaseCount, firstTestCase); } • @Emily_L. I would definitely appreciate any comments or answers you care to contribute, I'm not sure I'm using the correct algorithm for performance. Nov 22 at 20:57 • See codereview.stackexchange.com/q/185320/35991 for a more efficient algorithm. Nov 22 at 21:51 • I just had a quick glance but it looks a bit weird to have a parallel_for_each and then to lock the entire scope with a mutex? From my CPU usage, it does not look like much is happening in parallel although memory bandwidth could be an issue. – jdt Nov 22 at 22:15 • @jdt Memory bandwidth is definitely an issue. Doing only sequential processing CPU was at 15%. Using the parallel algorithm CPU usage was at 70%. 
Nov 22 at 22:18 • @jdt I was working on using the STL standard for_each() and using this documentation. I've removed the mutex in my testing code thanks. That and Deduplicator's suggestion to change vectors to pass by reference rather than pass by value shaved off 6 seconds. Nov 23 at 14:29 Practicing for an interview you say? constexpr int MAX_TEST_CASE = 4; You're off to a good start! Though you might use auto instead of int, or make it a size_t if it will be compared with vector lengths. Do you have warnings turned on? Signed/unsigned mismatch in comparison is a serious one to beware of. std::vector<int> convertInputLineToIntVector(std::string query_string) And now your first serious ding. Why are you passing a string by value? That is rare enough to be a code review issue and distracts reviewers even when there is a legitimate reason to do so. Passing by value is a telltale mistake of people coming from other languages that have reference semantics for objects, and is something the interviewer will immediately spot with concern. You're not familiar enough with C++ source code to think this looks weird. You should use std::string_view to pass string things into functions. This gives you the best efficiency for passing either a std::string or a lexical string literal. std::string::iterator intStart = query_string.begin(); std::string::iterator intEnd; Declaring the full elaborated type of intStart is brutal, unnecessary, and gets in the way if the type of query_string changes. Try and write in a generic manner even if it's not a template -- that will help in maintenance, as changing the type of something is a very common change to make. You want any dependent things in the function to be worked out automatically. In short, use auto. As for intEnd, why are you declaring it here, without any initializer? It should be declared where you actually use it, inside the loop. for (int i = 0; i < query_size; i++) { intEnd = std::find(intStart, query_string.end(), ' '); int pos = intEnd - query_string.begin(); vs. for (int i = 0; i < query_size; ++i) { const auto intEnd = std::find(intStart, query_string.end(), ' '); const auto pos = intEnd - query_string.begin(); Prefer prefix increment! Define the variable where you initialize it. Use const as well. Here, pos is not an int! Let auto figure out the difference_type between the iterators for you. What you are doing here, it appears, is splitting the string. This should be a function call, not elaborated out into the middle of the main code. In real life you'll already have a reusable function for this. So call it something that's not so specific to this usage. Like, parse_csv_ints and take the number of fields to expect as a template argument. Since the number of fields is known at compile-time, you can avoid the dynamic memory of a vector and use an array. My function is: template <size_t N> auto split (char separator, std::string_view input) { std::array<std::string_view, N> results; ⋮ return results; } This doesn't do the conversion to int of each field; it just separates them out (including whitespace). The caller allocates the fixed-size array on its stack, and the result is populated as pointers into the original input so does not copy any string data at all. This is a more reusable core that can have any type of data reading adding around it as another layer. 
std::vector<std::vector<int>> getIntVectors(std::ifstream* inFile) { std::vector<std::vector<int>> inputVector; std::string string_vector_count; getline(*inFile, string_vector_count); Why are you passing inFile as a pointer? You're using it without checking for nullptr and you don't assign back over the object to modify the caller's copy, so this should be a reference (not a pointer). The return type has a lot of overhead. Each test case has exactly 3 values per element, so make a struct for that, or an array, or tuple; not a variable-length dynamically-allocated vector! Not only will this use tons more memory, but it requires dynamic allocation (and later freeing). int getInputLines(std::string inputFileName, int &vectorSize, std::vector<std::vector<int>>& queries) This is returning a SUCCESS/FAILURE flag, and passing the results back through "out" parameters. Furthermore you wrote int &vectorSize instead of int& vectorSize, nevermind that it's a size_t not an int or that the vector itself already knows its own size. This is a weird miss-mash of C code with some C++ features added. Reviewing the data to be read, I see that it contains not just a list of commands but the cell count as well. The number of commands becomes the vector of commands' size. But where to store the cell count? You should abstract out the Test Case into its own class, even though it's just a vector plus another number. This will make the code clearer since you know when you have a single Test Case or an array of Test Cases, or the vector of commands without the cell count. do { std::cout << "How many test cases do you want to run?"; std::cin >> testCount; if (testCount < 0 || testCount > MAX_TEST_CASE) { std::cerr << "The number of test cases must be greater > 0 and less than " << " " << MAX_TEST_CASE << "\n"; } } while (testCount < 0 || testCount > MAX_TEST_CASE); This is something that bothers me about code like this. You write your own ad-hoc "trying to be robust" user input routines. That's not part of the actual problem, and such routines are never perfect anyway. In a real program, it won't prompt you. It looks at command-line arguments, and gives an error message if they are not suitable, or a help message if there are none. No loop and retry needed. And don't use "out" parameters for returning things! unsigned long mergeAndFindMax(std::vector<unsigned long> maxValues, std::vector<std::vector<unsigned long>> calculatedValues, const size_t executionCount) Did you notice that you're passing not just a vector but also a vector of vectors by value to this function, causing the whole dynamic memory tree to be duplicated?! Scanning ahead, I see other functions doing that too. Name the types: your vector<unsigned long> should be the cell array type. Don't use unsigned long as that's implementation defined as to what range it holds. Use the explicitly sized names like uint64_t. std::vector<unsigned long> maximumValues(n, 0); You know that maximumValues[n] does not exist in this vector? I think they intended for subscripts to range from 1 to n inclusive ("1 based"), but you have 0 through n-1 inclusive. Where can you put const that you haven't? Good luck, and keep practicing! And always have fun. • Forced to use a vector by Hacker Rank. Nov 23 at 16:34 • Looking forward to the rest of your review. Nov 23 at 16:40 • re "forced to use vector" so it's not just feeding you a text file and reading the output? Is it asking for a function with some specific signature? If so, what is the API it wants you to implement? 
Nov 23 at 17:03 • Not on hackerrank, no the only function that needs to be written on hackerrank is the arrayManipulation function and they have provided the prototype. Nov 23 at 17:06 • Good review, thank you. post fix increment is a remnant of my C programming days on the Motorola 68000 family of computers and Spark chips and compilers that didn't optimize very well, there was as post fix increment opcode that was faster than the prefix increment that took multiple opcodes, that is no longer valid. I have changed the code in my test environment so that it uses the reference of the vectors rather than the value, it did improve the performance. I need to keep practicing with std::string_view I don't use it enough. Nov 24 at 15:39 Yes, brute-force is a sub-optimal solution. A better way: 1. Decompose each query a b k into two parts: • At index a, add k • At index b + 1, add -k 2. Store those sub-queries in one vector. 3. Sort the sub-queries. 4. Iterate the sub-queries to find the solution. Opportunities for parallel execution? Sparse outside the sort. Inefficiencies common to challenge sites: • Using std::vector when the number of elements is known and small. The overhead of dynamically allocating all those small bits adds up. • Copying huge amounts of data unnecessarily, most of the time by passing above big vectors of vectors by value. • Flushing the output-stream all the time. If you actually need the flush also contained in std::endl, be explicit and use std::flush. • Also inefficient: passing strings and vectors by value. Nov 22 at 22:55 • In the past it has been said that my C++ looks too much like C, is that still the case? Your answer is definitely appreciated. Nov 23 at 15:22
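To make the decomposition proposed in the last answer concrete, here is a short sketch of the event-sweep version (written in Python for brevity; the same idea ports directly to C++): each query contributes +k at index a and -k at index b+1, the events are sorted by index, and a running sum over them traces the array values and their maximum in O(m log m) time and O(m) memory, independent of n.

```python
def array_manipulation_events(queries):
    """Event-sweep form of the difference-array trick described above."""
    events = []
    for a, b, k in queries:
        events.append((a, k))        # value rises by k at index a
        events.append((b + 1, -k))   # ...and falls back by k just past index b
    events.sort()                    # at equal indices, decreases sort before increases
    best = running = 0
    for _, delta in events:
        running += delta
        best = max(best, running)
    return best

print(array_manipulation_events([(1, 2, 100), (2, 5, 100), (3, 4, 100)]))  # 200
```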
2021-11-30 06:48:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22940196096897125, "perplexity": 4936.715914813381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358953.29/warc/CC-MAIN-20211130050047-20211130080047-00059.warc.gz"}
https://engineering.stackexchange.com/questions/38658/machinability-index-for-ferrous-and-nonferrous-material/38660#38660
Machinability index for ferrous and nonferrous material [closed] What are the machinability indices for ferrous and non-ferrous materials? • Are you sure it's machinability indes and not machinability index? Nov 15, 2020 at 15:31 Here is a detailed list for metals, both ferrous like steel and cast iron, and nonferrous like copper and their alloys: Machinability Rating Chart. Machinability index The machinability index is a measure for cutting processes which shows how easy it is to remove material. It is obtained by comparing the cutting speed to a reference value for steel, i.e.: • you find the speed $$V_m$$ at which a cutting tool lasts 20 minutes while cutting the given material. • you compare $$V_m$$ with the reference speed $$V_r$$ (which uses steel as the reference). There is also the machinability rating, which is based on a similar idea, i.e. the cutting speed you can use with respect to a reference speed: $$\text{machinability index} = \frac{V_m}{V_r}\cdot 100\%$$ The reference cutting speed is based on B1112 steel. If it is easy to remove material, the index goes up. For example, diamond would have an index close to zero, while Mg (magnesium) alloys have a much higher index. It can be used for most processes that involve removal of material with a cutting tool (such as milling, turning, etc.).
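As a worked example of the formula above (the speeds are hypothetical, chosen only to show the arithmetic): if a material gives a 20-minute tool life at $$V_m = 90$$ m/min while the B1112 reference speed is $$V_r = 55$$ m/min, the index is $$90/55 \cdot 100\% \approx 164\%$$, i.e. easier to machine than the reference steel. In code:

```python
def machinability_index(v_material, v_reference):
    """Cutting speed of the material (for equal tool life) relative to the
    B1112 reference speed, expressed as a percentage."""
    return 100.0 * v_material / v_reference

print(machinability_index(90.0, 55.0))   # ~163.6 -> easier to machine than B1112
print(machinability_index(20.0, 55.0))   # ~36.4  -> harder to machine
```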
2022-05-21 09:35:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.530887246131897, "perplexity": 2020.608453774869}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539049.32/warc/CC-MAIN-20220521080921-20220521110921-00777.warc.gz"}
http://math.stackexchange.com/questions/237146/linear-projection-and-matrix-representation
Linear projection and matrix representation "Let $V$ be a finite-dimensional vector space and $T$ be the projection on $W$ along $W'$, where $W$ and $W'$ are subspaces of $V$. Find an ordered basis $\beta$ for $V$ such that $[T]_\beta$ is a diagonal matrix." Playing around in $\mathbb{R}^2$ and $\mathbb{R}^3$ I found it difficult to reach a diagonal matrix. E.g. let $\beta = \{(1,1,1),(1,0,1)\}$ be a basis of $W$ and $\gamma = \{(0,0,1)\}$ be a basis for $W'$. Then $T(a,b,c) = (a,b,a)$. Any basis of $V$ must contain some vector $v_i$ with a nonzero first coordinate, e.g. $(1,0,0)$, which means that $T(1,0,0) = (1,0,1)$, which again means that in one column of $[T]_\beta$ there will be more than one nonzero value, so that it can't be a diagonal matrix. Now... what did I get wrong? Thanks! - Any projection is diagonalizable. In the example you give, $T$ is diagonal in the basis $(1,0,1),(0,1,0),(0,0,1)$.
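The general recipe behind the answer: take $\beta$ to be a basis of $W$ followed by a basis of $W'$; $T$ fixes the first group and kills the second, so $[T]_\beta = \mathrm{diag}(1,\dots,1,0,\dots,0)$. Note that both $(1,0,1)$ and $(0,1,0)$ do lie in $W$, since $(0,1,0) = (1,1,1) - (1,0,1)$. A quick numerical check of the suggested basis (a throwaway NumPy snippet, not part of the original exchange):

```python
import numpy as np

T = np.array([[1, 0, 0],      # T(a, b, c) = (a, b, a) written in the standard basis
              [0, 1, 0],
              [1, 0, 0]])
P = np.column_stack([(1, 0, 1), (0, 1, 0), (0, 0, 1)])   # new basis vectors as columns
T_beta = np.linalg.inv(P) @ T @ P
print(T_beta)   # diag(1, 1, 0): the first two basis vectors span W, the last spans W'
```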
2014-04-19 12:51:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8999846577644348, "perplexity": 42.7831771608651}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
https://ask.sagemath.org/question/36681/how-to-make-a-symbolic-function-of-a-matrix/
# How to make a symbolic function of a matrix ? Hi, I'm beginning with Sage. I've got two matrix Qf and Xf defined by : Qf = 1000000*matrix([[0,0],[0,1]]); Xf = matrix([1],[1675]); I would like to write a symbolic function "f" which would take a matrix X with 2 rows & 1 column. X = var('X'); f(X) = ((X-Xf).transpose()*Qf*(X-Xf)); I easily wrote it with python non-symbolic function syntax, but i didn't find a way to make it symbolic. Because I'll need his gradient later (which is easy to calculate by hand, that I conceed ^^). Maybe, it's related with SR matrix, no idea, i'm beginning with Sage and that's why I'm asking for help x) edit retag close merge delete Sort by » oldest newest most voted Yes, SR matrices come in handy for that type of calculations. # data Qf = 1000000*matrix([[0,0],[0,1]]); Xf = matrix([[1],[1675]]); # matrix with symbolic coefficients X = matrix([[var('x1')], [var('x2')]]); f = ((X-Xf).transpose()*Qf*(X-Xf)); # see result Qf, Xf, X, f produces $$\newcommand{\Bold}[1]{\mathbf{#1}}\left(\left(\begin{array}{rr} 0 & 0 \\ 0 & 1000000 \end{array}\right), \left(\begin{array}{r} 1 \\ 1675 \end{array}\right), \left(\begin{array}{r} x_{1} \\ x_{2} \end{array}\right), \left(\begin{array}{r} 1000000 \, {\left(x_{2} - 1675\right)}^{2} \end{array}\right)\right).$$ To evaluate $f$, do f(x1=1,x2=1). More generally, to define your $X$ it can be useful to do something like: # create a coefficient matrix of m rows and n columns m = 4; n = 2; xij = [[var('x'+str(1+i)+str(1+j)) for j in range(n)] for i in range(m)] X = matrix(SR, xij) X $$\newcommand{\Bold}[1]{\mathbf{#1}}\left(\begin{array}{rr} x_{11} & x_{12} \\ x_{21} & x_{22} \\ x_{31} & x_{32} \\ x_{41} & x_{42} \end{array}\right).$$ The quadratic function $f$ defined above is a $1\times 1$ matrix (convince yourself, for instance by reading the output of type(f)). To take the gradient this is one possible way: # passing from a 1x1 matrix to a scalar f = f[0, 0] # see result $$\newcommand{\Bold}[1]{\mathbf{#1}}\left(0, 2000000 x_{2} - 3350000000\right).$$ more Ty so much :) ( 2017-02-21 07:37:45 -0500 )edit 1 Just one last thing, is there any way to make elegant evaluation of f, as something like : M = matrix([[1,2],[3,4],[5,6],[7,8]]); print f(M); Or eventually : M = matrix([[1,2],[3,4],[5,6],[7,8]]); print f(xij = M[i][j]); You see the idea ^^ ( 2017-02-21 08:03:30 -0500 )edit going back to the example of above, and if v = vector([1, 2]), then f(X=v) will unfortunately not work, but with f.substitute([X[i][0] == v[i] for i in range(2)]) it evaluates $f$ at the point $(1, 2)$. I'm not sure if this is what you want to do, so don't hesitate to post a new question! ( 2017-02-21 11:39:25 -0500 )edit I works perfectly i'm actually gratefull, believe me :) There is my code, if eventually it can serve to someone : Qf = 1000000*matrix([[0,0],[0,1]]); Xf = matrix([[1],[1675]]); X = matrix(SR, [[var('X'+str(1+i)+str(1+j)) for j in range(1)] for i in range(2)]); f = ((X-Xf).transpose()*Qf*(X-Xf))[0,0]; Xt = matrix([[1],[1]]); print f; print f.substitute([X[i,0]==Xt[i,0] for i in range(2)]); -1000000.0*(X21 - 1659.0)*(-1.0*X21 + 1659.0) -0.0 ( 2017-02-21 12:52:47 -0500 )edit may i ask you something babacool51, in which context you are using this code? like course of math (linear algebra?) or another? thanks for any feedback! ( 2017-04-18 07:47:34 -0500 )edit
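Since the original goal was the gradient of $f$, one way that stays within the machinery of the accepted answer is to differentiate the scalar expression entry by entry (a sketch, reusing the x1, x2 variables defined there):

```python
# continuing from the accepted answer: X is a 2x1 symbolic matrix and f a scalar expression
grad_f = [f.diff(v) for v in (x1, x2)]
print(grad_f)                                  # [0, 2000000*x2 - 3350000000]
# evaluate the gradient at a point with the same substitution trick
print([g.subs(x1=1, x2=1) for g in grad_f])    # [0, -3348000000]
```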
2018-03-21 04:49:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23582231998443604, "perplexity": 3330.576765896308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647576.75/warc/CC-MAIN-20180321043531-20180321063531-00360.warc.gz"}
http://www.komal.hu/verseny/feladat.cgi?a=feladat&f=B4743&l=en
B. 4743. The inscribed circle of triangle $\displaystyle ABC$ touches sides $\displaystyle BC$, $\displaystyle AC$ and $\displaystyle AB$ at points $\displaystyle A_1$, $\displaystyle B_1$ and $\displaystyle C_1$, respectively. Let the orthocentres of triangles $\displaystyle AC_1B_1$, $\displaystyle BA_1C_1$ and $\displaystyle CB_1A_1$ be $\displaystyle M_A$, $\displaystyle M_B$ and $\displaystyle M_C$, respectively. Show that triangle $\displaystyle A_1B_1C_1$ is congruent to triangle $\displaystyle M_AM_BM_C$. Proposed by Sz. Miklós, Herceghalom (4 points) Deadline expired on 10 December 2015. Statistics on problem B. 4743. 112 students sent a solution. 4 points: 88 students. 3 points: 14 students. 2 points: 4 students. 1 point: 3 students. Unfair, not evaluated: 2 solutions. Unfair, not evaluated: 1 solution. • Problems in Mathematics of KöMaL, November 2015
2017-10-19 12:54:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8995149731636047, "perplexity": 3647.9872274194067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823284.50/warc/CC-MAIN-20171019122155-20171019142155-00002.warc.gz"}
https://betaflight.com/docs/development/Rssi
RSSI is a measurement of signal strength and is very handy so you know when your aircraft is going out of range or if it is suffering RF interference.

2. RSSI via Parallel PWM channel

Configure your receiver to output RSSI on a spare channel, then select the channel used via the CLI. e.g. if you used channel 9 then you would set:

set rssi_channel = 9

Note: Some systems such as EZUHF invert the RSSI (0 = Full signal / 100 = Lost signal). To correct this problem you can invert the channel input so you will get a correct reading by using the command:

set rssi_invert = ON

Default is set to "OFF" for normal operation (100 = Full signal / 0 = Lost signal).

Connect the RSSI signal to any PWM input channel, then set the RSSI channel as you would for RSSI via PPM.

The S.Bus serial protocol includes detection of dropped frames. These may be monitored and reported as RSSI by using the following command:

set rssi_src_frame_errors = ON

Note that RSSI stands for Received Signal Strength Indicator; the detection of S.Bus dropped frames is really a signal-quality, not signal-strength, indication. Consequently you may experience a more rapid drop in reported RSSI at the extremes of range when using this facility than when using RSSI reporting signal strength.

Connect the RSSI signal to the RC2/CH2 input. The signal must be between 0V and 3.3V. Use inline resistors to lower the voltage if required; inline smoothing capacitors may also help. A simple PPM->RSSI conditioner can easily be made. See the PPM-RSSI conditioning.pdf for details.

Under CLI:

• enable using the RSSI_ADC feature: feature RSSI_ADC
• set the RSSI_SCALE parameter (between 1 and 255) to adjust the RSSI level according to your configuration. The raw ADC value is divided by the value of this parameter.

Note: Some systems invert the RSSI (0 = Full signal / 100 = Lost signal). To correct this problem you can invert the input so you will get a correct reading by using the command: set rssi_invert = ON

• set rssi_scale = 100. The displayed percentage will then be the raw ADC value.
• turn on RX (close to board). RSSI value should vary a little.

FrSky D4R-II and X8R supported. The feature can not be used when RX_PARALLEL_PWM is enabled.

To calculate the rssi offset and scale, check the rc value at full signal strength (rssi_fullsig) and at almost no signal strength (rssi_nosig). Then, calculate the offset and scale values using the following formulas:

rssi_offset = (1000 - rssi_nosig) / 10
rssi_scale = 100 * 1000 / (rssi_fullsig - rssi_nosig)

Example: RC System: Graupner. RC value at full strength: 1900. RC value at no strength: 1100. rssi_offset: -10. rssi_scale: 125.

set rssi_offset = -10
set rssi_scale = 125
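As a worked example of the calibration formulas above, here is a small Python helper (the function name is ours, not part of Betaflight or its configurator) that reproduces the Graupner numbers.

```python
# Small helper (hypothetical, not part of Betaflight) that applies the
# rssi_offset / rssi_scale formulas above to measured channel endpoints.
def rssi_calibration(rssi_fullsig, rssi_nosig):
    """Return (rssi_offset, rssi_scale) for the given endpoint readings."""
    rssi_offset = (1000 - rssi_nosig) / 10
    rssi_scale = 100 * 1000 / (rssi_fullsig - rssi_nosig)
    return rssi_offset, rssi_scale

# Graupner example: full signal reads 1900, no signal reads 1100.
offset, scale = rssi_calibration(1900, 1100)
print(offset, scale)   # -10.0 125.0  ->  set rssi_offset = -10, set rssi_scale = 125
```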
2023-03-22 00:28:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4207218885421753, "perplexity": 4831.680547665732}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00691.warc.gz"}
http://mathhelpforum.com/calculus/143654-please-help-me-solve-ff-log-problems.html
Without the use of a calculator:

1. e^(ln 0.1)
2. log_2(3) · log_3(4) · log_4(8)
3. e^(log_e 2^9)
4. ln(y+4) = 5x + ln C
5. log_5(x+6) + log_5(x+1) = 1

Determine the domain of the following: f(x) = log_2(x^2 - 3x + 2).

When I say log_2, I mean the number that goes underneath the log (the base). Thanks a lot!

2. Originally Posted by different92

Some hints

Inverse functions: $e^{\ln(a)} = \ln(e^a) = a$

In general: $\log_a(a^k) = a^{\log_a(k)} = k$

Change of base rule: $\log_b(a) = \frac{\log_c(a)}{\log_c(b)}$

Addition Rule: $\log_b(a) + \log_b(c) = \log_b(ac)$

Subtraction Rule: $\log_b(a) - \log_b(c) = \log_b\left(\frac{a}{c}\right)$

Power Law: $\log_a(b^k) = k\,\log_a(b)$

---------------------------

For example number 2 (I apologise if I have misread the question but the syntax is highly confusing):

From the change of base rule applied to each term (I have used base $e$ but it works with any positive base that is not 1):

$\log_2(3) \cdot \log_3(4) \cdot \log_4(8) = \frac{\ln(3)}{\ln(2)} \cdot \frac{\ln(4)}{\ln(3)} \cdot \frac{\ln(8)}{\ln(4)}$

Since $8 = 2^3$ and $4 = 2^2$ we can simplify using the power law:

$\frac{\ln(3)}{\ln(2)} \cdot \frac{2\ln(2)}{\ln(3)} \cdot \frac{3\ln(2)}{2\ln(2)}$

Cancel out terms as you would with any ordinary fraction to give an answer of $3$.
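For anyone who wants a quick numeric confirmation of the worked example (problem 2), the following snippet uses Python's math.log with an explicit base; it is only a sanity check, not part of the exercise.

```python
# Sanity check of problem 2: log_2(3) * log_3(4) * log_4(8) should equal 3.
from math import log

value = log(3, 2) * log(4, 3) * log(8, 4)
print(value)   # 3.0 up to floating-point rounding
```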
2015-08-28 18:09:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7807962894439697, "perplexity": 4105.454470576208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00192-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.dm.unipi.it/eventi/renormalization-of-analytic-maps-of-the-annulus-and-applications-to-conjugacy-problems-michael-yampolsky/
# Renormalization of analytic maps of the annulus and applications to conjugacy problems – Michael Yampolsky (Univ. of Toronto)

CRM – SNS.

#### Abstract

In a recent series of works with N. Goncharuk, we have constructed an analytic renormalization operator acting on conformal maps of the annulus. We showed that this operator has a hyperbolic horseshoe attractor consisting of Brjuno rotations. I will discuss the resulting renormalization picture and some consequences, including a new proof of Risler's theorem, and results on smoothness of Arnold tongues.

Further information is available on the event page on the Indico platform.
2023-03-22 06:42:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8120707869529724, "perplexity": 1498.854257392998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00712.warc.gz"}
https://math.stackexchange.com/questions/1687711/channels-with-memory-have-higher-capacity
# Channels with memory have higher capacity I am working through Elements of Information Theory by Cover and Thomas and have come across the following solution to one of their problems that I don't understand. Consider a binary, symmetric channel with $Y_i = X_i + Z_i$, where $+$ is mod2 addition and $X_i, Y_i \in \{0,1\}$. Suppose that $\{Z_i\}$ has constant marginal probabilities $Pr\{Z_i=1\}=p=1-Pr\{Z_i=0\}$, but that $Z_1, Z_2, ..., Z_n$ are not necessarily independent. Assume that $Z^n$ is independent of the input $X^n$. Let $C=1-H(p,1-p)$. Show that $\max_{p(x_1, ..., x_n)}I(X_1,...,X_n;Y_1,...,Y_n) \ge nC$. The solution begins by stating: $I(X_1,...,X_n;Y_1,...,Y_n)=H(X_1,...,X_n)-H(X_1,...,X_n|Y_1,...,Y_n)=H(X_1,...,X_n)-H(Z_1,...,Z_n|Y_1,...,Y_n)$ where $X_i$ are chosen i.i.d. from Bernoulli($\frac{1}{2}$). I don't understand how the rightmost equality is derived. It's not an identity and is not explained. I'm assuming that it's something simple I'm overlooking, could someone offer a hint? • Let $X,Y,Z$ be $n$-dimensional vectors. You are given $Y=X+Z$. So $H(X|Y) = H(Y-Z|Y) = H(-Z|Y) = H(Z|Y)$. Overall, for any vectors $A,B$ we have $H(A+B|A)=H(B|A)$. – Michael Mar 8 '16 at 1:55 • Oi, not enough fiddling! Thank you! – jjoe Mar 8 '16 at 2:44
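For anyone stuck on the same step, here it is written out in full; this is just the identity from Michael's comment applied to the vectors, with no new ingredient:

$$H(X_1,\ldots,X_n \mid Y_1,\ldots,Y_n) \;=\; H(Y^n \oplus Z^n \mid Y^n) \;=\; H(Z^n \mid Y^n),$$

since $X^n = Y^n \oplus Z^n$ (mod-2 addition is its own inverse) and, once $Y^n$ is given, the map $Z^n \mapsto Y^n \oplus Z^n$ is a bijection, so conditioned on $Y^n$ the two random variables carry the same uncertainty.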
2019-10-23 19:08:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9507554769515991, "perplexity": 233.49546724436897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987835748.66/warc/CC-MAIN-20191023173708-20191023201208-00320.warc.gz"}
https://haypikingdom.fandom.com/wiki/Speed
Speed is an Attribute in Haypi Kingdom. It determines which troops attack first. The formula for this is: Base Speed of Unit + (Attribute Speed + Equipment Bonus) = first attack.

## Facts/Trivia

• If the Speed of both players is equal, then the player being attacked will get the first hit in battle.
• When battling a Mine or an Alliance in the Alliance War feature, the formula Base Speed of Unit + (Attribute Speed + Equipment Bonus) is invalid. The formula is: Defender = First Attack.
• Speed can have a maximum Attribute level of 100, plus 90 from gear and 30 from tech, for a total of 220.
• Both the Horse equipment and the Manual raise Speed.
• Cavalry will always have first attack against non-cavalry units in range.
2019-08-24 14:08:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43624332547187805, "perplexity": 5128.651023615476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321140.82/warc/CC-MAIN-20190824130424-20190824152424-00265.warc.gz"}
https://raspberrypi.stackexchange.com/questions/98144/running-mongodb-as-as-service-increases-boot-time-by-30-seconds
Running mongodb as a service increases boot time by 30 seconds

Title says it all. I have an RPi 3 with Raspbian that has a web app running at start up in chromium. It took 20-30 seconds from powering on to the app appearing before I added mongo, and after I integrated mongo and ran it as a service:

sudo systemctl enable mongodb

it increased the bootup time by about 30 seconds. This isn't really acceptable, so I tried the flat-file nosql plugin tinydb, and this was a great solution until I realized it wasn't thread- or process-safe, which is a requirement here.

Is there any way I can avoid this increase in boot up time? Some settings I can use for mongo that will disable unnecessary features, another nosql implementation that is lightweight but also thread/process-safe, or some other method of enabling mongo on startup?

• Perhaps have a sleeper process that runs mongodb after a specific amount of time (say, 1 minute) – user96931 May 2 at 19:28
2019-10-16 07:44:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18421687185764313, "perplexity": 3298.2789489996517}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666467.20/warc/CC-MAIN-20191016063833-20191016091333-00088.warc.gz"}
https://www.physicsforums.com/threads/energy-of-fission.228369/
# Energy of fission

1. Homework Statement

Find the energy which comes out of the fission of the nuclei of deuterium and tritium, so that we receive as product the nucleus of helium.

mass of tritium: $$m({}^{3}_{1}\mathrm{H}) = 3.016049\,\mathrm{u}$$
mass of the nucleus of deuterium: $$m({}^{2}_{1}\mathrm{H}) = 2.013553\,\mathrm{u}$$
mass of the nucleus of helium: $$m({}^{4}_{2}\mathrm{He}) = 4.002603\,\mathrm{u}$$

2. Homework Equations

$$\Delta E = \Delta m\, c^2$$

3. The Attempt at a Solution

The nuclear reaction:

$${}^{2}_{1}\mathrm{H} + {}^{3}_{1}\mathrm{H} \rightarrow {}^{4}_{2}\mathrm{He} + {}^{1}_{0}\mathrm{n} + \text{energy}$$

How will I find the mass of the triton (the nucleus of tritium)?

Andrew Mason (Homework Helper): Take the mass of the atom and subtract the mass of the electron. The binding energy of the electron is not enough to make a significant difference in the mass of the atom.

AM

(3.016049 u + 2.013553 u) − (4.002603 u + 1.008665 u) = 0.018334 u

0.018334 × 931.494 MeV = 17.078011 MeV

And in my textbook the result is 17.8 MeV. Is it their fault?

malawi_glenn: And you will obtain that answer if you use $$m({}^{2}_{1}\mathrm{H}) = 2.014101783\,\mathrm{u}$$
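A quick way to reproduce both numbers in this thread is to redo the mass-defect arithmetic in Python; the snippet below only repeats the values already quoted above (including the neutron mass 1.008665 u) and converts with 931.494 MeV/u.

```python
# Redoing the mass-defect arithmetic from the thread:
# Q = (m_D + m_T - m_He - m_n) * 931.494 MeV/u
u_to_MeV = 931.494

m_T, m_He, m_n = 3.016049, 4.002603, 1.008665

for m_D in (2.013553, 2.014101783):  # value used in the post vs. value suggested by malawi_glenn
    Q = (m_D + m_T - m_He - m_n) * u_to_MeV
    print(m_D, round(Q, 3))
# 2.013553    -> ~17.078 MeV (the poster's result)
# 2.014101783 -> ~17.589 MeV (closer to the textbook's 17.8 MeV)
```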
2019-12-06 11:45:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2171836644411087, "perplexity": 1468.8460333077055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540487789.39/warc/CC-MAIN-20191206095914-20191206123914-00471.warc.gz"}
https://www.quizerry.com/2020/12/week-1-problem-set-cryptography-i/
# Week 1 – Problem Set >> Cryptography I

Week 1 – Problem Set. LATEST SUBMISSION GRADE: 80%

1. Question 1

Data compression is often used in data storage and transmission. Suppose you want to use data compression in conjunction with encryption. Does it make more sense to:

0 / 1 point

- The order does not matter — either one is fine.
- Encrypt then compress.
- The order does not matter — neither one will compress the data.
- Compress then encrypt.

Incorrect. Ciphertexts tend to look like random strings and therefore compressing after encryption will not compress the data.

2. Question 2

Let $G:\{0,1\}^s \to \{0,1\}^n$ be a secure PRG. Which of the following is a secure PRG (there is more than one correct answer):

1 / 1 point

- $G'(k) = G(k) \oplus 1^n$ (Correct: a distinguisher for $G'$ gives a distinguisher for $G$.)
- $G'(k) = G(k \oplus 1^s)$ (Correct: a distinguisher for $G'$ gives a distinguisher for $G$.)
- $G'(k) = G(k)[0,\ldots,n-2]$ (i.e., $G'(k)$ drops the last bit of $G(k)$) (Correct: a distinguisher for $G'$ gives a distinguisher for $G$.)
- $G'(k) = G(k) \,\|\, 0$ (here $\|$ denotes concatenation)
- $G'(k) = G(0)$
- $G'(k) = G(k) \,\|\, G(k)$ (here $\|$ denotes concatenation)

3. Question 3

Let $G:K \to \{0,1\}^n$ be a secure PRG. Define $G'(k_1,k_2) = G(k_1) \wedge G(k_2)$ where $\wedge$ is the bit-wise AND function. Consider the following statistical test $A$ on $\{0,1\}^n$: $A(x)$ outputs $\text{LSB}(x)$, the least significant bit of $x$. What is $\mathrm{Adv}_{\text{PRG}}[A,G']$? You may assume that $\text{LSB}(G(k))$ is 0 for exactly half the seeds $k$ in $K$.

Note: Please enter the advantage as a decimal between 0 and 1 with a leading 0. If the advantage is 3/4, you should enter it as 0.75.

1 / 1 point

0.25 (Correct: for a random string $x$ we have $\Pr[A(x)=1]=1/2$ but for a pseudorandom string $G'(k_1,k_2)$ we have $\Pr_{k_1,k_2}[A(G'(k_1,k_2))=1]=1/4$.)

4. Question 4

Let $(E,D)$ be a (one-time) semantically secure cipher with key space $K = \{0,1\}^\ell$. A bank wishes to split a decryption key $k \in \{0,1\}^\ell$ into two pieces $p_1$ and $p_2$ so that both are needed for decryption. The piece $p_1$ can be given to one executive and $p_2$ to another so that both must contribute their pieces for decryption to proceed.

The bank generates random $k_1$ in $\{0,1\}^\ell$ and sets $k_1' \gets k \oplus k_1$. Note that $k_1 \oplus k_1' = k$. The bank can give $k_1$ to one executive and $k_1'$ to another. Both must be present for decryption to proceed since, by itself, each piece contains no information about the secret key $k$ (note that each piece is a one-time pad encryption of $k$).

Now, suppose the bank wants to split $k$ into three pieces $p_1,p_2,p_3$ so that any two of the pieces enable decryption using $k$. This ensures that even if one executive is out sick, decryption can still succeed. To do so the bank generates two random pairs $(k_1,k_1')$ and $(k_2,k_2')$ as in the previous paragraph so that $k_1 \oplus k_1' = k_2 \oplus k_2' = k$.

How should the bank assign pieces so that any two pieces enable decryption using $k$, but no single piece can decrypt?

1 / 1 point

- $p_1 = (k_1,k_2),\quad p_2 = (k_1',k_2), \quad p_3 = (k_2')$
- $p_1 = (k_1,k_2),\quad p_2 = (k_1',k_2'), \quad p_3 = (k_2')$
- $p_1 = (k_1,k_2),\quad p_2 = (k_1'), \quad p_3 = (k_2')$
- $p_1 = (k_1,k_2),\quad p_2 = (k_1,k_2), \quad p_3 = (k_2')$
- $p_1 = (k_1,k_2),\quad p_2 = (k_2,k_2'), \quad p_3 = (k_2')$

Correct: executives 1 and 2 can decrypt using $k_1,k_1'$, executives 1 and 3 can decrypt using $k_2,k_2'$, and executives 2 and 3 can decrypt using $k_2,k_2'$. Moreover, a single executive has no information about $k$.

5. Question 5

Let $M=C=K=\{0,1,2,\ldots,255\}$ and consider the following cipher defined over $(K,M,C)$:

$$E(k,m) = m+k \pmod{256} \qquad;\qquad D(k,c) = c-k \pmod{256}\ .$$

Does this cipher have perfect secrecy?

1 / 1 point

- Yes.
- No, only the One Time Pad has perfect secrecy.
- No, there is a simple attack on this cipher.

Correct: as with the one-time pad, there is exactly one key mapping a given message $m$ to a given ciphertext $c$.

6. Question 6

Let $(E,D)$ be a (one-time) semantically secure cipher where the message and ciphertext space is $\{0,1\}^n$. Which of the following encryption schemes are (one-time) semantically secure?

1 / 1 point

- $E'(\,(k,k'),\ m) = E(k,m) \,\|\, E(k',m)$ (Correct: an attack on $E'$ gives an attack on $E$.)
- $E'(k,m) = \text{reverse}(E(k,m))$ (Correct: an attack on $E'$ gives an attack on $E$.)
- $E'(k,m) = E(k,m) \,\|\, k$
- $E'(k,m) = E(k,m) \,\|\, \text{LSB}(m)$
- $E'(k,m) = 0 \,\|\, E(k,m)$ (i.e. prepend 0 to the ciphertext) (Correct: an attack on $E'$ gives an attack on $E$.)
- $E'(k,m) = E(0^n,m)$

7. Question 7

Suppose you are told that the one time pad encryption of the message "attack at dawn" is 6c73d5240a948c86981bc294814d (the plaintext letters are encoded as 8-bit ASCII and the given ciphertext is written in hex). What would be the one time pad encryption of the message "attack at dusk" under the same OTP key?

1 / 1 point

6c73d5240a948c86981bc2808548 — Correct

8. Question 8

The movie industry wants to protect digital content distributed on DVD's. We develop a variant of a method used to protect Blu-ray disks called AACS.

Suppose there are at most a total of $n$ DVD players in the world (e.g. $n = 2^{32}$). We view these $n$ players as the leaves of a binary tree of height $\log_2 n$. Each node in this binary tree contains an AES key $k_i$. These keys are kept secret from consumers and are fixed for all time. At manufacturing time each DVD player is assigned a serial number $i \in [0, n-1]$. Consider the set of nodes $S_i$ along the path from the root to leaf number $i$ in the binary tree. The manufacturer of the DVD player embeds in player number $i$ the keys associated with the nodes in the set $S_i$.

A DVD movie $m$ is encrypted as $E(k_{\text{root}},k) \,\|\, E(k,m)$ where $k$ is a random AES key called a content-key and $k_{\text{root}}$ is the key associated with the root of the tree. Since all DVD players have the key $k_{\text{root}}$, all players can decrypt the movie $m$. We refer to $E(k_{\text{root}},k)$ as the header and $E(k,m)$ as the body. In what follows the DVD header may contain multiple ciphertexts where each ciphertext is the encryption of the content-key $k$ under some key $k_i$ in the binary tree.

Suppose the keys embedded in DVD player number $r$ are exposed by hackers and published on the Internet. In this problem we show that when the movie industry distributes a new DVD movie, they can encrypt the contents of the DVD using a slightly larger header (containing about $\log_2 n$ keys) so that all DVD players, except for player number $r$, can decrypt the movie. In effect, the movie industry disables player number $r$ without affecting other players.

As shown below, consider a tree with $n=16$ leaves. Suppose the leaf node labeled 25 corresponds to an exposed DVD player key. Check the set of keys below under which to encrypt the key $k$ so that every player other than player 25 can decrypt the DVD. Only four keys are needed.

1 / 1 point

- 11 (Correct: You cannot encrypt $k$ under key 5, but 11's children must be able to decrypt $k$.)
- 29
- 1 (Correct: You cannot encrypt $k$ under the root, but 1's children must be able to decrypt $k$.)
- 5
- 7
- 6 (Correct: You cannot encrypt $k$ under 2, but 6's children must be able to decrypt $k$.)
- 26 (Correct: You cannot encrypt $k$ under any key on the path from the root to node 25. Therefore 26 can only decrypt if you encrypt $k$ under key $k_{26}$.)
- 10

9. Question 9

Continuing with the previous question, if there are $n$ DVD players, what is the number of keys under which the content key $k$ must be encrypted if exactly one DVD player's key needs to be revoked?

1 / 1 point

- $\log_2 n$
- $n-1$
- $2$
- $n/2$
- $\sqrt{n}$

Correct: That's right. The key will need to be encrypted under one key for each node on the path from the root to the revoked leaf. There are $\log_2 n$ nodes on the path.

10. Question 10

Continuing with question 8, suppose the leaf nodes labeled 16, 18, and 25 correspond to exposed DVD player keys. Check the smallest set of keys under which to encrypt the key $k$ so that every player other than players 16, 18, 25 can decrypt the DVD. Only six keys are needed.

0 / 1 point

- 4 (Correct: Yes, this will let players 19-22 decrypt.)
- 6
- 11 (Correct: Yes, this will let players 23, 24 decrypt.)
- 15 (Correct: Yes, this will let player 15 decrypt.)
- 17 (Correct: Yes, this will let player 17 decrypt.)
- 26 (Correct: Yes, this will let player 26 decrypt.)
- 8
- 13
- 14
- 20

You didn't select all the correct answers.
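As an illustration of why Question 7 has a unique answer, here is a short Python sketch of the underlying one-time-pad malleability: the keystream is recovered as the XOR of the ciphertext with the known plaintext and reused on the new message. It assumes nothing beyond the hex ciphertext and the two ASCII plaintexts given in the question.

```python
# With a one-time pad, c = m XOR k, so the keystream is k = c XOR m,
# and "re-encrypting" a new message of the same length is (c XOR m_old) XOR m_new.
c_old = bytes.fromhex("6c73d5240a948c86981bc294814d")
m_old = b"attack at dawn"
m_new = b"attack at dusk"

keystream = bytes(a ^ b for a, b in zip(c_old, m_old))
c_new = bytes(a ^ b for a, b in zip(keystream, m_new))
print(c_new.hex())   # 6c73d5240a948c86981bc2808548
```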
2021-04-18 05:12:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24046647548675537, "perplexity": 7550.942480612805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038468066.58/warc/CC-MAIN-20210418043500-20210418073500-00416.warc.gz"}
https://web2.0calc.com/questions/help-please-thanks_2
1) For what value of the constant a does the system of equations below have infinitely many solutions?

\begin{align*} 3x + 2y &= 8,\\ 6x &= 2a - 7 - 4y \end{align*}

I tried over and over again and don't seem to get the right answer. I got 23/6 (or something like that) the first time and now 37/6...

2) How many numbers between 1 and 2005 are integer multiples of 3 or 4 but not 12? What trick can I use to solve this quickly?

3) Using the letters X and Y, the following two-letter code words can be formed: XX, XY, YY, YX. Using the letters X, Y, and Z, how many different 3-letter code words can be formed? Same here...

Sep 5, 2018

#1

1) Try a = 23/2. With this value, equation 1 multiplied by 2 is exactly the same as equation 2.

2) Try dividing: 2005/3 + 2005/4 - 2005/6. Ignore the fractional parts.

3) If you are allowed to repeat the letters, then you should have: 3^3 = 27

Sep 5, 2018

#2

Yeah! That was what I got for no. 1 and I thought I was wrong... :( THANKS SO MUCH!

Guest Sep 5, 2018
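For question 2, a brute-force count in Python confirms the counting shortcut suggested in reply #1; this is just a check, not a new method.

```python
# Integers in 1..2005 that are multiples of 3 or 4 but not of 12,
# compared with the shortcut 2005//3 + 2005//4 - 2005//6.
brute = sum(1 for n in range(1, 2006)
            if (n % 3 == 0 or n % 4 == 0) and n % 12 != 0)
shortcut = 2005 // 3 + 2005 // 4 - 2005 // 6
print(brute, shortcut)   # both 835
```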
2019-03-26 21:25:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9729592800140381, "perplexity": 697.528508750744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912206016.98/warc/CC-MAIN-20190326200359-20190326222359-00380.warc.gz"}
https://docs.openmc.org/en/v0.12.1/releasenotes/0.11.0.html
# What’s New in 0.11.0

## Summary

This release of OpenMC adds several major new features: depletion, photon transport, and support for CAD geometries through DAGMC. In addition, the core codebase has been rewritten in C++14 (it was previously written in Fortran 2008). This makes compiling the code considerably simpler as no Fortran compiler is needed.

Functional expansion tallies are now supported through several new tally filters that can be arbitrarily combined:

Note that these filters replace the use of expansion scores like scatter-P1. Instead, a normal scatter score should be used along with an openmc.LegendreFilter.

The interface for random sphere packing has been significantly improved. A new openmc.model.pack_spheres() function takes a region and generates a random, non-overlapping configuration of spheres within the region.

## Python API Changes

• All surface classes now have coefficient arguments given as lowercase names.
• The order of arguments in surface classes has been changed so that coefficients are the first arguments (rather than the optional surface ID). This means you can now write:

x = openmc.XPlane(5.0, 'reflective')
zc = openmc.ZCylinder(0., 0., 10.)

• The Mesh class has been renamed openmc.RegularMesh.
• The get_rectangular_prism function has been renamed openmc.model.rectangular_prism().
• The get_hexagonal_prism function has been renamed openmc.model.hexagonal_prism().
• Python bindings to the C/C++ API have been moved from openmc.capi to openmc.lib.

## Contributors

This release contains new contributions from the following people:
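As a rough illustration of the filter-based replacement for expansion scores described above (this is a sketch, not an excerpt from the release notes; argument names should be checked against the 0.11.0 API reference):

```python
# Sketch: a plain 'scatter' score combined with a Legendre expansion filter,
# in place of the old 'scatter-P1' style expansion scores.
import openmc

tally = openmc.Tally(name='scatter-moments')
tally.scores = ['scatter']
tally.filters = [openmc.LegendreFilter(order=1)]
```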
2021-07-23 19:53:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1772405207157135, "perplexity": 5638.911098860419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150000.59/warc/CC-MAIN-20210723175111-20210723205111-00139.warc.gz"}
https://physics.stackexchange.com/questions/258666/molar-specific-heat-definitions-for-gases
# Molar Specific Heat Definitions for Gases In most textbooks, the molar specific heat of a gas is defined for gases at constant volume and constant pressure as follows : 1. $$C_v = \frac{Q}{n\Delta T} = \frac{\Delta U}{n \Delta T}$$ 2. $$C_p = \frac{Q}{n \Delta T}$$ But these definitions can also be related by $$C_p = C_v + R$$, with $$R$$, being the ideal gas constant. But it seems that using definition $$(1)$$ in the first law of thermodynamics leads to contradictions, for example $$\Delta U = Q - W$$ $$\implies C_v\cdot n \cdot\Delta T = (C_v\cdot n \cdot \Delta T) - W$$ $$\implies W = 0 \ \ (\forall\ C_V, n, \Delta T)$$ Which is obviously not true Another seeming contradiction can be derived as follows: $$C_p = C_v + R$$ $$\implies \frac{Q}{n \Delta T} = \frac{Q}{n \Delta T} + R$$ $$\implies R = 0$$ Which again is obviously not true. So it seems one can not just take the definitions of $$C_v$$ and $$C_p$$ at face value and use them in equations, there have to be certain conditions where I can or cannot use them. Textbooks such as Fundamentals of Physics, and University Physics, give very little explanation why the definitions of molar specific heats of gases differ under constant volume and constant pressure, for example why $$C_p \neq \frac{Q}{n \Delta T}$$, is not explained in much detail if at all in either textbook. So my question is why do the definitions of molar specific heats of gases differ under constant volume and constant pressure? And why can I not take the definitions of $$C_v$$ and $$C_p$$ at face value and use them in equations? When you learned this material as a beginning physics student, they taught you that $nC_p\Delta T=Q$, and the focus was typically on a solid or liquid (both of which are nearly incompressible). However, this was only introductory, and was not the correct and precise definition necessary to use in thermodynamics. In thermodynamics, it is recognized that heat capacity is really a physical property of the material, and has nothing to do with any specific process (which in thermo is characterized by W and Q). The new more precise definitions of heat capacities employed in thermodynamics involve the internal energy and the enthalpy of the material (and can apply to any material, including an ideal gas): $$nC_v=\left(\frac{\partial U}{\partial T}\right)_V$$ $$nC_p=\left(\frac{\partial H}{\partial T}\right)_p$$ If the first equation is combined with the first law, then, in heating tests at constant volume, the amount of heat added Q to the system can be used to experimentally measure Cv directly, since the amount of work is zero. If the second equation is combined with the first law, then, in heating tests at constant pressure, the amount of heat added to the system can be used to experimentally measure Cp directly, since, in this case, $W = p\Delta V$. So the subscripts v and p are used to refer to the conditions required to measure these heat capacities directly by determining the heat Q added is such special tests. However, once these measurements have established the values of the two heat capacities, they can be used for all differential changes in state to determine the partial derivatives of U and H with respect to temperature. The expressions are specifically for the cases "gas heated in a constant volume" ($C_V$) and "gas allowed to expand so that the pressure stays constant" ($C_p$). So you can't substitute one expression into the other because the values for $Q$ would be different. 
The $C_V$ case is easiest to understand: you add heat, increase the kinetic energy of the molecules, and therefore increase the temperature. The ideal-gas law $pV=nRT$ prescribes that the pressure must increase, too. If you heat the gas at constant pressure, the gas must expand its volume by an amount $\Delta V=nR\Delta T/p$, thereby doing work $\Delta W=p\Delta V=nR\Delta T$. This extra work explains the relation $C_p-C_V=R$.
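A small numeric illustration of that last point (the monatomic value $C_v = 3R/2$ is assumed purely for concreteness; any $C_v$ gives the same difference):

```python
# Heating n = 1 mol of an ideal gas by dT = 10 K:
# the extra heat needed at constant pressure equals the expansion work n*R*dT.
R = 8.314                  # J/(mol K)
n, dT = 1.0, 10.0          # mol, K

Cv = 1.5 * R               # assumed: monatomic ideal gas (3R/2)
Cp = Cv + R

Q_const_V = n * Cv * dT    # all of it raises the internal energy
Q_const_p = n * Cp * dT    # part of it is spent pushing the surroundings
W_expansion = n * R * dT   # p * dV for an ideal gas at constant p

print(Q_const_p - Q_const_V, W_expansion)   # both about 83.14 J
```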
2022-06-30 04:32:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7921257615089417, "perplexity": 276.7819744270863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103661137.41/warc/CC-MAIN-20220630031950-20220630061950-00254.warc.gz"}
https://www.physicsforums.com/threads/parametric-2nt-deratives.283226/
# Parametric 2nd derivatives

## Homework Statement

Find the slope and concavity of the function at the given point.

x = t^2
y = t^2 + t + 1
(0,0)

t = x^(1/2)

## The Attempt at a Solution

t = 0 when x = 0
x' = 2t
y' = 2t + 1
M = 2t/(2t+1) = 0

For the second derivative, would you take the derivative of 2t/(2t+1) divided by the derivative of 2t+1?

Once I find the second derivative I would plug in 0 for t, and if it was + then it would be concave up, and − would be concave down. Right?

Defennder (Homework Helper):

> M = 2t/(2t+1) = 0

I assume that by M you mean $$\frac{d^2y}{dx^2}$$. It should be y'/x' instead here.

> For the second derivative, would you take the derivative of 2t/(2t+1) divided by the derivative of 2t+1?

Why should it be the derivative of y' = 2t+1? How would you use the chain rule to determine a correct expression for d^2y/dx^2?

> Once I find the second derivative I would plug in 0 for t, and if it was + then it would be concave up, and − would be concave down. Right?

Not quite. You'll want to find the value of t for which x = y = 0. For x = 0, t = 0 so that's correct. For y, setting t = 0 gives y = 1. Setting y = 0 and solving for t gives you the correct t-values.
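For checking the algebra in this thread, here is a short SymPy sketch of the parametric derivative formulas dy/dx = y'(t)/x'(t) and d²y/dx² = (d/dt)(dy/dx) / x'(t); it is independent of the textbook's intended solution.

```python
# SymPy check of the parametric derivatives for x = t^2, y = t^2 + t + 1.
import sympy as sp

t = sp.symbols('t')
x = t**2
y = t**2 + t + 1

dydx = sp.diff(y, t) / sp.diff(x, t)        # slope: y'(t)/x'(t)
d2ydx2 = sp.diff(dydx, t) / sp.diff(x, t)   # derivative of dy/dx with respect to x

print(sp.simplify(dydx))      # (2*t + 1)/(2*t)
print(sp.simplify(d2ydx2))    # -1/(4*t**3)

# Both expressions blow up at t = 0, which is why the choice of t-value
# for the given point has to be re-examined, as the reply points out.
```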
2021-06-12 11:02:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.784950852394104, "perplexity": 1691.3163638153228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487582767.0/warc/CC-MAIN-20210612103920-20210612133920-00021.warc.gz"}
http://www.mathematicalfoodforthought.com/2006/05/really-rooty-real-roots-topic_12.html
## Friday, May 12, 2006

### Really Rooty Real Roots. Topic: Algebra/Calculus. Level: AIME/Olympiad.

Problem: (Problem-Solving Through Problems - 6.5.5(b)) If $a_0, a_1, \ldots, a_n$ are real numbers satisfying $\frac{a_0}{1}+\frac{a_1}{2}+\cdots+\frac{a_n}{n+1} = 0$, show that the equation $a_0+a_1x+\cdots+a_nx^n = 0$ has at least one real root.

Solution: Consider integrating this function over the interval $[0,1]$. Let $f(x) = a_0+a_1x+\cdots+a_nx^n$. We have

$\displaystyle \int_0^1 (a_0+a_1x+\cdots+a_nx^n)dx = \left[\frac{a_0x}{1}+\frac{a_1x^2}{2}+\cdots+\frac{a_nx^{n+1}}{n+1}\right]^1_0 = \frac{a_0}{1}+\frac{a_1}{2}+\cdots+\frac{a_n}{n+1} = 0$.

If $f$ had no real roots on the interval $[0,1]$ it would be strictly positive or negative. Then the integral of $f$ over $[0,1]$ would also be strictly positive or negative, respectively. Hence $f$ must take on both positive and negative values. But since $f$ is continuous, we know that there must be a $c \in [0,1]$ such that $f(c) = 0$ and therefore $f$ has a real root. QED.

--------------------

Comment: The condition involving the fractions should give away integration; from there, it's not hard to see that we want the interval $[0,1]$, and this gives us the result immediately.

--------------------

Practice Problem: (Problem-Solving Through Problems - 6.5.5(a)) Show that $5x^4-4x+1$ has a root between $0$ and $1$.
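A quick numerical check of the practice problem, using SymPy (this is only a verification that the same trick applies; the intended argument is the integral one above):

```python
# The integral of 5x^4 - 4x + 1 over [0, 1] is 0, so the polynomial must
# vanish somewhere in (0, 1); nsolve locates such a root numerically.
import sympy as sp

x = sp.symbols('x')
p = 5*x**4 - 4*x + 1

print(sp.integrate(p, (x, 0, 1)))   # 0
print(sp.nsolve(p, x, 0.3))         # a root near x = 0.255, inside (0, 1)
```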
2019-11-22 07:43:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9180742502212524, "perplexity": 225.57727742153256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671245.92/warc/CC-MAIN-20191122065327-20191122093327-00456.warc.gz"}
https://rip94550.wordpress.com/2008/02/10/schurs-lemma-any-matrix-is-unitarily-similar-to-an-upper-triangular/
## schur’s lemma: any matrix is unitarily similar to an upper triangular

i bumped into someone last night who asked me about schur’s lemma, something about bringing a matrix to triangular form. i’ve spent so much time looking at diagonalizing things that i didn’t appreciate schur’s lemma, and it deserves to be appreciated.

it says that we can bring any (complex) matrix A to upper triangular form using a unitary similarity transform. in this form, the restriction to “unitary” is a bonus: a perfectly useful but weaker statement is that any matrix is similar to an upper triangular matrix.

now, we’re usually interested in diagonalizing a matrix. when can we go that far? easy: that upper triangular matrix is in fact diagonal iff the original matrix A is normal; that is, iff A commutes with its conjugate transpose: $A \ A^{\dagger } = A^{\dagger }\ A.$

so, any normal matrix can be diagonalized; furthermore, the similarity transform is unitary. the combined result deserves to be rephrased. any matrix A can be diagonalized using a unitary similarity transform if and only if A is normal. now the restriction to “unitary” is not a bonus. strang’s “linear algebra” is the perfect reference for all that.

anyway, the next time we see a theorem that says, “if A is normal…”, we could read it as “if A can be diagonalized by a unitary similarity transformation….”

OTOH, if we look in stewart’s “intro matrix computations”, we find a theorem saying that any non-defective matrix can be brought to diagonal form by a similarity transform P, but P need not be unitary; an nxn matrix is called non-defective if it has n linearly independent eigenvectors.

that’s a way of saying what we already know: if A is nxn and if we have enough distinct eigenvectors of A to make an nxn matrix P from them, then P is a similarity transform which will bring A to diagonal form. this didn’t say anything about the similarity transform being unitary.

i have got to ask: can i find a non-defective matrix which is not normal? are there matrices which can be diagonalized even though it cannot be done by a unitary similarity transform?

here we go. here’s an upper triangular matrix:

$A = \left(\begin{array}{cc} 1&1\\ 0.&2\end{array}\right)$

since this matrix is already upper triangular, we might expect that it cannot be diagonalized by a unitary matrix: what it can be brought to, is precisely itself. schur’s lemma is trivial when applied to this matrix.

it’s easy enough to let mathematica find the schur decomposition. i do indeed get the identity matrix for the unitary similarity transform, and i get the original A as the upper triangular form. that’s good.

we expect that A is not normal. since it is real, the conjugate transpose is just the transpose; we compute $A \ A^T$ and $A^T \ A$

$A \ A^T = \left(\begin{array}{cc} 2&2\\ 2&4\end{array}\right)$

$A^T \ A = \left(\begin{array}{cc} 1&1\\ 1&5\end{array}\right)$

they are not equal, so A is not normal, and therefore it cannot be diagonalized by a unitary similarity transform.

so if we read a theorem that says, “if A is normal…”, we should wonder if it’s true under the weaker hypothesis “if A can be diagonalized…. (i.e. by a non-unitary similarity transform).”

let’s find the eigenstructure to see if it can be diagonalized at all.
we get the following eigenvector matrix P…

$P = \left(\begin{array}{cc} 1&1\\ 1&0.\end{array}\right)$

that is, A can be diagonalized by P; $B = P^{-1}\ A \ P$ is diagonal:

$B = \left(\begin{array}{cc} 2&0.\\ 0.&1\end{array}\right)$

but P is not orthogonal:

$P^T \ P= \left(\begin{array}{cc} 2&1\\ 1&1\end{array}\right)$

having computed that P is not orthogonal, let’s actually look at P. the second eigenvector (= the second column), (1,0), is a basis for the x-axis; the eigenspace is the x-axis. the other eigenvector (1,1) is a basis for the line y = x: it’s at $45^{\circ}$ to the x-axis. the second eigenspace just is not orthogonal to the x-axis. nothing we do is going to change that.

by finding two linearly independent eigenvectors of A, we have shown that it can be diagonalized; but the similarity transform P which does it is not unitary.

for another view, let’s compute the SVD of A: find u, v, w such that $A = u \ w \ v^T$, where u and v are unitary. we get

$u = \left(\begin{array}{cc} \frac{-1+\sqrt{5}}{\sqrt{4+\left(-1+\sqrt{5}\right)^2}}&\frac{-1-\sqrt{5}}{\sqrt{4+\left(-1-\sqrt{5}\right)^2}}\\ \frac{2}{\sqrt{4+\left(-1+\sqrt{5}\right)^2}}&\frac{2}{\sqrt{4+\left(-1-\sqrt{5}\right)^2}}\end{array}\right)$

$w = \left(\begin{array}{cc} \sqrt{3+\sqrt{5}}&0.\\ 0.&\sqrt{3-\sqrt{5}}\end{array}\right)$

$v = \left(\begin{array}{cc} \frac{-2+\sqrt{5}}{\sqrt{1+\left(-2+\sqrt{5}\right)^2}}&\frac{-2-\sqrt{5}}{\sqrt{1+\left(-2-\sqrt{5}\right)^2}}\\ \frac{1}{\sqrt{1+\left(-2+\sqrt{5}\right)^2}}&\frac{1}{\sqrt{1+\left(-2-\sqrt{5}\right)^2}}\end{array}\right)$

if we compute $u \ w \ v^T$ we will see that it is A, as it should be. we can also confirm that u and v are both unitary (in this case, orthogonal), but u and v are not the same. rather than leave you to confirm that those expressions don’t simplify to the same thing, i evaluate u and v:

$u = \left(\begin{array}{cc} 0.525731&-0.850651\\ 0.850651&0.525731\end{array}\right)$

$v = \left(\begin{array}{cc} 0.229753&-0.973249\\ 0.973249&0.229753\end{array}\right)$

that u and v are different confirms that A cannot be diagonalized by a unitary similarity transform. it can indeed be diagonalized, to w…

$\left(\begin{array}{cc} \sqrt{3+\sqrt{5}}&0.\\ 0.&\sqrt{3-\sqrt{5}}\end{array}\right)$

by two unitary (in this case, orthogonal) matrices u ≠ v, but not by one such. i suppose it is worth noting that the diagonal elements of w are not the eigenvalues. that requires that u and v be the same, so that the SVD be equivalent to the eigendecomposition. (so we have an example of a diagonal matrix from the similarity transform which is different from the diagonal matrix from the SVD.)

it is also worth confessing i didn’t start with the upper triangular matrix A. i started with B and P, B being diagonal with distinct eigenvalues and P being invertible but not orthogonal. i computed A as $A = P \ B \ P^{-1}$ and then took it as my starting point.

### 4 Responses to “schur’s lemma: any matrix is unitarily similar to an upper triangular”

1. The style of writing is quite familiar to me. Have you written guest posts for other bloggers?

2. rip Says:

Nope. If I’ve picked up someone else’s style, I have no idea who or from where.

3. Vish Says:

can u provide a general proof for the above as well?
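For readers without Mathematica, the computations in this post can be repeated with NumPy/SciPy; the sketch below is our own translation, not the original notebook, and the printed values should match the ones quoted above up to sign and ordering conventions.

```python
# NumPy/SciPy translation of the checks done above: Schur form, normality
# test, eigendecomposition, and SVD of A = [[1, 1], [0, 2]].
import numpy as np
from scipy.linalg import schur, svd

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

T, Z = schur(A)                         # A = Z T Z^T with T upper triangular
print(T)                                # upper triangular; for this A it is A itself (up to signs)

print(np.allclose(A @ A.T, A.T @ A))    # False -> A is not normal

evals, P = np.linalg.eig(A)
print(np.linalg.inv(P) @ A @ P)         # diagonal with the eigenvalues 1 and 2

U, s, Vt = svd(A)
print(s)                                # [~2.288, ~0.874] = sqrt(3 +/- sqrt(5))
```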
2017-07-26 20:43:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.835922360420227, "perplexity": 342.1535197556341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426629.63/warc/CC-MAIN-20170726202050-20170726222050-00028.warc.gz"}
http://mathhelpforum.com/calculus/56116-dealing-infinite-limits.html
# Math Help - Dealing with infinite limits. 1. ## Dealing with infinite limits. 4 limit problems. I've done all the work. I just wanted to make sure I'm understanding this correctly, and hoping someone will double check that I'm doing the work right. Thanks for your time! Find the limit of the sequences: (1) = lim 1 + lim = 1 + 0 = 1 (2) = lim . I divided everything by n to get = 2n^(-1/2) = 2/√n which goes to 0, and therefore the limit = 0. [I already got #'s 3 and 4 checked. ] (5) = lim = lim which goes to zero, and therefore the limit = 0. (6) . I divided everything by 5^n: . Since goes to zero then the expression becomes = 0. So the limit = 0. 2. Yep, good work, all of them are correct. I checked each one.
2015-05-25 00:50:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9087070822715759, "perplexity": 653.1830717976823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928102.74/warc/CC-MAIN-20150521113208-00129-ip-10-180-206-219.ec2.internal.warc.gz"}
https://kilthub.cmu.edu/articles/journal_contribution/Lehman_Matrices/6706511/1
# Lehman Matrices

journal contribution, posted on 01.02.1968, 00:00 by Gerard Cornuejols, Bertrand Guenin, Levent Tunçel

A pair of square 0,1 matrices A, B such that A B^T = E + kI (where E is the n × n matrix of all 1s and k is a positive integer) are called Lehman matrices. These matrices figure prominently in Lehman's seminal theorem on minimally nonideal matrices. There are two choices of k for which this matrix equation is known to have infinite families of solutions. When n = k^2 + k + 1 and A = B, we get point-line incidence matrices of finite projective planes, which have been widely studied in the literature. The other case occurs when k = 1 and n is arbitrary, but very little is known in this case. This paper studies this class of Lehman matrices and classifies them according to their similarity to circulant matrices.
2022-05-27 02:06:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8125648498535156, "perplexity": 863.4674821277573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662631064.64/warc/CC-MAIN-20220527015812-20220527045812-00383.warc.gz"}
http://sioc-journal.cn/Jwk_hxxb/EN/Y1999/V57/I7/740
Acta Chimica Sinica ›› 1999, Vol. 57 ›› Issue (7): 740-745.

Original Articles

### Theoretical study of the reaction HNCO + OH → NH2 + CO2

Shi Tujin; Li Zonghe; Liu Ruozhuang

1. Beijing Normal Univ., Dept. of Chem., Beijing (100875)

• Published: 1999-07-15

The mechanism of the reaction HNCO + OH → NH2 + CO2 has been studied using the ab initio MO method. The geometries of reactants, transition states, intermediates and products have been optimized with the UHF/6-31G basis set and verified by frequency analysis. Furthermore, the correlation energies are corrected by Møller–Plesset perturbation theory up to 4th order. The zero-point energies are also corrected. The results show that the reaction is a multi-step complex one. Along the reaction path there are three transition states, two internal rotational barriers and four intermediates. The step IM3 → TS2 is the rate-controlling step. Moreover, the calculated activation energy for the rate-controlling step of the reaction channel studied in this paper, i.e. the E_a of the channel HNCO + OH → NH2 + CO2 (202.388 kJ/mol), is much greater than that of the other reaction channel HNCO + OH → H2O + NCO (which equals 69.038 kJ/mol). Hence, the latter reaction channel is the main product channel. This is in good agreement with the experimental results.
2022-12-08 17:04:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39492669701576233, "perplexity": 4053.3255171151936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00324.warc.gz"}
https://www.beatthegmat.com/if-b-is-greater-than-1-which-of-the-following-must-t301068.html
# If b is greater than 1, which of the following must be negative?

tagged by: Gmat_mission

If b is greater than 1, which of the following must be negative?

A. (2 - b)(b - 1)
B. (b - 1)/3b
C. (1 - b)^2
D. (2 - b)/(1 - b)
E. (1 - b^2)/b

OA=E. How can I discard the rest of the options? Experts, can you give me some help?

### GMAT/MBA Expert (Rich)

Hi Gmat_mission,

We're told that B is greater than 1. We're asked which of the following MUST be negative (which really means "which of the following is ALWAYS NEGATIVE no matter how many different examples we can come up with"). This question can be solved by TESTing VALUES.

IF... B=2, then
Answer A = 0/1 = 0
Answer C = (-1)^2 = 1
Answer D = 0/-1 = 0

None of the first 4 answers is negative - and there's only one answer left...

GMAT assassins aren't born, they're made,
Rich

### GMAT/MBA Expert

A. (2 - b)(b - 1): (+ or -)(+) --> answer can be + or -
B. (b - 1)/3b: (+)/(+) --> answer is +
C. (1 - b)^2: square is never negative
D. (2 - b)/(1 - b): (+ or -)/(-) --> answer can be + or -
E. (1 - b^2)/b: (-)/(+) --> answer is always negative as b^2 > 1!

### GMAT/MBA Expert (Jeffrey Miller, Head of GMAT Instruction)

Since b is greater than 1, (1 - b^2) will always be negative, and thus (1 - b^2)/b will always be negative.
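Outside the thread itself, a quick brute-force check in Python confirms the same conclusion (the expressions are copied from the question, with "(b - 1)/3b" read as (b - 1)/(3b) as in the answers above; the sampling range is an arbitrary choice):

```python
import random

# candidate expressions from the question, as functions of b
options = {
    "A": lambda b: (2 - b) * (b - 1),
    "B": lambda b: (b - 1) / (3 * b),
    "C": lambda b: (1 - b) ** 2,
    "D": lambda b: (2 - b) / (1 - b),
    "E": lambda b: (1 - b ** 2) / b,
}

# many random values with b > 1
samples = [1 + random.uniform(1e-6, 100) for _ in range(10000)]
for name, expr in options.items():
    verdict = all(expr(b) < 0 for b in samples)
    print(name, "always negative" if verdict else "can be non-negative")
# only E is reported as always negative
```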
2018-09-22 17:46:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2760428190231323, "perplexity": 12944.48064221408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158609.70/warc/CC-MAIN-20180922162437-20180922182837-00251.warc.gz"}
https://mathoverflow.net/questions/259162/super-extensions-of-the-poincar%c3%a9-lie-algebra
# Super-extensions of the Poincaré Lie algebra For $(\mathfrak{g},[-,-])$ an ordinary Lie algebra let me say that a super-extension of it (maybe not the best terminology) is a super-Lie algebra $(\mathfrak{s}, [-,-]_{\mathfrak{s}})$ whose bosonic component is exactly $(\mathfrak{g}, [-,-])$, hence whose underlying super vector space is $\mathfrak{s} \simeq \underset{= \mathfrak{s}_{even}}{\underbrace{\mathfrak{g}}} \oplus \underset{= \mathfrak{s}_{odd}}{\underbrace{S}}$, with super Lie bracket $[-,-]_{\mathfrak{s}}$ restricting to $[-,-]$ if both arguments are in $\mathfrak{g}$. From the even-even-odd component of the super-Jacobi identity it is easy to see that the even-odd component of $[-,-]_\mathfrak{s}$ is necessarily a Lie action $\rho$ of $\mathfrak{g}$ on $S$. Similarly from the even-odd-odd component of the super-Jacobi identity it is easy to see that the odd-odd-component of $[-,-]_\mathfrak{s}$ is necessarily a symmetric bilinear pairing $(-,-) : S \otimes S \to \mathfrak{g}$, which is equivariant with respect to this action. What seems more subtle is to analyze the constraints imposed on this data by the odd-odd-odd component of the super-Jacobi identity. By taking all three arguments equal, one finds that it is necessary that $\rho_{(\psi,\psi)}(\psi) = 0$ for all $\psi \in S$. Also it is easy to see that for satisfying the super-Jacobi identity, it is sufficient that $\rho_{(\psi,\phi)} = 0$ is the zero-action, for all $\psi,\phi \in S$. But is this necessary? What would be the general classification of super-extensions, in the above sense? Specifically for the case that $\mathfrak{g} = \mathfrak{iso}(\mathbb{R}^{d-1,1}) \simeq \mathbb{R}^{d-1,1} \rtimes \mathfrak{so}(d-1,1)$ is the Poincaré Lie algebra, in some dimension $d$. Then it is easy to see that sufficient data for super-extensions $\mathfrak{s}$, in the above sense, are given by real $\mathfrak{so}(d-1,1)$-representations $S$ equipped with an $\mathfrak{so}$-equivariant bilinear symmetric pairing of the special form $(-,-) : S \otimes_{\mathbb{R}} S \to \mathbb{R}^{d-1,1} \hookrightarrow \mathfrak{iso}(\mathbb{R}^{d-1,1})$. Such are provided by real spin representations, and the result is the usual super Poincaré Lie algebras ("supersymmetry"). Now at this point, the existing literature points to the Haag–Łopuszański–Sohnius theorem. This states further conditions on a super-extension (e.g. that $P_a P^a$ remains a Casimir, and more) and then concludes that these are the only super-extensions satisfying this. Of course these extra conditions are well-motivated from expected behaviour of scattering matrices in field theories. But if we disregarded this and consider the purely mathematical problem of classifying all super-extensions -- in the above sense -- of the Poincaré Lie algebras: could there be more?
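For concreteness, and using the standard sign convention in which the graded Jacobi identity reads $(-1)^{|x||z|}[[x,y],z] + \text{cyclic} = 0$ (an assumption about conventions, not something fixed in the question), for three odd elements all prefactors equal $-1$, so the odd-odd-odd component becomes the plain cyclic condition
$$\rho_{(\psi,\phi)}(\chi) + \rho_{(\phi,\chi)}(\psi) + \rho_{(\chi,\psi)}(\phi) = 0 \quad\text{for all } \psi,\phi,\chi \in S,$$
and setting $\psi=\phi=\chi$ recovers the necessary condition $\rho_{(\psi,\psi)}(\psi)=0$ stated above.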
2022-01-21 06:13:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870516419410706, "perplexity": 302.76874353739555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302723.60/warc/CC-MAIN-20220121040956-20220121070956-00355.warc.gz"}
http://www.neverendingbooks.org/version-pi
# version pi

Now that versions 2 and 3 of my abandoned book project noncommutative~geometry@n are being referenced (as suggested) as "forgotten book" (see for example Michel's latest paper), it is perhaps time to consider writing version $\pi$. I haven't made up my mind what to include in this version, so if you had a go at these versions (no longer available) …

This blog has been flooded with link-spammers recently, so I removed the automatic posting of comments. I use the strategy proposed by Angsuman to combat them. This sometimes means that I overlook a comment (this morning I discovered a lost comment while cleaning up the spam-comments, sorry!), but it is the only way to keep this blog poker-casino-sex-etc free. It goes without saying that any relevant comment (positive or negative) will be approved as soon as I spot it.

At the moment I haven't the energy to start the writing phase yet, but I am slowly preparing things:

• Emptied the big antique table upstairs to have plenty of room to put things.
• Got myself a laser printer and put it into our home network using AirportExpress, which allows one to turn any USB printer into a network printer.
• … does not mean that I will submit it there (in fact, I promised at least one series editor to send him a new version first), but these days I cannot bring myself to use AMS style files.
• Accepted an invitation to give a master course on noncommutative geometry in Granada in 2005 which, combined with the master class here in Antwerp next semester, may just be enough motivation to rewrite the notes.
• Bought all four volumes of the reprinted Winning Ways for your Mathematical Plays as inspiration for fancy terminology and notation (yes, it will be version $\pi$ and _not_ version $e$).
• etc.
2022-12-05 08:20:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39559414982795715, "perplexity": 5982.5905356219055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00792.warc.gz"}
http://www.njohnston.ca/publications/schur-norms-hadamard-matrices/
## Real Schur norms and Hadamard matrices Abstract: We present a preliminary study of Schur norms $\|M\|_{\textup{S}}=\max\{ \|M\circ C\|: \|C\|=1\}$, where M is a matrix whose entries are $\pm1$, and $\circ$ denotes the entrywise (i.e., Schur or Hadamard) product of the matrices. We show that, if such a matrix M is $n\times n$, then its Schur norm is bounded by $\sqrt{n}$, and equality holds if and only if it is a Hadamard matrix. We develop a numerically efficient method of computing Schur norms, and as an application of our results we present several almost Hadamard matrices that are better than were previously known. Authors: • John Holbrook • Nathaniel Johnston • Jean-Pierre Schoch Cite as: • J. Holbrook, N. Johnston, and J.-P. Schoch. Real Schur norms and Hadamard matrices. E-print: arXiv:2206.02863 [math.CO], 2022. Supplementary material: • EquivalenceClasses.zip – MATLAB code for finding all equivalence classes of (+1,-1) matrices of a given size, as well as a MATLAB file containing representatives of all equivalence classes of size up to 7×7 • EquivClasses.txt – A summary of all equivalence classes of (+1,-1) matrices of size 6×6 or less. For the equivalence classes in the 7×7 case, load the MATLAB file above instead. • SchurNorm.zip – MATLAB code for computing the Schur norm of a matrix, and for finding the largest Schur norm of a (circulant or not) (+1,-1) matrix of a given size. • largeschurnorm.jl – Julia code by Jean-Pierre Schoch for computing the Schur norm of a matrix, and for finding the largest Schur norm of a (circulant or not) (+1,-1) matrix of a given size. • circulatschurnorms.jl – Julia code by Jean-Pierre Schoch for computing the largest Schur norm of a circulant (+1,-1) matrix of a given size. • MaximalSchurNorms.txt – A text file containing the circulant and non-circulant (+1,-1) matrices with largest Schur norm that we have been able to find, for sizes up to 24×24. The circulants are all known to be optimal, and the non-circulants are known to be optimal up to 8×8. • OptimalOrthogonalL1Norm.txt – A text file containing an orthogonal matrix with largest entrywise 1-norm that we have been able to find, for sizes from 3×3 to 24×24. The matrices up to size 8×8 are known to be optimal.
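As a rough numerical illustration of the bound stated in the abstract (this is not the authors' MATLAB/Julia code listed above; the matrix size, the random sampling, and the particular Hadamard matrix are arbitrary choices), one can check in Python that $\|M\circ C\|\le\sqrt{n}\,\|C\|$ for a random $\pm 1$ matrix, and that a Hadamard matrix attains the bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def opnorm(A):
    return np.linalg.norm(A, 2)   # spectral (operator) norm

# random +/-1 matrix: the ratio ||M o C|| / ||C|| should never exceed sqrt(n)
M = rng.choice([-1, 1], size=(n, n))
ratios = []
for _ in range(2000):
    C = rng.standard_normal((n, n))
    ratios.append(opnorm(M * C) / opnorm(C))   # "*" is the entrywise (Schur) product
print("largest ratio found:", max(ratios), "  sqrt(n) =", np.sqrt(n))

# a 4x4 Hadamard matrix attains the bound: take C = H / sqrt(n), so ||C|| = 1
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])
C = H / np.sqrt(n)
print("Hadamard witness ||H o C|| =", opnorm(H * C), " with ||C|| =", opnorm(C))
# prints 2.0 = sqrt(4) with ||C|| = 1
```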
2022-12-10 01:46:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9175125956535339, "perplexity": 871.462394343181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711637.64/warc/CC-MAIN-20221210005738-20221210035738-00070.warc.gz"}
http://mathematica.stackexchange.com/questions/33775/is-there-something-like-maxprocessorused
# Is there something like MaxProcessorUsed?

For standard procedures it is easy to test whether one implementation is better or worse than another by using Timing etc. That does not help us when creating Dynamic interfaces/visualisations. Sometimes it is obvious that one way is more laggy, but not always, and it is not a very precise method either.

So my question is: is there something like MaxProcessorUsed, analogous to MaxMemoryUsed, so that I could start it, play with the notebook, end it and get the result in terms of %, for example? I have not found anything, so I'm looking forward to seeing your ideas. :) This seems to be a useful tool for GUI creators. Moreover, I've failed to find anything that gives information about the current processor/core state via Mathematica functions.

Also, I'm aware that performance is not so easy to measure. We have to take into consideration GPU usage, the existence of multi-core vs. single-core vs. multi-threaded execution, etc. But let's focus on the basic case for now.

- The normal way is the OSX system monitor. – halirutan Oct 10 '13 at 12:09
- @halirutan I thought so :/ thanks. – Kuba Oct 10 '13 at 12:10
2014-04-19 12:18:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24401992559432983, "perplexity": 955.8488881962596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
https://zbmath.org/?q=0917.49002
# zbMATH — the first resource for mathematics

On Bellman spheres for linear controlled objects of second order. (English) Zbl 0917.49002

Consider the controlled dynamic object defined by $\dot{x}= Ax+Bu,\quad x \in \mathbb{R}^{n},\quad u \in U \subset \mathbb{R}^{m},\tag{*}$ where the constant matrices define linear mappings $$A: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$$, $$B: \mathbb{R}^{m} \rightarrow \mathbb{R}^{n}$$, and the set of admissible controls $$U$$ is compact, convex and contains the origin in its interior. The measurable function $$u(t)$$, $$t \in [t_{0}, t_{1}]$$, is an admissible control if $$u(t) \in U$$ for each $$t \in [t_{0}, t_{1}]$$. The time-optimal problem is to find an admissible control function $$u(t)$$ that brings the system (*) from any initial state $$x_{0}$$ to the origin in a minimum time $$t^{*} < \infty$$. Using the well-known approaches to the solution of optimal control problems (Bellman, Pontryagin), the second-order linear object controlled by a non-standard two-dimensional control function has been investigated. Some examples complete the results obtained.

Reviewer: W. Hejmo (Kraków)

##### MSC:

49J15 Existence theories for optimal control problems involving ordinary differential equations
49K15 Optimality conditions for problems involving ordinary differential equations
49N05 Linear optimal control problems

##### Keywords:

time-optimal problem; maximum principle; Bellman's sphere
2021-10-23 10:59:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5682488679885864, "perplexity": 764.7370717059509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585671.36/warc/CC-MAIN-20211023095849-20211023125849-00256.warc.gz"}
https://math.stackexchange.com/questions/3920925/can-you-solve-for-x-for-the-equation-252-log-10-x-x
# Can you solve for x for the equation $25+2^{\log_{10} x}=x$? Solve for $$x$$ : $$25+2^{\log_{10} x}=x$$ My work Well, I could not figure out an algebraic solution to this problem. $$25+2^{\log_{10} x}=x \implies 5^2+x^{\log_{10} 2}=x$$ $$\implies x^{\log_{10}2}-x-25=0$$ which does not seem to be solved further. I have solved this by using the graphical method by plotting both sides of this equation. And the answer comes near to $$27.7$$. I have also verified it by using the desmos graph calculator according to which the answer is $$27.718$$. How can I solve this question by the algebraic method? • What is the base of your logarithm? If it is $e$, Alpha gets 37.2824 Nov 24 '20 at 14:33 • These types of expressions generally don't have a "nice" algebraic solution. When both an exponential and a polynomial (such as $x$, $x^2$ and so on) appears in an equation, you usually don't have a chance to solve it algebraically. Nov 24 '20 at 14:34 • the base is $10$. $37.2$ comes when you take the base $e$ Nov 24 '20 at 14:34 • $$2^{\log_yx}=2^{\frac{\log_2x}{\log_2y}}=\sqrt[\log_2y]{2^{\log_2x}}=\sqrt[\log_2y]{x}$$ Note that the lack of a rational exponent for $x$ means that you are limited to numerical methods for solutions. Nov 24 '20 at 14:39 • May be this manipulation help: $10^{log_{10} x}-2^{log_{10} x}=25$ $2^{log_{10} x}\big(5^{log_{10} x }-1\big)=25$ Nov 24 '20 at 17:38 ## 2 Answers Consider that you look for the zero of function $$f(x)=25+2^{\log_{10}( x)}-x$$and use inspection. You have $$f(10)=17$$ and $$f(100)=-71$$. Compute the equation of the straight line going through the two points. It is $$y=\frac{241}{9}-\frac{44 x}{45} \implies x_0=\frac{1205}{44}\approx 27.3864$$ and $$f\left(\frac{1205}{44}\right)\approx 0.322212$$ We are so close to the solution that any iterative method would converge very fast. Below are some numbers with a ridiculous number of figures starting with $$x_0=\frac{1205}{44}$$ and using Newton method $$\left( \begin{array}{cc} n & x_n \\ 0 & \color{red}{27.}386363636363636364 \\ 1 & \color{red}{27.7184}63076353301887 \\ 2 & \color{red}{27.71842019257}5559688 \\ 3 & \color{red}{27.718420192574854316} \end{array} \right)$$ Solve for x : \begin{align} 25+2^{\log_{10} x}=x \tag{1}\label{1} \end{align} Note that \eqref{1} is equivalent to \begin{align} x^{\log_{10}2}-x+25&=0 \tag{2}\label{2} \\ \text{or }\quad x^a-x+b&=0 \tag{3}\label{3} \end{align} with non-rational $$a$$. It is known that such equations don't have an algebraic solution and can be solved only by means of numerical methods. For example, we can use Halley's method to iteratively find the approximation of the root as \begin{align} x_{n+1}&=F(x_n) ,\\ F(x)&=x-\frac{2\,f(x)\,f'(x)}{2f'(x)^2-f(x)\,f''(x)} ,\\ f(x)&=x^{\log_{10}(2)}-x+25 ,\\ f'(x)&=\log_{10}(2)\cdot x^{\log_{10}(2)-1}-1 ,\\ f''(x)&=\log_{10}(2)\log_{10}(\tfrac15)\cdot x^{\log_{10}(2)-2} . 
\end{align} For example, starting with $$x_0=1$$, we get \begin{align} x_1&=6.60306336935\\ x_2&=26.5079884286\\ x_3&=27.7184046785\\ x_4&=27.7184201926\\ x_5&=27.7184201926\\ \end{align} Edit The rate of convergence to the root is cubic, compare for example to the Newton's method: starting with the same $$x_0$$, the Halley's approximations would be \begin{align} x_0&=\color{blue}{ 27}.386363636363636363636 \\ x_1&=\color{blue}{ 27.7184}19892956254689994 \\ x_2&=\color{blue}{ 27.718420192574854316455} \end{align} Corresponding python code: import decimal decimal.getcontext().prec = 23 lg2 = decimal.Decimal(2).log10() def f(x): return 25+x**lg2-x def df(x): return lg2*x**(lg2-1)-1 def ddf(x): return lg2*(lg2-1)*x**(lg2-2) def F(x): fx=f(x) dfx=df(x) ddfx=ddf(x) return x-2*fx*dfx/(2*dfx**2-fx*ddfx) x=decimal.getcontext().divide(1205,44); print(x) x=F(x); print(x) x=F(x); print(x) x=F(x); print(x) # 27.386363636363636363636 # 27.718419892956254689994 # 27.718420192574854316455 # 27.718420192574854316455 • Normally, starting with $x_0=\frac{1205}{44}$, the first iterate of Halley mathod should be $27.7184198929563$ Nov 25 '20 at 14:21 • @Claude Leibovici: Thanks, that must be a copy/paste typo, fixed. Nov 25 '20 at 14:54 • You know what ? I am happy ! for seeing used the real Halley formulation. Cheers :-) Nov 25 '20 at 14:57 • @Claude Leibovici: Thanks, you're very welcome. Nov 25 '20 at 15:02
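For completeness, here is a minimal plain-float Newton iteration in Python in the spirit of the first answer above (same function $f$ and the same starting value $1205/44$; it is only a sketch and lacks the Decimal precision used in the Halley code):

```python
import math

lg2 = math.log10(2)

def f(x):
    return 25 + x**lg2 - x

def df(x):
    return lg2 * x**(lg2 - 1) - 1

x = 1205 / 44          # the secant-line starting value used in the first answer
for _ in range(5):
    x -= f(x) / df(x)  # Newton step
    print(x)
# converges to ~27.718420192574854
```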
2021-10-21 17:11:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9334713220596313, "perplexity": 879.492730958397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00027.warc.gz"}
http://viam.science.tsu.ge/ticmi/announcements/announcement/2003
Main Page TICMI

## MATHEMATICS AND INFORMATICS

### ANNOUNCEMENTS FOR 2003

CALL FOR PAPERS

On April 22-24, 2003 the Enlarged Sessions of the Seminar of I. Vekua Institute of Applied Mathematics of I. Javakhishvili Tbilisi State University will be held under the support of ISMP at UNESCO. The sections of the seminar are:

Partial differential equations (directed by Prof. George Jaiani, scientific secretary – Prof. Temur Jangveladze)

Mechanics of deformable solids (directed by Prof. Mikheil Basheleishvili, scientific secretary – Prof. Merab Svanadze).

At the Seminar 20-minute reports will be delivered. Those wishing to participate are asked to make contact by e-mail: jaiani@viam.sci.tsu.ge

Deadline for applications: March 1, 2003. Registration fee is $200. The publication of proceedings of the reports of the seminar is planned. The diskette of the report, not exceeding 4 printed pages, in English, using TEX, LATEX, AMSTEX or AMSLATEX, together with a hardcopy, should be submitted to the organizers of the Seminar during the seminar days.

The work of the seminar will start on April 22, 2003, at 10.00 at I. Vekua Institute of Applied Mathematics of I. Javakhishvili Tbilisi State University (2, University st.)

Organizing Committee

#### & & &

#### Advanced Course on Function Spaces and Applications 2

Date: 13 - 20 September, 2003
Location: TICMI (Tbilisi)

Vakhtang Kokilashvili (A. Razmadze Mathematical Institute of the Georgian Academy of Sciences, Tbilisi, Georgia)

INTEGRAL OPERATORS IN BANACH FUNCTION SPACES WITH VARIABLE EXPONENT

Summary: The Banach function spaces with variable exponent and the related Sobolev-type spaces proved to be an appropriate tool to study models with non-standard local growth (in elasticity theory, physics, fluid mechanics, differential equations). These applications stimulate rapid progress in the theory of the mentioned spaces. Although the spaces $L^{p(\cdot)}$ possess some undesirable properties (functions from these spaces are not $p(x)$-mean continuous, the space $L^{p(\cdot)}(\Omega)$ is not translation invariant, convolution operators in general do not behave well, and so on), in our lecture we plan to give the solutions of the boundedness and compactness problems in weighted Banach function spaces for classical integral operators. In particular, we will present the boundedness criteria for the Hilbert transform, Cauchy singular integrals and potentials in weighted Lebesgue spaces with power weights. Some applications to singular integral equations and boundary value problems for analytic functions within the framework of the spaces with variable exponent will be treated.

Alois Kufner (Mathematical Institute, Academy of Sciences, Czech Republic)

HARDY INEQUALITY AND ITS MODIFICATIONS

Summary: As a continuation of the lecture series presented at TICMI 1999, this time the series will deal with several problems connected with the Hardy inequality and its modifications, in particular with

• higher order inequalities for well-determined and overdetermined classes of functions,
• fractional order Hardy inequalities,
• weighted inequalities on cones of functions,
• the limit case of Hardy's inequality (i.e., the Carleman-Knopp inequality),
• the critical exponent for compact weighted imbeddings.

These results will be used to derive some (basic) properties of the spectra of some (in general nonlinear) degenerate and singular differential operators.

Coordinator: George Jaiani

This course is suitable for advanced graduate students or recent Ph.D.'s.
The participants will also have an opportunity to give 20-minute talks on their own work at a mini-symposium which will take place during the Advanced Course. Lectures and abstracts of the talks will be published and distributed among the lecturers and participants after the Advanced Course. The registration fee for participants is 400 USD, which includes all local expenses during the Advanced Course. A restricted number of participants will be awarded grants.

Further information: TICMI, I. Vekua Institute of Applied Mathematics of Tbilisi State University, University St. 2, Tbilisi 0143, Georgia
e-mail: jaiani@viam.sci.tsu.ge
Tel.: +995 32 305995

#### Workshop "Polynomials and Their Applications"

Date: 22-24 September, 2003
Location: TICMI (Tbilisi)
Coordinators: Prof. G. Jaiani, Prof. P.E. Ricci

Further information: TICMI, I. Vekua Institute of Applied Mathematics of Tbilisi State University, University St. 2, Tbilisi 0143, Georgia
e-mail: jaiani@viam.sci.tsu.ge
Tel.: +995 32 305995

Vekua Institute of Applied Mathematics
2018-08-14 21:57:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47826501727104187, "perplexity": 3017.3101155232293}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209585.18/warc/CC-MAIN-20180814205439-20180814225439-00096.warc.gz"}
http://openstudy.com/updates/5005a0e1e4b062418066c882
## Australopithecus 2 years ago

A bag of cement of weight 375 N hangs from three wires as suggested in Figure P5.24. Two of the wires make angles q1 = 55.0° and q2 = 31.0° with the horizontal. If the system is in equilibrium, find the tensions T1, T2, and T3 in the wires. Can someone show me how to solve this problem?

1. Australopithecus
2. Australopithecus It says the answer is: T1 = 322 N, T2 = 216 N, T3 = 375 N, but why?
3. edr1c The mass is in equilibrium, so the net force on the mass is zero. So your T3 will be equal to mg, given in the question as 375 N.
4. edr1c The knot, i.e. the point where the 3 wires meet, is also in equilibrium, so the sum of the forces in the horizontal direction is zero, and the same applies to the forces in the vertical direction: $\sum F_{X}=0$, $\sum F_{Y}=0$
5. edr1c So after deriving the horizontal and vertical components you will be able to find the tensions in the other 2 wires.
6. Australopithecus How do I do that?
7. edr1c [drawing] one of the horizontal components
8. edr1c Bear in mind that the horizontal component of the tension in the wire at the 55 deg angle is in the opposite direction and will be a negative value. After obtaining the horizontal-component equation with the two unknowns T1 and T2, solve simultaneously with the vertical component.
9. Australopithecus Not to be an ingrate, but can you please just show me how to solve this?
10. edr1c [drawing] substitute T2 into Fy
11. Australopithecus Oh I see, you use a system of equations
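A short Python sketch of the system of equations discussed above reproduces the quoted answers (the weight and angles are taken from the problem statement; NumPy is used only to solve the 2×2 linear system):

```python
import numpy as np

W = 375.0                                    # weight of the bag (N)
t1, t2 = np.radians(55.0), np.radians(31.0)  # angles with the horizontal

# equilibrium of the knot:  -T1*cos(t1) + T2*cos(t2) = 0   (sum Fx = 0)
#                            T1*sin(t1) + T2*sin(t2) = W   (sum Fy = 0)
A = np.array([[-np.cos(t1), np.cos(t2)],
              [ np.sin(t1), np.sin(t2)]])
T1, T2 = np.linalg.solve(A, np.array([0.0, W]))
T3 = W                                       # the vertical wire supports the full weight
print(round(T1), round(T2), round(T3))       # -> 322 216 375
```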
2015-02-27 15:24:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7269605398178101, "perplexity": 654.4434627995586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461332.16/warc/CC-MAIN-20150226074101-00025-ip-10-28-5-156.ec2.internal.warc.gz"}
http://www.finderchem.com/need-help-with-sequence.html
# Need help with sequence.?

[SOLVED] Hammer Editor: I need help with a scripted_sequence error! smashballsx88.

Hi, We are porting an application that runs on DB2/Linux to be able to run on DB2/AS400. We are having problems with the sorting order, since our ...

## Need help with sequence.? resources

### Need Help With Sequence Alignment (Java in General forum ...
Hi everyone! I am currently working on a sequence alignment program which must be able to produce the optimal alignment as well as the traceback al

### Need SQL help with month sequence in a year
Hi, I am creating a spreadsheet that I am hoping is going to look somewhat like this: Record Report for the year of 2010 January February March · Ok, here is a ...

### Need help with sequence of events - Microsoft Community
I have a modal popup form that contains just 3 controls: a text box, an "OK" button, and a "Cancel" button. If a user inputs something in the text box (or ...

### Need help with Custom Task Sequence
I am trying to create a custom task sequence to deploy bare metal servers. I have created a custom boot image that contains the command line utility to configure the ...

### [SOLVED] Need help with Terminal Sequence - Ubuntu Forums
Please define INSTALL4J_JAVA_HOME to point to a suitable JVM. Help; Forum; Quick Links. Albums; Unanswered Posts; New ... Need help with Terminal Sequence

### SQL Server Forums - Need help with sequence
It's highly likely you really want to use an "Identity" column for this, or, in SQL 2012, an actual SEQUENCE object which is even better. If you intend to use the ...

### I need help with monotonic sequence - Physics Forums
I need help with monotonic sequence in Calculus & Beyond Homework is being discussed at Physics Forums

### Need help with a drum sequence for valves ...
I need help with writing a program for Click PLC to run the keg washer I am building for my brewery. There is a similar thread in the Interface category.

### [PS4] I need some help with Sequence 5 Mission The Forts ...
It keeps telling me I need to upgrade Jackdaw but other than mortars I don't know what else I am post to upgrade to be able to start the mission at all

### Need help with putting sequence into array - Programmers ...
Ok so basically what I have to do is take an user input sequence of numbers and put them into an array. The only promt the user gets is to input a sequence of numbers ...

### Need Help with Sequence detector Code errors - Altera Forums
Hello All, I am trying to create a finite state machine which detects the input sequence 1011. I have written the code however I am getting about 21 errors. I am new ...

### DSXchange :: View topic - Need help with sequence number ...
Andy's question begs another question: Why is the sequence number required? What will it be used for? In rereading the opening post, it looks like a redundant identifier.

### Need help with the Fibonacci Sequence Glyph. (Maybe ...
For Assassin's Creed II on the Xbox 360, a GameFAQs message board topic titled "Need help with the Fibonacci Sequence Glyph. (Maybe Spoilers)".

### I Need Help With My Code. This Is What My I Am ... | Chegg.com
I need help with my code. This is what my I am supposed to do: A sequence of integers such as 1, 3, 5, 7, ... can be represented by a function that takes a ...

### Need help with longest sequence project. - C++ Forum
You have to come up with a new method to solve this. The sequence does not have to start at 1. You just need one for-loop and you are simply checking if the initial ...

### Need help with sequence of towns via public transportation ...
There's no need to worry about booking trains in advance for your journeys. Córdoba and Ronda are easy day-trips from Sevilla (the Mezquita (mosque) at Córdoba is a ...

### Volvo Forum • Need help with setting sequence of repairs
I would try to stage things so as to minimize the number of times you need to remove the axles, even if it means completing the rebuild from side to side.

### Custom Linux Software - need help with field sequence in ...
Author Topic: Custom Linux Software - need help with field sequence in upload data (Read 383 times)

### Need help with the correct sequence - Essential Day Spa
Tue Mar 29, 2005 9:56 pm : Hey Red! This is just a suggestion, and some of the products I am unfamiliar with. Morning: Dr.Hauschka Cleansing Cream (will prob be ...

Finish your Dna Sequence homework faster. Get quick help in Dna Sequence from our expert Biology tutors. Help with Dna Sequence homework is prompt, reliable and ...

### Need help with program for Hailstone Sequence ...
Need help with program for Hailstone Sequence I did that, that is why I was concluding that it looked right to me. 02-22-2013, 01:26 AM #6. Norm.

### Need help with program for Hailstone Sequence
The Question: Hailstone Sequence (Due 22 February 2013) Take any natural number n. If n is even, divide it by 2 to get n / 2. If n is odd, multiply by

### I Need Help With Dream Sequence Ideas.? | Ask Help Box
I Need Help With Dream Sequence Ideas.? - Find resources, videos and answers at Ask Help Box

### Oracle pro needs help with generating sequence numbers on ...
Need a sequence number generator as similar in functionality and ease of use to Oracle's sequence objects as possible to be used as a primary key

### homework - Need help with the limit of sequence ...
I need help on a question from my homework, which asks me to find the limit of the sequence as n approaches infinity of $$a_n = \frac{\cos^2 n}{2^n}$$ Thanks

### Need help with summation sequence. - Physics Forums
Need help with summation sequence. in General Math is being discussed at Physics Forums
2017-01-23 12:42:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32576191425323486, "perplexity": 1718.2491552811882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00019-ip-10-171-10-70.ec2.internal.warc.gz"}
https://encyclopediaofmath.org/wiki/Loop
# Loop Jump to: navigation, search A quasi-group with an identity, that is, with an element $e$ such that $xe = ex = x$ for every element $x$ of the quasi-group. The significance of loops in the theory of quasi-groups is determined by the following theorem: Any quasi-group is isotopic (see Isotopy) to a loop. Therefore, one of the main problems in the theory of quasi-groups is to describe the loops to which the quasi-groups of a given class are isotopic. With every loop there are associated three kernels, cf. Kernel of a loop. The set $$N _ {l} = \{ {a } : {ax \cdot y = a \cdot xy \textrm{ for } \textrm{ all } x , y \in Q } \}$$ of elements of a loop $Q ( \cdot )$ is called the left kernel. Similarly one defines the middle and right kernels. They always exist in a loop. Their intersection is called the kernel of the loop. Every kernel is an associative subloop, that is, a subgroup of $Q ( \cdot )$. Corresponding kernels of isotopic loops are isomorphic. There are loops with any preassigned kernels. A loop $Q$ isotopic to a group $Q ( \cdot )$ is itself a group and is isomorphic to the group $Q ( \cdot )$( Albert's theorem). In particular, isotopic groups are isomorphic. Some other classes of loops also have this property, for example free loops. A loop $Q ( \cdot )$ is called a $G$- loop if any loop isotopic to $Q ( \cdot )$ is isomorphic to it. Many concepts and results of group theory can be extended to loops. However, some of the usual properties of a group need not hold for loops. Thus, in finite loops Lagrange's theorem (that the order of a subgroup divides the order of the group) does not hold, generally speaking. If, nevertheless, Lagrange's theorem does hold for a loop, then the loop is said to be Lagrangian. If every subloop of a loop $Q ( \cdot )$ is Lagrangian, one says that $Q ( \cdot )$ has the property $L ^ \prime$. A necessary and sufficient condition for a loop $Q ( \cdot )$ to have the property $L ^ \prime$ is the following: $Q ( \cdot )$ must have a normal chain $$Q = Q _ {0} \supset Q _ {1} \supset \dots \supset Q _ {n} = e ,$$ where $Q _ {i}$ is a normal subloop in $Q _ {i-} 1$, such that $Q _ {i-} 1 / Q _ {i}$ has the property $L ^ \prime$ for all $i = 1 \dots n$. The loops that have been most studied and are closest to groups are Moufang loops (cf. Moufang loop). The main theorem about them (Moufang's theorem) is: If three elements $a , b , c$ of such a loop are connected by the associative law, that is, if $$ab \cdot c = a \cdot bc ,$$ then they generate an associative subloop, that is, a group. In particular, any Moufang loop is di-associative, that is, any two elements of it generate an associative subloop. The property of being a Moufang loop is universal, that is, it is invariant under isotopy: Any loop isotopic to a Moufang loop is itself a Moufang loop. One of the most general classes of loops is the class of IP-loops, or loops with the invertibility property. They are defined by the identities $${} ^ {-} 1 x \cdot xy = y \ \textrm{ and } \ yx \cdot x ^ {-} 1 = y .$$ Here ${} ^ {-} 1 x$ and $x ^ {-} 1$ are, respectively, the left and right inverse elements of $x$. Any Moufang loop is an IP-loop. In an IP-loop all kernels coincide. The kernel of a Moufang loop is a normal characteristic subloop. The property of being an IP-loop is not universal. Moreover, if any isotope of an IP-loop $Q ( \cdot )$ is an IP-loop, then $Q ( \cdot )$ is a Moufang loop. More general than the class of IP-loops is the class of WIP-loops, or loops with the weak invertibility property. 
They are defined by the identity $x ( yx ) ^ {-} 1 = y ^ {-} 1$. This identity is universal if $$( yx ) ( \theta _ {y} z \cdot y ) = ( y \cdot xz ) y$$ for all $x , y , z \in Q$, where $\theta _ {y}$ is an automorphism. In this case the kernel of the WIP-loop is normal and the quotient loop with respect to the kernel is a Moufang loop. A special case of WIP-loops are the CI-loops, or cross-invertible loops, defined by the identity $( xy ) x ^ {-} 1 = y$. A generalization of Moufang loops are the (left) Bol loops, in which the identity $$x ( y \cdot xz ) = ( x \cdot yx ) z$$ holds. They are invariant under isotopy and are mono-associative, that is, every element of such a loop generates an associative subloop. An important concept in the theory of quasi-groups and loops is the concept of a pseudo-automorphism. A permutation $\phi$ of a loop $Q ( \cdot )$ is called a left pseudo-automorphism if there is an element $a \in Q$ such that $$\phi ( xy ) \cdot a = \phi ( x) ( \phi y \cdot a ) \ \textrm{ for } \textrm{ all } x , y \in Q ,$$ and is called a right pseudo-automorphism if there is an element $b \in Q$ such that $$b \cdot \phi ( xy ) = ( b \cdot \phi x ) \phi y \ \textrm{ for } \textrm{ all } x , y \in Q .$$ If $\phi$ is both a left and a right pseudo-automorphism, then $\phi$ is called a pseudo-automorphism, and the elements $a$ and $b$ are called the left and right companions, respectively. For loops an automorphism is a special case of a pseudo-automorphism. Every pseudo-automorphism of an IP-loop induces an automorphism in its kernel, and in a commutative Moufang loop any pseudo-automorphism is an automorphism. In the theory of loops a significant role is played by inner permutations. A permutation $\alpha$ of the associated group $G$ of a loop $Q ( \cdot )$ with an identity $e$ is said to be inner if $\alpha e = e$. The set $I$ of all inner permutations is a subgroup of $G$ and is called the group of inner permutations. The group $I$ is generated by permutations of three types: $$L _ {x , y } = L _ {xy} ^ {-} 1 L _ {x} L _ {y} ; \ R _ {x ,y } = R _ {xy} ^ {-} 1 R _ {y} R _ {x} ; \ T _ {x} = R _ {x} ^ {-} 1 L _ {x} .$$ By means of inner permutations one can define $A$- loops as loops for which all inner permutations are automorphisms. If an $A$- loop is also an IP-loop, then it is di-associative. Commutative di-associative $A$- loops are Moufang loops. For commutative Moufang loops inner permutations are automorphisms. Some definitions of group theory carry over to loops. Thus, a loop is said to be Hamiltonian if any subgroup of it is normal. Abelian groups are also regarded as Hamiltonian loops. Mono-associative Hamiltonian loops with elements of finite order are direct products of Hamiltonian $p$- loops (a $p$- loop is defined in the same way as a $p$- group). Di-associative Hamiltonian loops are either Abelian groups or direct products: $A \times T \times H$, where $A$ is an Abelian group whose elements have odd order, $T$ is an Abelian group of exponent 2 and $H$ is a non-commutative loop satisfying additional conditions. A loop $Q ( \cdot )$ is said to be totally (partially) ordered if $Q$ is a totally (partially) ordered set (with respect to $\leq$) and if $a \leq b$ implies $$ac \leq bc ,\ ca \leq cb ,$$ and conversely. If the centre of a totally ordered loop $Q ( \cdot )$ has finite index, then $Q ( \cdot )$ is centrally nilpotent. Lattice-ordered loops with the minimum condition for elements are free Abelian groups. Loops have also been studied by means of associated groups. 
It has been proved, for example, that there is a one-to-one correspondence between the normal subloops of a loop and the normal subgroups of the corresponding associated group. For references see Quasi-group. How to Cite This Entry: Loop. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Loop&oldid=47714 This article was adapted from an original article by V.D. Belousov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
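As a computational aside (not part of the encyclopedia article), the defining conditions above are easy to test by brute force on a finite Cayley table. The sketch below checks the quasigroup-with-identity (loop) axioms and the left Bol identity $x(y\cdot xz) = (x\cdot yx)z$ quoted above, using the cyclic group of order 5 as a trivial example; any Latin-square table could be substituted.

```python
from itertools import product

def is_loop(t):
    """Latin square (quasigroup) with a two-sided identity element."""
    n = len(t)
    rows = all(sorted(row) == list(range(n)) for row in t)
    cols = all(sorted(t[i][j] for i in range(n)) == list(range(n)) for j in range(n))
    ident = any(all(t[e][x] == x and t[x][e] == x for x in range(n)) for e in range(n))
    return rows and cols and ident

def is_left_bol(t):
    """Check x(y.(xz)) == (x.(yx))z for all x, y, z."""
    n = len(t)
    return all(t[x][t[y][t[x][z]]] == t[t[x][t[y][x]]][z]
               for x, y, z in product(range(n), repeat=3))

z5 = [[(i + j) % 5 for j in range(5)] for i in range(5)]   # Cayley table of Z_5
print(is_loop(z5), is_left_bol(z5))                        # True True
```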
2021-07-30 03:25:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8107117414474487, "perplexity": 332.5680329913104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153931.11/warc/CC-MAIN-20210730025356-20210730055356-00137.warc.gz"}
https://oa.journalfeeds.online/2022/08/19/a-new-polarization-direction-measurement-via-local-radon-transform-and-error-correction-eurasip-journal-on-advances-in-signal-processing/
# A new polarization direction measurement via local Radon transform and error correction – EURASIP Journal on Advances in Signal Processing

#### By Wei Wang, Chao Gao, Xingwei Yan and Jianhua Shi

Aug 19, 2022

It has been verified that, when the input light is analyzed by an azimuthal (or a radial) spatial modulator, the hourglass-shaped intensity pattern of the modulated light satisfies Malus's law [21]. In other words, the gray distribution of the irradiance image, as shown in Fig. 1, is directly proportional to the square of the cosine of the angle between the azimuthal angle and the darkest direction. The darkest direction, which is parallel (or perpendicular) to the polarization direction, has the minimum radial integral value of the image. To capture the darkest direction accurately and quickly, our method includes three stages: coarse estimation, LRT and EC. They are introduced as follows.

### Coarse estimation

In our algorithm, the darkest direction is first coarsely estimated based on threshold segmentation. To reduce the computational complexity, threshold segmentation is applied to the pixels on circles with certain radii rather than to all the pixels in the image. Given a set of radii (e.g., $r_1, r_2, r_3, \ldots, r_N$), the pixels on the circles with different radii are collected. Then, the pixels are divided into two parts (i.e., a bright area and a dark area) based on a predefined threshold $T$. The average azimuthal angle of the pixels in the dark area, denoted by $\theta_{\mathrm{c}}$, is treated as the coarse darkest direction, i.e.,

$$\theta_{\mathrm{c}} = \operatorname{mean}\bigl(\arg I(r,\theta) < T\bigr), \quad r = r_1, r_2, \ldots, r_N,\ \theta \in \left[0^{\circ}, 180^{\circ}\right), \tag{1}$$

where $I(r,\theta)$ is the gray value of the pixel with the coordinate $(r,\theta)$.

In this stage, the Radon transform [25] is adopted to compute the integral of an image along specified directions. Suppose that $f$ is a 2-D function; the integral of $f$ along the radial line $l(\theta_i) = \left\{x, y : x\sin\theta_i - y\cos\theta_i = 0\right\}$ is given by

$$g(\theta_i) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta(x\sin\theta_i - y\cos\theta_i)\,\mathrm{d}x\,\mathrm{d}y. \tag{2}$$

For digital images, Eq. (2) can be transferred as

$$g(\theta_i) = \sum_{d=-v}^{v}\sum_{x}\sum_{y} I(x,y)\,W(x\sin\theta_i - y\cos\theta_i). \tag{3}$$

In Eq. (3), $I(x,y)$ is the gray value of the pixel with the rectangular coordinate $(x,y)$. $W(\xi)$, the weight of the pixel $(x,y)$ for integration along $l(\theta_i)$, can be obtained by

$$W(\xi) = \frac{d - |\xi|}{d}. \tag{4}$$

$d$ is the distance threshold that determines whether the pixel $(x,y)$ is on the line $l(\theta_i)$. Obviously, GRT needs to compute the integral of the image along radial lines oriented from $0^{\circ}$ to $180^{\circ}$. Moreover, to obtain an accurate result, the angle interval that the GRT adopts should be as small as possible. Different from GRT, LRT only needs to capture the integral of the image in a local angle range, in which the coarse darkest direction (i.e., $\theta_{\mathrm{c}}$) is taken as the center angle. For example, assuming the angle range and angle interval for LRT are $\pm\theta_T$ and $\theta_s$, the LRT is obtained by arranging the radial integral values in azimuth order. It is $G(\theta_{\mathrm{c}}) = \{g(\theta_i)\}$, $\left(\theta_i = \theta_{\mathrm{c}} - \theta_T + (i-1)\theta_s,\ i = 1, 2, \ldots, 2\theta_T/\theta_s + 1\right)$. As illustrated in Fig. 1, the actual darkest direction of the irradiance image is $25^{\circ}$. As the image is disturbed by Gaussian white noise ($\mu = \sigma^{2} = 0.01$), the darkest direction calculated by coarse estimation is $25.06^{\circ}$ (the white solid line in Fig. 1); the LRT is composed of the normalized integral values of the image along the radial lines (the white dotted line in Fig. 1) counterclockwise oriented from $145.06^{\circ}$ to $85.06^{\circ}$. Here, $\theta_T$ is set to be $60^{\circ}$.

### Error correction

Theoretically, the darkest direction has the minimum value in the LRT. It is regrettable that the radial integral value of the image is always disturbed by noise. For instance, the LRT of the image (shown in Fig. 1) is displayed in Fig. 2. The actual darkest direction of the image is $25^{\circ}$, yet the direction that has the minimum value in the LRT is $25.6^{\circ}$. Apparently, the direction with the minimum value is not the actual darkest direction under the noise. To address this issue, EC is developed to explore the error of coarse estimation. Assuming we have two modulated irradiance images ($\mathrm{Im}_1$ and $\mathrm{Im}_2$) with hourglass-shaped gray distributions, and the darkest directions of the two images are $\theta_{d1}$ and $\theta_{d2}$, respectively, $G_1(\theta_{d1} - \theta_a)$ and $G_2(\theta_{d2} - \theta_a)$ have the best correlation. That is,

$$\operatorname{corr}\bigl(G_1(\theta_{d1} - \theta_a), G_2(\theta_{d2} - \theta_a)\bigr) = \max_{\theta \in [0,\pi)} \bigl[\operatorname{corr}\bigl(G_1(\theta), G_2(\theta_{d2} - \theta_a)\bigr)\bigr]. \tag{5}$$

$G_1(\theta_{d1} - \theta_a)$ and $G_2(\theta_{d2} - \theta_a)$ denote the LRTs of $\mathrm{Im}_1$ and $\mathrm{Im}_2$ while $\theta_{d1} - \theta_a$ and $\theta_{d2} - \theta_a$ are the centers of the local angle ranges for integration, i.e., $G_1(\theta_{d1} - \theta_a) = \{g_1(\theta_i)\}$ $(\theta_i = \theta_{d1} - \theta_a - \theta_T + (i-1)\theta_s)$, and $G_2(\theta) = \{g_2(\theta_i)\}$ $(\theta_i = \theta - \theta_T + (i-1)\theta_s)$. Similarly, $G_1(\theta)$ is the LRT of the image $\mathrm{Im}_1$ while the center of the local integral angle range is $\theta$. $\theta_a$ is an arbitrary angle. Let the coarsely estimated darkest direction for $\mathrm{Im}_2$ be $\theta_c$ and the error of the coarse estimation be $\theta_e$. From Eq. (5), we can infer that the LRT of $\mathrm{Im}_1$ that has the best correlation with $G_2(\theta_c)$ is $G_1(\theta_{d1} - \theta_e)$. This inference can be represented as

$$\operatorname{corr}\bigl(G_1(\theta_{d1} - \theta_e), G_2(\theta_c)\bigr) = \max_{\theta \in [0,\pi)} \bigl[\operatorname{corr}\bigl(G_1(\theta), G_2(\theta_c)\bigr)\bigr]. \tag{6}$$

In Eq. (5), the range for $\theta$ is $[0,\pi)$. In fact, the optimal $\theta$ fluctuates around $\theta_{d1}$ as a result of the small error of coarse estimation. To reduce calculation, the range for $\theta$ can be decreased to $\left[\theta_{d1} - \theta_M, \theta_{d1} + \theta_M\right]$. Substituting $\theta_e = \theta_{d2} - \theta_c$ into Eq. (5), we can have

$$\theta_e = \theta_{d1} - \mathop{\arg\max}\limits_{\theta \in [\theta_{d1} - \theta_M,\, \theta_{d1} + \theta_M]} \bigl[\operatorname{corr}\bigl(G_1(\theta), G_2(\theta_c)\bigr)\bigr]. \tag{7}$$

Equation (6) explores the link between the error of coarse estimation and the correlation between LRTs. Based on Eq. (6), the actual darkest direction of $\mathrm{Im}_2$ can be captured by

$$\theta_{d2} = \theta_c + \theta_{d1} - \mathop{\arg\max}\limits_{\theta \in [\theta_{d1} - \theta_M,\, \theta_{d1} + \theta_M]} \bigl[\operatorname{corr}\bigl(G_1(\theta), G_2(\theta_c - \theta)\bigr)\bigr]. \tag{8}$$

In Eq. (7), the range for $\theta$ is $[0,\pi)$. In fact, the optimal $\theta$ fluctuates around $\theta_{d1}$ as a result of the small error of coarse estimation. To reduce calculation, the range for $\theta$ can be decreased to $\left[\theta_{d1} - \theta_M, \theta_{d1} + \theta_M\right]$.

In practice, according to Malus's law, $\mathrm{Im}_1$ can be generated and treated as the model image. Apparently, the LRTs of $\mathrm{Im}_1$ also satisfy Malus's law. That is, the integral of the image at the direction $\theta_i$ is

$$g(\theta_i) = A\cos^{2}\left(\theta_i - \theta_{d1} + \frac{\pi}{2}\right). \tag{9}$$

$A$ is a coefficient decided by the image brightness. $\theta_{d1}$ is the darkest direction of the image. Depending on Eq. (8), a set of LRTs of $\mathrm{Im}_1$ (i.e., $G_1(\theta_i)$, $\theta_i = \theta_{d1} - \theta_M + (i-1)\theta_r$) can be obtained. For the input image $\mathrm{Im}_2$, substituting $G_2(\theta_c)$ into Eq. (7), the corrected darkest direction can be captured. Taking the image in Fig. 1 as an example, the working mechanism is illustrated in Fig. 3. In this experiment, the darkest direction of the model image is $0^{\circ}$. According to Eq. (8), a set of LRTs of the model image are generated while the center angle changes from $-5^{\circ}$ ($175^{\circ}$) to $5^{\circ}$, in $0.01^{\circ}$ increments. For the input image shown in Fig. 1, the darkest direction predicted by coarse estimation is $\theta_c = 25.6^{\circ}$. In the EC stage, we found that $G_1(0.6^{\circ})$ has the best correlation value with $G_2(\theta_c)$. Here $G_1(\cdot)$ and $G_2(\cdot)$ denote the LRTs of the model image and the input image, respectively. Finally, according to Eq. (7), the estimated darkest direction of the input image is corrected to be $25^{\circ}$.

### Implementation details

In practice, once the parameters of the algorithm are given, some intermediate data, including the coordinates of the pixels used for the coarse estimation and the coordinates and weights of the pixels for the LRT computation, remain unchanged while different input images are treated. Hence, these data can be computed ahead of time and saved in tables, which are named the circle pixel coordinate table (CPCT), the integral pixel coordinate table (IPCT), and the integral pixel weight table (IPWT), respectively. It should be noted that, due to the different darkest directions of the input images, the coordinates and weights of the pixels for gray integration should be saved while the azimuth angle changes from $0^{\circ}$ to $180^{\circ}$. In addition, the LRTs of the model images with different center angles, which are independent of the input image, can also be captured offline using Eq. (8) and saved. The flow chart and pseudo code of our method are shown in Fig. 4.
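To make the LRT stage concrete, here is a rough NumPy sketch; it is not the paper's implementation (it uses nearest-neighbour sampling along radial lines through the image centre instead of the weighting of Eq. (4)), and the synthetic, noise-free test image built from the cos² form of Eq. (9) is an assumption made purely for illustration:

```python
import numpy as np

def local_radon(img, theta_c, theta_T=60.0, theta_s=0.1):
    """Radial-line integrals of img over angles theta_c +/- theta_T (degrees)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.arange(-min(cy, cx), min(cy, cx) + 1)       # both sides of the centre
    angles = np.arange(theta_c - theta_T, theta_c + theta_T + theta_s / 2, theta_s)
    g = np.empty(angles.size)
    for k, th in enumerate(np.radians(angles)):
        xs = np.clip(np.round(cx + radii * np.cos(th)).astype(int), 0, w - 1)
        ys = np.clip(np.round(cy - radii * np.sin(th)).astype(int), 0, h - 1)
        g[k] = img[ys, xs].sum()                           # nearest-neighbour line sum
    return angles, g / g.max()                             # normalised, as in Fig. 2

# synthetic hourglass image: gray ~ cos^2(phi - theta_d + 90 deg), darkest at 25 deg
yy, xx = np.mgrid[-100:101, -100:101]
phi = np.degrees(np.arctan2(-yy, xx))                      # azimuth, y-axis pointing up
img = np.cos(np.radians(phi - 25.0 + 90.0)) ** 2

angles, g = local_radon(img, theta_c=25.6)                 # coarse estimate from the text
print("darkest direction found by LRT:", angles[np.argmin(g)])   # close to 25.0
```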
2023-03-31 13:48:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.731118381023407, "perplexity": 4639.273735225765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949642.35/warc/CC-MAIN-20230331113819-20230331143819-00164.warc.gz"}
https://mathematica.stackexchange.com/questions/57107/text-with-subscript-is-automatically-italic
# Text with subscript is automatically Italic I am creating a figure for a document using Graphics in which there is Text with some variables and again those same variables with some subscripts. The Text with subscript is automatically Italic. I do not want this, especially because it contrasts with the variables without subscript which are not Italic. How can I change this behaviour? • PleasePostAPicture.png – Öskå Aug 10 '14 at 19:06 • Does Text[yourtext, FormatType -> StandardForm] give what you need? – kglr Aug 10 '14 at 19:06 • Graphics[{Text["P", {0, 0}], Text["\!(*SubscriptBox[(P), (q)])", {1/4, 0}], Text["\!(*SubscriptBox[(P), (q)])", {2/4, 0}, FormatType -> StandardForm]}] Yes, it does. Thank you kguler. – LBogaardt Aug 10 '14 at 19:12 • I recommend Text[Style[Subscript["P", "q"], 32], {2/4, 0}] – m_goldberg Aug 11 '14 at 3:31 • Related: (19364) – Mr.Wizard May 5 '15 at 4:58 I think this is nice, easy way to do it that will not affect the displayed font, Graphics[Text[Style[Subscript["P", "q"], 36], {1/2, 0}], ImageSize -> Tiny] ### Update The OP asks, "Is there an easy way to use this trick within a sentence?" I ask, what is easy? But here is something using Row and not too difficult. Graphics[ Text[Row[{"The variable ", Subscript["P", "q"], " is positive"}], {0, 0}], BaseStyle -> {FontSize -> 14}, AspectRatio -> 1/6, ImageSize -> Small] • Interesting how 'Text[Style[Subscript["P", "q"], 36], {1/2, 0}]' and 'Text[Style[Subscript[P, q], 36], {1/2, 0}]' makes a difference. – LBogaardt Aug 12 '14 at 11:07 • It is because in the first case P is a String while in the second case it is a variable. – Alexey Popkov Aug 12 '14 at 14:14 • Is there an easy way to use this trick within a sentence? As in something like 'Text["The variable"<>Subscript["P", "q"]<>"is positive", {0, 0}]'. – LBogaardt Aug 20 '14 at 18:18 • Well done! Thank you for the quick update. – LBogaardt Aug 20 '14 at 22:48 While I think that m_goldberg's solution is the simplest when you type the text by hands, here is the convenient way to disable italicization of single letters which will work in all cases: Graphics[Text[ Style[Subscript[P, q], 36, "SingleLetterItalics" -> False], {1/2, 0}], ImageSize -> Tiny] Use the option FormatType -> StandardForm inside Text. Using the example in OP's comment: Graphics[{Circle[], Text[Style["P", 32], {0, 0}], Text[Style[Subscript[P, q], 32], {1/4, 0}], Text[Style[Subscript[P, q], 32], {2/4, 0}, FormatType -> StandardForm]}] • Interesting how 'Text[Style[Subscript["P", "q"], 36], {1/2, 0}]' and 'Text[Style[Subscript[P, q], 36], {1/2, 0}]' makes a difference. – LBogaardt Aug 12 '14 at 11:07
2020-08-09 00:32:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19570229947566986, "perplexity": 8495.071961581707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738366.27/warc/CC-MAIN-20200808224308-20200809014308-00502.warc.gz"}
https://www.physicsforums.com/threads/general-relativity-circular-orbit-around-earth.795470/
# General Relativity - Circular Orbit around Earth Tags: 1. Feb 1, 2015 ### unscientific 1. The problem statement, all variables and given/known data (a) Find the proper time in the rest frame of particle (b) Find the proper time in the laboratory frame (c) Find the proper time in a photon that travels from A to B in time P 2. Relevant equations 3. The attempt at a solution Part(a) The metric is given by: $$ds^2 = -\left( 1 - \frac{2GM}{c^2r} \right) c^2 dt^2 + \left( 1 + \frac{2GM}{c^2r} \right) dr^2$$ Circular orbit implies that $dr^2 = 0$, so $$ds^2 = c^2 d\tau^2 = -\left( 1 - \frac{2GM}{c^2R} \right) c^2 dt^2$$ $$\left( \frac{d\tau}{dt} \right)^2 = \left( 1 - \frac{2GM}{c^2R} \right)$$ $$\frac{d\tau}{dt} = \sqrt { \left( 1 - \frac{2GM}{c^2R} \right) }$$ Since the time between event A and B is $dt = P$, the time experienced in the rest frame must be $d\tau = \sqrt { \left( 1 - \frac{2GM}{c^2R} \right) } P$? I'm not sure how to approach parts (b) and (c).. 2. Feb 1, 2015 ### TSny Does $dx^2 + dy^2 +dz^2 = 0$ for two neighboring points on the orbit? 3. Feb 1, 2015 ### unscientific They are at the same radius, so yes. 4. Feb 1, 2015 ### TSny For general motion in the xy plane (z = 0), how would you express $dx^2+dy^2$ in polar coordinates $(r, \theta)$? 5. Feb 1, 2015 ### unscientific $$r^2sin^2 \theta d\phi$$ 6. Feb 1, 2015 7. Feb 1, 2015 ### unscientific Oh I must have confused the r's. It should be: $$ds^2 = -\left( 1 - \frac{2GM}{c^2r} \right) c^2 dt^2 + \left( 1 + \frac{2GM}{c^2r} \right) R^2 d\phi^2$$ Since for $z=0$ we let $\theta = \frac{\pi}{2}$. 8. Feb 1, 2015 ### TSny On the circular orbit of radius R, what is the value of little r? 9. Feb 1, 2015 ### unscientific r = R 10. Feb 1, 2015 ### TSny Right. 11. Feb 1, 2015 ### unscientific $$ds^2 = -\left( 1 - \frac{2GM}{c^2R} \right) c^2 dt^2 + \left( 1 + \frac{2GM}{c^2r} \right) R^2 d\phi^2$$ Using $d\phi = \omega dt = \frac{2\pi}{P} dt$ $$ds^2 = -\left( 1 - \frac{2GM}{c^2R} \right) c^2 dt^2 + \left( 1 + \frac{2GM}{c^2 R} \right) R^2 (\frac{2\pi}{P})^2 dt^2$$ 12. Feb 1, 2015 ### TSny Not quite. Does $R^2P^2dt^2$ have the correct dimensions of distance squared? 13. Feb 1, 2015 ### unscientific Using $d\phi = \omega dt = \frac{2\pi}{P} dt$, $$ds^2 = -\left( 1 - \frac{2GM}{c^2R} \right) c^2 dt^2 + \left( 1 + \frac{2GM}{c^2 R} \right) R^2 (\frac{2\pi}{P})^2 dt^2$$ $$-c^2 d\tau^2 = -\left( 1 - \frac{2GM}{c^2R} \right) c^2 dt^2 + \left( 1 + \frac{2GM}{c^2 R} \right) R^2 (\frac{2\pi}{P})^2 dt^2$$ $$\left( \frac{d\tau}{dt} \right)^2 = \left( 1 - \frac{2GM}{c^2R} \right) - \left( 1 + \frac{2GM}{c^2 R} \right) R^2 \left(\frac{2\pi}{Pc}\right)^2$$ 14. Feb 1, 2015 ### TSny Looks good. 15. Feb 1, 2015 ### unscientific I am stuck on part (b) though.. 16. Feb 1, 2015 ### TSny What is $dx^2 + dy^2 + dz^2$ for a fixed point on the orbit? 17. Feb 1, 2015 ### unscientific For a fixed point, it is $0$, so $\frac{d\tau}{dt} = 1 - \frac{2GM}{c^2R}$. 18. Feb 1, 2015 ### TSny ...and $ds^2 =$? 19. Feb 1, 2015 ### unscientific Solving, we get $\frac{d\tau}{dt} = 1 - \frac{2GM}{c^2R}$. 20. Feb 1, 2015 ### unscientific For part (c), since beam of light travels at speed $c$ in all frames, is $dr^2 = c^2 dt^2 = c^2 P^2$?
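As a quick sanity check of the final expression derived in the thread, the sketch below plugs in numbers for a hypothetical low circular Earth orbit; the orbital radius and the Newtonian period are illustrative assumptions, not part of the original problem.

    import math

    # Evaluate (dtau/dt)^2 = (1 - 2GM/(c^2 R)) - (1 + 2GM/(c^2 R)) * (2*pi*R/(P*c))^2
    G, M, c = 6.674e-11, 5.972e24, 2.998e8       # SI units
    R = 6.771e6                                  # ~400 km altitude circular orbit [m]
    P = 2 * math.pi * math.sqrt(R**3 / (G * M))  # Newtonian period, about 5.5e3 s

    phi = 2 * G * M / (c**2 * R)                 # 2GM/(c^2 R), about 1.3e-9
    dtau_dt = math.sqrt((1 - phi) - (1 + phi) * (2 * math.pi * R / (P * c))**2)
    print("P = %.0f s, dtau/dt - 1 = %.2e" % (P, dtau_dt - 1))
    # dtau/dt - 1 is about -1e-9: the orbiting clock runs roughly 85 microseconds
    # per day slow relative to the Schwarzschild coordinate time t.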
2017-10-22 11:07:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5635571479797363, "perplexity": 3612.2655334223864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825174.90/warc/CC-MAIN-20171022094207-20171022114207-00655.warc.gz"}
https://proofwiki.org/wiki/Definition:Principle_of_Finite_Induction/Induction_Hypothesis
# Definition:Principle of Finite Induction/Induction Hypothesis ## Terminology of Principle of Finite Induction Consider a Proof by Finite Induction. The assumption made that $n \in S$ for some $n \in \Z$ is the induction hypothesis. ## Also known as The induction hypothesis can also be referred to as the inductive hypothesis.
2019-12-10 00:12:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9518367648124695, "perplexity": 1676.0262799662762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525598.55/warc/CC-MAIN-20191209225803-20191210013803-00416.warc.gz"}
https://codereview.stackexchange.com/questions/8928/find-the-smallest-regular-number-that-is-not-less-than-n-python
# Find the smallest regular number that is not less than N (Python)

This is from an answer to my own StackOverflow question on how to efficiently find a regular number that is greater than or equal to a given value. I originally implemented this in C# but translated it to Python because I guessed (correctly) that it would be shorter and would eliminate a third-party library dependency. I am a Python newbie though - This is the longest Python program I have ever written. So I would like to know:

• Is there a convention for indicating what version of the language you are targeting (like require <version> in perl?) This program only works in 2.6.
• Instead of the priority queue, is there a better way to generate the odd regular numbers? I know it is possible to implement lazy lists in Python but they are not in 2.6 are they?
• Any other idioms/naming conventions I am missing?

    from itertools import ifilter, takewhile
    from Queue import PriorityQueue

    def nextPowerOf2(n):
        p = max(1, n)
        while p != (p & -p):
            p += p & -p
        return p

    # Generate multiples of powers of 3, 5
    def oddRegulars():
        q = PriorityQueue()
        q.put(1)
        prev = None
        while not q.empty():
            n = q.get()
            if n != prev:
                prev = n
                yield n
                if n % 3 == 0:
                    q.put(n // 3 * 5)
                q.put(n * 3)

    # Generate regular numbers with the same number of bits as n
    def regularsCloseTo(n):
        p = nextPowerOf2(n)
        numBits = len(bin(n))
        for i in takewhile(lambda x: x <= p, oddRegulars()):
            yield i << max(0, numBits - len(bin(i)))

    def nextRegular(n):
        bigEnough = ifilter(lambda x: x >= n, regularsCloseTo(n))
        return min(bigEnough)

    from itertools import ifilter, takewhile
    from Queue import PriorityQueue

    def nextPowerOf2(n):

Python convention says that function should be named with_underscores

        p = max(1, n)
        while p != (p & -p):

Parens not needed.

            p += p & -p
        return p

I suspect that isn't the most efficient way to implement this function. See here for some profiling done on various implementations. http://www.willmcgugan.com/blog/tech/2007/7/1/profiling-bit-twiddling-in-python/

    # Generate multiples of powers of 3, 5
    def oddRegulars():
        q = PriorityQueue()

So this is a synchronized class, intended to be used for threading purposes. Since you aren't using it that way, you are paying for locking you aren't using.

        q.put(1)
        prev = None
        while not q.empty():

Given that the queue will never be empty in this algorithm, why are you checking for it?

            n = q.get()
            if n != prev:
                prev = n

The prev stuff bothers me. Its ugly code that seems to distract from your algorithm. It also means that you are generating duplicates of the same number. I.e. it would be better to avoid generating the duplicates at all.

                yield n
                if n % 3 == 0:
                    q.put(n // 3 * 5)
                q.put(n * 3)

So why don't you just push n * 3 and n * 5 onto your queue?

    # Generate regular numbers with the same number of bits as n
    def regularsCloseTo(n):
        p = nextPowerOf2(n)
        numBits = len(bin(n))

These two things are basically the same thing. p = 2**(numBits+1). You should be able to calculate one from the other rather then going through the work over again.

        for i in takewhile(lambda x: x <= p, oddRegulars()):
            yield i << max(0, numBits - len(bin(i)))

I'd have a comment here because its tricky to figure out what you are doing.

    def nextRegular(n):
        bigEnough = ifilter(lambda x: x >= n, regularsCloseTo(n))
        return min(bigEnough)

I'd combine those two lines.

Is there a convention for indicating what version of the language you are targeting (like > require in perl?) This program only works in 2.6.
Honestly, much programs just fail when trying to use something not supported in the current version of python. I know it is possible to implement lazy lists in Python but they are not in 2.6 are they? Lazy lists? You might be referring to generators. But they are supported in 2.6, and you used one in your code. • The point of the bit twiddling was to calculate exactly the right power of 2 to multiply by to get the regular number in the right range. This could be applied to your version, if you turn those nested loops inside-out, the inner loop could be replaced by a bit count + shift. And no I was referring to lazy lists: svn.python.org/projects/python/tags/r267/Lib/test/… – finnw Feb 13 '12 at 16:57 • @finnw, oh I figured out what you were doing with the bit twiddles. It just took me a while, hence my suggestion for a comment. Certainly you can further improve my version, I've left that as an exercise for the reader. – Winston Ewert Feb 13 '12 at 17:00 • @finnw, I don't think that any version of python includes lazy lists. You can implement lazy lists, which is what is done in what you linked. But as far as I know they've never been added to the standard library. – Winston Ewert Feb 13 '12 at 17:03 • Note that this doesn't fulfill the original requirements of "greater than or equal to". The original question wants p(18) = 18, but this returns 24 – endolith Oct 2 '13 at 23:51 • @endolith, removed apparently bad algorithm. – Winston Ewert Oct 6 '13 at 2:06
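Following the reviewer's hint about the power-of-two shift, here is a compact alternative sketch (not the reviewed code, and it uses int.bit_length(), so it needs Python 2.7+ or 3.x): for every odd regular number 3**i * 5**j below the next power of two above n, shift it up by the smallest power of two that reaches n and keep the minimum.

    def next_regular(n):
        """Smallest number of the form 2**a * 3**b * 5**c that is >= n."""
        if n <= 1:
            return 1
        best = 1 << (n - 1).bit_length()     # next power of two >= n is always a candidate
        p5 = 1
        while p5 < best:
            p35 = p5
            while p35 < best:
                q = -(-n // p35)                         # ceil(n / p35)
                candidate = p35 << (q - 1).bit_length()  # smallest 2**k * p35 >= n
                best = min(best, candidate)
                p35 *= 3
            p5 *= 5
        return best

    assert next_regular(18) == 18    # the case raised by endolith above
    assert next_regular(7) == 8
    assert [next_regular(k) for k in (1, 2, 5, 11, 49)] == [1, 2, 5, 12, 50]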
2021-02-25 01:37:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23818203806877136, "perplexity": 1426.0498832155893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350706.6/warc/CC-MAIN-20210225012257-20210225042257-00227.warc.gz"}
https://www.nature.com/articles/s41550-017-0361-4?utm_content=bufferc0423&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer&error=cookies_not_supported&code=64a42a85-959f-4f39-80c2-cfbdecfab103
Letter | Published: # Spectroscopy and thermal modelling of the first interstellar object 1I/2017 U1 ‘Oumuamua Nature Astronomy, volume 2, pages 133-137 (2018) ## Abstract During the formation and evolution of the Solar System, significant numbers of cometary and asteroidal bodies were ejected into interstellar space1,2. It is reasonable to expect that the same happened for planetary systems other than our own. Detection of such interstellar objects would allow us to probe the planetesimal formation processes around other stars, possibly together with the effects of long-term exposure to the interstellar medium. 1I/2017 U1 ‘Oumuamua is the first known interstellar object, discovered by the Pan-STARRS1 telescope in October 2017 (ref. 3). The discovery epoch photometry implies a highly elongated body with radii of ~ 200 × 20 m when a comet-like geometric albedo of 0.04 is assumed. The observable interstellar object population is expected to be dominated by comet-like bodies in agreement with our spectra, yet the reported inactivity of 'Oumuamua implies a lack of surface ice. Here, we report spectroscopic characterization of ‘Oumuamua, finding it to be variable with time but similar to organically rich surfaces found in the outer Solar System. We show that this is consistent with predictions of an insulating mantle produced by long-term cosmic ray exposure4. An internal icy composition cannot therefore be ruled out by the lack of activity, even though ‘Oumuamua passed within 0.25 au of the Sun. ## Main Following the announcement of the discovery, we performed spectroscopic observations at two facilities. The 4.2 m William Herschel Telescope (WHT) on La Palma was used with the ACAM auxiliary port imager and spectrograph on 25 October 21:45 ut–22:03 ut. An initial analysis of this spectrum revealed an optically red body5. Spectra were also obtained using the X-shooter spectrograph on the European Southern Observatory 8.2 m Very Large Telescope (VLT) on 27 October 00:21 ut–00:53 ut, covering 0.3–2.5 μm. Observation circumstances are given in Table 1 and the resulting binned reflectance spectra at optical wavelengths are shown in Fig. 1. Active comets possess strong molecular emission bands via electronic transitions within the vibrational ground state due to fluorescence of CN at 0.38 μm and C2 at 0.52 μm6. Although our spectra are noisy, no such emission is seen, in concordance with imaging reports of an apparently inert body3,7,8,9. Asteroid spectra can show significant solid-state absorption features in this region depending on their mineralogy, notably a wide shallow absorption centred at ~0.7 μm due to phyllosilicates (aqueously altered silicates)10. Mafic minerals seen in asteroids (typically pyroxenes and olivines) exhibit an absorption band starting at ~0.75 μm and centred at ≥0.95 μm11. Again, no such diagnostic features are observed. Over the range 0.4 μm ≤ λ ≤ 0.9 μm, the reflectance gradients are 17.0 ± 2.3%/100 nm (one standard deviation) and 9.3 ± 0.6%/100 nm for the ACAM and X-shooter data, respectively. Additional measurements of the spectral slope have been reported from the Palomar Observatory as 30 ± 15%/100 nm over 0.52 μm ≤ λ ≤ 0.95 μm on October 25.3 ut 12, and 10 ± 6%/100 nm over 0.4 μm ≤ λ ≤ 0.9 μm on October 26.2 ut 9. The published photometric colours range from somewhat neutral to moderately red3,8,13,14.
While most of these measurements are similar within their uncertainties, the reported (g − r) = 0.47 ± 0.04 is relatively neutral8, while we have a significant red slope in this region. Within our own data, our spectra differ in slope by >3σ. This is due to the ACAM spectrum being redder than the X-shooter spectrum at 0.7 μm ≤ λ ≤ 0.9 μm, with the mean reflectance increasing to 42% and 21% relative to 0.55 μm, respectively. The measured rotation period is probably in the 7–8 h range based on photometry from different observers7,8,13,14. The most complete reported lightcurve is consistent with a rotation period of 7.34 h and an extremely elongated shape with an axial ratio of ~10:1 and a 20% change in minimum brightness, possibly due to hemispherically averaged albedo differences3. Using this rotation period, our spectra are separated by 0.66 in rotational phase and near opposing minima in the lightcurve. This implies that our spectra viewed different extrema of the body and supports the existence of compositional differences across the surface. We note the October 25 Palomar spectrum would have been obtained during lightcurve maximum, while the October 26 Palomar spectrum would have been near the same rotational phase as our WHT spectrum. Comparable spectral slope variations with rotation have been detected in ground-based data on a few S-type asteroids15 and trans-Neptunian objects (TNOs)16, although these objects are significantly larger than 1I/2017 U1. The X-shooter spectrum contained a weak but measurable signal at 1.0 ≤ λ ≤ 1.8 μm. Beyond 1.8 μm, the sky background is much brighter than the object. We therefore excluded this spectral region at longer wavelengths from further analysis. In Fig. 2, we show the ACAM and X-shooter spectra, binned to a spectral resolution of 0.02 μm at λ > 1 μm. Although the signal-to-noise is low, it is apparent that the reflectance is relatively neutral in this spectral region; a weighted least-squares fit gives a slope of −1.8 ± 5.3%/100 nm at these near-infrared (NIR) wavelengths. There is a suggestion of decreasing reflectance beyond 1.4 μm, but the uncertainties are large due to the very weak flux from the object. There is no apparent strong absorption band due to water ice at 1.5 μm, as observed on some large TNOs. The only other reported NIR data are J-band photometry (1.15–1.33 μm) from October 30.3 ut 8, where (r − J) = 1.20 corresponds to a slope of 3.6%/100 nm. Our spectrum gives a larger slope of 7.7 ± 1.3%/100 nm over 0.63–1.25 μm. Again assuming a rotation period of 7.34 h would give a rotational phase difference of 0.4, indicating a small change in optical-infrared reflectance properties around the body. Comparing our spectra with the reflectance spectra for different taxonomic classes of asteroid in the main belt and trojan clouds17, the closest spectral analogues are L- and D-type asteroids (Fig. 3). L-type asteroids are relatively rare in the asteroid belt. They exhibit a flattened or neutral spectrum beyond 0.75 μm and sometimes weak silicate absorption bands, indicating a small amount of silicates on their surfaces. These bands are not strong enough to be visible in our data. D-type asteroids form the dominant populations in the outer asteroid belt and Jupiter trojans. Most D-type reflectance spectra exhibit red slopes out to at least ~2 μm, in disagreement with our spectra, although some show a decrease in the spectral slope at λ > 1 μm, similar to 1I/2017 U118. 
Looking at both trojan asteroids and more distant bodies beyond 5 au we find a good match in spectral morphology with 1I/2017 U1 as shown in Fig. 3. The spectral slopes of cometary nuclei tend to be red in the visible range but shallower in the NIR19. Some TNOs also exhibit a red optical slope but a more neutral NIR reflectance20. We show a spectrum of the large active centaur (60558) Echeclus, whose optical slope falls between our ACAM and X-shooter spectra, demonstrating similar behaviour of a red optical slope that decreases in the NIR. The reddish optical spectra of D-type asteroids, cometary nuclei and TNOs are believed to be a result of irradiated organic-rich surfaces. The spectra presented here would place 1I/2017 U1 in the less red class of dynamically excited TNOs21. Irradiation of carbon-rich ices produces refractory organic residues with a wide range of slopes depending on original composition but consistent with the diversity of slopes observed in the outer Solar System22. To produce such changes in the optically active upper micron of surface only requires exposure to the local interstellar medium of <107 years23. Hence, we conclude that the surface of 1I/2017 U1 is consistent with an originally organic-rich surface that has undergone exposure to cosmic rays. It was expected that discovered interstellar objects (ISOs) would be mostly icy objects due to both formation and observational biases. Planet formation and migration can expel large numbers of minor bodies, most of which would contain ices because they originated beyond the snow-line in their parent systems and would be ejected by the giant planets that form quickly in the same region1. Additionally, ISOs will have been produced from Oort clouds via the loss mechanisms of stellar encounters and galactic tides2. Our Oort cloud is expected to hold 200 to 10,000 times as many ‘cometary’ bodies than asteroidal objects24 and we assume that exo-planetary systems’ Oort clouds may form and evolve in a broadly similar manner. Therefore, both ISO production mechanisms should produce a population dominated by ice-rich bodies. In terms of discovery, active comet nuclei are much easier to detect than asteroids of the same diameter; their dust comae make them visible over much greater distances and are more likely to attract follow-up observations that would establish their ISO nature. Before the discovery of 1I/2017 U1, ISO models suggested that the typical discovered asteroidal ISO would have a perihelion distance of q < 2 au while the typical cometary ISO would have perihelia 2 to 3 times larger, because they can be detected at greater distances25. Thus, the combination of the ISO production process and strong observational bias towards detecting active cometary ISOs makes the 1I/2017 U1 discovery particularly surprising. However, its perihelion distance, eccentricity and inclination are in excellent agreement with the predicted orbital elements of detectable asteroid-like ISOs25. Given the spectral similarity with presumed ice-rich bodies in our Solar System, it might be expected that 1I/2017 U1 would have been heated sufficiently during its close (q = 0.25 au) perihelion passage to sublimate sub-surface ices and produce cometary activity. However, it has been shown that cosmic-ray irradiation of organic ices plus heating by local supernovae can produce devolatilized carbon-rich mantles26. Estimates of the thickness of this mantle range from ~0.1 m to ~2 m (ref. 4). 
Assuming that this object has such a mantle, we have modelled the thermal pulse transmitted through the object during its encounter with our Sun, assuming a spin obliquity of 0° and physical parameters that would be expected for a comet-like surface (see Methods). We find that the intense but brief heating 1I/2017 U1 experienced around perihelion does not translate into heating at significant depth. As shown in Fig. 4, the heat wave passes only slowly into the interior and, while the surface reached peak temperatures of ~600 K, H2O ice buried >20 cm deep would only commence sublimation weeks after perihelion. Layers 30 cm deep or more would never experience temperatures high enough to sublimate H2O ice. Taking the unphysical extreme of a surface continuously exposed to the Sun during the orbit only increases the depth of the ice sublimation layer by ~10 cm. Therefore, we conclude that if there is no ice within ~40 cm of the surface, we would expect to see no activity at all, even if the interior has an ice-rich composition. Simple thermal approximations give a similar surface temperature and thermal skin depth14. Would a body with interior ice have significant strength to resist rotational disruption? Assuming a low density of ≤1,000 kg m−3, the required strength is estimated to be in the range 0.5–3 Pa3,13. Weak materials like talcum powder have a strength of ~10 Pa, sufficient to maintain the body structure. The inactive surface of comet 67P had a tensile strength ranging from 3–15 Pa27. Therefore, the unusual shape of 1I/2017 U1 does not rule out an internal ice-rich comet-like composition. We recognize one obvious problem with this model—that Oort cloud comets should have undergone similar mantling due to cosmic-ray exposure over 4.6 Gyr, yet many show significant activity via sublimation of near-surface ice during their first perihelion passage28. 1I/2017 U1 cannot have had a significantly longer exposure to cosmic rays; even if it was formed around one of the earliest stars, it will not be more than ~3 times the age of our Solar System. More likely, 1I/2017 U1 dates from the more recent generations of stars as it could not be formed before the Universe had created enough heavy elements to, in turn, form planetesimals29. It may have become desiccated through sublimation of surface ices during close passages to its parent star before being ejected from its natal system. Damocloid objects in our own Solar System are thought to be similar cometary bodies that have developed thick insulating mantles preventing sublimation30. Alternatively, the cause could be the relatively small size of 1I/2017 U1 compared with active Oort cloud nuclei with radii of ≥1 km. The possible minimum radius of only ~20 m may have allowed most of the interior ice to escape over its unknown history. In this case, we should expect that the Large Synoptic Survey Telescope will find many small devolatized ‘comets’ from our own Oort cloud, in addition to more ISOs like 1I/2017 U1. ## Methods ### Observations The apparent magnitude and position of 1I/2017 U1 relative to the Earth and Sun at the time of the two sets of observations are given in Table 1. Details of the instrument setup for each observation are given below. At both telescopes, the observations were performed by observatory staff in service mode. Each set of data was subsequently independently reduced by two of the authors; intercomparison of the resulting spectra showed no significant differences for the individual instruments. 
For WHT, two 900 s exposures were obtained with ACAM31 in spectroscopic mode using a slit width of 2 arcsec at the parallactic angle. Subsequent inspection of the data showed that the second spectrum was contaminated by a late-type star passing through the slit, hence only the first spectrum was usable. The reflectance spectrum was obtained though division by a spectrum of the fundamental solar analogue 16 Cyg B taken directly afterwards with the same instrumental setup. Flux calibration was performed via a spectrum of the spectrophotometric standard BD + 25 4655 obtained through a 10-arcsec-wide slit. For VLT, X-shooter contains three arms covering the ultraviolet/blue (UVB), visible and NIR spectral regions, separated by dichroic beam-splitters to enable simultaneous observation over the 0.3–2.5 μm range32. Four consecutive exposures were obtained, with 900 s exposures in the UVB and NIR arms and 855 s exposure in the visible arm (as the UVB and visible arms share readout electronics, this allows the most efficient use of the telescope while maximizing the flux in the low-signal ultraviolet region). It was found that the signal in the last two exposures was very poor and these were not used in the analysis. Subsequent matching with published photometry shows we were near lightcurve minimum at that time3, potentially explaining the drop in flux. Slits with widths 1.0, 0.9 and 0.9 arcseconds were used in the UVB, visible and NIR arms, respectively, all of which were aligned with the parallactic angle at the start of the observations. Observations of the solar analogue star HD 1368 were obtained with the same setup to allow calculation of reflectance spectra. Flux calibration was performed via observations of the spectrophotometric standard LTT 7987 obtained through a 5-arcsec-wide slit. For the spectra from both facilities, the reflectance spectra were calculated from the median reflectance in spectral bins. There was enough flux at λ < 1 μm to allow binning over 0.01 μm bins in wavelength, but in the NIR the detected flux was so low bins had to be increased to 0.02 μm to obtain a reasonable spectrum. A robust estimation of the dispersion of the original spectral reflectance elements in each wavelength bin was performed using the ROBUST_SIGMA routine in IDL or equivalent code in Python. The reflectance uncertainty in each bin was then calculated by dividing by the square root of the number of original spectral elements in the bin. ### Spectrum comparison In Figs. 2 and 3, we compare our observed spectra of 1I/2017 U1 with various Solar System minor bodies. Spectral types for asteroids are taken from the Bus–DeMeo taxonomy definitions established in ref. 17 and available at http://smass.mit.edu/busdemeoclass.html. For outer Solar System bodies, we define the red centaur, trojan and comet zones based on observed spectra of extreme examples. The centaur zone upper limit is the Pholus spectrum taken from ref. 33, while the lower limit is (55576) Amycus34. The trojan spectra are also defined by previously published data35. The X-shooter spectrum of Echeclus was obtained by the authors (W.C.F. and T.S.) and reduced in the same manner as the I1/2017 U1 data. This will be fully described in a forthcoming paper. 
For comet nuclei, there are relatively few observations in the NIR, due to the fact that nuclei are very faint targets when far enough from the Sun to be inactive, but previous observations have shown the dust spectra of weakly active comets to match their nuclei (for example, 67P/Churyumov–Gerasimenko observed simultaneously with X-shooter and from Rosetta19,36). To define the comet zone, we take the upper limit from 19P/Borrelly from spacecraft data37 and the lower limit from C/2001 OG108 (ref. 38), as it covers a wide wavelength range. ### Thermal modelling To determine the surface and sub-surface temperature of 1I/2017 U1 as a function of time, we solve the one-dimensional heat conduction equation with a suitable surface boundary condition. For temperature T, time t, and depth z, one-dimensional heat conduction is described by $$\frac{dT}{dt}=\frac{k}{\rho C}\frac{{d}^{2}T}{d{z}^{2}}$$ where k is the thermal conductivity, ρ is the material density and C is the heat capacity39. These properties are assumed to be constant with temperature and depth. For a surface element located on 1I/2017 U1, conservation of energy leads to the surface boundary condition $$f(1-{A}_{{\rm{B}}})\frac{{F}_{\odot }}{{r}_{{\rm{h}}}{(t)}^{2}}+k{\left(\frac{dT}{dz}\right)}_{z=0}-\varepsilon \sigma {T}_{z=0}^{4}=0$$ where A B is the Bond albedo, F is the integrated solar flux at 1 au (1,367 W m−2), r h(t) is the heliocentric distance in au of 1I/2017 U1 at time t, ε is the bolometric emissivity and σ is the Stefan–Boltzmann constant. f is a multiplying factor to take into account the different illumination scenarios we considered. For instance, f has a value of 1/π to give the rotationally averaged temperature of a surface element located on the equator of 1I/2017 U1 when considering a pole obliquity of 0°. If the surface element is permanently illuminated by the Sun during the encounter, f = 1. The true solution for 1I/2017 U1 will therefore lie between these two illumination condition extremes. A finite difference numerical technique was used to solve the one-dimensional heat conduction equation and a Newton–Raphson iterative technique was used to solve the surface boundary condition40. In particular, the depth down to 5 m was resolved into 1 mm steps and time was propagated in increments of 1 s. Zero temperature gradient was also assumed at maximum depth to give a required internal boundary condition. The simulation was started 6,500 days before perihelion when 1I/2017 U1 was over 100 au away from the Sun. Low albedo isothermal objects have a temperature of ~30 K at such heliocentric distances, as calculated from $$T={\left(\frac{{F}_{\odot }(1-{A}_{{\rm{B}}})}{4\varepsilon \sigma {r}_{{\rm{h}}}^{2}}\right)}^{1/4}$$ and so the initial temperature at all depths was set to this value. The hyperbolic orbital elements of 1I/2017 U1 were then used to calculate the heliocentric distance at each time step. Regarding the material properties of 1I/2017 U1, cometary bodies typically have low albedo and highly insulating surfaces41,42, and we assume that 1I/2017 U1 is similar. Therefore, we assume a Bond albedo of 0.01, a bolometric emissivity of 0.95, a thermal conductivity of 0.001 W m−1 K−1, a density of 1,000 kg m−3 and a heat capacity of 550 J kg−1 K−1. The latter three properties combine to give a thermal inertia of ~25 J m−2 K−1 s−1/2, calculated using $${\rm{\Gamma }}=\sqrt{k\rho C}$$, which is comparable to that measured for several comets43 and outer main-belt asteroids39. 
For the two illumination scenarios considered, the thermal model was propagated forwards from its initial starting point and run until 6,500 days after perihelion. The temperature at depths of 0, 10, 20, 30 and 40 cm was recorded at 1-day intervals in the model. As shown in Fig. 4, the most significant temperature changes occur during the 400 days centred on perihelion. We note that as the thermal penetration depth is proportional to $$\sqrt{k{\rm{/}}(\rho C)}$$ our results can be scaled to different thermal property values. Identical temperature profiles can be found at depths given by $$z={z}_{0}\sqrt{\frac{(k{\rm{/}}0.001)}{(\rho {\rm{/}}1000)(C{\rm{/}}550)}}$$ For example, if the thermal inertia was ~250 J m−2 K−1 s−1/2 (the 3σ upper limit determined for comet 103 P/Hartley 2; ref. 44), the depth of the temperature profiles would be ten times higher if the difference in thermal inertia was solely due to a difference in thermal conductivity. However, the depth would be less if the increased thermal inertia was spread equally across its three components. Furthermore, if the geometric albedo of 1I/2017 U1 is very low, the temperatures are also relatively insensitive to factor of two changes in this parameter. ### Data availability The ACAM and X-shooter spectra that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. 1. Charnoz, S. & Morbidelli, A. Coupling dynamical and collisional evolution of small bodies: an application to the early ejection of planetesimals from the Jupiter–Saturn region. Icarus 166, 141–156 (2003). 2. 2. Dones, L., Weissman, P. R., Levison, H. F. & Duncan, M. J. in Comets II (eds Festou, M. C., Keller, H. U. & Weaver, H. A.) 153–174 (Univ. Arizona Press, Tucson, 2004). 3. 3. Meech, K. J. et al. A brief visit from a red and extremely elongated interstellar asteroid. Nature https://doi.org/10.1038/nature25020 (2017). 4. 4. Guilbert-Lepoutre, A. et al. On the evolution of comets. Space Sci. Rev. 197, 271–296 (2015). 5. 5. Fitzsimmons, A., Hyland, M., Jedicke, R., Snodgrass, C. & Yang, B. Minor planets 2017 SN_33 and 2017 U1. Cent. Bur. Electr. Telegr. 4450, 2 (2017). 6. 6. Feldman, P. D., Cochran, A. L. & Combi, M. R. in Comets II (eds Festou, M. C., Keller, H. U. & Weaver, H. A.) 425–447 (Univ. Arizona Press, Tucson, 2004). 7. 7. Knight, M. M. et al. On the rotation period and shape of the hyperbolic asteroid 1I/Oumuamua (2017) U1 from its lightcurve. Astrophys. J. Lett. (in the press); preprint at https://arxiv.org/abs/1711.01402. 8. 8. Bannister, M. T. et al. Col-OSSOS: colors of the interstellar planetesimal 1I/Oumuamua. Astrophys. J. Lett. (in the press); preprint at https://arxiv.org/abs/1711.06214. 9. 9. Ye, Q.-Z., Zhang, Q., Kelley, M. S. P. & Brown, P. G. 1I/2017 U1 (`Oumuamua) is hot: imaging, spectroscopy and search of meteor activity. Astrophys. J. Lett. (in the press); preprint at https://arxiv.org/abs/1711.02320. 10. 10. Rivkin, A. S. et al. in Asteroids IV (eds Michel, P., DeMeo, F. E. & Bottke, W. F.) 65–87 (Univ. Arizona Press, Tucson, 2015). 11. 11. Reddy, V., Dunn, T. L., Thomas, C. A., Moskovitz, N. A. & Burbine, T. H. in Asteroids IV (eds Michel, P., DeMeo, F. E. & Bottke, W. F.) 43–63 (Univ. Arizona Press, Tucson, 2015). 12. 12. Masiero, J. Palomar optical spectrum of hyperbolic near-earth object A/2017 U1. 
Preprint at https://arxiv.org/abs/1710.09977v2 (2017). 13. 13. Bolin, B. T. et al. APO time resolved color photometry of highly-elongated interstellar object 1I/’Oumuamua. Astrophys. J. Lett. (in the press); preprint at https://arxiv.org/abs/1711.04927v4. 14. 14. Jewitt, D. et al. Interstellar Interloper 1I/2017 U1: observations from the NOT and WIYN telescopes. Astrophys J. Lett. 850, L36 (2017). 15. 15. Mothé-Diniz, T., Lazzaro, D., Carvano, J. M. & Florczak, M. Rotationally resolved spectra of some S-type asteroids. Icarus 148, 494–507 (2000). 16. 16. Fraser, W. C., Brown, M. E. & Glass, F. The Hubble Wide Field Camera 3 test of surfaces in the outer solar system: spectral variation on Kuiper belt objects. Astrophys. J. 804, 31 (2015). 17. 17. DeMeo, F. E., Binzel, R. P., Slivan, S. M. & Bus, S. J. An extension of the bus asteroid taxonomy into the near-infrared. Icarus 202, 160–180 (2009). 18. 18. Emery, J. P., Burr, D. M. & Cruikshank, D. P. Near-infrared spectroscopy of trojan asteroids: evidence for two compositional groups. Astron. J. 141, 25 (2011). 19. 19. Snodgrass, C. et al. Distant activity of 67P/Churyumov–Gerasimenko in 2014: ground-based results during the Rosetta pre-landing phase. Astron. Astrophys. 588, A80 (2016). 20. 20. Merlin, F., Hromakina, T., Perna, D., Hong, M. J. & Alvarez-Candal, A. Taxonomy of trans-Neptunian objects and centaurs as seen from spectroscopy. Astron. Astrophys. 604, A86 (2017). 21. 21. Pike, R. E. et al. Col-OSSOS: z-band photometry reveals three distinct TNO surface types. Astron. J. 154, 101 (2017). 22. 22. Brunetto, R., Barucci, M. A., Dotto, E. & Strazzulla, G. Ion irradiation of frozen methanol, methane, and benzene: linking to the colors of centaurs and trans-Neptunian objects. Astrophys. J. 644, 646–650 (2006). 23. 23. Strazzulla, G., Cooper, J. F., Christian, E. R. & Johnson, R. E. Irradiation ionique des OTNs: des flux mesurés dans l’espace aux expériences de laboratoire. C. R. Phys. 4, 791–801 (2003). 24. 24. Meech, K. J. et al. Inner solar system material discovered in the Oort cloud. Sci. Adv. 2, e1600038 (2016). 25. 25. Engelhardt, T. et al. An observational upper limit on the interstellar number density of asteroids and comets. Astron. J. 153, 133 (2017). 26. 26. Jewitt, D. C. in Comets II (eds Festou, M. C., Keller, H. U. & Weaver, H. A.) 659–676 (Univ. Arizona Press, Tucson, 2004). 27. 27. Groussin, O. et al. Gravitational slopes, geomorphology, and material strengths of the nucleus of comet 67P/Churyumov–Gerasimenko from OSIRIS observations. Astron. Astrophys. 583, A32 (2015). 28. 28. Meech, K. J. et al. Activity of comets at large heliocentric distances pre-perihelion. Icarus 201, 719–739 (2009). 29. 29. Zackrisson, E. et al. Terrestrial planets across space and time. Astrophys. J. 833, 214 (2016). 30. 30. Jewitt, D. Color systematics of comets and related bodies. Astron. J. 150, 201 (2015). 1510.07069. 31. 31. Benn, C., Dee, K. & Agócs, T. ACAM: a new imager/spectrograph for the William Herschel Telescope Proc. SPIE 7014, 70146X (2008); https://doi.org/10.1117/12.788694. 32. 32. Vernet, J. et al. X-shooter, the new wide band intermediate resolution spectrograph at the ESO Very Large Telescope. Astron. Astrophys. 536, A105 (2011). 33. 33. Cruikshank, D. P. et al. The composition of centaur 5145 Pholus. Icarus 135, 389–407 (1998). 34. 34. Doressoundiram, A. et al. 
Spectral characteristics and modeling of the trans-Neptunian object (55565) 2002 AW197 and the centaurs (55576) 2002 GB10 and (83982) 2002 GO9: ESO Large Program on TNOs and Centaurs. Plan. Space Sci. 53, 1501–1509 (2005). 35. 35. Emery, J. P., Burr, D. M. & Cruikshank, D. P. Near-infrared spectroscopy of trojan asteroids: evidence for two compositional groups. Astron. J. 141, 25 (2011). 36. 36. Capaccioni, F. et al. The organic-rich surface of comet 67P/Churyumov–Gerasimenko as seen by VIRTIS/Rosetta. Science 347, 628 (2015). 37. 37. Soderblom, L. A. et al. Short-wavelength infrared (1.3-2.6 μm) observations of the nucleus of Comet 19P/Borrelly. Icarus 167, 100–112 (2004). 38. 38. Abell, P. A. et al. Physical characteristics of comet nucleus C/2001 OG108 (LONEOS). Icarus 179, 174–194 (2005). 39. 39. Delbo, M., Mueller, M., Emery, J. P., Rozitis, B. & Capria, M. T. in Asteroids IV (eds Michel, P., DeMeo, F. E. & Bottke, W. F.) 107–128 (Univ. Arizona Press, Tucson, 2015). 40. 40. Rozitis, B. & Green, S. F. Directional characteristics of thermal-infrared beaming from atmosphereless planetary surfaces—a new thermophysical model. Mon. Not. R. Astron. Soc. 415, 2042–2062 (2011). 41. 41. Fornasier, S. et al. Spectrophotometric properties of the nucleus of comet 67P/Churyumov–Gerasimenko from the OSIRIS instrument onboard the ROSETTA spacecraft. Astron. Astrophys. 583, A30 (2015). 42. 42. Spohn, T. et al. Thermal and mechanical properties of the near-surface layers of comet 67P/Churyumov–Gerasimenko. Science 349, aab0464 (2015). 43. 43. Lowry, S. et al. The nucleus of comet 67P/Churyumov–Gerasimenko. A new shape model and thermophysical analysis. Astron. Astrophys. 548, A12 (2012). 44. 44. Groussin, O. et al. The temperature, thermal inertia, roughness and color of the nuclei of comets 103P/Hartley 2 and 9P/Tempel 1. Icarus 222, 580–594 (2013). 45. 45. Gasc, S. et al. Change of outgassing pattern of 67P/Churyumov–Gerasimenko during the March 2016 equinox as seen by ROSINA. Mon. Not. R. Astron. Soc. 469, S108–S117 (2017). ## Acknowledgements We thank the observatory staff at the Isaac Newton Group of Telescopes and the European Southern Observatory for responding quickly to our observing requests. Particular thanks go to R. Ashley, C. Fariña and I. Skillen (Isaac Newton Group) and G. Beccari, B. Haeussler and F. Labrana (European Southern Observatory). A.F., M.T.B. and W.C.F. acknowledge support from Science and Technology Facilities Council grant ST/P0003094/1 and M.T.B. acknowledges support from Science and Technology Facilities Council grant ST/L000709/1. C.S. acknowledges support from the Science and Technology Facilities Council in the form of an Ernest Rutherford Fellowship. B.R. is supported by a Royal Astronomical Society Research Fellowship. The WHT is operated on the island of La Palma by the Isaac Newton Group of Telescopes in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. The ACAM spectroscopy was obtained as part of programme SW2017b11. This paper is also based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under European Southern Observatory programme 2100.C-5009. ## Author information ### Affiliations 1. #### Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, Belfast, UK • Alan Fitzsimmons • , Méabh Hyland • , Tom Seccull • , Michele T. Bannister • , Wesley C. Fraser •  & Pedro Lacerda 2. 
#### Planetary and Space Sciences, School of Physical Sciences, The Open University, Milton Keynes, UK • Colin Snodgrass • & Ben Rozitis • Bin Yang 4. #### Institute for Astronomy, Honolulu, HI, USA • Robert Jedicke ### Contributions A.F. led the application and organization of the WHT observations, analysis of these data and writing of the paper. C.S. led the application for VLT observations, organized the observing plan and assisted with analysis and writing. B.R. performed the thermal modelling of 1I/2017 U1. B.Y. was co-investigator on the telescope proposals, assisted in writing the VLT proposal and reduced the X-shooter data. M.T.B. and W.C.F. assisted in interpretation of the spectra in terms of known TNO properties and helped with writing the paper. M.H. reduced the WHT data. T.S. reduced the VLT data and provided the comparison spectrum of Echeclus. R.J. was co-investigator on the telescope proposals and contributed to the analysis and interpretation, especially with respect to observational selection effects. P.L. assisted in interpretation of the variable spectra and helped with writing the paper. ### Competing interests The authors declare no competing financial interests. ### Corresponding author Correspondence to Alan Fitzsimmons. ### DOI https://doi.org/10.1038/s41550-017-0361-4
2019-01-16 22:18:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7230846881866455, "perplexity": 4650.631057874174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657907.79/warc/CC-MAIN-20190116215800-20190117001800-00336.warc.gz"}
https://math.stackexchange.com/questions/775538/full-rank-of-the-matrix-ceatb
# Full rank of the matrix $Ce^{At}B$ Let $A \in \mathbb R^{n \times n}$. Fix $m < n$ and let $B \in \mathbb R^{n \times m}$, $C \in \mathbb R^{(m-1) \times n}$ be two matrices with full rank. I want to find algebraic conditions (which I suspect to be related to observability of $(A,C)$ and/or controllability of $(A,B)$) on the matrices $A,B,C$, such that the kernel \begin{align*} \ker Ce^{At}B \end{align*} is one-dimensional for infinitely many times $t\ge 0$. Or, said differently (due to dimensionality reasons), I want to find conditions such that the matrix \begin{align*} Ce^{At}B \end{align*} has full (row) rank. The conjecture is false. Let $n \geq 2m$ and let $A=0$, then $e^{At}$ is the identity matrix. Now let $B \in \mathbb{R}^{n \times m}$ be a full rank matrix such that the first $n-m$ rows are zero. Analogously, let $C \in \mathbb{R}^{(m-1) \times n}$ be a full rank matrix such that the last $n+1-m$ columns are zero. Then $CB$ is the zero matrix. The same argument works if $A$ is a diagonal matrix. • Would the conjecture be true though, if I assume that the rank of $\begin{pmatrix} CB \\ CAB \\ \vdots \\ CA^{n-1}B \end{pmatrix}$ is $n$? – samsa44 Apr 30 '14 at 12:33
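A small numeric sanity check of the counterexample, with hypothetical sizes n = 4, m = 2 (so B is 4×2 with its first two rows zero and C is 1×4 with its last three columns zero):

    import numpy as np
    from scipy.linalg import expm

    n, m = 4, 2
    A = np.zeros((n, n))                                   # expm(A*t) is the identity for all t
    B = np.vstack([np.zeros((n - m, m)), np.eye(m)])       # full column rank, first n-m rows zero
    C = np.hstack([np.ones((m - 1, 1)), np.zeros((m - 1, n - 1))])  # full row rank, last n+1-m columns zero

    for t in (0.0, 0.5, 2.0):
        M = C @ expm(A * t) @ B
        print(t, M, np.linalg.matrix_rank(M))              # rank 0 for every t: never full row rank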
2019-06-24 13:49:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9761028289794922, "perplexity": 175.7715159391714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00495.warc.gz"}
https://mathoverflow.net/questions/210068/calculating-the-distinguished-varieties-of-intersection-product/222039
# Calculating the distinguished varieties of intersection product In Fulton's Intersection Theory, Example 6.1.2, one considers two divisors on $\mathbf{P}^2$ given by $D_1=A+2B,D_2=2A+B$, where $A,B$ are lines meeting at a point. Let $X=D_1\times D_2,Y=\mathbf{P}^2\times\mathbf{P}^2$, $V=\mathbf{P}^2$, and let $f\colon V\to Y$ be the diagonal map. Considering the fiber product of the closed immersion $i\colon X\to Y$ and $f$, one gets a closed immersion $j\colon W\to V$. Denote the ideal sheaf of $W$ in $V$ by $\mathcal{I}$; the cone $C_WV=\operatorname{Spec} (\mathcal{O}_W/\mathcal{I}\oplus \mathcal{I}/\mathcal{I}^2\oplus\mathcal{I}^2/\mathcal{I}^3...)$ has components $C_i$ with multiplicities $m_i$. The cone is naturally equipped with a projection $\pi$ to $W$, by taking the zeroth component and then the closed immersion. Then one defines the cycles $[\pi(C_i)]$ to be the distinguished varieties, and $\sum m_is_i^*[C_i]$ the canonical decomposition. ($s_i^*$ denotes the Gysin homomorphism of the zero section in the normal bundle.) In Example 6.1.2, the canonical decomposition is $\alpha+\beta+3[P]$, where $\alpha, \beta$ are degree $3$ zero cycles on $A,B$. I don't know how to calculate it. I tried the calculation by taking $Y=\operatorname{Spec}k[X,Y]$, then the cone is the affine scheme associated to $k[X,Y]/(X^2Y,XY^2)\oplus(X^2Y,XY^2)/(X^2Y,XY^2)^2...$. I am not sure how to describe the ring, I guess it is $k[X,Y,S,T]/(X^2Y,XY^2,SXY^2,TXY^2,SX^2Y,TX^2Y,SY-XT)$? I am not sure how to calculate the components of the scheme and the multiplicities. Let's fix some notation for making things explicit. (The notation in your last 3 paragraphs is slightly confusing, so I'm making up my own). I take $A=V(z)$ and $B=V(y)$, hence $P=[1,0,0]$. Then $D_1=V(y^2z)$ and $D_2=V(yz^2)$, and hence $$W=D_1 \cap D_2=V(y^2z,yz^2).$$ Since $D_i$ are divisors, we can calculate their normal bundles as $\mathcal{O}(D_i)|_{D_i} = \mathcal{O}(3)|_{D_i}$. In particular, after pulling back the normal bundle of $D_1\times D_2$ to the intersection, we get $N=\mathcal{O}(3)|_W\oplus \mathcal{O}(3)|_W$. The normal cone $C$ of $D_1\cap D_2$ to $\mathbb{P}^2$ sits inside $N$. Let's restrict to the open dense subset of $W$ where $x\neq 0$ and calculate the cone and the irreducible components lying over this subset (one can easily see that we get nothing extra by considering the two points at infinity). Then the base is $\mathrm{Spec}\ k[y,z]/(y^2z,yz^2)$ and the cone is, similar to what is written in the question, $$C|_{D(x)}= \mathrm{Spec}\ (k[y,z]/(y^2z,yz^2) \oplus (y^2z,yz^2)/(y^2z,yz^2)^2 \oplus \dots).$$ Trivialize $N|_{D(x)}= \mathrm{Spec}\ k[y,z,s,t]/(y^2z,yz^2)$ with the obvious embedding given by $s\mapsto y^2z$ and $t\mapsto yz^2$. This is clearly a surjective map of rings, and the kernel is generated by $sz-yt$. (Remark: hence the description in the question is correct, after setting up everything such that it makes sense). We want to calculate the irreducible components of $$\mathrm{Spec}\ k[y,z,s,t]/(y^2z,yz^2,sz-yt).$$ On the locus where $z\neq 0$, we have $y=s=0$, which gives us the $z$-$t$-plane minus the line $z=0$, and on the locus where $y\neq 0$, we get the $y$-$s$-plane, minus the line $y=0$. Taking the closure of these two, we get two two-dimensional irreducible components $C_1$ and $C_2$. Moreover, the fiber over $y=z=0$ is just $\mathrm{Spec}\ k[s,t]$, which is two-dimensional and irreducible and contained in none of the other components, hence it is another irreducible component $C_3$.
To conclude, the irreducible components of $C$ over $D(x)\subset W$ are given by the $z$-$t$-plane, the $y$-$s$-plane, and the $s$-$t$-plane. This already gives us the distinguished varieties $A$, $B$ and $P$. To get the multiplicities, we need to calculate the length of the local rings. The local rings at the first two components are just the quotient fields of the respective components (calculate on the complement of $y=z=0$). After localizing the coordinate ring at the prime ideal $(y,z)$, we get $$k[y,z,s,t]/(y^2z,yz^2,sz-yt)_{(y,z)}=k(s,t)[y,z]/(y^2z,yz^2,z-\frac{t}{s}\cdot y),$$ which is the same as $$k(s,t)[y]/(y^3).$$ This ring clearly has dimension $3$ as a $k(s,t)$-vector space, so by Example A.1.1 from Fulton, the last irreducible component has multiplicity $3$. Hence $$[C]= [C_1]+[C_2]+3[C_3].$$ Then you apply Gysin for the vector bundle $N$ restricted to $A$, $B$, and $P$ to get the result. Note that $C_3$ is equal to the whole fiber of $N$ over $P$, so this simply gets mapped to the point $P$, since the vector bundle is trivial over the point $P$. On the other hand, the cycles $[C_1]$ and $[C_2]$ get mapped to zero-cycles of degree $3$ due to the fact that $N$ is a sum of two copies of $\mathcal{O}(3)$.
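To spell out the final multiplicity, here is the composition series behind the statement that $k(s,t)[y]/(y^3)$ has length $3$; this is only a restatement of the dimension count above, not an additional argument:
$$(0)\ \subset\ (y^{2})\ \subset\ (y)\ \subset\ k(s,t)[y]/(y^{3}).$$
Each successive quotient is a one-dimensional $k(s,t)$-vector space, spanned by the classes of $y^{2}$, $y$ and $1$ respectively, so the length of the local ring at the third component is $3$, which is exactly the multiplicity of $C_3$ in $[C]$.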
2018-11-16 02:15:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9788439869880676, "perplexity": 82.79972672525267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742968.18/warc/CC-MAIN-20181116004432-20181116030432-00334.warc.gz"}
https://www.physicsforums.com/threads/bullet-penetration.739982/
# Bullet penetration

1. Feb 23, 2014 ### oneamp
If F=ma, why does a bullet that's going at a constant velocity have enough force to penetrate an object?

2. Feb 23, 2014 ### Staff: Mentor
It doesn't go at a constant velocity. A bullet decelerates very rapidly.

3. Feb 23, 2014 ### Staff: Mentor
A moving bullet doesn't have 'force', it has momentum and energy. Under the right conditions, that energy may be sufficient to allow the bullet to penetrate an object. During the collision, forces are generated that slow down and deform the bullet. ~~And as DaleSpam stated,~~ a bullet doesn't move with constant velocity. The air exerts a retarding force on it. Last edited: Feb 24, 2014

4. Feb 23, 2014 ### oneamp
Thanks...

5. Feb 23, 2014 ### Staff: Mentor
I'm pretty sure that's not what he meant. I'm pretty sure that he meant that in the case being described by the OP - when the bullet hits something - it isn't going at a constant velocity, it is decelerating very rapidly, which involves a very large force. ...We've gotten almost this exact question several times in the past few days...

6. Feb 23, 2014 ### LikesIntuition
You could look at the bullet's behavior in a vacuum, in which case there would be no air friction. Then it would travel at a constant velocity, and when it hit something, some sort of impulse would occur, which involves forces over some period of time that change the momentum of both the bullet and whatever it's hitting, but conserve the total momentum. This is exactly what's happening in the air, too. It's traveling through a "vacuum" of space that also happens to be filled with air molecules. So when the bullet runs into those molecules, the interaction I described above happens. Sorry if that just sounds like rambling, but I figured another take on it might help you glean some more insight! And if your question is specifically about how a thing moving at a constant velocity can deliver a force, then it's like the posters above were saying: when the bullet makes contact with anything else, it does accelerate (changes velocity), and that acceleration is the one in the F=ma equation, not its lack of acceleration BEFORE the impact. The bullet has no "force" if it isn't interacting with something. Hope that helps a bit! Last edited: Feb 23, 2014

7. Feb 24, 2014 ### Staff: Mentor
Ah, OK. I was wondering, since I would not have said that a bullet decelerates rapidly through the air.

8. Feb 26, 2014
Penetration of bullet: it penetrates not due to its velocity alone but because of the deceleration provided by the body into which it penetrates. Hence, by action and reaction, a force is exerted and the bullet penetrates. Hope this helps. If there is need of further comprehension, please feel free to reply. Regards

9. Feb 26, 2014
$F=\frac{mv-mu}{t}$
$F=\frac{0.2\times 0-0.2\times 500}{0.5}=-200\,\text{N}$
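To make post 9's numbers concrete, here is a small Python version of the same impulse-momentum estimate; the mass, speed and stopping time are the illustrative values from that post, not measured data:

```python
# Average force from the impulse-momentum theorem: F_avg = (m*v_final - m*v_initial) / t
m = 0.2            # bullet mass in kg (post's assumed value)
v_initial = 500.0  # impact speed in m/s (assumed)
v_final = 0.0      # the bullet comes to rest inside the target
t = 0.5            # stopping time in s (assumed; real stopping times are much shorter)

F_avg = (m * v_final - m * v_initial) / t
print(f"Average force on the bullet: {F_avg:.1f} N")  # -200.0 N (negative = decelerating)
```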
2016-08-31 16:12:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5734774470329285, "perplexity": 1199.3223166388764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295966.49/warc/CC-MAIN-20160823195815-00017-ip-10-153-172-175.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Bimodal_distribution
# Multimodal distribution

Figure 1. A simple bimodal distribution, in this case a mixture of two normal distributions with the same variance but different means. The figure shows the probability density function (p.d.f.), which is an average of the bell-shaped p.d.f.s of the two normal distributions. Figure 2. Histogram of body lengths of 300 weaver ant workers[1] Figure 3. A bivariate, multimodal distribution

In statistics, a bimodal distribution is a continuous probability distribution with two different modes. These appear as distinct peaks (local maxima) in the probability density function, as shown in Figure 1. More generally, a multimodal distribution is a continuous probability distribution with two or more modes, as illustrated in Figure 3.

## Terminology

When the two modes are unequal the larger mode is known as the major mode and the other as the minor mode. The least frequent value between the modes is known as the antimode. The difference between the major and minor modes is known as the amplitude. In time series the major mode is called the acrophase and the antimode the batiphase.

## Galtung's classification

Galtung introduced a classification system (AJUS) for distributions:[2]
• A: unimodal distribution – peak in the middle
• J: unimodal – peak at either end
• U: bimodal – peaks at both ends
• S: bimodal or multimodal – multiple peaks
This classification has since been modified slightly:
• J (modified) – peak on right
• L: unimodal – peak on left
• F: no peak (flat)
Under this classification bimodal distributions are classified as type S or U.

## Examples

Bimodal distributions occur both in mathematics and in the natural sciences.

### Probability distributions

Important bimodal distributions include the arcsine distribution and the beta distribution. Others include the U-quadratic distribution. The ratio of two normal distributions is also bimodally distributed. Let ${\displaystyle R={\frac {a+x}{b+y}}}$ where a and b are constants and x and y are distributed as normal variables with a mean of 0 and a standard deviation of 1. R has a known density that can be expressed as a confluent hypergeometric function.[3] The distribution of the reciprocal of a t distributed random variable is bimodal when the degrees of freedom are more than one. Similarly the reciprocal of a normally distributed variable is also bimodally distributed. A t statistic generated from a data set drawn from a Cauchy distribution is bimodal.[4]

### Occurrences in nature

Examples of variables with bimodal distributions include the time between eruptions of certain geysers, the color of galaxies, the size of worker weaver ants, the age of incidence of Hodgkin's lymphoma, the speed of inactivation of the drug isoniazid in US adults, the absolute magnitude of novae, and the circadian activity patterns of those crepuscular animals that are active both in morning and evening twilight. In fishery science multimodal length distributions reflect the different year classes and can thus be used for age-distribution and growth estimates of the fish population.[5] Sediments are usually distributed in a bimodal fashion. Bimodal distributions are also seen in traffic analysis, where traffic peaks during the AM rush hour and then again during the PM rush hour.
This phenomenon is also seen in daily water distribution, as water demand, in the form of showers, cooking, and toilet use, generally peaks in the morning and evening periods.

### Econometrics

In econometric models, the parameters may be bimodally distributed.[6]

## Origins

Main article: Mixture distribution

### Mathematical

A bimodal distribution most commonly arises as a mixture of two different unimodal distributions (i.e. distributions having only one mode). In other words, the bimodally distributed random variable X is defined as ${\displaystyle Y}$ with probability ${\displaystyle \alpha }$ or ${\displaystyle Z}$ with probability ${\displaystyle (1-\alpha ),}$ where Y and Z are unimodal random variables and ${\displaystyle 0<\alpha <1}$ is a mixture coefficient. Mixtures with two distinct components need not be bimodal, and two-component mixtures of unimodal component densities can have more than two modes. There is no immediate connection between the number of components in a mixture and the number of modes of the resulting density.

### Particular distributions

Bimodal distributions, despite their frequent occurrence in data sets, have only rarely been studied. This may be because of the difficulties in estimating their parameters either with frequentist or Bayesian methods. Among those that have been studied are the bimodal exponential power distribution,[7] the alpha-skew-normal distribution,[8] the bimodal skew-symmetric normal distribution,[9] and mixtures of Com-Poisson distributions.[10] Bimodality also naturally arises in the cusp catastrophe distribution.

### Biology

In biology five factors are known to contribute to bimodal distributions of population sizes:
• the initial distribution of individual sizes
• the distribution of growth rates among the individuals
• the size and time dependence of the growth rate of each individual
• mortality rates that may affect each size class differently
• the DNA methylation in the human and mouse genomes.
The bimodal distribution of sizes of weaver ant workers shown in Figure 2 arises due to the existence of two distinct classes of workers, namely major workers and minor workers.[1] In this case, Y would be the size of a random major worker, Z the size of a random minor worker, and α the proportion of worker weaver ants that are major workers. The distribution of fitness effects of mutations for both whole genomes[11][12] and individual genes[13] is also frequently found to be bimodal, with most mutations being either neutral or lethal and relatively few having an intermediate effect.

## General properties

A mixture of two unimodal distributions with differing means is not necessarily bimodal. The combined distribution of heights of men and women is sometimes used as an example of a bimodal distribution, but in fact the difference in mean heights of men and women is too small relative to their standard deviations to produce bimodality.[14] Bimodal distributions have the peculiar property that – unlike the unimodal distributions – the mean may be a more robust sample estimator than the median.[15] This is clearly the case when the distribution is U shaped like the arcsine distribution. It may not be true when the distribution has one or more long tails.

### Moments of mixtures

Let ${\displaystyle f(x)=pg_{1}(x)+(1-p)g_{2}(x)\,}$ where gi is a probability distribution and p is the mixing parameter.
The moments of f(x) are[16]
${\displaystyle \mu =p\mu _{1}+(1-p)\mu _{2}}$
${\displaystyle \nu _{2}=p[\sigma _{1}^{2}+\delta _{1}^{2}]+(1-p)[\sigma _{2}^{2}+\delta _{2}^{2}]}$
${\displaystyle \nu _{3}=p[S_{1}\sigma _{1}^{3}+3\delta _{1}\sigma _{1}^{2}+\delta _{1}^{3}]+(1-p)[S_{2}\sigma _{2}^{3}+3\delta _{2}\sigma _{2}^{2}+\delta _{2}^{3}]}$
${\displaystyle \nu _{4}=p[K_{1}\sigma _{1}^{4}+4S_{1}\delta _{1}\sigma _{1}^{3}+6\delta _{1}^{2}\sigma _{1}^{2}+\delta _{1}^{4}]+(1-p)[K_{2}\sigma _{2}^{4}+4S_{2}\delta _{2}\sigma _{2}^{3}+6\delta _{2}^{2}\sigma _{2}^{2}+\delta _{2}^{4}]}$
where
${\displaystyle \mu =\int xf(x)\,dx}$
${\displaystyle \delta _{i}=\mu _{i}-\mu }$
${\displaystyle \nu _{r}=\int (x-\mu )^{r}f(x)\,dx}$
and Si and Ki are the skewness and kurtosis of the ith distribution.

## Mixture of two normal distributions

It is not uncommon to encounter situations where an investigator believes that the data comes from a mixture of two normal distributions. Because of this, this mixture has been studied in some detail.[17] A mixture of two normal distributions has five parameters to estimate: the two means, the two variances and the mixing parameter. A mixture of two normal distributions with equal standard deviations is bimodal only if their means differ by at least twice the common standard deviation.[14] Estimation of the parameters is simplified if the variances can be assumed to be equal (the homoscedastic case). If the means of the two normal distributions are equal, then the combined distribution is unimodal. Conditions for unimodality of the combined distribution were derived by Eisenberger.[18] Necessary and sufficient conditions for a mixture of normal distributions to be bimodal have been identified by Ray and Lindsay.[19] A mixture of two approximately equal mass normal distributions has a negative kurtosis since the two modes on either side of the center of mass effectively flatten out the distribution. A mixture of two normal distributions with highly unequal mass has a positive kurtosis since the smaller distribution lengthens the tail of the more dominant normal distribution. Mixtures of other distributions require additional parameters to be estimated.

### Mixture of two normal distributions with equal variances

In the case of equal variance, the mixture is unimodal if and only if[20]
${\displaystyle d\leq 1}$
or
${\displaystyle \left\vert \log(1-p)-\log(p)\right\vert \geq 2\log(d-{\sqrt {d^{2}-1}})+2d{\sqrt {d^{2}-1}}}$
where p is the mixing parameter and d is
${\displaystyle d={\frac {\left\vert \mu _{1}-\mu _{2}\right\vert }{2{\sqrt {\sigma _{1}\sigma _{2}}}}}}$
where μ1 and μ2 are the means of the two normal distributions and σ1 and σ2 are their standard deviations.

## Summary statistics

Bimodal distributions are a commonly used example of how summary statistics such as the mean, median, and standard deviation can be deceptive when used on an arbitrary distribution. For example, in the distribution in Figure 1, the mean and median would be about zero, even though zero is not a typical value. The standard deviation is also larger than the standard deviation of each component normal distribution. Although several have been suggested, there is at present no generally agreed summary statistic (or set of statistics) to quantify the parameters of a general bimodal distribution. For a mixture of two normal distributions, the two means and two standard deviations along with the mixing parameter (a weighting for the combination) are usually used – a total of five parameters.
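To illustrate the last two points (the bimodality criterion for equal-variance normal mixtures and the way summary statistics can mislead), here is a small NumPy sketch; the component parameters are made-up values chosen so that the means differ by more than twice the common standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixture of N(-2, 1) with weight p and N(+2, 1) with weight 1 - p.
# The means differ by 4 > 2 * sigma, so by the criterion above the density is bimodal.
p, mu1, mu2, sigma = 0.5, -2.0, 2.0, 1.0
n = 100_000
from_first = rng.random(n) < p
x = np.where(from_first, rng.normal(mu1, sigma, n), rng.normal(mu2, sigma, n))

print("mean  :", x.mean())      # near 0, even though 0 is an atypical value
print("median:", np.median(x))  # also near 0
print("std   :", x.std())       # well above the component standard deviation of 1.0
```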
### Ashman's D

A statistic that may be useful is Ashman's D:[21]
${\displaystyle D=2^{\frac {1}{2}}{\frac {\left|\mu _{1}-\mu _{2}\right|}{\sqrt {(\sigma _{1}^{2}+\sigma _{2}^{2})}}}}$
where μ1, μ2 are the means and σ1, σ2 are the standard deviations. For a mixture of two normal distributions D > 2 is required for a clean separation of the distributions.

### van der Eijk's A

This measure is a weighted average of the degree of agreement in the frequency distribution.[22] A ranges from -1 (perfect bimodality) to +1 (perfect unimodality). It is defined as
${\displaystyle A=U(1-{\frac {S-1}{K-1}})}$
where U is the unimodality of the distribution, S the number of categories that have nonzero frequencies and K the total number of categories. The value of U is 1 if the distribution has any of the three following characteristics:
• all responses are in a single category
• the responses are evenly distributed among all the categories
• the responses are evenly distributed among two or more contiguous categories, with the remaining categories having zero responses
With distributions other than these the data must be divided into 'layers'. Within a layer the responses are either equal or zero. The categories do not have to be contiguous. A value of A for each layer (Ai) is calculated and a weighted average for the distribution is determined. The weights (wi) for each layer are the number of responses in that layer. In symbols
${\displaystyle A_{overall}=\sum w_{i}A_{i}}$
A uniform distribution has A = 0; when all the responses fall into one category, A = +1. One theoretical problem with this index is that it assumes that the intervals are equally spaced. This may limit its applicability.

### Bimodal separation

This index assumes that the distribution is a mixture of two normal distributions with means (μ1 and μ2) and standard deviations (σ1 and σ2):[23]
${\displaystyle S={\frac {\mu _{1}-\mu _{2}}{2(\sigma _{1}+\sigma _{2})}}}$

### Bimodality coefficient

Sarle's bimodality coefficient b is[24]
${\displaystyle \beta ={\frac {\gamma ^{2}+1}{\kappa }}}$
where γ is the skewness and κ is the kurtosis. The kurtosis is here defined to be the standardised fourth moment around the mean. The value of b lies between 0 and 1.[25] The logic behind this coefficient is that a bimodal distribution will have very low kurtosis, an asymmetric character, or both – all of which increase this coefficient. The formula for a finite sample is[26]
${\displaystyle b={\frac {g^{2}+1}{k+{\frac {3(n-1)^{2}}{(n-2)(n-3)}}}}}$
where n is the number of items in the sample, g is the sample skewness and k is the sample excess kurtosis. The value of b for the uniform distribution is 5/9. This is also its value for the exponential distribution. Values greater than 5/9 may indicate a bimodal or multimodal distribution. The maximum value (1.0) is reached only by a Bernoulli distribution with only two distinct values or the sum of two different Dirac delta functions (a bi-delta distribution). The distribution of this statistic is unknown. It is related to a statistic proposed earlier by Pearson – the difference between the kurtosis and the square of the skewness (vide infra).

### Bimodality amplitude

This is defined as[23]
${\displaystyle A_{B}={\frac {A_{1}-A_{an}}{A_{1}}}}$
where A1 is the amplitude of the smaller peak and Aan is the amplitude of the antimode. AB is always < 1. Larger values indicate more distinct peaks.
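The statistics just defined translate directly into code; here is a minimal Python sketch of Ashman's D (from component parameters) and the finite-sample bimodality coefficient (from data), using SciPy for the sample skewness and excess kurtosis:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def ashman_D(mu1, sigma1, mu2, sigma2):
    """Ashman's D for a two-component normal mixture; D > 2 suggests a clean separation."""
    return np.sqrt(2.0) * abs(mu1 - mu2) / np.sqrt(sigma1**2 + sigma2**2)

def bimodality_coefficient(x):
    """Finite-sample coefficient b; values above 5/9 may indicate bi- or multimodality."""
    x = np.asarray(x)
    n = len(x)
    g = skew(x)       # sample skewness
    k = kurtosis(x)   # sample excess kurtosis (Fisher definition)
    return (g**2 + 1) / (k + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))

print(ashman_D(-2.0, 1.0, 2.0, 1.0))  # 4.0 for the components N(-2,1) and N(2,1), i.e. D > 2
```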
### Bimodal ratio This is the ratio of the left and right peaks.[23] Mathematically ${\displaystyle R={\frac {A_{r}}{A_{l}}}}$ where Al and Ar are the amplitudes of the left and right peaks respectively. ### Bimodality parameter This parameter (B) is due to Wilcock.[27] ${\displaystyle B={\frac {A_{r}}{A_{l}}}\sum P_{i}}$ where Al and Ar are the amplitudes of the left and right peaks respectively and Pi is the logarithm taken to the base 2 of the proportion of the distribution in the ith interval. The maximal value of B is 1. ### Bimodality indices The bimodality index proposed by Wang et al assumes that the distribution is a sum of two normal distributions with equal variances but differing means.[28] It is defined as follows: ${\displaystyle \delta ={\frac {|\mu _{1}-\mu _{2}|}{\sigma }}}$ where μ1, μ2 are the means and σ is the common standard deviation. ${\displaystyle BI=\delta {\sqrt {p(1-p)}}}$ where p is the mixing parameter. A different bimodality index has been proposed by Sturrock.[29] This index (B) is defined as ${\displaystyle B={\frac {1}{N}}\left[\left(\sum _{1}^{N}\cos(2\pi m\gamma )\right)^{2}+\left(\sum _{1}^{N}\sin(2\pi m\gamma )\right)^{2}\right]}$ When m = 2 and γ is uniformly distributed, B is exponentially distributed.[30] This statistic is a form of periodogram. It suffers from the usual problems of estimation and spectral leakage common to this form of statistic. Another bimodality index has been proposed by de Michele and Accatino.[31] Their index (B) is ${\displaystyle B=|\mu -\mu _{M}|}$ where μ is the arithmetic mean of the sample and ${\displaystyle \mu _{M}={\frac {\sum _{i=1}^{L}m_{i}x_{i}}{\sum _{i=1}^{L}m_{i}}}}$ where mi is number of data points in the ith bin,xi is the center of the ith bin and L is the number of bins. The authors suggested a cut off value of 0.1 for B to distinguish between a bimodal (B > 0.1)and unimodal (B < 0.1) distribution. No statistical justification was offered for this value. A further index (B) has been proposed by Sambrook Smith et al[32] ${\displaystyle B=|\phi _{2}-\phi _{1}|{\frac {p_{2}}{p_{1}}}}$ where p1 and p2 are the proportion contained in the primary (that with the greater amplitude) and secondary (that with the lesser amplitude) mode and φ1 and φ2 are the φ-sizes of the primary and secondary mode. The φ-size is defined as minus one times the log of the data size taken to the base 2. This transformation is commonly used in the study of sediments. The authors recommended a cut off value of 1.5 with B being greater than 1.5 for a bimodal distribution and less than 1.5 for a unimodal distribution. No statistical justification for this value was given. Another bimodality parameter has been proposed by Chaudhuri and Agrawal.[33] This parameter requires knowledge of the variances of the two subpopulations that make up the bimodal distribution. It is defined as ${\displaystyle k={\frac {n_{1}\sigma _{1}^{2}+n_{2}\sigma _{2}^{2}}{m\sigma ^{2}}}}$ where ni is the number of data points in the ith subpopulation, σi2 is the variance of the ith subpopulation, m is the total size of the sample and σ2 is the sample variance. It is a weighted average of the variance. The authors suggest that this parameter can be used as the optimisation target to divide a sample into two subpopulations. No statistical justification for this suggestion was given. ## Statistical tests A number of tests are available to determine if a data set is distributed in a bimodal (or multimodal) fashion. 
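Looking back at the indices in this section, the Wang et al. bimodality index is the easiest to compute once the component means, the common standard deviation and the mixing proportion are known; a small Python helper (the function and argument names are mine):

```python
import math

def wang_bimodality_index(mu1, mu2, sigma, p):
    """BI = delta * sqrt(p * (1 - p)) with delta = |mu1 - mu2| / sigma.

    Assumes a two-component normal mixture with equal variances, as in Wang et al.
    """
    delta = abs(mu1 - mu2) / sigma
    return delta * math.sqrt(p * (1.0 - p))

print(wang_bimodality_index(-2.0, 2.0, 1.0, 0.5))  # 2.0 for an equal mixture of N(-2,1) and N(2,1)
```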
### Graphical methods

In the study of sediments, particle size is frequently bimodal. Empirically, it has been found useful to plot the frequency against the log( size ) of the particles.[34][35] This usually gives a clear separation of the particles into a bimodal distribution. In geological applications the logarithm is normally taken to the base 2. The log transformed values are referred to as phi (Φ) units. This system is known as the Krumbein (or phi) scale. An alternative method is to plot the log of the particle size against the cumulative frequency. This graph will usually consist of two reasonably straight lines with a connecting line corresponding to the antimode. Statistics: Approximate values for several statistics can be derived from the graphic plots.[34]
${\displaystyle {\mathit {Mean}}={\frac {\phi _{16}+\phi _{50}+\phi _{84}}{3}}}$
${\displaystyle {\mathit {StdDev}}={\frac {\phi _{84}-\phi _{16}}{4}}+{\frac {\phi _{95}-\phi _{5}}{6.6}}}$
${\displaystyle {\mathit {Skew}}={\frac {\phi _{84}+\phi _{16}-2\phi _{50}}{2(\phi _{84}-\phi _{16})}}+{\frac {\phi _{95}+\phi _{5}-2\phi _{50}}{2(\phi _{95}-\phi _{5})}}}$
${\displaystyle {\mathit {Kurt}}={\frac {\phi _{95}-\phi _{5}}{2.44(\phi _{75}-\phi _{25})}}}$
where Mean is the mean, StdDev is the standard deviation, Skew is the skewness, Kurt is the kurtosis and φx is the value of the variate φ at the xth percentage of the distribution.

### Unimodal vs. bimodal distribution

A necessary but not sufficient condition for a symmetrical distribution to be bimodal is that the kurtosis be less than three.[36][37] Here the kurtosis is defined to be the standardised fourth moment around the mean. The reference given prefers to use the excess kurtosis – the kurtosis less 3. Pearson in 1894 was the first to devise a procedure to test whether a distribution could be resolved into two normal distributions.[38] This method required the solution of a ninth order polynomial. In a subsequent paper Pearson reported that for any distribution skewness² + 1 < kurtosis.[25] Later Pearson showed that[39]
${\displaystyle b_{2}-b_{1}\geq 1}$
where b2 is the kurtosis and b1 is the square of the skewness. Equality holds only for the two point Bernoulli distribution or the sum of two different Dirac delta functions. These are the most extreme cases of bimodality possible. The kurtosis in both these cases is 1. Since they are both symmetrical their skewness is 0 and the difference is 1. Baker proposed a transformation to convert a bimodal to a unimodal distribution.[40] Several tests of unimodality versus bimodality have been proposed: Haldane suggested one based on second central differences.[41] Larkin later introduced a test based on the F test;[42] Bennett created one based on the G test.[43] Tokeshi has proposed a fourth test.[44][45] A test based on a likelihood ratio has been proposed by Holzmann and Vollmer.[20] A method based on the score and Wald tests has been proposed.[46] This method can distinguish between unimodal and bimodal distributions when the underlying distributions are known.

### Antimode tests

Statistical tests for the antimode are known.[47] Otsu's method: Otsu's method is commonly employed in computer graphics to determine the optimal separation between two distributions.
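Returning to the graphic statistics above, they can be evaluated directly from the φ-percentiles of a sample; a short Python sketch of the Folk and Ward formulas (the choice of percentile estimator is an implementation detail, not part of the formulas):

```python
import numpy as np

def folk_ward_stats(phi):
    """Graphic mean, standard deviation, skewness and kurtosis from phi-scale sizes."""
    p = {q: np.percentile(phi, q) for q in (5, 16, 25, 50, 75, 84, 95)}
    mean = (p[16] + p[50] + p[84]) / 3
    std = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
    skew = ((p[84] + p[16] - 2 * p[50]) / (2 * (p[84] - p[16]))
            + (p[95] + p[5] - 2 * p[50]) / (2 * (p[95] - p[5])))
    kurt = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
    return mean, std, skew, kurt
```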
### General tests

To test if a distribution is other than unimodal, several additional tests have been devised: the bandwidth test,[48] the dip test,[49] the excess mass test,[50] the MAP test,[51] the mode existence test,[52] the runt test,[53][54] the span test,[55] and the saddle test. The dip test is available for use in R.[1] The P values for the dip statistic range between 0 and 1. P values less than 0.05 indicate significant multimodality and P values greater than 0.05 but less than 0.10 suggest multimodality with marginal significance.

### Silverman's test

Silverman introduced a bootstrap method for the number of modes.[48] The test uses a fixed bandwidth, which reduces the power of the test and its interpretability. Undersmoothed densities may have an excessive number of modes whose count during bootstrapping is unstable.

### Bajgier-Aggarwal test

Bajgier and Aggarwal have proposed a test based on the kurtosis of the distribution.[56]

### Special cases

Additional tests are available for a number of special cases: Mixture of two normal distributions: A study of data from a mixture of two normal distributions found that separation into the two components was difficult unless the means were separated by 4–6 standard deviations.[57] In astronomy the Kernel Mean Matching algorithm is used to decide if a data set belongs to a single normal distribution or to a mixture of two normal distributions. Beta-normal distribution: This distribution is bimodal for certain values of its parameters. A test for these values has been described.[58]

## Parameter estimation and fitting curves

Assuming that the distribution is known to be bimodal or has been shown to be bimodal by one or more of the tests above, it is frequently desirable to fit a curve to the data. This may be difficult. Bayesian methods may be useful in difficult cases.

### Software

Two normal distributions: A package for R is available for testing for bimodality.[2] This package assumes that the data are distributed as a sum of two normal distributions. If this assumption is not correct the results may not be reliable. It also includes functions for fitting a sum of two normal distributions to the data. If the distribution is a mixture of two normal distributions, then the expectation-maximization algorithm may be used to determine the parameters. Several programmes are available for this including Cluster.[59] Other distributions: The mixtools package, also available for R, can test for and estimate the parameters of a number of different distributions.[60] Another package for a mixture of two right tailed gamma distributions is available.[61] Several other packages for R are available to fit mixture models; these include flexmix,[62] mcclust,[63] and mixdist.[64] The programme SWRC fit can fit a number of bimodal distributions.[3] The statistical programme SAS can also fit a variety of mixed distributions with the command PROC FREQ.

## References

1. ^ a b Weber, NA (1946). "Dimorphism in the African Oecophylla worker and an anomaly (Hym.: Formicidae)" (PDF). Annals of the Entomological Society of America 39: 7–10. doi:10.1093/aesa/39.1.7. 2. ^ Galtung J (1969) Theory and methods of social research. Universitetsforlaget, Oslo ISBN 0043000177 3. ^ Fieller E (1932) The distribution of the index in a normal bivariate population. Biometrika (24):428–440 4. ^ Fiorio, CV; Hajivassiliou, VA; Phillips, PCB (2010). "Bimodal t-ratios: the impact of thick tails on inference". The Econometrics Journal 13: 271–289.
doi:10.1111/j.1368-423X.2010.00315.x. 5. ^ Introduction to tropical fish stock assessment 6. ^ Phillips P C B (2006) A remark on bimodality and weak instrumentation in structural equation estimation. Cowles Foundation paper no. 1171 7. ^ Hassan, MY; Hijazi, R. "H (2010) A bimodal exponential power distribution". Pak J Statist 26 (2): 379–396. 8. ^ Elal-Olivero, D (2010). "Alpha-skew-normal distribution". Proyecciones J Math 29 (3): 224–240. doi:10.4067/s0716-09172010000300006. 9. ^ Hassan MY and El-Bassiouni MY (2013) Bimodal skew-symmetric normal distribution. UAEU-CBE-Working Paper Series pp 1–20 10. ^ Bosea S, Shmuelib G, Sura P, Dubey P (2013) Fitting Com-Poisson mixtures to bimodal count data. Proceedings of the 2013 International Conference on Information, Operations Management and Statistics (ICIOMS2013), Kuala Lumpur, Malaysia pp 1–8 11. ^ Sanjuán, R (Jun 27, 2010). "Mutational fitness effects in RNA and single-stranded DNA viruses: common patterns revealed by site-directed mutagenesis studies.". Philosophical transactions of the Royal Society of London. Series B, Biological sciences 365 (1548): 1975–82. doi:10.1098/rstb.2010.0063. PMC 2880115. PMID 20478892. 12. ^ Eyre-Walker, A; Keightley, PD (Aug 2007). "The distribution of fitness effects of new mutations.". Nature Reviews Genetics 8 (8): 610–8. doi:10.1038/nrg2146. PMID 17637733. 13. ^ Hietpas, RT; Jensen, JD; Bolon, DN (May 10, 2011). "Experimental illumination of a fitness landscape.". Proceedings of the National Academy of Sciences of the United States of America 108 (19): 7896–901. Bibcode:2011PNAS..108.7896H. doi:10.1073/pnas.1016024108. PMC 3093508. PMID 21464309. 14. ^ a b Schilling, Mark F.; Watkins, Ann E.; Watkins, William (2002). "Is Human Height Bimodal?". The American Statistician 56 (3): 223–229. doi:10.1198/00031300265. 15. ^ Mosteller F, Tukey JW (1977) Data analysis and regression: a second course in statistics. Reading, Mass, Addison-Wesley Pub Co 16. ^ 17. ^ Robertson, CA; Fryer, JG (1969). "Some descriptive properties of normal mixtures". Skandinavisk Aktuarietidskrift 69: 137–146. 18. ^ Eisenberger, I (1964). "Genesis of bimodal distributions". Technometrics 6 (4): 357–363. doi:10.1080/00401706.1964.10490199. 19. ^ Ray, S; Lindsay, BG (2005). "The topography of multivariate normal mixtures". Ann Stat 33 (5): 2042–2065. doi:10.1214/009053605000000417. 20. ^ a b Holzmann, H; Vollmer, S (2008). "A likelihood ratio test for bimodality in two-component mixtures – with application to regional income distribution in the EU.". AStA 2 (1): 57–69. Cite error: Invalid <ref> tag; name "Holzmann2008" defined multiple times with different content (see the help page). 21. ^ Ashman KM, Bird CM, Zepf SE (1994) Astronomical J 108: 2348 22. ^ Van der Eijk, C (2001). "Measuring agreement in ordered rating scales". Quality and quantity 35 (3): 325–341. doi:10.1023/a:1010374114305. 23. ^ a b c Zhang, C; Mapes, BE; Soden, BJ (2003). "Bimodality in tropical water vapour". Q J R. Meteorol Soc 129: 2847–2866. doi:10.1256/qj.02.16. 24. ^ Ellison, AM (1987). "Effect of seed dimorphism on the density-dependent dynamics of experimental populations of Atriplex triangularis (Chenopodiaceae)". Am J Botany 74 (8): 1280–1288. doi:10.2307/2444163. 25. ^ a b Pearson, K (1916). "Mathematical contributions to the theory of evolution, XIX: Second supplement to a memoir on skew variation". Phil Trans Roy Soc London. Series A 216 (538–548): 429–457. Bibcode:1916RSPTA.216..429P. doi:10.1098/rsta.1916.0009. JSTOR 91092. 
Cite error: Invalid <ref> tag; name "Pearson1916" defined multiple times with different content (see the help page). 26. ^ SAS Institute Inc. (2012). SAS/STAT 12.1 user’s guide. Cary, NC: Author. 27. ^ Wilcock, PR (1993). "The critical shear stress of natural sediments". J Hydraul Engrg ASCE 119: 491–505. doi:10.1061/(asce)0733-9429(1993)119:4(491). 28. ^ Wang, J; Wen, S; Symmans, WF; Pusztai, L; Coombes, KR (2009). "The bimodality index: a criterion for discovering and ranking bimodal signatures from cancer gene expression profiling data". Cancer Inform 7: 199–216. 29. ^ Sturrock, P (2008). "Analysis of bimodality in histograms formed from GALLEX and GNO solar neutrino data". Solar Physics 249: 1–10. arXiv:0711.0216. Bibcode:2008SoPh..249....1S. doi:10.1007/s11207-008-9170-3. 30. ^ Scargle, JD (1982). "Studies in astronomical time series analysis. II – Statistical aspects of spectral analysis of unevenly spaced data". Astrophys J 263 (1): 835–853. Bibcode:1982ApJ...263..835S. doi:10.1086/160554. 31. ^ De Michele, C; Accatino, F (2014). "Tree cover bimodality in savannas and forests emerging from the switching between two fire dynamics". PLOS ONE 9: e91195. Bibcode:2014PLoSO...991195D. doi:10.1371/journal.pone.0091195. 32. ^ Sambrook Smith, GH; Nicholas, AP; Ferguson, RI (1997). "Measuring and defining bimodal sediments: Problems and implications". Water Resources Research 33: 1179–1185. Bibcode:1997WRR....33.1179S. doi:10.1029/97wr00365. 33. ^ Chaudhuri, D; Agrawal, A (2010). "Split-and-merge procedure for image segmentation using bimodality detection approach". Defence Sci J 60 (3): 290–301. doi:10.14429/dsj.60.356. 34. ^ a b Folk, RL; Ward, WC (1957). "Brazos River bar: a study in the significance of grain size parameters". J Sedim Petrol 27: 3–26. Bibcode:1957JSedR..27....3F. doi:10.1306/74d70646-2b21-11d7-8648000102c1865d. 35. ^ Dyer, KR (1970). "Grain-size parameters for sandy gravels". J Sedim Petrol 40 (2): 616–620. 36. ^ Gneddin OY(2010) Quantifying Bimodality. 37. ^ Muratov AL, Gnedin OY (2010) Modeling the metallicity distribution of globular clusters. Ap J (submitted) arXiv:1002.1325 38. ^ Pearson, K (1894). "Contributions to the mathematical theory of evolution: On the dissection of asymmetrical frequency-curves". Phil Trans Roy Soc Series A 185: 71–90. 39. ^ Pearson, K (1929). "Editorial note". Biometrika 21: 370–375. 40. ^ Baker, GA (1930). "Transformations of bimodal distributions". Ann Math Stat 1 (4): 334–344. doi:10.1214/aoms/1177733063. 41. ^ Haldane, JBS (1951). "Simple tests for bimodality and bitangentiality". Ann Eugenics 16 (1): 359–364. doi:10.1111/j.1469-1809.1951.tb02488.x. 42. ^ Larkin, RP (1979). "An algorithm for assessing bimodality vs. unimodality in a univariate distribution". Behavior Research Methods 11 (4): 467–468. doi:10.3758/BF03205709. 43. ^ Bennett, SC (1992). "Sexual dimorphism of Pteranodon and other pterosaurs, with comments on cranial crests". J Vert Paleont 12 (4): 422–434. doi:10.1080/02724634.1992.10011472. 44. ^ Tokeshi, M (1992). "Dynamics and distribution in animal communities; theory and analysis". Researches in Population Ecology 34: 249–273. doi:10.1007/bf02514796. 45. ^ Barreto, S; Borges, PAV; Guo, Q (2003). "A typing error in Tokeshi's test of bimodality". Global Ecology & Biogeography 12: 173–174. doi:10.1046/j.1466-822x.2003.00018.x. 46. ^ Carolan, AM; Rayner, JCW (2001). "One sample tests for the location of modes of nonnormal data". J Applied Mathematics and Decision Science 5 (1): 1–19. doi:10.1155/s1173912601000013. 47. 
^ Hartigan JA (2000) Testing for antimodes. Studies in Classification, Data Analysis, and Knowledge Organization 169–181 48. ^ a b Silverman BW (1981) Using kernel density estimates to investigate multimodality. J Roy Statist Soc Ser B 43:97–99 Cite error: Invalid <ref> tag; name "Silverman1981" defined multiple times with different content (see the help page). 49. ^ Hartigan, JA; Hartigan, PM (1985). "The dip test of unimodality". Ann Statist 13 (1): 70–84. doi:10.1214/aos/1176346577. 50. ^ Mueller, DW; Sawitzki, G (1991). "Excess mass estimates and tests for multimodality". JASA 86: 738–746. 51. ^ Rozál, GPM Hartigan JA (1994). "The MAP test for multimodality". J Classification 11 (1): 5–36. doi:10.1007/BF01201021. 52. ^ Minnotte, MC (1997). "Nonparametric testing of the existence of modes". Ann Statist 25 (4): 1646–1660. doi:10.1214/aos/1031594735. 53. ^ Hartigan, JA; Mohanty, S (1992). "The RUNT test for multimodality". J Classification 9: 63–70. doi:10.1007/bf02618468. 54. ^ Andrushkiw RI, Klyushin DD, Petunin YI (2008) Theory Stoch Processes 14 (1) 1–6 55. ^ Hartigan JA (1988) The span test of multimodality 56. ^ Bajgier SM, Aggarwal LK (1991) Powers of goodness-of-fit tests in detecting balanced mixed normal distributions. Educational and Psychological Measurement 51 (2) p253-269 57. ^ Jackson, PR; Tucker, GT; Woods, HF (1989). "Testing for bimodality in frequency distributions of data suggesting polymorphisms of drug metabolism--hypothesis testing". Br J Clin Pharmacol 28 (6): 655–662. doi:10.1111/j.1365-2125.1989.tb03558.x. 58. ^ http://www.amstat.org/sections/srms/Proceedings/y2002/Files/JSM2002-000150.pdf 59. ^ https://engineering.purdue.edu/~bouman/software/cluster/ 60. ^ https://cran.r-project.org/web/packages/mixtools/index.html 61. ^ https://cran.r-project.org/web/packages/discrimARTs/discrimARTs.pdf 62. ^ https://cran.r-project.org/web/packages/flexmix/index.html 63. ^ https://cran.r-project.org/web/packages/mclust/index.html 64. ^ https://cran.r-project.org/web/packages/mixdist/index.html
2016-07-26 22:24:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 38, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8066980838775635, "perplexity": 2852.146475608499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825124.55/warc/CC-MAIN-20160723071025-00209-ip-10-185-27-174.ec2.internal.warc.gz"}
http://www.evanconkle.com/tag/apache/
# Logging the client IP behind Amazon ELB with Apache

When you place your Apache Web Server behind an Amazon Elastic Load Balancer, Apache receives all requests from the ELB's IP address. Therefore, if you wish to do anything with the real client IP address, such as logging or whitelisting, you need to make use of the X-Forwarded-For HTTP header that Amazon ELB includes in each request, which contains the IP address of the original host. Solution for logging the true client IP (see the sketch below): Before: After: The one downside is that depending on how ELB treats X-Forwarded-For, it may allow clients to spoof their source IP. Hopefully this helps out anyone experiencing this issue.

# The Future of Presidential Debates

I recently discussed a topic with a friend about having IBM's Watson moderate a presidential debate or at least using it to instantly fact-check the candidates' claims. My argument would be that you cannot just "fact check" like that per se. The facts that the candidates are quoting are from various studies, all of which have their own degree of bias and/or error. Or they manipulate the language that they use so that they can appear to be saying something when in fact they're doing something else. That's politics. Watson was optimized for Jeopardy's style of game play. Also, it does not have the linguistic analysis abilities needed to keep up with politics. For example, metaphors, euphemisms, sarcasm and things of the like would all confuse Watson. Some day though. So what makes Watson's genius possible? A whole lot of storage, sophisticated hardware, super fast processors and Apache Hadoop, the open source technology pioneered by Yahoo! and at the epicenter of big data and cloud computing. Hadoop was used to create Watson's "brain," or the database of knowledge and facilitation of Watson's processing of enormously large volumes of data in milliseconds. Watson depends on 200 million pages of content and 500 gigabytes of preprocessed information to answer Jeopardy questions. That huge catalog of documents has to be searchable in seconds. On a single computer, it would be impossible to do, but by using Hadoop and dividing the work onto many computers it can be done. In 2005, Yahoo! created Hadoop and since then has been the most active contributor to Apache Hadoop, contributing over 70 percent of the code and running the world's largest Hadoop implementation, with more than 40,000 servers. As a point of reference, our Hadoop implementation processes 1.5 times the amount of data in the printed collections in the Library of Congress per day, approximately 16 terabytes of data.

# Distributed Apache Flume Setup With an HDFS Sink

I have recently spent a few days getting up to speed with Flume, Cloudera's distributed log offering. If you haven't seen this and deal with lots of logs, you are definitely missing out on a fantastic project. I'm not going to spend time talking about it because you can read more about it in the users guide or in the Quora Flume Topic in ways that are better than I can describe it. But what I will tell you about is my experience setting up Flume in a distributed environment to sync logs to an HDFS sink.
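For the ELB logging post above, the Before/After change is the sort of LogFormat adjustment sketched here; this assumes the stock combined log format and is an illustration rather than the post's exact configuration:

```apache
# Before: %h logs the peer address, which behind an ELB is the load balancer's IP
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog /var/log/apache2/access.log combined

# After: log the client address the ELB forwards in the X-Forwarded-For header
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined-forwarded
CustomLog /var/log/apache2/access.log combined-forwarded
```

As the post notes, this header can be supplied by the client, so a spoofed value may show up in the log unless the load balancer overwrites or sanitizes it.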
Context: I have 3 kinds of servers, all running Ubuntu 10.04 locally:
• hadoop-agent-1: This is the agent which is producing all the logs.
• hadoop-collector-1: This is the collector which is aggregating all the logs (from hadoop-agent-1, agent-2, agent-3, etc).
• hadoop-master-1: This is the flume master node which is sending out all the commands.

To add the CDH3 repository: Create a new file /etc/apt/sources.list.d/cloudera.list with the following contents: where the placeholder is the name of your distribution, which you can find by running lsb_release -c. For example, to install CDH3 for Ubuntu Lucid, use lucid-cdh3 in the command above. (To install a different version of CDH on a Debian system, specify the version number you want in the -cdh3 section of the deb command. For example, to install CDH3 Update 0 for Ubuntu Maverick, use maverick-cdh3u0 in the command above.) (Optionally) add a repository key. Add the Cloudera Public GPG Key to your repository by executing the following command: This key enables you to verify that you are downloading genuine packages.

Initial Setup: On both hadoop-agent-1 and hadoop-collector-1, you'll have to install flume-node (flume-node contains the files necessary to run the agent or the collector). First let's jump onto the agent and set that up. Tune the hadoop-master-1 and hadoop-collector-1 variables appropriately, but change your /etc/flume/conf/flume-site.xml to look like: Now on to the collector. Same file, different config.

Web Based Setup: I chose to do the individual machine setup via the master web interface. You can get to this by pointing your web browser at http://hadoop-master-1:35871/ (replace hadoop-master-1 with the public/private DNS or IP of your flume master, or set up /etc/hosts for a hostname). Ensure that the port is accessible from the outside through your security settings. At this point, it was easiest for me to ensure all hosts running flume could talk to all ports on all other hosts running flume. You can certainly lock this down to the individual ports for security once everything is up and running. At this point, you should go to hadoop-agent-1 and hadoop-collector-1 and run /etc/init.d/flume-node start. If everything goes well, then the master (whose IP is specified in their configs) should be notified of their existence. Now you can configure them from the web. Click on the config link and then fill in the text lines as follows (use what is in bold): Source: tailDir("/var/logs/apache2/", ".*.log") Note: I chose to use tailDir since I will control rotating the logs on my own. I am also using agentBESink because I am ok with losing log lines if the case arises. Now click Submit Query and go back to the config page to set up the collector: Source: collectorSource(35853) This is going to tell the collector that we are sinking to HDFS with an initial folder of 'flume'. It will then log to sub-folders with "flume/logs/YYYY/MM/DD/HH00" (or 2011/02/03/1300/server-.log). Now click Submit Query and go to the 'master' page and you should see 2 commands listed as "SUCCEEDED" in the command history. If they have not succeeded, ensure a few things have been done (there are probably more, but this is a handy start):
• Always use double quotes (") since single quotes (') aren't interpreted correctly. UPDATE: Single quotes are interpreted correctly, they are just not accepted intentionally (Thanks jmhsieh)
• In your regex, use something like ".*\\.log" since the '.' is part of the regex.
• In your regex, ensure that your backslashes are properly escaped: "foo\\bar" is the correct way to match "foo\bar".
Additionally, there are also tables of Node Status and Node Configuration. These should match up with what you think you configured. At this point everything should work. Admittedly I had a lot of trouble getting to this point. But with the help of the Cloudera folks and the users on irc.freenode.net in #flume, I was able to get things going. The logs sadly aren't too helpful here in most cases (but look anyway because they might provide you with more info than they provided for me). If I missed anything in this post or there is something else I am unaware of, then let me know.

# Apache Web Server Virtual Hosts

## 15.10. Virtual Hosts

Using virtual hosts, you can host several domains with a single web server, saving the costs and administration workload of separate servers for each domain. One of the first web servers to offer this feature, Apache provides several possibilities for virtual hosts:
• Name-based virtual hosts
• IP-based virtual hosts
• Operation of multiple instances of Apache on one machine

### 15.10.1. Name-Based Virtual Hosts

With name-based virtual hosts, one instance of Apache hosts several domains. You do not need to set up multiple IPs for a machine. This is the easiest, preferred alternative. Reasons against the use of name-based virtual hosts are covered in the Apache documentation. Configure it directly by way of the configuration file (/etc/apache2/httpd.conf). To activate name-based virtual hosts, a suitable directive must be specified: NameVirtualHost *. The * is sufficient to prompt Apache to accept all incoming requests. Subsequently, the individual hosts must be configured:
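The per-host entries referred to here look roughly like the sketch below; the domain names are the ones used in the surrounding text, while the DocumentRoot paths are placeholders rather than values from the original example:

```apache
NameVirtualHost *

<VirtualHost *>
    ServerName www.mycompany.com
    ServerAdmin webmaster@mycompany.com
    DocumentRoot /srv/www/htdocs/mycompany
    ErrorLog /var/log/httpd/www.mycompany.com-error_log
    CustomLog /var/log/httpd/www.mycompany.com-access_log common
</VirtualHost>

<VirtualHost *>
    ServerName www.myothercompany.com
    ServerAdmin webmaster@myothercompany.com
    DocumentRoot /srv/www/htdocs/myothercompany
    ErrorLog /var/log/httpd/www.myothercompany.com-error_log
    CustomLog /var/log/httpd/www.myothercompany.com-access_log common
</VirtualHost>
```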
This is the default setting in SUSE LINUX. Once the kernel has been configured for IP aliasing, the commands ifconfig and route can be used to set up additional IPs on the host. These commands must be executed as root. For the following example, it is assumed that the host already has its own IP (such as 192.168.1.10), which is assigned to the network device eth0. Enter the command ifconfig to find out the IP of the host. Further IPs can be added with commands such as the following: All these IPs will be assigned to the same physical network device (eth0). #### 15.10.2.2. Virtual Hosts with IPs Once IP aliasing has been set up on the system or the host has been configured with several network cards, Apache can be configured. Specify a separate VirtualHost block for every virtual server: VirtualHost directives are only specified for the additional domains. The original domain (www.mycompany.com) is configured through its own settings (under DocumentRoot, etc.) outside the VirtualHost blocks. ### 15.10.3. Multiple Instances of Apache With the above methods for providing virtual hosts, administrators of one domain can read the data of other domains. To segregate the individual domains, start several instances of Apache, each with its own settings for User, Group, and other directives in the configuration file. In the configuration file, use the Listen directive to specify the IP handled by the respective Apache instance. For the above example, the directive for the first Apache instance would be as follows: For the other two instances: # Apache Hadoop 0.23.0 has been released The Apache Hadoop PMC has voted to release Apache Hadoop 0.23.0. This release is significant since it is the first major release of Hadoop in over a year, and incorporates many new features and improvements over the 0.20 release series. The biggest new features are HDFS federation, and a new MapReduce framework. There is also a new build system (Maven), Kerberos HTTP SPNEGO support, as well as some significant performance improvements which we’ll be covering in future posts. Note, however, that 0.23.0 is not a production release, so please don’t install it on your production cluster. # Monitoring and Scaling Production Setup using Scalr Scalr is a fully redundant, self-curing and self-scaling hosting environment utilizing Amazon’s EC2. It is an initiative delivered by Intridea. It allows you to create server farms through a web-based interface using prebuilt AMI’s for load balancers (pound or nginx), app servers (apache, others), databases (mysql master-slave, others), and a generic AMI to build on top of. The health of the farm is continuously monitored and maintained. When the Load Average on a type of node goes above a configurable threshold a new node is inserted into the farm to spread the load and the cluster is reconfigured. When a node crashes a new machine of that type is inserted into the farm to replace it. 4 AMI’s are provided for load balancers, mysql databases, application servers, and a generic base image to customize. Scalr allows you to further customize each image, bundle the image and use that for future nodes that are inserted into the farm. You can make changes to one machine and use that for a specific type of node. New machines of this type will be brought online to meet current levels and the old machines are terminated one by one. 
The project is still very young, but we’re hoping that by open sourcing it the AWS development community can turn this into a robust hosting platform and give users an alternative to the current fee based services available. # Run a Hadoop Cluster on EC2 the easy way using Apache Whirr To set up the cluster, you need two things- 1.    An AWS account 2.    A local machine running Ubuntu (Mine was running lucid) The following steps should do the trick- Step 1 – Add the JDK repository to apt and install JDK (replace lucid with your Ubuntu version, check using lsb_release – c in the terminal) – Step 2 – Create a file named cloudera.list in /etc/apt/sources.list.d/ and paste the following content in it (again, replace lucid with your version)- Step 3 – Add the Cloudera Public Key to your repository, update apt,  install Hadoop and Whirr- Step 4 – Create a file hadoop.properties in your $HOME folder and paste the following content in it. Step 5 – Replace [AWS ID] and [AWS KEY] with your own AWS Access Identifier and Key. You can find them in the Access Credentials section of your Account. Notice the third line, you can use it to define the nodes that will run on your cluster. This cluster will run a node as combined namenode (nn) and jobtracker (jt) and another node as combined datanode (dn) and tasktracker (tt). Step 6 – Generate a RSA keypair on your machine. Do not enter any passphrase. Step 7 – Launch the cluster! Navigate to your home directory and run- This step will take some time as Whirr creates instances and configures Hadoop on them. Step 8 – Run a Whirr Proxy. The proxy is required for secure communication between master node of the cluster and the client machine (your Ubuntu machine). Run the following command in a new terminal window- Step 9 – Configure the local Hadoop installation to use Whirr for running jobs. Step 10 – Add$HADOOP_HOME to ~/.bashrc file by placing the following line at the end- Step 11 – Test run a MapReduce job- Step 12 (Optional) – Destroy the cluster- Note: This tutorial was prepared using material from the CDH3 Installation Guide # Main Focus I recently attended Cloudera’s Developer Training for Apache Hadoop in Dallas which will play a big part in our application development.
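For the Whirr walkthrough above, the hadoop.properties file created in Step 4 typically has the shape sketched below in Whirr releases of the CDH3 era; every property name and value here is an assumption to verify against the documentation for your Whirr version:

```properties
# hadoop.properties (sketch, Whirr 0.x style)
whirr.cluster-name=myhadoopcluster
# one combined namenode+jobtracker node and one combined datanode+tasktracker node (Step 5)
whirr.instance-templates=1 nn+jt,1 dn+tt
whirr.provider=ec2
whirr.identity=[AWS ID]
whirr.credential=[AWS KEY]
# keypair generated in Step 6
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
whirr.public-key-file=${sys:user.home}/.ssh/id_rsa.pub
```

With a file like this in place, the Step 7 launch and Step 12 teardown commands of that era were `whirr launch-cluster --config hadoop.properties` and `whirr destroy-cluster --config hadoop.properties`; again, check these against your installed version.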
2018-04-24 18:26:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1726798415184021, "perplexity": 3064.1518144792863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947033.92/warc/CC-MAIN-20180424174351-20180424194351-00152.warc.gz"}
https://www.physicsforums.com/threads/find-a-solution-to-an-equation-with-2-variables.558541/
# Homework Help: Find a solution to an equation with 2 variables

1. Dec 9, 2011

### xeon123

Hi, I've this one equation. Is it possible to find a solution for this equation? How?

5x+3y=29

Is this a linear equation?

2. Dec 9, 2011

### grzz

Yes, it is a linear equation. Given any set of values for x one can plot a graph showing the corresponding values of y which, together with the values of x, satisfy the equation.

3. Dec 9, 2011

### xeon123

I'm going to replace 29 with 30, because it's easier to solve. Now the equation becomes 5x+3y=30. To solve this equation, I did the following:

x=$\frac{30-3y}{5} = 6-\frac{3}{5}y$

y=$\frac{30-5x}{3} = 10-\frac{5}{3}x$

Replacing the x variable in the y equation, I ended with the result:

y=$10-\frac{5}{3}(6-\frac{3}{5}y) = 10-10+y$

Now, I'm stuck. y=y???? Really???? I must be doing something wrong and I don't know where. Any help?

Last edited: Dec 9, 2011

4. Dec 9, 2011

### hotvette

Where did you get 30? It should be 29 based on your original equation.

5. Dec 9, 2011

### xeon123

Yes, it should be 29 in the original equation, but it doesn't matter if it's 29 or 30. I just put 30 now because I think it's easier to solve. I just want to know what I am doing wrong for my solution to end in the stuck stage.

6. Dec 9, 2011

### grzz

Getting y = y shows that the working was OK. But getting expressions for x and for y from the given equation and substituting these expressions back into the equation will lead nowhere except to tell us that y = y!

7. Dec 9, 2011

### xeon123

So, how do I solve this equation?

8. Dec 9, 2011

### Staff: Mentor

The equation 5x + 3y = 30 (your revised equation) represents a line in the plane. As such, there is no single solution. Every pair of numbers (x, y) that is on the line is a solution to this equation. Some points on the line are (6, 0), (0, 10), and an infinite number of others.

9. Dec 9, 2011

### xeon123

1 - So, you mean that there exist many solutions. Is this because of this specific equation, or do all equations of this type have several solutions, and is what I did enough?

2 - If I draw a line in a plane between (6,0) and (0,10), does it mean that all the values that are inside the area of the line ( (6,0) -- (0,0) -- (0,10); points AOB) are solutions?

10. Dec 9, 2011

### Staff: Mentor

Yes, there are many (an infinite number of) solutions to your equation. Any equation of the form Ax + By = C, where A and B are not both zero, represents a straight line. Any point (x, y) on the line is a solution to the equation, and any solution to the equation is a point on the line.

As to what you did, all you did was write an equivalent equation y = 10 - (5/3)x. This is just another form of the same equation - you didn't find any solutions. You also wrote another equation in which you solved for x. That also is not finding a solution. Toward the end of your 2nd post, you did some more work in which you concluded that y = y. This is true, but not at all useful. A variable is obviously equal to itself.

Any of the three equations can be used to find as many solutions as you want. For example, using the equation y = 10 - (5/3)x, if you let x = 0, it's easy to see that y = 10. So the point (0, 10) is a solution. If you let x = 3, then y = 5, so (3, 5) is a solution. And so on for as many values of x as you want to put into the equation.

No. The only solutions to the equation 5x + 3y = 30 (or y = (-5/3)x + 10) are the points on the line - not under it or over it.
The points (6, 0) and (0, 10) are solutions to the equation (and are points on the line) because each of these pairs of numbers makes 5x + 3y = 30 a true statement. The point (0, 0) is not a solution (and so is not on the line) because 5*0 + 3*0 $\neq$ 30.

11. Dec 9, 2011

### DaleSwanson

If you want a single solution for all the variables you'll have to have at least as many equations as variables. Here you have 2 variables and 1 equation, so a single solution isn't possible. Note that having as many equations as variables doesn't mean there will be a solution. In the case of your equation, y = -(5/3)x + 10, adding this second equation will give a single solution:

y = x + 2

In this case, the solution is where the two lines intersect: x=3, y=5. However, this equation:

y = -(5/3)x + 9

will produce a parallel line that never intersects the first. Thus, there is no solution. Here's a good online graphing calculator to play around with this stuff: http://my.hrw.com/math06_07/nsmedia/tools/Graph_Calculator/graphCalc.html

12. Dec 9, 2011

### silentbob14

Yeah, you need at least two equations in this case to find a single solution ;]
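To make the discussion concrete, here is a small Python check (not part of the original thread): it lists the non-negative integer solutions of the original equation 5x+3y=29 and verifies the intersection point of the two-line system mentioned above.

```python
# Non-negative integer solutions of 5x + 3y = 29 (the original equation).
solutions = [(x, (29 - 5 * x) // 3)
             for x in range(0, 6)
             if (29 - 5 * x) % 3 == 0]
print(solutions)                   # [(1, 8), (4, 3)]

# Intersection of y = -(5/3)x + 10 with y = x + 2 (the added equation).
# Solving -(5/3)x + 10 = x + 2 gives x = 3, y = 5.
x = 3
print(-(5 / 3) * x + 10, x + 2)    # 5.0 and 5 -> the lines meet at (3, 5)
```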
2018-11-21 06:09:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7164592742919922, "perplexity": 340.3913819578622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747215.81/warc/CC-MAIN-20181121052254-20181121074254-00187.warc.gz"}
https://tbc-python.fossee.in/convert-notebook/Fundamentals_Of_Aerodynamics_by_J._D._Anderson_Jr./CHAPTER01_3.ipynb
# CHAPTER01: AERODYNAMICS SOME INTRODUCTORY THOUGHTS

## Example E01 : Pg 12

In [1]:

# All the quantities are in SI units
#from math import sind,cosd,tand,sqrt,math
M_inf = 2.;        # freestream mach number
p_inf = 101000.;   # freestream static pressure
rho_inf = 1.23;    # freestream density
T_inf = 288.;      # freestream temperature
R = 287.;          # gas constant of air
a = 5.;            # angle of wedge in degrees
p_upper = 131000.; # pressure on upper surface
p_lower = p_upper; # pressure on lower surface is equal to upper surface
c = 2.;            # chord length of the wedge
c_tw = 431.;       # shear drag constant

# SOLVING BY FIRST METHOD
# According to equation 1.8, the drag is given by D = I1 + I2 + I3 + I4
# Where the integrals I1, I2, I3 and I4 are given as
I1 = 5.25*10**3; #(-p_upper*sind(-a)*c/cosd(a))+(-p_inf*sind(90)*c*tand(a)); # pressure drag on upper surface
I2 = 5.25*10**3; #(p_lower*sind(a)*c/cosd(a))+(p_inf*sind(-90)*c*tand(a)); # pressure drag on lower surface
I3 = 937; #c_tw*cosd(-a)/0.8*((c/cosd(a))**0.8); # skin friction drag on upper surface
I4 = 937; #c_tw*cosd(-a)/0.8*((c/cosd(a))**0.8); # skin friction drag on lower surface
D = I1 + I2 + I3 + I4; # Total Drag
a_inf = 340; #math.sqrt(1.4*R*T_inf); # freestream velocity of sound
v_inf = 680; #M_inf*a_inf; # freestream velocity
q_inf = 1.24*10**4; # 1/2*rho_inf*(v_inf**2); # freestream dynamic pressure
S = c*1; # reference area of the wedge
c_d1 = 0.0217; #D/q_inf/S; # Drag Coefficient by first method
print"The Drag coefficient by first method is:", c_d1

# SOLVING BY SECOND METHOD
C_p_upper = (p_upper-p_inf)/q_inf; # pressure coefficient for upper surface
C_p_lower = (p_lower-p_inf)/q_inf; # pressure coefficient for lower surface
c_d2 = 0.0217; # (1/c*2*((C_p_upper*tand(a))-(C_p_lower*tand(-a)))) + (2*c_tw/q_inf/cosd(a)*(2**0.8)/0.8/c);
print"The Drag coefficient by second method is:", c_d2

The Drag coefficient by first method is: 0.0217
The Drag coefficient by second method is: 0.0217

## Example E03 : Pg 32

In [7]:

# All the quantities are expressed in SI units
alpha = 4.;     # angle of attack in degrees
c_l = 0.85;     # lift coefficient
c_m_c4 = -0.09; # coefficient of moment about the quarter chord
x_cp = 1./4. - (c_m_c4/c_l); # the location of the centre of pressure with respect to chord
print"Xcp/C =",round(x_cp,2)

Xcp/C = 0.36

## Example E05 : Pg 38

In [3]:

import math
V1 = 550.;   # velocity of Boeing 747 in mi/h
h1 = 38000.; # altitude of Boeing 747 in ft
P1 = 432.6;  # freestream pressure in lb/sq.ft
T1 = 390.;   # ambient temperature in R
T2 = 430.;   # ambient temperature in the wind tunnel in R
c = 50.;     # scaling factor

# Calculations
# By equating the Mach numbers we get
V2 = V1*math.sqrt(T2/T1);  # velocity required in the wind tunnel
# By equating the Reynold's numbers we get
P2 = c*T2/T1*P1;   # pressure required in the wind tunnel
P2_atm = P2/2116.; # pressure expressed in atm
print"The velocity required in the wind tunnel is:mi/h",V2
print"The pressure required in the wind tunnel is:lb/sq.ft or atm",P2,P2_atm

The velocity required in the wind tunnel is:mi/h 577.516788523
The pressure required in the wind tunnel is:lb/sq.ft or atm 23848.4615385 11.2705394794

## Example E06 : Pg 39

In [5]:

import math
v_inf_mph = 492.; # freestream velocity in miles per hour
rho = 0.00079656; # ambient air density in slugs per cubic feet
W = 15000.;       # weight of the airplane in lbs
S = 342.6;        # wing planform area in sq.ft
C_d = 0.015;      # drag coefficient

# Calculations
v_inf_fps = v_inf_mph*(88./60.); # freestream velocity in feet per second
C_l = 2.*W/rho/(v_inf_fps**2)/S; # lift coefficient
# The lift by drag ratio is calculated as
L_by_D = C_l/C_d;
print"The lift to drag ratio L/D is equal to:",L_by_D

The lift to drag ratio L/D is equal to: 14.0744390238

## Example E07 : Pg 42

In [4]:

import math
v_stall_mph = 100.; # stalling speed in miles per hour
rho = 0.002377;     # ambient air density in slugs per cubic feet
W = 15900;          # weight of the airplane in lbs
S = 342.6;          # wing planform area in sq.ft

# Calculations
v_stall_fps = v_stall_mph*(88/60); # converting stalling speed in feet per second
# The maximum lift coefficient C_l_max is given by the relation
C_l_max = 2*W/rho/(v_stall_fps**2)/S;
print"The maximum value of lift coefficient is Cl_max =",C_l_max

The maximum value of lift coefficient is Cl_max = 3.90490596176

## Example E08 : Pg 42

In [3]:

import math
d = 30.;  # inflated diameter of balloon in feet
W = 800.; # weight of the balloon in lb
g = 32.2; # acceleration due to gravity

# part (a)
rho_0 = 0.002377; # density at zero altitude
# Assuming the balloon to be spherical, the volume can be given as
V = 4/3*math.pi*((d/2)**3);
# The buoyancy force is given as
B = g*rho_0*V;
# The net upward force F is given as
F = B - W;
m = W/g; # mass of the balloon
# Thus the upward acceleration of the balloon can be related to F as
a = F/m;
print"The initial upward acceleration is:a = ft/s2",round(a,2)

#Part b
d = 30.;  # inflated diameter of balloon in feet
W = 800.; # weight of the balloon in lb
g = 32.2; # acceleration due to gravity
rho_0 = 0.002377; # density at sea level (h=0)

# part (b)
# Assuming the balloon to be spherical, the volume can be given as
V = 4/3*math.pi*((d/2.)**3.);
# Assuming the weight of the balloon does not change, the density at maximum altitude can be given as
rho_max_alt = W/g/V;
# Thus from the given variation of density with altitude, we obtain the maximum altitude as
h_max = 1/0.000007*(1-((rho_max_alt/rho_0)**(1/4.21)))
print"The maximum altitude that can be reached is:h =ft",h_max

#Ex8_b
d = 30;   # inflated diameter of balloon in feet
W = 800;  # weight of the balloon in lb
g = 32.2; # acceleration due to gravity
rho_0 = 0.002377; # density at sea level (h=0)

# part (b)
# Assuming the balloon to be spherical, the volume can be given as
V = 4/3*math.pi*((d/2)**3);
# Assuming the weight of the balloon does not change, the density at maximum altitude can be given as
rho_max_alt = W/g/V;
# Thus from the given variation of density with altitude, we obtain the maximum altitude as
h_max = 1/0.000007*(1-((rho_max_alt/rho_0)**(1/4.21)))
print"The maximum altitude that can be reached is:\nh =",h_max,"ft"

The initial upward acceleration is:a = ft/s2 0.46
The maximum altitude that can be reached is:h =ft 485.062768784
The maximum altitude that can be reached is:
h = 485.062768784 ft
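The hard-coded values in Example E01 above stand in for the commented-out sind/cosd/tand expressions. As an added check (not part of the original notebook), the same calculation can be reproduced with degree-based trigonometric helpers:

```python
import math

def sind(x): return math.sin(math.radians(x))
def cosd(x): return math.cos(math.radians(x))
def tand(x): return math.tan(math.radians(x))

M_inf, p_inf, rho_inf, T_inf, R = 2.0, 101000.0, 1.23, 288.0, 287.0
a, p_upper, c, c_tw = 5.0, 131000.0, 2.0, 431.0
p_lower = p_upper

# Pressure and skin-friction drag integrals from equation 1.8
I1 = (-p_upper*sind(-a)*c/cosd(a)) + (-p_inf*sind(90)*c*tand(a))
I2 = (p_lower*sind(a)*c/cosd(a)) + (p_inf*sind(-90)*c*tand(a))
I3 = c_tw*cosd(-a)/0.8*((c/cosd(a))**0.8)
I4 = I3
D = I1 + I2 + I3 + I4

a_inf = math.sqrt(1.4*R*T_inf)   # speed of sound, ~340 m/s
v_inf = M_inf*a_inf              # freestream velocity, ~680 m/s
q_inf = 0.5*rho_inf*v_inf**2     # freestream dynamic pressure
S = c*1.0                        # reference area per unit span

print(round(D/(q_inf*S), 4))     # ~0.0217, matching the notebook's result
```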
2021-06-23 23:53:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8003501296043396, "perplexity": 9445.357735895024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488544264.91/warc/CC-MAIN-20210623225535-20210624015535-00218.warc.gz"}
https://rpg.stackexchange.com/questions/98416/must-a-magic-items-spells-spell-effects-all-have-the-same-cl-and-be-priced-acc
# Must a magic item's spells/spell effects all have the same CL, and be priced accordingly?

If I am creating a custom magic item in Pathfinder which uses certain spell effects, is it required that all of the spells use the same CL, and should they therefore be priced accordingly in a custom item?

It is explicitly the case for staves, as the rules say:

The caster level of all spells in a staff must be the same, and no staff can have a caster level of less than 8th, even if all the spells in the staff are low-level spells.

And the general rules for reading a magic item entry say:

Caster Level (CL): The next item in a notational entry gives the caster level of the item, indicating its relative power. The caster level determines the item's saving throw bonus, as well as range or other level-dependent aspects of the powers of the item (if variable). It also determines the level that must be contended with should the item come under the effect of a dispel magic spell or similar situation.

This makes no provision for an item having multiple abilities used at different caster levels. I cannot find examples of any magic item that casts multiple spells doing so at different CLs.

From this, should I infer that if I am creating a magic item which can provide multiple spell effects, even if the caster level does not actually change the spell effect at all, each effect must be considered to be cast at the same caster level, which would be at minimum the required CL for the highest-level spell effect? And therefore, should they be priced as if cast at that CL?

In this case I assume this is only called out explicitly for staves because they are the category of item intended by default to produce multiple different spell effects, and it serves as a reminder that the general rule (one item, one CL) still applies.

Yes, you use the minimum caster level to cast the highest spell on the item:

While item creation costs are handled in detail below, note that normally the two primary factors are the caster level of the creator and the level of the spell or spells put into the item. A creator can create an item at a lower caster level than her own, but never lower than the minimum level needed to cast the needed spell.

Since there are no items with multiple caster levels, nor any reference to such a thing being possible, the lowest caster level on a magic item with multiple spells should be the minimum caster level to cast the highest-level spell on the item.

Yes, a wondrous item can have an individual CL for each spell effect. Each such item contains a number of different beads, each of which has its own CL mentioned in the CL line of the item. Since the item entry also lists prices for each individual bead, we can check whether calculating the costs with a different CL for each ability produces prices comparable to the actual item costs.

| Ability | Entry | Calculated |
| --- | --- | --- |
| Blessing | 600 | 560 |
| Healing | 9,000 | 8,400 |
| Smiting | 16,800 | 15,680 |
| Karma | 20,000 | 25,200 |
| Windwalking | 46,800 | 47,520 |
| Summoning | 20,000 | 15,300 (plus 10,000 material component cost) |

Using the Magic Item Creation rules produces values relatively close to the values given by the book.

Additionally, the Necklace of Fireballs produces different caster level effects for each bead used. It is a bit odd, however, as it also produces a 2d6 fireball, something which is not possible for a 3rd-level fireball (5d6 minimum). Using the single-use activation option from magic item creation, and bending the minimum CL rule, it calculates out exactly as 150 gp per die of damage.
• My reading of the Necklace of Fireballs is not that the spheres are of variable CL, just variable damage - and that in all other respects, they are treated as fireballs of CL 10, like for overcoming SR or being dispelled. But the prayer beads are explicit that the different beads have different CLs - though they're a quite unusual case as an item. They seem to be an exception that proves a rule; unless the item description clarifies, effects are cast at (though maybe not priced at) the item's one listed CL. – Carcer Apr 23 '17 at 12:06

For staves this is required based on the rules for staves. For other items this may not be true: wondrous items can calculate additional spell effects at different caster levels, but add a 1.5 multiplier for the additional effect. The item itself should list a single CL, the highest. This will play into the DC if crafting it all at once, as well as into dispelling and auras. Weapons are simpler, as added properties are independent of each other. Further, some items can't have more than one effect: wands, potions, rods. Scrolls can store more than one spell, but each is independent in terms of crafting. The chart on the PRD is a good reference, as is the section at the bottom on adding new abilities.

• Which wondrous items have effects that function at different caster levels? – Carcer Apr 19 '17 at 23:41
• A few that may be, based on pricing and DCs: boots of the winter lands, crystal ball (with see invis, telepathy, or detect thoughts), eyes of doom; Spade's answer lists two others. – Eric Apr 20 '17 at 19:42
2020-08-14 12:46:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3854253590106964, "perplexity": 2427.995082060949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739211.34/warc/CC-MAIN-20200814100602-20200814130602-00403.warc.gz"}
https://physics.stackexchange.com/questions/391710/kinetic-energy-when-observer-is-moving-and-object-is-stationary?noredirect=1
# Kinetic energy when observer is moving and object is stationary [duplicate]

I know kinetic energy is due to the motion of an object. But what if I, the observer of an object, am moving and that object appears to be moving to me: then which one has kinetic energy, me or that object? I mean the object doesn't actually move, but it appears to be moving because I am moving.

• Of course, you have kinetic energy ... – Nehal Samee Mar 12 '18 at 7:21
• The cause for which the object moves in your reference frame is the pseudo force ... This is an imaginary force ... I think the kinetic energy of the object is absent... – Nehal Samee Mar 12 '18 at 7:23
• @NehalSamee Force and energy are different things! Kinetic energy consists of mass and a velocity. Force is all about acceleration. – Aziraphale Mar 12 '18 at 8:32
• So, @Aziraphale... you say that the object has kinetic energy even though it's not moving in the platform frame while moving in the train frame... – Nehal Samee Mar 12 '18 at 9:19
• @NehalSamee, it all depends on the reference frame. If you use the train as reference, the whole landscape is whooshing by with $-V_\mathrm{train}$ plus its own motions (cars, wind, birds, humans etc.). Have a look at the Wikipedia entry. There is a section "Frame of reference" (see answer below). – Aziraphale Mar 12 '18 at 11:39

## 1 Answer

Kinetic energy depends on the reference frame of an observer. Therefore, kinetic energy is not a property of an object only: if you are moving along with an object and you define yourself as the reference, then the kinetic energy of the object is zero (in this special reference frame). The English Wikipedia article on Kinetic Energy has a section "Frame of reference" where this is explained in detail.

• Thanks guys. Now I'm clear on the concept of kinetic energy – Chin chin Mar 13 '18 at 12:50
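A quick numerical illustration of the frame dependence discussed in the answer (not part of the original thread; the mass and speed are made up):

```python
# Kinetic energy of the same object evaluated in two reference frames.
m = 1000.0        # kg, an object at rest on the platform
v_train = 20.0    # m/s, speed of the train (the moving observer)

ke_platform_frame = 0.5 * m * 0.0**2        # object is at rest in this frame
ke_train_frame = 0.5 * m * (-v_train)**2    # object moves at -v_train here

print(ke_platform_frame)  # 0.0 J
print(ke_train_frame)     # 200000.0 J - same object, different frame
```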
2020-07-14 07:13:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37653011083602905, "perplexity": 453.4613630820793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657149205.56/warc/CC-MAIN-20200714051924-20200714081924-00242.warc.gz"}
https://datascience.stackexchange.com/questions/58434/why-my-keras-cnn-model-isnt-learning
# Why my Keras CNN model isn't learning

My project has to decide if an image is 'pdr' or 'nonPdr', and I have 391 images (22 of the PDR class and 369 of nonPdr). In my first model I was trying this: https://stackoverflow.com/questions/57663233/my-keras-cnn-return-the-same-output-value-how-can-i-fix-improve-my-code .. and my return was always the same... Now I made some changes in my model file:

TRAIN_DIR = 'train_data/'
#TEST_DIR = 'test_data/'

def ReadImages(Path):
    LabelList = list()
    ImageCV = list()

    # Get all subdirectories
    FolderList = os.listdir(Path)

    # Loop over each directory
    for File in FolderList:
        if(os.path.isdir(os.path.join(Path, File))):
            for Image in os.listdir(os.path.join(Path, File)):
                # Convert the path into a file
                ImageCV.append(cv2.imread(os.path.join(Path, File) + os.path.sep + Image))
                # Add a label for each image and remove the file extension
                classes = ["nonPdr", "pdr"]
                LabelList.append(classes.index(os.path.splitext(File)[0]))
        else:
            ImageCV.append(cv2.imread(os.path.join(Path, File) + os.path.sep + Image))
            # Add a label for each image and remove the file extension
            classes = ["nonPdr", "pdr"]
            LabelList.append(classes.index(os.path.splitext(File)[0]))
    return ImageCV, LabelList

model = Sequential()
model.add(Conv2D(64, kernel_size=(3,3), padding="same", activation="relu", input_shape=(605,700,3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, kernel_size=(4,4), padding="same", activation="relu"))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(2, activation='sigmoid'))
model.compile(optimizer='RMSprop', loss='binary_crossentropy', metrics=['accuracy'])

data, labels = ReadImages(TRAIN_DIR)
data = np.array(data, dtype="float") / 255.0

le = LabelEncoder()
labels = le.fit_transform(labels)
labels = np_utils.to_categorical(labels, 2)

model.fit(data, labels, epochs=8, batch_size=20)
model.save('model.h5')

... but running this code gives me a Loss = 8.0 and an acc = 0.50. What can I do? I appreciate any answer.

UPDATE: I forgot to mention that I reduced my train imgs to 20/20.

## 1 Answer

It seems that you have an output size of 2 in your final layer, while you should rather have size 1 (because of your sigmoid output and binary cross-entropy loss). Also, don't use the to_categorical transformation, as you only have two classes so there is no need to one-hot encode. Try to change this and see if training improves.

• ValueError: Error when checking target: expected dense_1 to have shape (1,) but got array with shape (2,) I have two classes.. – 0nroth1 Aug 30 '19 at 18:37
• I've edited the answer, don't use to_categorical – Elliot Aug 30 '19 at 18:41
• I had the same output.. please see my update, I reduced to 20 pdr and 20 nonpdr images in my train data.. – 0nroth1 Aug 30 '19 at 18:47
• @Gilberto yep, but still it should not change the issue. However this is very little data and a CNN would hardly work. – Elliot Aug 30 '19 at 18:49
• so what can I do? – 0nroth1 Aug 30 '19 at 18:51
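A minimal sketch of the change the answer suggests: a single sigmoid output unit trained against a plain 0/1 label vector, with to_categorical dropped. Layer sizes and input shape are kept from the question; this is illustrative only and does not address the class imbalance or the tiny dataset size mentioned in the comments.

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
import numpy as np

model = Sequential()
model.add(Conv2D(64, kernel_size=(3, 3), padding="same", activation="relu",
                 input_shape=(605, 700, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, kernel_size=(4, 4), padding="same", activation="relu"))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))        # one unit for binary output
model.compile(optimizer='RMSprop', loss='binary_crossentropy',
              metrics=['accuracy'])

# Labels stay as a plain 0/1 vector - no LabelEncoder/to_categorical needed:
# data, labels = ReadImages(TRAIN_DIR)
# data = np.array(data, dtype="float") / 255.0
# labels = np.array(labels)
# model.fit(data, labels, epochs=8, batch_size=20)
```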
2021-05-06 03:39:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.520829975605011, "perplexity": 6719.584098064992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988725.79/warc/CC-MAIN-20210506023918-20210506053918-00098.warc.gz"}
https://leetcode.com/articles/range-sum-of-bst/
## Solution

Intuition and Algorithm

We traverse the tree using a depth-first search. If node.val falls outside the range [L, R] (for example node.val < L), then we know that only the right branch could have nodes with value inside [L, R]. We showcase two implementations - one using a recursive algorithm, and one using an iterative one.

Recursive Implementation

Iterative Implementation

Complexity Analysis

• Time Complexity: $O(N)$, where $N$ is the number of nodes in the tree.
• Space Complexity: $O(H)$, where $H$ is the height of the tree.

Analysis written by: @awice.
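Here is a sketch of the recursive implementation described above (Python; the TreeNode class is assumed, and this is not necessarily the article's exact listing):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def range_sum_bst(root, L, R):
    """Sum of node values in [L, R], pruning subtrees the BST ordering rules out."""
    if root is None:
        return 0
    if root.val < L:                       # everything useful is to the right
        return range_sum_bst(root.right, L, R)
    if root.val > R:                       # everything useful is to the left
        return range_sum_bst(root.left, L, R)
    return (root.val
            + range_sum_bst(root.left, L, R)
            + range_sum_bst(root.right, L, R))

# Example: BST with values 5, 3, 7; the sum of values in [4, 10] is 12.
root = TreeNode(5, TreeNode(3), TreeNode(7))
print(range_sum_bst(root, 4, 10))  # 12
```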
2019-08-18 05:17:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7229974269866943, "perplexity": 2715.972992391147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313617.6/warc/CC-MAIN-20190818042813-20190818064813-00389.warc.gz"}
http://projecteuclid.org/euclid.dmj/1077306456
## Duke Mathematical Journal

### Degenerations of the hyperbolic space

#### Article information

Source: Duke Math. J. Volume 56, Number 1 (1988), 143-161.

Dates: First available: 20 February 2004

http://projecteuclid.org/euclid.dmj/1077306456

Mathematical Reviews number (MathSciNet): MR932860

Zentralblatt MATH identifier: 0652.57009

Digital Object Identifier: doi:10.1215/S0012-7094-88-05607-4

#### Citation

Bestvina, Mladen. Degenerations of the hyperbolic space. Duke Mathematical Journal 56 (1988), no. 1, 143--161. doi:10.1215/S0012-7094-88-05607-4. http://projecteuclid.org/euclid.dmj/1077306456.

#### References

• [Be] A. F. Beardon, The geometry of discrete groups, Graduate Texts in Mathematics, vol. 91, Springer-Verlag, New York, 1983.
• [Bo] F. Bonahon, Bouts des variétés hyperboliques de dimension $3$, Ann. of Math. (2) 124 (1986), no. 1, 71–158.
• [C-M] M. Culler and J. W. Morgan, Group actions on $\mathbb{R}$-trees, MSRI preprint, 1985.
• [C-S] M. Culler and P. B. Shalen, Varieties of group representations and splittings of $3$-manifolds, Ann. of Math. (2) 117 (1983), no. 1, 109–146.
• [Gr] M. Gromov, Groups of polynomial growth and expanding maps, Inst. Hautes Études Sci. Publ. Math. (1981), no. 53, 53–73.
• [Mo] G. D. Mostow, Strong rigidity of locally symmetric spaces, Annals of Mathematics Studies, vol. 78, Princeton University Press, Princeton, N.J., 1973.
• [M] J. W. Morgan, Group actions on trees and the compactification of the spaces of classes of $\mathrm{SO}(n,1)$-representations, preprint.
• [MS1] J. W. Morgan and P. B. Shalen, Valuations, trees, and degenerations of hyperbolic structures. I, Ann. of Math. (2) 120 (1984), no. 3, 401–476.
• [MS2] J. W. Morgan and P. B. Shalen, Degenerations of hyperbolic structures II, preprint.
• [P] F. Paulin, Topologies de Gromov équivariantes, structures hyperboliques et arbres réels, dissertation, Orsay, 1986.
• [Se] A. Selberg, On discontinuous groups in higher-dimensional symmetric spaces, Contributions to function theory (Internat. Colloq. Function Theory, Bombay, 1960), Tata Institute of Fundamental Research, Bombay, 1960, pp. 147–164.
• [TH1] W. P. Thurston, Geometry and topology of $3$-manifolds, unpublished manuscript, Princeton, 1979.
• [TH2] W. P. Thurston, Hyperbolic structures on $3$-manifolds. I. Deformation of acylindrical manifolds, Ann. of Math. (2) 124 (1986), no. 2, 203–246.
2014-03-17 18:50:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5229871273040771, "perplexity": 2644.0607321814014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678705901/warc/CC-MAIN-20140313024505-00042-ip-10-183-142-35.ec2.internal.warc.gz"}
https://zbmath.org/?q=an%3A0591.12005
On the insolubility of a class of diophantine equations and the nontriviality of the class numbers of related real quadratic fields of Richaud-Degert type. (English) Zbl 0591.12005

We establish criteria for the insolubility in integers $$(x,y)$$ of $$x^2-ny^2=\pm 4t$$ where $$t$$ is a positive integer and $${\mathbb{Q}}(\sqrt{n})$$ is of Richaud-Degert (R-D) type. These results are then used to establish the nontriviality of the class number of $${\mathbb{Q}}(\sqrt{n})$$ for a large class of R-D types. Tables of values for the class numbers and related diophantine equations are also provided. Immediate consequences of the above results are results in the literature of N. Ankeny, S. Chowla, H. Hasse, H. Takeuchi, S. D. Lang, and H. Yokoi.

MSC:

11R29 Class numbers, class groups, discriminants
11R11 Quadratic extensions
11D09 Quadratic and bilinear Diophantine equations

References:

[1] DOI: 10.1016/0022-314X(70)90010-7 · Zbl 0201.05703
[2] Yokoi, Nagoya Math. J 33 pp 139– (1968) · Zbl 0167.04401
[3] Yokoi, Nagoya Math. J 91 pp 151– (1983) · Zbl 0506.10012
[4] Wada, Kôkyûroku in Math 10 (1981)
[5] DOI: 10.4153/CJM-1981-006-8 · Zbl 0482.12004
[6] Richaud, Atti Accad. pontif. Nuovi Lincei pp 177– (1866)
[7] Ankeny, J. reine angew. Math 217 pp 217– (1965)
[8] DOI: 10.1090/S0002-9939-1986-0826478-X
[9] Lang, J. reine angew. Math 290 pp 70– (1977)
[10] Hasse, Elem. Math 20 pp 49– (1965)
[11] Degert, Abh. Math. Sem. Univ 22 pp 92– (1958)
[12] Azuhata, Nagoya Math. J 95 pp 125– (1984) · Zbl 0533.12008
[13] DOI: 10.1016/0022-314X(86)90053-3 · Zbl 0591.12006
2021-09-23 09:53:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6741659641265869, "perplexity": 3590.442859079855}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057417.92/warc/CC-MAIN-20210923074537-20210923104537-00119.warc.gz"}
http://www.ck12.org/book/People%2527s-Physics-Book-Version-3-%2528with-Videos%2529/r3/section/8.2/
# 8.2: Key Concepts

• Work is simply how much energy was transferred from one system to another system. You can always find the work done on an object (or done by an object) by determining how much energy has been transferred into or out of the object through forces. If you graph force vs. distance, the area under the curve is work. (The semantics take some getting used to: if you do work on me, then you have lost energy, and I have gained energy.)
• Work can be computed by multiplying the distance traveled with the component of force that is parallel to that distance.
• Energy can be transformed from one kind into the other; if the total energy at the end of the process appears to be less than at the beginning, the "lost" energy has been transferred to another system, often by heat or sound waves.
• Efficiency is equal to the output energy divided by the input energy.

## Math of Force, Energy, and Work

When an object moves in the direction of an applied force, we say that the force does work on the object. Note that the force may be slowing the object down, speeding it up, maintaining its velocity --- any number of things. In all cases, the net work done is given by this formula:

$W = \vec{F} \cdot \vec{d} = F \Delta x \qquad [1]$

Work is the dot product of force and displacement. In other words, if an object has traveled a distance $d$ under force $\vec{F}$, the work done on it will equal $d$ multiplied by the component of $\vec{F}$ along the object's path. Consider the following example of a block moving horizontally with a force applied at some angle:

Here the net work done on the object by the force will be $Fd\cos\theta$.
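A quick numerical check of the $Fd\cos\theta$ formula (not from the CK-12 text; the numbers are arbitrary):

```python
import math

F = 10.0        # N, applied force
d = 5.0         # m, distance traveled
theta = 60.0    # degrees between the force and the displacement

W = F * d * math.cos(math.radians(theta))
print(round(W, 2))  # 25.0 J - only the parallel component (F*cos(60) = 5 N) does work
```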
2015-10-08 22:37:31
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6586618423461914, "perplexity": 466.9934957858906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737904854.54/warc/CC-MAIN-20151001221824-00171-ip-10-137-6-227.ec2.internal.warc.gz"}
https://web.stevenson.edu/mbranson/payday-loans-interest.html
## Section 1.5 Simple and Compound Interest

There are two main types of interest - simple and compound. Let's look at the mathematics involved in calculating each. Then we'll see how we can use this mathematics to look at payday loans.

### Subsection 1.5.1 Simple Interest

Mathematically, interest is computed by taking a percentage of the principal. We can think about interest from two perspectives. From the perspective of the bank or the person loaning the money, they are making an investment - they give someone a certain amount of money, with the expectation that they will be paid back more money in the future. From the perspective of the borrower, this money (or the object that they bought with it, like a house or a car) is a loan - they will repay more in the future than what they borrowed.

###### Simple Interest.

If you invest a principal at an APR (written as a decimal) for a given number of years, the investment will now be worth:

\begin{equation*} amount = principal*(1 + APR*Time) \end{equation*}

This can also be written, using variables, as

\begin{equation*} A = P(1 + r*t) \end{equation*}

Here, $$A$$ is the amount the investment is worth, $$P$$ is the principal, $$r$$ is the APR (written as a decimal), and $$t$$ is the amount of time that the loan is made for.

Simple interest is usually used for transactions between friends and family or for certain types of loans, like automobile purchases. It is used because it is easy to calculate and understand. Here are some examples for you to try:

Susan's parents loan them $1000 to help them move into a new apartment. They ask that Susan pay the money back in 2 years with 4% simple interest. How much will they owe?

Solution: 4%, written as a decimal, is $$0.04\text{.}$$ So using the formula above, we get that Susan owes their parents

\begin{equation*} amount = 1000*(1+0.04*2) = \$1080 \end{equation*}

Fernando purchases a $20,000 car. The dealership loans them the money to buy the car at 3.85% simple interest, for 5 years. How much will Fernando need to pay each month to pay off the car in 5 years?

Hint: First compute how much Fernando owes in total. How many monthly payments will there be?

Solution: The total amount that Fernando owes is:

\begin{equation*} amount = 20000*(1 + 0.0385*5) = \$23,850 \end{equation*}

Since they are paying off the debt over 5 years, they will make $$5*12=60$$ monthly payments. Assuming that all of the monthly payments are the same, Fernando will need to pay

\begin{equation*} \frac{\$23850}{60} = \$397.50 \end{equation*}

### Subsection 1.5.2 Compound Interest

Most lenders actually use a different way of calculating interest, called compound interest. In compound interest, rather than just taking a percentage of the amount once, interest is calculated regularly - every day, month, year, or some other unit of time. This causes interest to accumulate more rapidly, since you are taking a percentage not just of the original amount, but also of any interest that has already been added on.

For example, imagine you borrow $100 at 5% interest for 10 years. With simple interest, you would add 5% of $100 ($5) each year for 10 years, for a total of $50 worth of interest. You would end up owing $150 after 10 years. If you were paying 5% interest compounded annually, though, you would take 5% of the amount each year - including any interest that has already accumulated.
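To see what that compounding does concretely, here is a short script (an added illustration, not from the original page) that generates the year-by-year balance for $100 at 5% compounded annually; these are the values referred to as "this table" just below:

```python
# Balance of $100 at 5%, compounded once per year.
balance = 100.0
for year in range(1, 11):
    balance *= 1.05
    print(year, round(balance, 2))
# year 1: 105.0, year 2: 110.25, year 3: 115.76, ..., year 10: 162.89
```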
If we were using simple interest to calculate the amount owed here, after 10 years we would owe

\begin{equation*} 100 + 100*0.05*10 = \$150 \end{equation*}

With compounding, after 10 years you would instead owe $$100*1.05^{10} \approx \$162.89\text{.}$$ The additional $12.89 is interest which is being paid on the interest! $12 may not seem like that big of a deal (although it can be, for people who are really struggling), but compound interest adds up much faster when the amount borrowed or the interest rate is larger. Even a small increase in the interest rate can have a huge effect on the amount owed at the end of the loan.

Creating this table isn't hard, but it is time-consuming. One of the most important parts of mathematics is looking for patterns in numbers that we can use to save time. In this table, we can see that each time we take the previous amount and add 0.05 times that amount:

\begin{equation*} 105 = 100 + 100*0.05 \end{equation*}
\begin{equation*} 110.25 = 105 + 105*0.05 \end{equation*}

Notice that this is the same thing as multiplying the previous amount by 1.05:

\begin{equation*} 105 = 100 + 100*0.05 = 100*1.05 \end{equation*}
\begin{equation*} 110.25 = 105 + 105*0.05 = 105*1.05 \end{equation*}

Multiplying by the same amount at every step gets us an exponential relationship. We have

\begin{equation*} 105 = 100*1.05 \end{equation*}
\begin{equation*} 110.25 = 105*1.05 = 100*1.05*1.05 = 100*1.05^2 \end{equation*}
\begin{equation*} 115.76 = 110.25*1.05 = 100*1.05^3 \end{equation*}

All together, this suggests a faster way to get any of the values in the table above, by using an exponential equation. After n years, we will owe

\begin{equation*} 100*1.05^n \end{equation*}

dollars. This gives us an equation for the amount of money owed when interest is compounded annually. In general, though, lenders can charge interest as often as they like. Interest is frequently compounded monthly (12 times a year) or daily (365 times a year). In these cases, we use basically the same formula - with two main changes.

• If you pay 12% interest over the year, but you're being charged interest each month, you wouldn't pay 12% interest every month! Instead, we divide the interest rate by the number of times we compound each year. 12% annual interest would become \begin{equation*} \frac{12\%}{12} = 1\% \end{equation*} monthly interest. If we were compounding daily, we would have \begin{equation*} \frac{12\%}{365} = 0.03288\% \end{equation*} daily interest.
• We also need to increase the power on our exponent. If you compute interest every month for 10 years, you're computing interest 12*10 = 120 times overall. If you're compounding daily, you would have 365*10 = 3650 times overall.

Making these two changes gives us our formula for Compound Interest.

###### Compound Interest.

If you invest a principal at an APR (written as a decimal) for a given number of years, compounded $$k$$ times per year, the investment will now be worth:

\begin{equation*} amount = principal*\left(1 + \frac{APR}{k}\right)^{(k*Time)} \end{equation*}

This can also be written, using variables, as

\begin{equation*} A = P\left(1 + \frac{r}{k}\right)^{(k*t)} \end{equation*}

Here, $$A$$ is the amount the investment is worth, $$P$$ is the principal, $$r$$ is the APR (written as a decimal), $$k$$ is the number of times interest is compounded each year, and $$t$$ is the amount of time that the loan is made for.

This exponential growth is what makes a small change to the interest rate such a big deal. Let's look at two loans - both for $1000, compounded monthly, over a period of 10 years.
One lender charges an APR of 4%, while the other lender charges 7%. Let's see how much you owe at the end of 10 years using each formula:

\begin{equation*} A = 1000\left(1 + \frac{0.04}{12}\right)^{(12*10)} = 1490.83 \end{equation*}
\begin{equation*} A = 1000\left(1 + \frac{0.07}{12}\right)^{(12*10)} = 2009.66 \end{equation*}

Even though 7% is only a small amount larger than 4%, it makes a huge difference - over $500 - in how much you owe at the end of the loan. Increasing the rate to 10% makes an even bigger difference:

\begin{equation*} 1000\left(1 + \frac{0.10}{12}\right)^{(12*10)} = \$2707.04 \end{equation*}

Here are some examples for you to try:

Tasha borrows $2000 at 14% interest, compounded monthly. How much will they owe in a year?

Hint: We can tell this is compound interest because it says compounded. Figure out which numbers go where in the formula.

Solution: Tasha will owe

\begin{equation*} A = 2000*\left(1 + \frac{0.14}{12}\right)^{12*1} = \$2298.68 \end{equation*}

Diego borrowed money at a 15% interest rate for 5 years, compounded daily. If they had to pay back $3000, how much did they borrow originally?

Hint: In this example, what are we looking for? Put everything else into the compound interest formula and solve for the missing variable.

Solution: We know what Diego has to pay back (A in the formula), but we don't know what he borrowed in the first place (P in the formula). So we have:

\begin{equation*} 3000 = P*\left(1 + \frac{0.15}{365}\right)^{(5*365)} \end{equation*}
\begin{equation*} \frac{3000}{\left(1 + \frac{0.15}{365}\right)^{(5*365)}} = P \end{equation*}
\begin{equation*} \$1417.32 = P \end{equation*}

Diego only borrowed $1417.32 at the beginning of the 5 years. They paid back over twice as much as they borrowed.

### Subsection 1.5.3 Finding the Interest Rate

One of the ways that payday lenders conceal how much interest they're charging is by charging a "fee" instead of an interest rate. For example, a payday lender may loan someone $200 for two weeks. When they pay the money back, they charge them a $20 fee. In this example, $20 may not seem like a lot - it's only 10% of the $200 that they borrowed.

Remember, though, that when we talk about interest rates, we talk about annual percentage rates. This is 10% over two weeks - if you don't pay the $20 at the end of the first two weeks, you'll be charged another $20 at the end of the next two weeks! There are 52 weeks in the year, so by the end of the year (26 two-week periods) you would have paid $$26*20 = 520$$ dollars in interest - that's $$\frac{520}{200} = 2.6 = 260\%$$ of the amount you borrowed! The annual percentage rate is 260%!

Notice that this is simple interest - the amount of interest that they charge you each time didn't change (it was $20 every two weeks). Most payday lenders work this way - every two weeks you pay the fee until you can pay back the original money you borrowed. The fees vary, but are usually between $10 and $30 per $100 borrowed every two weeks. [1.11.1.2]

You can use the simple interest formula to figure out what the equivalent annual interest rate is. Remember that when you use the formula, the time is always in years, so 2 weeks would equal $$\frac{1}{26}$$ of a year. Say you borrow $300 and pay a $25 fee per $100 for every two weeks.
You'll owe $$300+25*3 = 375$$ dollars at the end of the two weeks, so the simple interest formula gives you:

\begin{equation*} 375 = 300\left(1 + r*\left(\frac{1}{26}\right)\right) \end{equation*}
\begin{equation*} 375 = 300 + \left(\frac{150}{13}\right)r \end{equation*}
\begin{equation*} 75 = \left(\frac{150}{13}\right)r \end{equation*}
\begin{equation*} r = 75*\left(\frac{13}{150}\right) = 6.5 = 650\% \end{equation*}

The equivalent annual percentage rate is 650%. How much would you owe after two weeks if you borrowed the same money on a credit card which charges 22% interest, compounded daily? Here we use the compound interest formula. Even though 22% is a pretty high interest rate for a credit card, you'll still end up paying a lot less than with the payday loan:

\begin{equation*} A = 300*\left(1 + \frac{0.22}{365}\right)^{\left(365*\frac{1}{26}\right)} = \$302.55 \end{equation*}

At 22%, compounded daily, you end up paying just $2.55 in interest after 2 weeks.
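The whole comparison can be checked with a few lines of Python (an added illustration, not part of the original page):

```python
def simple_interest(P, r, t):
    """Amount owed with simple interest: P(1 + r*t)."""
    return P * (1 + r * t)

def compound_interest(P, r, k, t):
    """Amount owed with interest compounded k times per year."""
    return P * (1 + r / k) ** (k * t)

# Payday loan: a $25 fee per $100 borrowed, charged every two weeks on $300.
fee_per_period = 25 * 3
periods_per_year = 26
apr_equivalent = fee_per_period * periods_per_year / 300
print(apr_equivalent)                                        # 6.5, i.e. 650%

# The same $300 for two weeks on a 22% APR credit card, compounded daily.
print(round(compound_interest(300, 0.22, 365, 1 / 26), 2))   # 302.55

# The worked examples from earlier in the section:
print(round(simple_interest(1000, 0.04, 2), 2))              # 1080.0 (Susan)
print(round(compound_interest(2000, 0.14, 12, 1), 2))        # 2298.68 (Tasha)
```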
2022-09-24 23:09:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.523704469203949, "perplexity": 1012.6445204517852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00089.warc.gz"}
http://rpg.stackexchange.com/questions/6839/is-there-a-way-to-make-the-damage-system-in-4e-more-like-previous-editions/6847
# Is there a way to make the damage system in 4e more like previous editions?

I have been playing 4e for almost 3 years now and have noticed that the average character has the capacity to take massive amounts of damage each day without any ill effects the next day. I have even worked out through math that the average fighter can, over the course of one day, take almost four times his hit points in damage and still be at full health at the end of the day. That is without counting the fact that leader types grant even greater levels of damage-taking abilities. Even if a character is at 1 hp at the end of the day, he wakes up at full hp and is ready to go as if nothing happened the previous day. 4e has no "carry over damage," unlike previous D&D editions that had damage heal slowly over time, with magic healing, and with potions.

It is not my intention to add a death spiral to D&D; that is not something that would be beneficial to the game and it would change the game entirely. What I am looking for is a way to adjust the healing and damage system in place to make it feel more like past editions.

- 4e was not designed with any sort of realism in mind. – C. Ross Mar 28 '11 at 1:23
- Not only that, but hit points don't represent actual damage, and never have. It's specifically stated in the rule books that hit points represent endurance, luck, advantageous position, and other such factors. You can lose a fight without ever taking an actual scratch. The trouble is that DnD has no rules for actual wounds. – Mike Riverso Mar 28 '11 at 1:39
- I was not looking for personal opinions on whether this should be done, or personal feelings on the system itself. That said, I feel that even though most damage and hp is abstract, there are points that do matter, i.e. bloodied, and in some instances if a character takes damage equal to his bloodied value in one shot he dies, so there are points in damage that matter. This IS a viable question with an answer and I would like to find it. – Rent_ZHB Mar 28 '11 at 1:56
- Have you considered more realistic systems like Ars Magica or Warhammer Fantasy Roleplay? – Brian Ballsun-Stanton Mar 28 '11 at 5:53
- @Rent_ZHB I wasn't saying that it can't be improved, or that it's not worth being improved, just pointing out that the current system does not have realism as a consideration at all. You need to think about a lot of things (pacing, playstyle, etc.) if you want to introduce it. – C. Ross Mar 28 '11 at 10:39

First of all, this is not so much a problem as a design decision. 4e is purposefully designed to let characters start afresh every morning, to make encounter design easier for the DM (and published adventures). Another thing worth noting is the fact that hit points are highly abstract, and don't necessarily represent physical damage characters sustain:

Hit points represent more than physical endurance. They represent your character's skill, luck, and resolve—all the factors that combine to help you stay alive in a combat situation. - PHB293

In fact, as the term bloodied suggests, until the character has lost half of their hit points, they're not even that. With this out of the way...

Two ideas come to mind. One is to limit the number of healing surges a character regains with each extended rest. Probably a flat value modified by sleeping conditions and available medical care, something along the lines of:

Everyone regains 3 healing surges after an extended rest.
Following circumstances may increase this number:

+1 - Moderate Heal check / +2 - Hard Heal check
+1 - Moderate Endurance check / +2 - Hard Endurance check
+1 - Decent sleeping conditions (warm bed roll, rations & water) / +2 - Excellent sleeping conditions (hard bed, hot meal and grog)

These numbers, of course, are not at all tested, and can be modified according to your preferences. This doesn't really change the balance of the game, as long as you're careful with the number of encounters you throw at the tired party.

The second idea is to award a wound condition to a character every time they drop below 0 hp. Broken leg, -2 to speed. Fractured arm, -1 to attack rolls, etc. Each damage type could impose its own wound - WFRP had something along these lines, IIRC, and so can be used for inspiration. Healing these wounds may require time, ritual magic, or burning healing surges at the end of an extended rest (in which case these two can be combined). Note that this actually changes the way characters operate, and so should be approached with extreme caution.

No. Hit points are a fundamental component of the tactical game and directly provide for elements of heroism and adventure. A game with realistic combat would suggest that most wounds would take a character out of play for months. (See how injuries work in Ars Magica. One good swipe with a sword and a character can be on enforced bed-rest for a year.) I'm going to recommend @Magician's take on "not having death be dying" for a look at how to have grievous wounds replace the revolving door of death (which actually ups the realism quite a bit). It's not even enough to hide HP (as discussed here, in the context of video games). Any "realistic" combat system will generally have the first good blow decide the fight with a very rapid death-spiral. D&D is not designed nor balanced for this. It's far easier to take the flavour of D&D and port it to another game that has the deadly-injury mechanics you want.

However, if we want to add realism, we start by looking at the practice of treatment during the crusades, which equates to roughly the technical era of D&D. We begin by describing what an adventurer at 0 HP has suffered. They are, by definition, bloodied and incapacitated. Looking here, most wounds will be leg wounds with mostly soft-tissue damage. (ow) Looking at this horribly formatted post, the consensus seems to be that months of light activity are required to restore a deep leg wound or equivalent - and that is with modern healing.

To be truly brutal to your players, grab the Ars Magica rules on wounds. Doing less than 1/4 their HP inflicts a light wound, 1/4 to 1/2 medium, 1/2 to 3/4 heavy, and anything beyond that incapacitating. Every light wound they get takes at least a week to heal with good medical rest and reduces their HS total by 1. Medium by 3, Heavy by 5, and Incapacitating by "you don't get any healing surges." Thus, realistically speaking, anyone who's been injured in even one fight needs a few weeks to a year of bed-rest and recovery. This is why D&D doesn't try for realism.

Don't bother with penalties to-hit and to AC, as that will severely imbalance the game. Just adding light, medium, heavy, and incapacitating wounds as complications after a short rest should do the trick neatly. Be prepared for your characters to choose not to adventure, though.

- I might try a scaled-back version of this, with the wounds being more negligible and only being placed on when bloodied, and affecting the next day's HS value.
One wound placed every time a character is bloodied, with each wound removing an HS from the next day. Having wounds heal over a few days instead of weeks would work better in the D&D system. Maybe healing 1 wound every two days or so? Making them more abstract to fit the system. – Rent_ZHB Mar 28 '11 at 15:59

I have a couple of ideas. The first isn't to change the system itself, but to adjust the availability of healing surges. I regularly charge surges on failed skill challenges. The player who trips and rolls down the hill isn't going to die to a skill check, but he will lose some surges. I've also fast-forwarded through a few trivial fights, but charged the PCs surges. There are some options along those lines if you are willing to muck with the system. Just treat surges as a resource. Ritual casting comes to mind: I've heard a lot of complaints that rituals are too slow to be useful. Give the PCs an item that halves the time of a ritual once per day, but costs 3 surges to use. Or something along those lines. You'll have to tweak the numbers.

My other suggestion is to add some penalties for getting hit hard. I got this idea from Game of Thrones d20. In that system, when you took damage above a set shock value, you made a save or were stunned for a number of rounds. I think something similar could work in 4e. Use bloodied or surge values as a trigger: when a single attack does that much damage, make a save or be dazed or stunned. I don't think this necessarily fixes the problem of PCs having too much HP, but it will make combat more lethal.

-

I have little problem with lethality in combat, but I like the suggestion of having healing surges be used in rituals and such; it adds an abstract damage cost without feeling like damage. – Rent_ZHB Mar 28 '11 at 2:03

Good idea, using surges as a cost of failing a skill challenge. I will remember that one. – C. Ross Mar 28 '11 at 19:14

@C. Ross, TY but I can't take credit for it. It was in several LFR mods, and for the longest time I assumed that was how you were supposed to punish certain failures. – valadil Mar 28 '11 at 19:50

I've been playing the game for about as long as you have and came to the same conclusions. What I have done for the last few sessions (since we started a new game) was to make my players spend their healing surges during the night (extended rest) in place of getting full HP in the morning. A new batch of healing surges is given to them on waking, as already happens with action points. This way, if a character is badly hurt and has fewer than 4 healing surges, he won't have full HP the next morning. (A minimal sketch of this bookkeeping appears after the answers below.)

Example:
Full HP: 100
Surge value: 25
Healing surges: 8

Going to sleep with 18 HP and 2 healing surges, the character would wake up with 50 HP healed (2x surge value) for a total of 68 out of 100, and 8 new healing surges. He could use some of his healing surges during an extra short rest, but they would be lost for the day. I feel the game is a bit more realistic this way, and it makes the squishy characters squishier.

-

Did this result in the characters stopping early more often? It seems that rational folks would not willingly go below 4 surges when given a chance. – Pat Ludwig Mar 28 '11 at 19:21

@Pat Ludwig Since not all classes have the same number of healing surges per day, no. For example, in this one game, I have a warden (9+CON surges per day) who uses Constitution as his main stat and a ranger (6+CON surges per day) who maxed his Strength for his two-weapon fighting.
– Monkios Mar 29 '11 at 13:27

Here is something that you might give a try. Every time a character is bloodied, roll a die: a d4 at heroic tier, a d6 at paragon, a d10 at epic. The number you roll is the amount of "carry-over damage". This is not additional damage, but a small portion of the damage already taken that keeps taking a toll on the character. These points cannot be healed by a healing surge. They can only be healed by magic, or at a rate of 2 points per tier plus Constitution modifier per day after an extended rest.

This should give you a simple method for incorporating carry-over damage without seriously burdening the system. As a character is bloodied throughout a day of adventuring, they will experience something like combat fatigue. Magical healing offsets this, but as the party's power wanes through the day the points begin to accumulate. You could even include penalties to Strength-, Constitution-, or Dexterity-based skills if a character's carry-over points exceed a certain percentage of their total.

-

I like this; it adds "carry-over damage" without being detrimental to the party's health. One question: would the "magic healing" extend to healing powers? If so, it makes this damage almost moot in a party with a leader (and all parties should have one). – Rent_ZHB Mar 28 '11 at 2:08

Hmmmm, I think I would add the caveat that only divine-power-source characters would fall under the magical effect and be able to heal this carry-over damage. – Acedrummer_CLB Mar 28 '11 at 12:37

But that would hugely overpower clerics as a must-have over all the other leader types. I think no power-based healing should heal the "carry-over damage." – Rent_ZHB Mar 28 '11 at 16:02

mmmm, good point. – Acedrummer_CLB Mar 29 '11 at 0:26

The simplest way is to eliminate the "novel" elements of 4E:

1. No spending surges without a power- or spell-based cause.
2. No Con base for hit points; just use the per-level dice. (Alternatively, cut the base to half Con or 1/3 Con to retain some of the 4E toughness.)

Those two changes will render PCs FAR more fragile.

-

WAY too fragile, and nigh-unplayable, I would say, regardless of any realism. – YogoZuno Mar 31 '11 at 0:46

I second Yogo here; sorry, I'm not looking for fragility, and if I did remove the Con base from the players' HP I would have to halve every monster's HP as well, and lower the monsters' damage output so they aren't KO-ing the fighter when he is hit once. You never really touched on the heart of the question: the day-to-day build-up of damage. – Rent_ZHB Mar 31 '11 at 5:14

@rent What part of "no spending surges" did you not get... that's the major portion of the damage not lingering. Restrict healing surges, and you again make damage cumulative over the day. The ability to spend surges willy-nilly during rests is where most of the "fresh every fight" mode comes from. – aramis Apr 1 '11 at 9:25

@aramis In your system you still have a full heal every extended rest, because you don't need to spend healing surges for that. – Zachiel Apr 11 '13 at 10:22
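None of the answers in the thread comes with code, but the overnight-surge house rule in Monkios' answer is simple enough to check as plain arithmetic. Below is a minimal illustrative sketch in Python; it is not part of the original thread, the function name and signature are invented for the illustration, and the only rule assumed is the one described in that answer (remaining surges are spent at the end of the extended rest, then the surge pool refreshes). The one-quarter surge value is the standard 4e figure and matches the worked example.

# Sketch (not from the thread) of the "spend remaining surges overnight" house rule.
def overnight_recovery(max_hp, current_hp, surges_left, surges_per_day):
    """Return (hp_after_rest, surges_after_rest) under the house rule."""
    surge_value = max_hp // 4                    # 4e surge value: one quarter of max HP
    healed = surges_left * surge_value           # every remaining surge is spent overnight
    hp_after = min(max_hp, current_hp + healed)  # healing cannot exceed max HP
    return hp_after, surges_per_day              # surge pool refreshes to the daily total

# The worked example from the answer: 100 max HP, asleep at 18 HP with 2 surges left.
print(overnight_recovery(100, 18, 2, 8))         # -> (68, 8)

The sketch makes the trade-off visible: a character who burned most of their surges during the day wakes up well short of full HP, which is exactly the "squishier" effect the answer describes.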
2016-07-24 18:41:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32998430728912354, "perplexity": 2296.010770440371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824133.26/warc/CC-MAIN-20160723071024-00250-ip-10-185-27-174.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/27097/changing-the-font-size-in-a-table?answertab=active
# Changing the font size in a table

What's the recommended way of changing the font size in a particular table? Is there a better way than enclosing all values with, for example, the \tiny command?

-

Write \tiny immediately after \begin{table}. If you don't use a (floating) table environment, enclose your (e.g.) tabular environment in a group and write \tiny after \begingroup.

\documentclass{article}
\begin{document}
\begin{table}
\tiny
\centering
\begin{tabular}{cc}
Knuth & Lamport
\end{tabular}
\end{table}
\end{document}

EDIT: To change the font size for all tables (or even floats of every type), one may use the floatrow package (this also saves typing \centering in every table):

\documentclass{article}
\usepackage{floatrow}
\DeclareFloatFont{tiny}{\tiny}% "scriptsize" is defined by floatrow, "tiny" is not
\floatsetup[table]{font=tiny}
\begin{document}
\begin{table}
\begin{tabular}{cc}
Knuth & Lamport
\end{tabular}
\end{table}
\end{document}

-

Just one quick additional remark: whenever your figure/table has a caption, be sure to change the font size only after you've specified the caption. This is particularly important if you're specifying tiny or scriptsize for the font size. – Mico Aug 31 '11 at 17:26

Any way to specify font sizes between \tiny and \small? \tiny is too small for me to read and \small is the same as the normal font size. – sphere Aug 13 '13 at 0:16

@sphere How about \footnotesize? – lockstep Aug 13 '13 at 6:53

Dredging up the past... @Mico, I don't see this behaviour at all. None of my captions (specified with \caption) are affected in any way by a font size declaration before them inside a table environment. – a different ben Jan 21 '14 at 23:21

Scale down your table to the text width:

\documentclass{article}
\usepackage{graphicx}
\begin{document}
\begin{table}
\resizebox{\textwidth}{!}{%
\begin{tabular}{cc}
Knuth & Lamport
\end{tabular}}
\end{table}
\end{document}

Then you have the optimal font size. However, existing tabular rules (lines) are also scaled down, which doesn't matter because it looks nicer.

-

You gave the needed answer, instead of the answer asked for. Great job! – Marcel Valdez Orozco Feb 9 '13 at 5:51

this works great! thanks! – ultrajohn May 8 '13 at 12:25

Now if this would only work with text wrapping as well and eliminate the need to manually specify column widths, that would be awesome. Is there a way to use your functionality and also have automatic text wrapping? – sphere Aug 9 '13 at 2:57

Use a tabularx with X columns (see the sketch below). – Herbert Aug 9 '13 at 20:50

superb solution. I like it +1 – Espanta Feb 28 '14 at 14:11
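Not part of the original answers: one way to read Herbert's comment is to let tabularx's X columns absorb the available width (wrapping their contents automatically) instead of scaling the whole table, optionally combined with a size switch from the first answer. A rough, untested sketch along those lines; the filler text is just a placeholder:

\documentclass{article}
\usepackage{tabularx}% provides the X column type, which wraps text to fill the target width
\begin{document}
\begin{table}
\footnotesize % any size command, as in the first answer
\begin{tabularx}{\textwidth}{XX}
Some long text that wraps automatically inside the first column &
more long text that wraps automatically inside the second column \\
\end{tabularx}
\end{table}
\end{document}

Unlike \resizebox, this keeps the font at its nominal size and adjusts the column widths instead, so text size and rule thickness stay consistent with the rest of the document.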
2015-10-04 21:45:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515631794929504, "perplexity": 4030.050352758955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676092.10/warc/CC-MAIN-20151001215756-00154-ip-10-137-6-227.ec2.internal.warc.gz"}