url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
https://pos.sissa.it/395/457/ | Volume 395 - 37th International Cosmic Ray Conference (ICRC2021) - CRI - Cosmic Ray Indirect
Simulations of Cosmic Ray Ensembles originated nearby the Sun
D. Alvarez-Castillo*, O. Sushchov, P. Homola, D. Beznosko, N. Budnev, D. Gora, A. Gupta, B. Hnatyk, M. Kasztelan, P. Kovacs, B. Lozowski, M. Medvedev, J. Miszczyk, A. Mozgova, V. Nazari, M. Niedzwiecki, M. Pawlik, M. Rosas, K. Rzecki, K. Smelcerz, K. Smolek, J. Stasielak, S. Stuglik, M. Svanidze, A. Tursunov, Y. Verbetsky, T. Wibig, J. Zamora-Saa, B. Poncyljusz, J. Medrala, G. Opila, L. Bibrzyck and M. Piekarczyk.et al. (click to show)
Full text: pdf
Pre-published on: July 08, 2021
Published on:
Abstract
Cosmic Ray Ensembles (CRE) are yet not observed groups of cosmic rays with a common primary interaction vertex or the same parent particle. One of the processes capable of initiating identifiable CRE is an interaction of an ultra-high energy (UHE) photon with the solar magnetic field which results in an electron pair production and the subsequent synchrotron radiation. The resultant electromagnetic cascade forms a very characteristic line-like front of a very small width ($\sim$ meters), stretching from tens of thousands to even many millions of kilometers. In this contribution we present the results of applying a toy model to simulate detections of such CRE at the ground level with an array of ideal detectors of different dimensions. The adopted approach allows us to assess the CRE detection feasibility for a specific configuration of a detector array. The process of initiation and propagation of an electromagnetic cascade originated from an UHE photon passing near the Sun, as well as the resultant particle distribution on ground, were simulated using the CORSIKA program with the PRESHOWER option, both modified accordingly. The studied scenario results in photons forming a cascade that extends even over tens of millions of kilometers when it arrives at the top of the Earth's atmosphere, and the photon energies span practically the whole cosmic ray energy spectrum. The topology of the signal consists of very extended CRE shapes, and the characteristic, very much elongated disk-shape of the particle distribution on ground illustrates the potential for identification of CRE of this type.
DOI: https://doi.org/10.22323/1.395.0457
How to cite
Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete.
Open Access
Copyright owned by the author(s) under the term of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | 2021-07-25 21:16:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.262824147939682, "perplexity": 1838.907594128483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151866.98/warc/CC-MAIN-20210725205752-20210725235752-00000.warc.gz"} |
http://intothecontinuum.blogspot.com/2013/11/the-trace-distance-between-pure-states.html | ## Saturday, 30 November 2013
### The trace distance between pure states
In this post, the relationship between two distance measures on density operators will be analyzed for a few specific cases. Namely, the trace distance and the Euclidean distance of two pure states will be studied.
The trace norm is defined as $||M||_{Tr}=Tr(\sqrt{M^\dagger M})$, and the trace distance between two density operators $\rho$ and $\sigma$ is given by $||\rho-\sigma||_{Tr}$.
Let's calculate an expression for the trace distance between $\ket{0}$ and $\cos(\theta)\ket{0}+\sin(\theta)\ket{1}$ as a function of $\theta$.
Let
$\rho=\ket{0}\bra{0}=\begin{pmatrix} 1&0\\0&0\end{pmatrix}$
and
\begin{align*} \sigma&=(\cos(\theta)\ket{0}+\sin(\theta)\ket{1})(\cos(\theta)\bra{0}+\sin(\theta)\bra{1})\\ \sigma&=\cos^2(\theta)\ket{0}\bra{0}+\cos(\theta)\sin(\theta)\ket{0}\bra{1}+\cos(\theta)\sin(\theta)\ket{1}\bra{0}+\sin^2(\theta)\ket{1}\bra{1} \\ \sigma&=\begin{pmatrix}\cos^2(\theta)&\cos(\theta)\sin(\theta)\\ \cos(\theta)\sin(\theta)&\sin^2(\theta) \end{pmatrix}. \end{align*}
In this case, since both $\rho$ and $\sigma$ are density operators, they are Hermitian. Thus, $(\rho-\sigma)^\dagger=(\rho-\sigma)$ implying that $(\rho-\sigma)^\dagger(\rho-\sigma)=(\rho-\sigma)^2$. Then
$\rho-\sigma=\begin{pmatrix}1-\cos^2(\theta)&-\cos(\theta)\sin(\theta)\\ -\cos(\theta)\sin(\theta)&-\sin^2(\theta)\end{pmatrix}=\begin{pmatrix}\sin^2(\theta)&-\cos(\theta)\sin(\theta)\\ -\cos(\theta)\sin(\theta)&-\sin^2(\theta)\end{pmatrix},$
from which it follows that
\begin{align*} (\rho-\sigma)^2&=\begin{pmatrix}\sin^2(\theta)&-\cos(\theta)\sin(\theta)\\ -\cos(\theta)\sin(\theta)&-\sin^2(\theta)\end{pmatrix}\begin{pmatrix}\sin^2(\theta)&-\cos(\theta)\sin(\theta)\\ -\cos(\theta)\sin(\theta)&-\sin^2(\theta)\end{pmatrix}\\ &=\begin{pmatrix}\sin^4(\theta)+\cos^2(\theta)\sin^2(\theta)&-\sin^2(\theta)\cos(\theta)\sin(\theta)+\sin^2(\theta)cos(\theta)\sin(\theta) \\ -\sin^2(\theta)\cos(\theta)\sin(\theta) +\sin^2(\theta)cos(\theta)\sin(\theta)&\sin^4(\theta)+\cos^2(\theta)\sin^2(\theta)\end{pmatrix} \\ &=\begin{pmatrix} \sin^2(\theta)&0 \\ 0&\sin^2(\theta). \end{pmatrix} \end{align*}
Thus,
$\sqrt{(\rho-\sigma)^2}=\begin{pmatrix} \sqrt{\sin^2(\theta)}&0 \\ 0&\sqrt{\sin^2(\theta)}\end{pmatrix} =\begin{pmatrix} |\sin(\theta)|&0 \\ 0&|\sin(\theta)|\end{pmatrix},$
and therefore
$||(\rho-\sigma)||_{Tr}=Tr\begin{pmatrix} |\sin(\theta)|&0 \\ 0&|\sin(\theta)|\end{pmatrix}=2|\sin(\theta)|.$
As another example, lets now calculate an expression for the Euclidean distance between the two points in the Bloch sphere that correspond to the pure states $\ket{0}$ and $\cos(\theta)\ket{0}+\sin(\theta)\ket{1}$.
In the Bloch sphere representation, a general density matrix can be expressed as
$\frac{1}{2}(I+c_xX+c_yY+c_zZ),$
where $X$, $Y$, $Z$ are the Pauli matrices, and the vector $(c_x,c_y,c_z)$ gives the coordinates of the density matrix on the Bloch sphere. In such a representation, $\rho$ and $\sigma$ can be expressed as follows:
\begin{align*} \rho=&\frac{1}{2}(I+Z),\\ \sigma=&\frac{1}{2}(I+2\cos(\theta)\sin(\theta)X+\cos(2\theta)Z), \end{align*}
so that the
coordinate vectors for the Block sphere representations of $\rho$ and $\sigma$ are given by, respectively,
\begin{align*} v_{\rho}=&(0,0,1) \\ v_{\sigma}=&(2\cos(\theta)\sin(\theta),0,\cos(2\theta)). \end{align*}
Then the Euclidean distance between $\rho$ and $\sigma$ on the Block sphere is given by
\begin{align*} ||v_\rho-v_\sigma||_2&=\sqrt{(-2\cos(\theta)\sin(\theta))^2+(1-\cos(2\theta))^2}\\ &=\sqrt{4\cos^2(\theta)\sin^2(\theta)+4\sin^4(\theta)} \\ &=\sqrt{4\sin^2(\theta)(\cos^2(\theta)+\sin^2(\theta))}\\ &=\sqrt{4\sin^2(\theta)}\\ &=2|\sin(\theta)|. \end{align*}
Hence, $||(\rho-\sigma)||_{Tr}=||v_\rho-v_\sigma||_2=2|\sin(\theta)|$, and the two distance measures agree.
Now we'll repeat the calculations done above for these two states: $\rho=\begin{pmatrix} 1/2 &0 \\ 0 & 1/2 \end{pmatrix}$ and the pure state $\cos(\theta)\ket{0}+\sin(\theta)\ket{1}$ as a function of $\theta$.
As calculatedabove, the density matrix corresponding to the state $\cos(\theta)\ket{0}+\sin(\theta)\ket{1}$ is given by
$\sigma:=\begin{pmatrix}\cos^2(\theta)&\cos(\theta)\sin(\theta)\\ \cos(\theta)\sin(\theta)&\sin^2(\theta) \end{pmatrix}$
Calculating the relevant matrices in order to determine the trace distance $||(\rho-\sigma)||_{Tr}=Tr(\sqrt{(\rho-\sigma)^2})$ yields
\begin{align*} \rho-\sigma=\begin{pmatrix}\frac{1}{2}-\cos^2(\theta)&\cos(\theta)\sin(\theta)\\ \cos(\theta)\sin(\theta)&\frac{1}{2}-\sin^2(\theta) \end{pmatrix}, \end{align*}
and after some further calculation
\begin{align*} (\rho-\sigma)^2&=\begin{pmatrix}\frac{1}{2}-\cos^2(\theta)&\cos(\theta)\sin(\theta)\\ \cos(\theta)\sin(\theta)&\frac{1}{2}-\sin^2(\theta) \end{pmatrix}\begin{pmatrix}\frac{1}{2}-\cos^2(\theta)&\cos(\theta)\sin(\theta)\\ \cos(\theta)\sin(\theta)&\frac{1}{2}-\sin^2(\theta) \end{pmatrix}\\ &=\begin{pmatrix} \frac{1}{4}&0\\ 0&\frac{1}{4} \end{pmatrix}, \end{align*}
which implies that
$\sqrt{ (\rho-\sigma)^2}=\begin{pmatrix} \sqrt{ \frac{1}{4}}&0\\ 0&\sqrt{\frac{1}{4}} \end{pmatrix}=\begin{pmatrix} \frac{1}{2}&0\\ 0&\frac{1}{2} \end{pmatrix}$
Therefore,
$||(\rho-\sigma)||_{Tr}=Tr(\sqrt{(\rho-\sigma)^2})=Tr\begin{pmatrix} \frac{1}{2}&0\\ 0&\frac{1}{2} \end{pmatrix}=1$
is the trace distance between the two state $\rho$ and $\sigma$.
Now, to calculate the Euclidean distance between the two states in the Block sphere representation observe that
\begin{align*} \rho=&\frac{1}{2}I,\\ \sigma=&\frac{1}{2}(I+2\cos(\theta)\sin(\theta)X+\cos(2\theta)Z), \end{align*}
so that the
coordinate vectors for the Block sphere representations of $\rho$ and $\sigma$ are given by, respectively,
\begin{align*} v_{\rho}=&(0,0,0) \\ v_{\sigma}=&(2\cos(\theta)\sin(\theta),0,\cos(2\theta)). \end{align*}
The Euclidean distance is then given by
\begin{align*} ||v_\rho-v_\sigma||_2&=\sqrt{(-2\cos(\theta)\sin(\theta))^2+(-\cos(2\theta))^2}\\ &=\sqrt{4\cos^2(\theta)\sin^2(\theta)+\cos^2(2\theta)} \\ &=\sqrt{4\cos^2(\theta)\sin^2(\theta)+(\cos^2(\theta)-\sin^2(\theta))^2}\\ &=\sqrt{2\cos^2(\theta)\sin^2(\theta)+\cos^4(\theta)+\sin^4(\theta))^2}\\ &=\sqrt{2\cos^2(\theta)\sin^2(\theta)+\cos^4(\theta)+\sin^4(\theta))^2}\\ &=\sqrt{(\cos^2(\theta)+\sin^2(\theta))^2}\\ &=\sqrt{1^2}\\ &=1, \end{align*}
and so the trace distance of $\rho$ and $\sigma$ agrees with their Euclidean distance on the Block sphere since $||(\rho-\sigma)||_{Tr}=||v_\rho-v_\sigma||_2=1$. | 2018-10-22 15:19:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999178647994995, "perplexity": 736.3422787369007}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515088.88/warc/CC-MAIN-20181022134402-20181022155902-00315.warc.gz"} |
https://www.imgarlicchives.icu/archives/17.html | # The Earth Is Round (p<.05)
Reference:
Cohen, J. (1994). The earth is round (p<. 05). American psychologist49(12), 997.
After 4 decades of severe criticism, the ritual of null hypothesis significance testing—mechanical dichotomous decisions around a sacred .05 criterion—still persists. This article reviews the problems with this practice, including its near-universal misinterpretation of p as the probability that H_0 is false, the misinterpretation that its complement is the probability of successful replication, and the mistaken assumption that if one rejects H_0 one thereby affirms the theory that led to the test. Exploratory data analysis and the use of graphic methods, a steady improvement in and a movement toward standardization in measurement, an emphasis on estimating effect sizes using confidence intervals, and the informed use of available statistical methods is suggested. For generalization, psychologists must finally rely, as has been done in all the older sciences, on replication.
I make no pretense of the originality of my remarks in this article. One of the few things we, as psychologists, have learned from over a century of scientific study is that at age three score and 10, originality is not to be expected. David Bakan said back in 1966 that his claim that “a great deal of mischief has been associated” with the test of significance “is hardly original,” that it is “what ‘everybody knows,'” and that “to say it ‘out loud’ is… to assume the role of the child who pointed out that the emperor was really outfitted in his underwear” (p. 423). If it was hardly original in 1966, it can hardly be original now. Yet this naked emperor has been shamelessly running around for a long time.
Like many men my age, I mostly grouse. My harangue today is on testing for statistical significance, about which Bill Rozeboom (1960) wrote 33 years ago, “The statistical folkways of a more primitive past continue to dominate the local scene” (p. 417).
And today, they continue to continue. And we, as teachers, consultants, authors, and otherwise perpetrators of quantitative methods, are responsible for the ritualization of null hypothesis significance testing (NHST; I resisted the temptation to call it statistical hypothesis inference testing) to the point of meaninglessness and beyond. I argue herein that NHST has not only failed to support the advance of psychology as a science but also has seriously impeded it.
Consider the following: A colleague approaches me with a statistical problem. He believes that a generally rare disease does not exist at all in a given population, hence H_0: P = 0. He draws a more or less random sample of 30 cases from this population and finds that one of the cases has the disease, hence P_s = 1/30 = .033. He is not sure how to test H_0, chi-square with Yates’s (1951) correction or the Fisher exact test, and wonders whether he has enough power. Would you believe it? And would you believe that if he tried to publish this result without a significance test, one or more reviewers might complain? It could happen.
Almost a quarter of a century ago, a couple of sociologists, D. E. Morrison and R. E. Henkel (1970), edited a book entitled The Significance Test Controversy. Among the contributors were Bill Rozeboom (1960), Paul Meehl (1967), David Bakan (1966), and David Lykken (1968). Without exception, they damned NHST. For example, Meehl described NHST as “a potent but sterile intellectual rake who leaves in his merry path a long train of ravished maidens but no viable scientific offspring” (p. 265). They were, however, by no means the first to do so. Joseph Berkson attacked NHST in 1938, even before it sank its deep roots in psychology. Lancelot Hogben’s book-length critique appeared in 1957. When I read it then, I was appalled by its rank apostasy. I was at that time well trained in the current Fisherian dogma and had not yet heard of Neyman-Pearson (try to find a reference to them in the statistics texts of that day—McNemar, Edwards, Guilford, Walker). Indeed, I had already had some dizzying success as a purveyor of plain and fancy NHST to my fellow clinicians in the Veterans Administration.
What’s wrong with NHST? Well, among many other things, it does not tell us what we want to know, and we so much want to know what we want to know that, out of desperation, we nevertheless believe that it does! What we want to know is “Given these data, what is the probability that H_0 is true?” But as most of us know, what it tells us is “Given that H_0 is true, what is the probability of these (or more extreme) data?” These are not the same, as has been pointed out many times over the years by the contributors to the Morrison-Henkel (1970) book, among others, and, more recently and emphatically, by Meehl (1978, 1986, 1990a, 1990b), Gigerenzer (1993), Falk and Greenbaum (in press), and yours truly (Cohen, 1990).
## The Permanent Illusion
One problem arises from a misapplication of deductive syllogistic reasoning. Falk and Greenbaum (in press) called this the “illusion of probabilistic proof by contradiction” or the “illusion of attaining improbability.” Gigerenzer (1993) called it the “permanent illusion” and the “Bayesian Id’s wishful thinking,” part of the “hybrid logic” of contemporary statistical inference—a mishmash of Fisher and Neyman-Pearson, with invalid Bayesian interpretation. It is the widespread belief that the level of significance at which H_0 is rejected, say .05, is the probability that it is correct or, at the very least, that it is of low probability.
The following is almost but not quite the reasoning of null hypothesis rejection:
If the null hypothesis is correct, then this datum (D) can not occur.
It has, however, occurred.
Therefore, the null hypothesis is false.
If this were the reasoning of H_0 testing, then it would be formally correct. It would be what Aristotle called the modus tollens, denying the antecedent by denying the consequent. But this is not the reasoning of NHST. Instead, it makes this reasoning probabilistic, as follows:
If the null hypothesis is correct, then these data are highly unlikely.These data have occurred.Therefore, the null hypothesis is highly unlikely.
By making it probabilistic, it becomes invalid. Why? Well, consider this:
The following syllogism is sensible and also the formally correct modus tollens:
If a person is a Martian, then he is not a member of Congress.
This person is a member of Congress.
Therefore, he is not a Martian.
Sounds reasonable, no? This next syllogism is not sensible because the major premise is wrong, but the reasoning is as before and still a formally correct modus tollens:
If a person is an American, then he is not a member of Congress. (WRONG!)
This person is a member of Congress.
Therefore, he is not an American.
If the major premise is made sensible by making it probabilistic, not absolute, the syllogism becomes formally incorrect and leads to a conclusion that is not sensible:
If a person is an American, then he is probably not a member of Congress. (TRUE, RIGHT?)
This person is a member of Congress.
Therefore, he is probably not an American. (Pollard & Richardson. 1987)
This is formally exactly the same as
If H_0 is true, then this result (statistical significance) would probably not occur.
This result has occurred.
Then H_0 is probably not true and therefore formally invalid.
This formulation appears at least implicitly in article after article in psychological journals and explicitly in some statistics textbooks—”the illusion of attaining improbability.”
## Why P \left( D \mid H_0 \right) \neq P \left( H_0 \mid D \right) ?
When one tests H_0, one is finding the probability that the data (D) could have arisen if H_0 were true, P \left( D \mid H_0 \right). If that probability is small, then it can be concluded that if H_0 is true, then D is unlikely. Now, what really is at issue, what is always the real issue, is the probability that H_0 is true, given the data, P \left( H_0 \mid D \right) , the inverse probability. When one rejects H_0, one wants to conclude that H_0 is unlikely, say, p < .01. The very reason the statistical test is done is to be able to reject H_0 because of its unlikelihood! But that is the posterior probability, available only through Bayes’s theorem, for which one needs to know P(H_0), the probability of the null hypothesis before the experiment, the “prior” probability.
Now, one does not normally know the probability of H_0. Bayesian statisticians cope with this problem by positing a prior probability or distribution of probabilities. But an example from psychiatric diagnosis in which one knows P(H_0) is illuminating:
The incidence of schizophrenia in adults is about 2%. A proposed screening test is estimated to have at least 95% accuracy in making the positive diagnosis (sensitivity) and about 97% accuracy in declaring normality (specificity). Formally stated, P \left( \text{normal} \mid H_0 \right) \approx .97, P \left( \text{schizophrenia} \mid H_1 \right) > .95. So, let
H_0 = The case is normal, so thatH_1 = The case is schizophrenic, andD = The test result (the data) is positive for schizophrenia.
With a positive test for schizophrenia at hand, given the more than .95 assumed accuracy of the test, P \left( D \mid H_0 \right)—the probability of a positive test given that the case is normal—is less than .05, that is, significant at p < .05. One would reject the hypothesis that the case is normal and conclude that the case has schizophrenia, as it happens mistakenly, but within the .05 alpha error. But that’s not the point.
The probability of the case being normal, P(H_0), given a positive test (D), that is, P \left( H_0 \mid D \right), is not what has just been discovered however much it sounds like it and however much it is wished to be. It is not true that the probability that the case is normal is less than .05, nor is it even unlikely that it is a normal case. By a Bayesian maneuver, this inverse probability, the probability that the case is normal, given a positive test for schizophrenia, is about .60! The arithmetic follows:
P \left( H_0 \mid D \right) \\ =\dfrac{P(H_0) \times P \left( \text{test wrong} \mid H_0 \right)}{P(H_0) \times P \left( \text{test wrong} \mid H_0 \right) + P(H_1) \times P \left( \text{test correct} \mid H_1 \right)} \\ =\dfrac{.98 \times .03}{.98 \times .03 + .02 \times .95} = \dfrac{.0294}{.0294 + .0190} = .607
The situation may be made clearer by expressing it approximately as a 2 \times 2 table for 1,000 cases. The case actually is
As the table shows, the conditional probability of a normal case for those testing as schizophrenic is not small—of the 50 cases testing as schizophrenics, 30 are false positives, actually normal, 60% of them!
This extreme result occurs because of the low base rate for schizophrenia, but it demonstrates how wrong one can be by considering the p value from a typical significance test as bearing on the truth of the null hypothesis for a set of data.
It should not be inferred from this example that all null hypothesis testing requires a Bayesian prior. There is a form of H_0 testing that has been used in astronomy and physics for centuries, what Meehl (1967) called the “strong” form, as advocated by Karl Popper (1959). Popper proposed that a scientific theory be tested by attempts to falsify it. In null hypothesis testing terms, one takes a central prediction of the theory, say, a point value of some crucial variable, sets it up as the H_0, and challenges the theory by attempting to reject it. This is certainly a valid procedure, potentially even more useful when used in confidence interval form. What I and my ilk decry is the “weak” form in which theories are “confirmed” by rejecting null hypotheses.
The inverse probability error in interpreting H_0 is not reserved for the great unwashed, but appears many times in statistical textbooks (although frequently together with the correct interpretation, whose authors apparently think they are interchangeable). Among the distinguished authors making this error are Guilford, Nunnally, Anastasi, Ferguson, and Lindquist. Many examples of this error are given by Robyn Dawes (1988, pp. 70-75); Falk and Greenbaum (in press); Gigerenzer (1993, pp. 316—329), who also nailed R. A. Fisher (who emphatically rejected Bayesian theory of inverse probability but slipped into invalid Bayesian interpretations of NHST (p. 318); and Oakes (1986, pp. 17-20), who also nailed me for this error (p. 20).
The illusion of attaining improbability or the Bayesian Id’s wishful thinking error in using NHST is very easy to make. It was made by 68 out of 70 academic psychologists studied by Oakes (1986, pp. 79-82). Oakes incidentally offered an explanation of the neglect of power analysis because of the near universality of this inverse probability error:
After all, why worry about the probability of obtaining data that will lead to the rejection of the null hypothesis if it is false when your analysis gives you the actual probability of the null hypothesis being false? (p. 83)
A problem that follows readily from the Bayesian Id’s wishful thinking error is the belief that after a successful rejection of H_0, it is highly probable that replications of the research will also result in H_0 rejection. In their classic article “The Belief in the Law of Small Numbers,” Tversky and Kahneman (1971) showed that because people’s intuitions that data drawn randomly from a population are highly representative, most members of the audience at an American Psychological Association meeting and at a mathematical psychology conference believed that a study with a significant result would replicate with a significant result in a small sample (p. 105). Of Oakes’s (1986) academic psychologists 42 out of 70 believed that a t of 2.7, with df=18 and p = .01, meant that if the experiment were repeated many times, a significant result would be obtained 99% of the time. Rosenthal (1993) said with regard to this replication fallacy that “Nothing could be further from the truth” (p. 542f) and pointed out that given the typical .50 level of power for medium effect sizes at which most behavioral scientists work (Cohen, 1962), the chances are that in three replications only one in eight would result in significant results, in all three replications, and in five replications, the chance of as many as three of them being significant is only 50:50.
An error in elementary logic made frequently by NHST proponents and pointed out by its critics is the thoughtless, usually implicit, conclusion that if H_0 is rejected, then the theory is established: If A then B; B therefore A. But even the valid form of the syllogism (if A then B; not B therefore not A) can be misinterpreted. Meehl (1990a, 1990b) pointed out that in addition to the theory that led to the test, there are usually several auxiliary theories or assumptions and ceteris paribus clauses and that it is the logical product of these that is counterpoised against H_0. Thus, when H_0 is rejected, it can be because of the falsity of any of the auxiliary theories about instrumentation or the nature of the psyche or of the ceteris paribus clauses, and not of the substantive theory that precipitated the research.
So even when used and interpreted “properly,” with a significance criterion (almost always p < .05) set a priori (or more frequently understood), H_0 has little to commend it in the testing of psychological theories in its usual reject-H_0-confirm-the-theory form. The ritual dichotomous reject-accept decision, however objective and administratively convenient, is not the way any science is done. As Bill Rozeboom wrote in 1960, “The primary aim of a scientific experiment is not to precipitate decisions, but to make an appropriate adjustment in the degree to which one… believes the hypothesis… being tested” (p. 420)
## The Nil Hypothesis
Thus far, I have been considering H_0s in their most general sense—as propositions about the state of affairs in a population, more particularly, as some specified value of a population parameter. Thus, “the population mean difference is 4” may be an H_0, as may be “the proportion of males in this population is .75” and “the correlation in this population is .20.” But as almost universally used, the null in H_0 is taken to mean nil, zero. For Fisher, the null hypothesis was the hypothesis to be nullified. As if things were not bad enough in the interpretation, or misinterpretation, of NHST in this general sense, things get downright ridiculous when H_0 is to the effect that the effect size (ES) is 0—that the population mean difference is 0, that the correlation is 0, that the proportion of males is .50, that the raters’ reliability is 0 (an H_0 that can almost always be rejected, even with a small sample—Heaven help us!). Most of the criticism of NHST in the literature has been for this special case where its use may be valid only for true experiments involving randomization (e.g., controlled clinical trials) or when any departure from pure chance is meaningful (as in laboratory experiments on clairvoyance), but even in these cases, confidence intervals provide more information. I henceforth refer to the H_0 that an ES=0 as the “nil hypothesis.”
My work in power analysis led me to realize that the nil hypothesis is always false. If I may unblushingly quote myself,
It can only be true in the bowels of a computer processor running a Monte Carlo study (and even then a stray electron may make it false). If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and lead to its rejection. So if the null hypothesis is always false, what’s the big deal about rejecting it? (p. 1308)
I wrote that in 1990. More recently I discovered that in 1938, Berkson wrote
It would be agreed by statisticians that a large sample is always better than a small sample. If, then, we know in advance the P that will result from an application of the Chi-square test to a large sample, there would seem to be no use in doing it on a smaller one. But since the result of the former test is known, it is no test at all. (p. 526f)
Tukey (1991) wrote that “It is foolish to ask ‘Are the effects of A and B different?’ They are always different—for some decimal place” (p. 100).
The point is made piercingly by Thompson (1992):
Statistical significance testing can involve a tautological logic in which tired researchers, having collected data on hundreds of subjects, then, conduct a statistical test to evaluate whether there were a lot of subjects, which the researchers already know, because they collected the data and know they are tired. This tautology has created considerable damage as regards the cumulation of knowledge, (p. 436)
In an unpublished study, Meehl and Lykken crosstabulated 15 items for a sample of 57,000 Minnesota high school students, including father’s occupation, father’s education, mother’s education, number of siblings, sex, birth order, educational plans, family attitudes toward college, whether they liked school, college choice, occupational plan in 10 years, religious preference, leisure time activities, and high school organizations. All of the 105 chi-squares that these 15 items produced by the crosstabulations were statistically significant, and 96% of them at p< .000001 (Meehl, 1990b).
One might say, “With 57,000 cases, relationships as small as a Cramér’s phi of .02-.03 will be significant at p <.000001, so what’s the big deal?” Well, the big deal is that many of the relationships were much larger than .03. Enter the Meehl “crud factor,” more genteelly called by Lykken “the ambient correlation noise.” In soft psychology, “Everything is related to everything else.” Meehl acknowledged (1990b) that neither he nor anyone else has accurate knowledge about the size of the crud factor in a given research domain, “but the notion that the correlation between arbitrarily paired trait variables will be, while not literally zero, of such minuscule size as to be of no importance, is surely wrong” (p. 212, italics in original).
Meehl (1986) considered a typical review article on the evidence for some theory based on nil hypothesis testing that reports a 16:4 box score in favor of the theory. After taking into account the operation of the crud factor, the bias against reporting and publishing “negative” results (Rosenthal’s, 1979, “file drawer” problem), and assuming power of .75, he estimated the likelihood ratio of the theory against the crud factor as 1:1. Then, assuming that the prior probability of theories in soft psychology is <.10, he concluded that the Bayesian posterior probability is also <.10 (p. 327f). So a 16:4 box score for a theory becomes, more realistically, a 9:1 odds ratio against it.
Meta-analysis, with its emphasis on effect sizes, is a bright spot in the contemporary scene. One of its major contributors and proponents, Frank Schmidt (1992), provided an interesting perspective on the consequences of current NHST-driven research in the behavioral sciences. He reminded researchers that, given the fact that the nil hypothesis is always false, the rate of Type I errors is 0%, not 5%, and that only Type II errors can be made, which run typically at about 50% (Cohen, 1962; Sedlmeier & Gigerenzer, 1989). He showed that typically, the sample effect size necessary for significance is notably larger than the actual population effect size and that the average of the statistically significant effect sizes is much larger than the actual effect size. The result is that people who do focus on effect sizes end up with a substantial positive bias in their effect size estimation. Furthermore, there is the irony that the “sophisticates” who use procedures to adjust their alpha error for multiple tests (using Bonferroni, Newman-Keuls, etc.) are adjusting for a nonexistent alpha error, thus reduce their power, and, if lucky enough to get a significant result, only end up grossly overestimating the population effect size!
Because NHST p values have become the coin of the realm in much of psychology, they have served to inhibit its development as a science. Go build a quantitative science with p values! All psychologists know that statistically significant does not mean plain-English significant, but if one reads the literature, one often discovers that a finding reported in the Results section studded with asterisks implicitly becomes in the Discussion section highly significant or very highly significant, important, big!
Even a correct interpretation of p values does not achieve very much, and has not for a long time. Tukey (1991) warned that if researchers fail to reject a nil hypothesis about the difference between A and B, all they can say is that the direction of the difference is “uncertain.” If researchers reject the nil hypothesis then they can say they can be pretty sure of the direction, for example, “A is larger than B.” But if all we, as psychologists, learn from a research is that A is larger than B (p < .01), we have not learned very much. And this is typically all we learn. Confidence intervals are rarely to be seen in our publications. In another article (Tukey, 1969), he chided psychologists and other life and behavior scientists with the admonition “Amount, as well as direction is vital” and went on to say the following:
The physical scientists have learned much by storing up amounts, not just directions. If, for example, elasticity had been confined to “When you pull on it, it gets longer!,” Hooke’s law, the elastic limit, plasticity, and many other important topics could not have appeared (p. 86).. . . Measuring the right things on a communicable scale lets us stockpile information about amounts. Such information can be useful, whether or not the chosen scale is an interval scale. Before the second law of thermodynamics—and there were many decades of progress in physics and chemistry before it appeared—the scale of temperature was not, in any nontrivial sense, an interval scale. Yet these decades of progress would have been impossible had physicists and chemists refused either to record temperatures or to calculate with them. (p. 80)
In the same vein, Tukey (1969) complained about correlation coefficients, quoting his teacher, Charles Winsor, as saying that they are a dangerous symptom. Unlike regression coefficients, correlations are subject to vary with selection as researchers change populations. He attributed researchers’ preference for correlations to their avoidance of thinking about the units with which they measure.
Given two perfectly meaningless variables, one is reminded of their meaninglessness when a regression coefficient is given, since one wonders how to interpret its value. . . . Being so uninterested in our variables that we do not care about their units can hardly be desirable, (p. 89)
The major problem with correlations applied to research data is that they can not provide useful information on causal strength because they change with the degree of variability of the variables they relate. Causality operates on single instances, not on populations whose members vary. The effect of A on B for me can hardly depend on whether I’m in a group that varies greatly in A or another that does not vary at all. It is not an accident
that causal modeling proceeds with regression and not correlation coefficients. In the same vein, I should note that standardized effect size measures, such as d and f developed in power analysis (Cohen, 1988) are, like correlations, also dependent on population variability of the dependent variable and are properly used only when that fact is kept in mind.
To work constructively with “raw” regression coefficients and confidence intervals, psychologists have to start respecting the units they work with, or develop measurement units they can respect enough so that researchers in a given field or subfield can agree to use them. In this way, there can be hope that researchers’ knowledge can be cumulative. There are few such in soft psychology. A beginning in this direction comes from meta-analysis, which, whatever else it may accomplish, has at least focused attention on effect sizes. But imagine how much more fruitful the typical meta-analysis would be if the research covered used the same measures for the constructs they studied. Researchers could get beyond using a mass of studies to demonstrate convincingly that “if you pull on it, it gets longer.”
Recall my example of the highly significant correlation between height and intelligence in 14,000 school children that translated into a regression coefficient that meant that to raise a child’s IQ from 100 to 130 would require giving enough growth hormone to raise his or her height by 14 feet (Cohen, 1990).
## What to Do?
First, don’t look for a magic alternative to NHST, some other objective mechanical ritual to replace it. It doesn’t exist.
Second, even before we, as psychologists, seek to generalize from our data, we must seek to understand and improve them. A major breakthrough to the approach to data, emphasizing “detective work” rather than “sanctification” was heralded by John Tukey in his article “The Future of Data Analysis” (1962) and detailed in his seminal book Exploratory Data Analysis (EDA; 1977). EDA seeks not to vault to generalization to the population but by simple, flexible, informal, and largely graphic techniques aims for understanding the set of data in hand. Important contributions to graphic data analysis have since been made by Tufte (1983, 1990), Cleveland (1993; Cleveland & McGill, 1988), and others. An excellent chapter-length treatment by Wainer and Thissen (1981), recently updated (Wainer & Thissen, 1993), provides many useful references, and statistical program packages provide the necessary software (see, for an example, Lee Wilkinson’s [1990] SYGRAPH, which is presently being updated).
Forty-two years ago, Frank Yates, a close colleague and friend of R. A. Fisher, wrote about Fisher’s “Statistical Methods for Research Workers” (1925/1951),
It has caused scientific research workers to pay undue attention to the results of the tests of significance they perform on their data… and too little to the estimates of the magnitude of the effects they are estimating (p. 32).
Thus, my third recommendation is that, as researchers, we routinely report effect sizes in the form of confidence limits. “Everyone knows” that confidence intervals contain all the information to be found in significance tests and much more. They not only reveal the status of the trivial nil hypothesis but also about the status of non-nil null hypotheses and thus help remind researchers about the possible operation of the crud factor. Yet they are rarely to be found in the literature. I suspect that the main reason they are not reported is that they are so embarrassingly large! But their sheer size should move us toward improving our measurement by seeking to reduce the unreliable and invalid part of the variance in our measures (as Student himself recommended almost a century ago). Also, their width provides us with the analogue of power analysis in significance testing—larger sample sizes reduce the size of confidence intervals as they increase the statistical power of NHST. A new program covers confidence intervals for mean differences, correlation, cross-tabulations (including odds ratios and relative risks), and survival analysis (Borenstein, Cohen, & Rothstein, in press). It also produces Birnbaum’s (1961) “confidence curves,” from which can be read all confidence intervals from 50% to 100%, thus obviating the necessity of choosing a specific confidence level for presentation.
As researchers, we have a considerable array of statistical techniques that can help us find our way to theories of some depth, but they must be used sensibly and be heavily informed by informed judgment. Even null hypothesis testing complete with power analysis can be useful if we abandon the rejection of point nil hypotheses and use instead “good-enough” range null hypotheses (e.g., “the effect size is no larger than 8 raw score units, or d = .5), as Serlin and Lapsley (1993) have described in detail. As our measurement and theories improve, we can begin to achieve the Popperian principle of representing our theories as null hypotheses and subjecting them to challenge, as Meehl (1967) argued many years ago. With more evolved psychological theories, we can also find use for likelihood ratios and Bayesian methods (Goodman, 1993;Greenwald, 1975). We quantitative behavioral scientists need not go out of business.
Induction has long been a problem in the philosophy of science. Meehl (1990a) attributed to the distinguished philosopher Morris Raphael Cohen the saying “All logic texts are divided into two parts. In the first part, on deductive logic, the fallacies are explained; in the second part, on inductive logic, they are committed” (p. 110). We appeal to inductive logic to move from the particular results in hand to a theoretically useful generalization. As I have noted, we have a body of statistical techniques, that, used intelligently, can facilitate our efforts. But given the problems of statistical induction, we must finally rely, as have the older sciences, on replication.
## References
Bakan, D. (1966). The test of significance in psychological research. Psychological Bulletin, 66, 1-29.
Berkson, J. (1938). Some difficulties of interpretation encountered in the application of the chi-square test. Journal of the American Statistical Association, 33, 526-542.
Birnbaum, A. (1961). Confidence curves: An omnibus technique for estimation and testing statistical hypotheses. Journal of the American Statistical Association, 56, 246-249.
Borenstein, M., Cohen, J., & Rothstein, H. (in press). Confidence intervals, effect size, and power [Computer program]. Hillsdale, NJ: Erlbaum.
Cleveland, W. S. (1993). Visualizing data. Summit, NJ: Hobart.
Cleveland, W. S., & McGill, M. E. (Eds.). (1988). Dynamic graphics for statistics. Belmont, CA: Wadsworth.
Cohen. J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 69, 145-153.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale. NJ: Erlbaum.
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304-1312.
Dawes, R. M. (1988). Rational choice in an uncertain world. San Diego, CA: Harcourt Brace Jovanovich.
Falk, R., & Greenbaum, C. W. (in press). Significance tests die hard: The amazing persistence of a probabilistic misconception. Theory and Psychology.
Fisher, R. A. (1951). Statistical methods for research workers. Edinburgh, Scotland: Oliver & Boyd. (Original work published 1925)
Gigerenzer, G. (1993). The superego, the ego, and the id in statistical reasoning. In G. Keren & C. Lewis (Ed.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 311-339). Hillsdale, NJ: Erlbaum.
Goodman, S. N. (1993). P values, hypothesis tests, and likelihood implications for epidemiology: Implications of a neglected historical debate. American Journal of Epidemiology, 137. 485-496.
Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82, 1-20.
Hogben, L. (1957). Statistical theory. London: Allen & Unwin.
Lykken, D. E. (1968). Statistical significance in psychological research. Psychological Bulletin, 70, 151-159.
Meehl, P. E. (1967). Theory testing in psychology and physics: A methodological paradox. Philosophy of Science, 34, 103-115.
Meehl. P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.
Meehl, P. E. (1986). What social scientists don’t understand. In D. W. Fiske & R. A. Shweder (Eds.), Metatheory in social science: Pluralisms and subjectivities (pp. 315-338). Chicago: University of Chicago Press.
Meehl, P. (1990a). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant it. Psychological Inquiry, 1, 108-141.
Meehl, P. E. (1990b). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66 (Monograph Suppl. 1-V66), 195-244.
Morrison. D. E., & Henkel, R. E. (Eds.). (1970). The significance test controversy. Chicago: Aldine.
Oakes, M. (1986). Statistical inference: A commentary for the social and behavioral sciences. New York: Wiley.
Pollard, P., & Richardson, J. T. E. (1987). On the probability of making Type I errors. Psychological Bulletin, 102, 159-163.
Popper, K. (1959). The logic of scientific discovery. London: Hutchinson.
Rosenthal, R. (1979). The “file drawer problem” and tolerance for null results. Psychological Bulletin, 86, 638-641.
Rosenthal, R. (1993). Cumulating evidence. In G. Keren & C. Lewis (Ed.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 519-559). Hillsdale, NJ: Erlbaum.
Rozeboom, W. W. (1960). The fallacy of the null hypothesis significance test. Psychological Bulletin, 57, 416-428.
Schmidt, F. L. (1992). What do data really mean? Research findings, meta-analysis, and cumulative knowledge in psychology. American Psychologist, 47, 1173-1181.
Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105, 309-316.
Serlin, R. A., & Lapsley, D. K. (1993). Rational appraisal of psychological research and the good-enough principle. In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 199-228). Hillsdale, NJ: Erlbaum.
Thompson, B. (1992). Two and one-half decades of leadership in measurement and evaluation. Journal of Counseling and Development, 70, 434-438.
Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press.
Tufte, E. R. (1990). Envisioning information. Cheshire, CT: Graphics Press.
Tukey, J. W. (1962). The future of data analysis. Annals of Mathematical Statistics, 33, 1-67.
Tukey, J. W. (1969). Analyzing data: Sanctification or detective work? American Psychologist, 24, 83-91.
Tukey, J. W. (1991). The philosophy of multiple comparisons. Statistical Science, 6, 100-116.
Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76, 105-110.
Wainer, H., & Thissen, D. (1981). Graphical data analysis. In M. R.
Rosenzweig & L. W. Porter (Eds.), Annual review oj psychology (pp. 191-241). Palo Alto, CA: Annual Reviews.
Wainer, H., & Thissen, D. (1993). Graphical data analysis. In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Statistical issues (pp. 391-457). Hillsdale, NJ: Erlbaum.
Wilkinson. L. (1990). SYGRAPH: The system for graphics. Evanston, IL: SYSTAT.
Yates, F. (1951). The influence of statistical methods for research workers on the development of the science of statistics. Journal of the American Statistical Association, 46, 19-34. | 2021-11-27 05:54:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7349835634231567, "perplexity": 2073.5016801442152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358118.13/warc/CC-MAIN-20211127043716-20211127073716-00318.warc.gz"} |
https://www.neuron.yale.edu/neuron/static/new_doc/modelspec/programmatic/topology/secspec.html | Section and Segment Selection¶↑
Since sections share property names eg. a length called L it is always necessary to specify which section is being discussed.
There are three methods of specifying which section a property refers to (with each being compact in some contexts and cumbersome in others). They are given below in order of precedence (highest first).
Dot notation¶↑
This takes precedence over the other methods and is described by the syntax sectionname.varname. Examples are
dendrite[2].L = dendrite[1].L + dendrite[0].L
axon.v = soma.v
print soma.gnabar
axon.nseg = 2*axon.nseg
This notation is necessary when one needs to refer to more than one section within a single statement.
Stack of sections¶↑
The syntax is
sectionname {stmt}
and means that the currently selected section during the execution of stmt is sectionname. This method is the most useful for programming since the user has explicit control over the scope of the section and can set several range variables. Notice that after the stmt is executed the currently selected section reverts to the name (if any) it had before sectionname was seen. The programmer is allowed to nest these statements to any level. Avoid the error:
soma L=10 diam=10
which sets soma.L, then pops the section stack and sets diam for whatever section is then on the stack.
It is important that control flow reach the end of stmt in order to automatically pop the section stack. Therefore, one cannot use the continue, break, or return statements in stmt.
There is no explicit notion of a section variable in NEURON but the same effect can be obtained with the SectionRef class. The use of push_section() for this purpose is not recommended except as a last resort.
Looping over sets of sections is done most often with the forall and forsec commands.
Default section¶↑
The syntax
access sectionname
defines a default section name to be the currently selected section when the first two methods are not in effect. There is often a conceptually privileged section which gets most of the use and it is useful to declare that as the default section. e.g.
access soma
With this, one can, with a minimum of typing, get values of voltage, etc at the command line level.
In general, this statement should only be used once to give default access to a privileged section. It’s bad programming practice to change the default access within anything other than an initialization procedure. The “sec { stmt }” form is almost always the right way to use the section stack.
access
Syntax:
access section
Description:
Makes section the default currently accessed section. More precisely, it replaces the top of the section stack with the indicated section and so will be the permanent default section only if the section stack is empty or has only one section in it. This is lesser precedence than section stmt which is lesser precedence than section.var
Note:
The access statement should not be used within a procedure or function. In fact the best style is to execute it only once in a program to refer to a priviledged section such as “soma”. It can be very confusing when a procedure has the side effect of permanently changing the default section.
Example:
create a, b, c, d
access a
print secname()
b {
print secname()
access c // not recommended. The "go_to" of sections.
print secname()
d {print secname()}
print secname()
} // because the stack has more than one section, c is popped off
print secname() // and the second "access" was not permanent!
forall
Syntax:
forall stmt
Description:
Loops over all sections, successively making each section the currently accessed section.
Within an object, forall refers to all the sections declared in the object. This is generally the right thing to do when a template creates sections but is inconvenient when a template is constructed which needs to compute using sections external to it. In this case, one can pass a collection of sections into a template function as a SectionList object argument.
The forall is relatively slow, especially when used in conjunction with issection() and ismembrane() selectors. If you are often iterating over the same sets it is much faster to keep the sets in SectionList objects and use the much faster forsec command.
The iteration sequence order is undefined but will remain the same for a given sequence of create statements.
Example:
create soma, axon, dend[3]
forall {
print secname()
}
prints the names of all the sections which have been created.
soma
axon
dend[0]
dend[1]
dend[2]
ifsec
Syntax:
ifsec string stmt
ifsec sectionlist stmt
Description:
ifsec string stmt
Executes stmt if string is contained in the name of the currently accessed section. equivalent to if(issection(string)) stmt Note that the regular expression semantics is not the same as that used by issection. To get an exact match use ifsec ^string\$
ifsec sectionlist stmt
Executes stmt if the currently accessed section is in the sectionlist.
forsec
Syntax:
forsec string stmt
forsec sectionlist stmt
Description:
forsec string stmt
equivalent to forall ifsec string stmt but faster. Note that forsec string is equivalent to forall if (issection(string)) stmt
forsec sectionlist
equivalent to forall ifsec sectionlist stmt but very fast.
These provide a very efficient iteration over the list of sections.
Example:
create soma, dend[3], axon
forsec "a" print secname()
create soma, dend[3], axon
objref sl
sl = new SectionList()
for (i = 2; i >= 0; i = i - 1) dend[i] sl.append()
forsec sl print secname()
pop_section()
Syntax:
pop_section()
Description:
Take the currently accessed section off the section stack. This can only be used after a function which pushes a section on the section stack such as point_process.getloc().
Example:
create soma[5]
objref stim[5]
for i=0,4 soma[i] stim[i] = new IClamp(i/4)
for i=0,4 {
x = stim[i].get_loc()
printf("location of %s is %s(%g)\n", stim[i], secname(), x)
pop_section()
}
push_section()
Syntax:
push_section(number)
push_section(section_name)
Description:
This function, along with pop_section() should only be used as a last resort. It will place a specified section on the top of the section stack, becoming the current section to which all operations apply. It is probably always better to use SectionRef or SectionList .
push_section(number)
Push the section identified by the number returned by this_section, etc. which you desire to be the currently accessed section. Any section pushed must have a corresponding pop_section() later or else the section stack will be corrupted. The number is not guaranteed to be the same across separate invocations of NEURON.
push_section(section_name)
Push the section identified by the name obtained from sectionname(strdef). Note: at this time the implementation iterates over all sections to find the proper one; so do not use in loops. | 2020-03-28 21:10:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35572460293769836, "perplexity": 2380.0491715490034}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493120.15/warc/CC-MAIN-20200328194743-20200328224743-00092.warc.gz"} |
https://support.bioconductor.org/p/31146/ | Question: (no subject)
0
9.3 years ago by
hi every one I have just started working with data coming from RNA-seq.I have run an edgeR program on the the data in linux,for differential expression ,but in one of the steps I got the error like this: cls <- gsub("[0-9]", "", colnames(dataList$data)) > cls character(0) and after this I will get error in next steps.I want to know if any one has got an error like this?if yes how to solve it? Med venlig hilsen Leila Farajzadeh Ph.d. studerende AARHUS UNIVERSITET Det Jordbrugsvidenskabelige Fakultet Inst. for Genetik og Bioteknologi Blichers Allé 20, Postboks 50 8830 Tjele Telefon: 8999 1900 Direkte: 8999 1353 E-mail: Leila.Farajzadeh@agrsci.dk<mailto:leila.farajzadeh@agrsci.dk> Web: www.agrsci.dk<http: www.agrsci.dk=""/> ________________________________ DJF udbyder nye uddannelser<http: www.agrsci.dk="" ny_navigation="" uddannelse=""/>. Tilmeld dig DJF's nyhedsbrev<http: www.agrsci.dk="" user="" register?lan="dan-DK">. Denne email kan indeholde fortrolig information. Enhver brug eller offentliggørelse af denne email uden skriftlig tilladelse fra DJF er ikke tilladt. Hvis De ikke er den tiltænkte adressat, bedes De venligst straks underrette DJF samt slette emailen. [[alternative HTML version deleted]] edger • 548 views ADD COMMENTlink modified 9.3 years ago by John Zhang2.9k • written 9.3 years ago by Leila Farajzadeh10 Answer: (no subject) 0 9.3 years ago by Heidi Dvinge2.0k Heidi Dvinge2.0k wrote: Hi Leila, this isn't an error as such - R is doing exactly what you're asking it to do. With gsub, you're currently searching for colnames containing a number (the first "[0-9]" argument), and you're replacing those with nothing (the second "" argument). Try having a look at the help pages for gsub, or send us for example the output of colnames(dataList$data) and tell us how you want these column names to be changed. You'll probably want to include part of your search pattern in (), and use that for replacement with \\1. Cheers \Heidi > hi every one > I have just started working with data coming from RNA-seq.I have run an > edgeR program on the the data in linux,for differential expression ,but in > one of the steps I got the error like this: > > cls <- gsub("[0-9]", "", colnames(dataList\$data)) >> cls > character(0) > > and after this I will get error in next steps.I want to know if any one > has got an error like this?if yes how to solve it? > > > Med venlig hilsen > > Leila Farajzadeh > Ph.d. studerende > > > > AARHUS UNIVERSITET > Det Jordbrugsvidenskabelige Fakultet > Inst. for Genetik og Bioteknologi > Blichers All? 20, Postboks 50 > 8830 Tjele > > Telefon: 8999 1900 > Direkte: 8999 1353 > E-mail: Leila.Farajzadeh at agrsci.dk<mailto:leila.farajzadeh at="" agrsci.dk=""> > Web: www.agrsci.dk<http: www.agrsci.dk=""/> > > ________________________________ > DJF udbyder nye > uddannelser<http: www.agrsci.dk="" ny_navigation="" uddannelse=""/>. > > Tilmeld dig DJF's > nyhedsbrev<http: www.agrsci.dk="" user="" register?lan="dan-DK">. > > Denne email kan indeholde fortrolig information. Enhver brug eller > offentligg?relse af denne email uden skriftlig tilladelse fra DJF er ikke > tilladt. Hvis De ikke er den tilt?nkte adressat, bedes De venligst straks > underrette DJF samt slette emailen. > > > [[alternative HTML version deleted]] > > _______________________________________________ > Bioconductor mailing list > Bioconductor at stat.math.ethz.ch > https://stat.ethz.ch/mailman/listinfo/bioconductor > Search the archives: > http://news.gmane.org/gmane.science.biology.informatics.conductor
0
9.3 years ago by
Marc Noguera100
Marc Noguera100 wrote:
Dear list, I am trying to find function enrichment for a gene list that we have obtained through chip-chip experiments. The chip that we have used is the Agilent human promoter array, which consists of two differents slides. I cannot find any package corresponding to the annotation of this chip, which I think I need to obtain "gene universe" information and, from there, obtain gene set enrichments using GO. Does an annotation package for that chip exists or should I build the annotation package myself, and if so, could you provide pointers to how to build it? Thanks in advance Marc -- ----------------------------------------------------- Marc Noguera i Julian, PhD Genomics unit / Bioinformatics Institut de Medicina Preventiva i Personalitzada del C?ncer (IMPPC) B-10 Office Carretera de Can Ruti Cam? de les Escoles s/n 08916 Badalona, Barcelona
On Mon, Jan 11, 2010 at 7:15 AM, Marc Noguera <mnoguera at="" imppc.org=""> wrote: > Dear list, > I am trying to find function enrichment for a gene list that we have > obtained through chip-chip experiments. The chip that we have used is > the Agilent human promoter array, which consists of two differents slides. > I cannot find any package corresponding to the annotation of this chip, > which I think I need to obtain "gene universe" information and, from > there, obtain gene set enrichments using GO. Does an annotation package > for that chip exists or should I build the annotation package myself, > and if so, could you provide pointers to how to build it? Hi, Marc. The chip annotation packages will not be helpful for this situation, as they are gene-based. Your chips are not gene-based, so an annotation package built for them, even if you could get it built, probably wouldn't make a lot of sense. Instead, you will probably need to generate a gene list of all genes covered by the array and then a gene list of the genes showing promoter signal. With those two lists, you should be able to define your universe and use the org.Hs.eg.db package for GO enrichment. Sean
Thanks for the help Sean, It has been very useful. I have managed to create the geneUniverse from chip data and run the hyperGTests on my selected genesets looking for enrichment. Now I have hyperGResults instances as a result, and apart from summary information on MF, BP or CC with the corresponding p-values I would like to know how could I run some kind of FDR corrections on these results. Also, what are the available tools for visualizing such data. I am checking topGO now, but maybe there are some other tools. Thanks in advance Marc Sean Davis wrote: > On Mon, Jan 11, 2010 at 7:15 AM, Marc Noguera <mnoguera at="" imppc.org=""> wrote: > >> Dear list, >> I am trying to find function enrichment for a gene list that we have >> obtained through chip-chip experiments. The chip that we have used is >> the Agilent human promoter array, which consists of two differents slides. >> I cannot find any package corresponding to the annotation of this chip, >> which I think I need to obtain "gene universe" information and, from >> there, obtain gene set enrichments using GO. Does an annotation package >> for that chip exists or should I build the annotation package myself, >> and if so, could you provide pointers to how to build it? >> > > Hi, Marc. The chip annotation packages will not be helpful for this > situation, as they are gene-based. Your chips are not gene-based, so > an annotation package built for them, even if you could get it built, > probably wouldn't make a lot of sense. Instead, you will probably > need to generate a gene list of all genes covered by the array and > then a gene list of the genes showing promoter signal. With those two > lists, you should be able to define your universe and use the > org.Hs.eg.db package for GO enrichment. > > Sean > -- ----------------------------------------------------- Marc Noguera i Julian, PhD Genomics unit / Bioinformatics Institut de Medicina Preventiva i Personalitzada del C?ncer (IMPPC) B-10 Office Carretera de Can Ruti Cam? de les Escoles s/n 08916 Badalona, Barcelona
On Tue, Jan 12, 2010 at 4:26 AM, Marc Noguera <mnoguera at="" imppc.org=""> wrote: > Thanks for the help Sean, It has been very useful. > I have managed to create the geneUniverse from chip data and run the > hyperGTests on my selected genesets looking for enrichment. > Now I have hyperGResults instances as a result, and apart from summary > information on MF, BP or CC with the corresponding p-values I would like > to know how could I run some kind of FDR corrections on these results. The correct statistical treatment of nested categories is probably still a topic of research. However, our own Seth Falcon and Robert Gentleman (among others) have looked into this: http://www.ncbi.nlm.nih.gov/pubmed/17098774 > Also, what are the available tools for visualizing such data. I am > checking topGO now, but maybe there are some other tools. I'm not sure what you mean by "visualizing", but take a look at the Rgraphviz package. Sean > Marc > Sean Davis wrote: >> On Mon, Jan 11, 2010 at 7:15 AM, Marc Noguera <mnoguera at="" imppc.org=""> wrote: >> >>> Dear list, >>> I am trying to find function enrichment for a gene list that we have >>> obtained through chip-chip experiments. The chip that we have used is >>> the Agilent human promoter array, which consists of two differents slides. >>> I cannot find any package corresponding to the annotation of this chip, >>> which I think I need to obtain "gene universe" information and, from >>> there, obtain gene set enrichments using GO. Does an annotation package >>> for that chip exists or should I build the annotation package myself, >>> and if so, could you provide pointers to how to build it? >>> >> >> Hi, Marc. ?The chip annotation packages will not be helpful for this >> situation, as they are gene-based. ?Your chips are not gene-based, so >> an annotation package built for them, even if you could get it built, >> probably wouldn't make a lot of sense. ?Instead, you will probably >> need to generate a gene list of all genes covered by the array and >> then a gene list of the genes showing promoter signal. ?With those two >> lists, you should be able to define your universe and use the >> org.Hs.eg.db package for GO enrichment. >> >> Sean >> > > > -- > ----------------------------------------------------- > Marc Noguera i Julian, PhD > Genomics unit / Bioinformatics > Institut de Medicina Preventiva i Personalitzada > del C?ncer (IMPPC) > B-10 Office > Carretera de Can Ruti > Cam? de les Escoles s/n > 08916 Badalona, Barcelona > ------------------------------------------------------- > >
0
9.3 years ago by
Luz G Alonso20
Luz G Alonso20 wrote:
Dear J Zhang, I'm a PhD student. I was looking for a package that provides functions to identify minimum common genomic regions of interests based on segmented copy number data from multiple samples. I've found yours could be very useful for me. Your Manual shows how to generate the segment data based on raw data using DNAcopy package, and then, use these segment data (as a DNAcopy class object) as the input to the cghMCR function. My problem is I've generated the segment list using other method. This segment list has the same parameters than the segment list you use as example (called "segData") but it is a data frame object, not a DNAcopy object like "segData". How could I apply cghMCR and MCR functions using my R data frame? There is some method to get a DNAcopy class object from my segment list? Thanks Luz G Alonso lgarcia at cipf.es PhD Student Bioinformatics and Genomics Department Centro de Investigaciones Principe Felipe Avda. Autopista Saler 16, 46012 Valencia, Spain Phone: +34 96 328 96 80 Fax: +34 96 328 97 01 http://bioinfo.cipf.es/
0
9.3 years ago by
John Zhang2.9k
John Zhang2.9k wrote:
Luz, There is a better way of finding commonly altered regions/genes across samples using the CNTools and cghMCR packages. Please read the vignettes of the two packages in the release (2.5) or devel track. The tools do take the segment list of CBS as the input. JZ >X-Original-To: jzhang at jimmy.harvard.edu >Delivered-To: jzhang at jimmy.harvard.edu >X-IronPort-Anti-Spam-Filtered: true >X-IronPort-Anti-Spam-Result: AogAAHbiTkuBhJEPkWdsb2JhbACbdwEBAQEJCwoHEwW8awKCKIIGBIkw >X-IronPort-AV: E=Sophos;i="4.49,275,1262581200"; d="scan'208";a="144986670" >X-SBRS: 4.5 >X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on hypatia.math.ethz.ch >X-Spam-Level: >X-Spam-Status: No, score=-4.8 required=5.0 tests=BAYES_00, MISSING_SUBJECT, RCVD_IN_DNSWL_MED autolearn=ham version=3.2.5 >X-IronPort-Anti-Spam-Filtered: true >X-IronPort-Anti-Spam-Result: Ah0CAPIsTEusEGcz/2dsb2JhbAAIg1eWY64JjV0CgSl/gTBWBIkq >X-IronPort-AV: E=Sophos;i="4.49,262,1262559600"; d="scan'208";a="97510715" >From: Luz G Alonso <lgarcia at="" cipf.es=""> >To: bioconductor at stat.math.ethz.ch >Date: Tue, 12 Jan 2010 17:10:56 +0100 >User-Agent: KMail/1.9.10 >MIME-Version: 1.0 >Content-Disposition: inline >X-Tag-Only: YES >X-Filter-Node: phil1.ethz.ch >X-USF-Spam-Level: -- >X-USF-Spam-Status: hits=-2.3 tests=BAYES_00,MISSING_SUBJECT >X-USF-Spam-Flag: NO >X-Virus-Scanned: by amavisd-new at stat.math.ethz.ch >X-Mailman-Approved-At: Thu, 14 Jan 2010 18:25:29 +0100 >Subject: [BioC] (no subject) >X-BeenThere: bioconductor at stat.math.ethz.ch >X-Mailman-Version: 2.1.13 >List-Id: The Bioconductor Project Mailing List <bioconductor.stat.math.ethz.ch> >List-Unsubscribe: <https: stat.ethz.ch="" mailman="" options="" bioconductor="">, <mailto:bioconductor-request at="" stat.math.ethz.ch?subject="unsubscribe"> >List-Archive: <https: stat.ethz.ch="" pipermail="" bioconductor=""> >List-Post: <mailto:bioconductor at="" stat.math.ethz.ch=""> >List-Help: <mailto:bioconductor-request at="" stat.math.ethz.ch?subject="help"> >List-Subscribe: <https: stat.ethz.ch="" mailman="" listinfo="" bioconductor="">, <mailto:bioconductor-request at="" stat.math.ethz.ch?subject="subscribe"> >Content-Transfer-Encoding: 7bit > > >Dear J Zhang, >I'm a PhD student. >I was looking for a package that provides functions to identify minimum common >genomic regions of interests based on segmented copy number data from >multiple samples. I've found yours could be very useful for me. >Your Manual shows how to generate the segment data based on raw data using >DNAcopy package, and then, use these segment data (as a DNAcopy class object) >as the input to the cghMCR function. My problem is I've generated the segment >list using other method. This segment list has the same parameters than the >segment list you use as example (called "segData") but it is a data frame >object, not a DNAcopy object like "segData". >How could I apply cghMCR and MCR functions using my R data frame? There is >some method to get a DNAcopy class object from my segment list? > >Thanks > > > >Luz G Alonso lgarcia at cipf.es >PhD Student >Bioinformatics and Genomics Department >Centro de Investigaciones Principe Felipe >Avda. Autopista Saler 16, >46012 Valencia, Spain >Phone: +34 96 328 96 80 >Fax: +34 96 328 97 01 >http://bioinfo.cipf.es/ > >_______________________________________________ >Bioconductor mailing list >Bioconductor at stat.math.ethz.ch >https://stat.ethz.ch/mailman/listinfo/bioconductor >Search the archives: http://news.gmane.org/gmane.science.biology.informatics.conductor Jianhua Zhang Department of Medical Oncology Dana-Farber Cancer Institute 44 Binney Street Boston, MA 02115-6084 | 2019-04-24 14:19:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38236814737319946, "perplexity": 8747.072746006908}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578643556.86/warc/CC-MAIN-20190424134457-20190424160457-00164.warc.gz"} |
https://math.eretrandre.org/tetrationforum/showthread.php?pid=9293 | • 0 Vote(s) - 0 Average
• 1
• 2
• 3
• 4
• 5
On to C^\infty--and attempts at C^\infty hyper-operations JmsNxn Long Time Fellow Posts: 461 Threads: 85 Joined: Dec 2010 02/08/2021, 12:12 AM (This post was last modified: 02/08/2021, 03:55 AM by JmsNxn.) As Sheldon has thoroughly convinced me of non-holomorphy of my tetration. I thought I'd provide the proof I have that it is $C^{\infty}$ on the line $(-2,\infty)$. I sat on this proof and didn't develop it much because I was too fixated on the holomorphy part. But, I thought it'd be nice to have a proof of $C^{\infty}$. Now, the idea is to apply Banach's fixed point theorem, but it's a bit more symbol heavy now. We will go by induction on the degree of the derivative. So let's assume that, $ \tau^{(k)}(t) : (-2,\infty) \to \mathbb{R}\,\,\text{for}\,\,k \sum_{m=1}^\infty ||\tau^{(k)}_{m+1}(t) - \tau^{(k)}_{m}||_{a \le t \le b} <\infty\\$ Where, $ \tau^{(k)}_0(t) = 0\\ \tau^{(k)}_m(t) = \frac{d^k}{dt^k} \log(1+\frac{\tau_{m-1}(t+1)}{\phi(t+1)})\\$ And $||...||_{a\le t \le b}$ is the sup-norm across some interval $[a,b] \subset (-2,\infty)$. As a forewarning, this is going to be very messy... Now to begin we can bound, $ ||\phi^{(j)}(t)||_{a\le t \le b} \le M\,\,\text{for}\,\,j\le k\\$ And that next, $ \phi^{(k)}(t+1) + \tau_m^{(k)}(t+1) = \frac{d^k}{dt^k} e^{\phi(t) +\tau_{m+1}(t)}\\ = \sum_{j=0}^k \binom{k}{j} (\frac{d^{k-j}}{dt^{k-j}} \phi(t+1)e^{-t})(\frac{d^j}{dt^j} e^{\tau_{m+1}(t)})\\ = \sum_{j=0}^{k-1} \binom{k}{j} (\frac{d^{k-j}}{dt^{k-j}} \phi(t+1)e^{-t})(\frac{d^j}{dt^j} e^{\tau_{m+1}(t)}) + \phi(t+1)e^{-t}(\frac{d^k}{dt^k} e^{\tau_{m+1}(t)})$ Now, $ \frac{d^k}{dt^k} e^{\tau_{m+1}(t)} = e^{\tau_{m+1}(t)}(\tau_{m+1}^{(k)}(t) + \sum_{j=0}^{k-1} a_j \tau_{m+1}^{(j)}(t))\\$ So, we ask you to put on your thinking cap, and excuse me if I write, $ \tau_{m}^{(k)}(t+1) = A_m + C_m\tau_{m+1}^{(k)}(t)\\$ And by the induction hypothesis, $ \sum_{j=1}^\infty ||A_{j+1} - A_j||_{a\le t \le b} < \infty\\ \sum_{j=1}^\infty ||C_{j+1} - C_j||_{a\le t \le b} < \infty\\$ Which is because these terms are made up of finite sums and products of $\tau_m^{(j)}$ and these are said to be summable. Now the proof is a walk in the park. $ \tau_{m}^{(k)}(t) = \frac{\tau_{m-1}^{(k)}(t+1) - A_{m-1}}{C_{m-1}}... = \sum_{j=0}^{m-1} (\prod_{k=0}^{m-1-j} C_{m-1-k}^{-1}) A_j\\$ Where, we've continued the iteration and set $\tau_0 = 0$ and $\tau_1 = 0$ for $k>1$, and $\tau_1 = 1$ for $k=1$ (but we're tossing this away because we know it's differentiable). Therefore, $ \sum_{m=1}^\infty ||\tau_{m+1}^{(k)}(t) - \tau_m^{(k)}(t)||_{a \le t \le b} < \infty\\$ Of which, I've played a little fast and loose, but filling in the blanks would just require too much tex code. EDIT: I'll do it properly as I correct my paper and lower my expectations of the result. *********************** As to the second part of this post--now that we have $C^\infty$ out of the way, we ask if we can continue this iteration and get pentation. Now, $\text{slog}$ will certainly be $C^{\infty}$ and $\frac{d}{dt}e \uparrow \uparrow t > 0$ so it's a well defined bijection of $\mathbb{R} \to (-2,\infty)$. So, first up to bat is to get another phi function, $ \Phi(t) = \Omega_{j=1}^\infty e^{t-j} e \uparrow \uparrow x \bullet x = e^{t-1} e \uparrow \uparrow (e^{t-2}e \uparrow \uparrow (e^{t-3} e \uparrow \uparrow ...)) $ This will be $C^\infty$ (it'll be a bit trickier to prove because we aren't using analytic functions, but just bear with me). And it satisfies the equation, $ \Phi(t+1) = e^t (e \uparrow \uparrow \Phi(t))\\$ By now, I think you might know where i'm going with this. $ e \uparrow^3 t = \lim_{n\to\infty} \text{slog} \text{slog} \cdots (n\,\text{times})\cdots\text{slog} \Phi(t+\omega_1 + n)\\$ And now I'm going to focus on showing this converges... Wish me luck; after being trampled by this holomorphy I thought I'd stick to where things are nice--no nasty dips to zero and the like... JmsNxn Long Time Fellow Posts: 461 Threads: 85 Joined: Dec 2010 02/10/2021, 02:30 AM (This post was last modified: 02/10/2021, 03:38 AM by JmsNxn.) So I'm having trouble coming up with a general proof to give us $C^\infty$ pentation (or any hyper-operators), but certainly getting a continuous one is not that hard. We can start by taking, $ \frac{d}{dt} \text{slog}(t) = \frac{1}{(\frac{d}{dt} e \uparrow \uparrow t) \bullet \text{slog}(t)} \le \frac{1}{t}\\$ So that, $ |\text{slog}(a) - \text{slog}(b)| \le \frac{1}{\min(a,b)}|a-b|\\$ Then the sequence of convergents, $ \tau_{m+1}(t) = \text{slog}(\Phi(t+1) + \tau_m(t+1)) - \Phi(t)\\$ Satisfy, $ |\tau_{m+1}(t) - \tau_m(t)| \le |\text{slog}(\Phi(t+1) + \tau_m(t+1)) - \text{slog}(\Phi(t+1) + \tau_{m-1}(t+1))|\\ \le\frac{1}{\Phi(t+1)}|\tau_{m}(t+1) - \tau_{m-1}(t+1)|\\ \le \frac{|\tau_1(t+m) - \tau_0(t+m)|}{\prod_{j=1}^m \Phi(t+j)}\\ \le \frac{|t+m|}{\prod_{j=1}^m \Phi(t+j)}\\$ Because $\tau_0 = 0$ and, $ \tau_1(t) = \text{slog}(\Phi(t+1)) - \Phi(t)\\ = \text{slog}(e^t e\uparrow \uparrow \Phi(t)) - \Phi(t)\\ \le \text{slog}(e\uparrow \uparrow \Phi(t) + t) - \Phi(t)\\ \le t\\$ This certainly converges uniformly. So we have a continuous function, $ e \uparrow \uparrow \uparrow t = \Phi(t+\omega) + \tau(t+\omega)\\$ We can continue this for arbitrary order hyper-operators. The trouble comes from proving that this solution is $C^\infty$. I haven't had the AHA moment yet to prove this. The way I proved tetration is $C^\infty$ is not very helpful here because we used properties of the logarithm. Anyway, $ e \uparrow^n t : \mathbb{R}^+\to \mathbb{R}^+\\ e \uparrow^n t \,\,\text{is continuous}\\$ God, I suck at real analysis, getting $C^\infty$ might take a while... sheldonison Long Time Fellow Posts: 664 Threads: 23 Joined: Oct 2008 02/10/2021, 04:09 AM (This post was last modified: 02/10/2021, 04:27 AM by sheldonison.) (02/08/2021, 12:12 AM)JmsNxn Wrote: As Sheldon has thoroughly convinced me of non-holomorphy of my tetration. I thought I'd provide the proof I have that it is $C^{\infty}$ on the line $(-2,\infty)$. I sat on this proof and didn't develop it much because I was too fixated on the holomorphy part. But, I thought it'd be nice to have a proof of $C^{\infty}$. Now, the idea is to apply Banach's fixed point theorem, but it's a bit more symbol heavy now. We will go by induction on the degree of the derivative. So let's assume that, $\tau^{(k)}(t)|(-2,\infty)\to\mathbb{R}\,\,\text{for}\,\,k edit the : didn't work replaced with | $\sum_{m=1}^\infty||\tau^{(k)}_{m+1}(t)-\tau^{(k)}_{m}||_{a \le t \le b}<\infty\\$ Where, $\tau^{(k)}_0(t)=0$ $\tau^{(k)}_m(t) = \frac{d^k}{dt^k} \log(1+\frac{\tau_{m-1}(t+1)}{\phi(t+1)})$ And $||...||_{a\le~t\le~b}$ is the sup-norm across some interval $[a,b]\subset (-2,\infty)$. As a forewarning, this is going to be very messy... Now to begin we can bound, $||\phi^{(j)}(t)||_{a\le~t\le b}\le~M\,\,\text{for}\,\,j\le k\\$ And that next, $\phi^{(k)}(t+1)+\tau_m^{(k)}(t+1)=\frac{d^k}{dt^k}e^{\phi(t)+\tau_{m+1}(t)}$ $=\sum_{j=0}^k\binom{k}{j} (\frac{d^{k-j}}{dt^{k-j}} \phi(t+1)e^{-t})(\frac{d^j}{dt^j} e^{\tau_{m+1}(t)})$ $=\sum_{j=0}^{k-1}\binom{k}{j}(\frac{d^{k-j}}{dt^{k-j}}\phi(t+1)e^{-t})(\frac{d^j}{dt^j}e^{\tau_{m+1}(t)})+\phi(t+1)e^{-t}(\frac{d^k}{dt^k}e^{\tau_{m+1}(t)})$ Now, $\frac{d^k}{dt^k} e^{\tau_{m+1}(t)}=e^{\tau_{m+1}(t)}(\tau_{m+1}^{(k)}(t)+\sum_{j=0}^{k-1} a_j \tau_{m+1}^{(j)}(t))$ So, we ask you to put on your thinking cap, and excuse me if I write, $\tau_{m}^{(k)}(t+1)=A_m+C_m\tau_{m+1}^{(k)}(t)$ And by the induction hypothesis, $\sum_{j=1}^\infty ||A_{j+1}-A_j||_{a\le t \le b}<\infty$ $\sum_{j=1}^\infty ||C_{j+1}-C_j||_{a\le t \le b}<\infty$ Which is because these terms are made up of finite sums and products of $\tau_m^{(j)}$ and these are said to be summable. Now the proof is a walk in the park. $\tau_{m}^{(k)}(t)=\frac{\tau_{m-1}^{(k)}(t+1)-A_{m-1}}{C_{m-1}}...=\sum_{j=0}^{m-1}(\prod_{k=0}^{m-1-j}C_{m-1-k}^{-1})A_j$ Where, we've continued the iteration and set $\tau_0 = 0$ and $\tau_1 = 0$ for $k>1$, and $\tau_1 = 1$ for $k=1$ (but we're tossing this away because we know it's differentiable). Therefore, $\sum_{m=1}^\infty||\tau_{m+1}^{(k)}(t)-\tau_m^{(k)}(t)||_{a\le~t\le b}<\infty$ Of which, I've played a little fast and loose, but filling in the blanks would just require too much tex code. EDIT: I'll do it properly as I correct my paper and lower my expectations of the result. *********************** As to the second part of this post--now that we have $C^\infty$ out of the way, we ask if we can continue this iteration and get pentation. Now, $\text{slog}$ will certainly be $C^{\infty}$ and $\frac{d}{dt}e \uparrow \uparrow t > 0$ so it's a well defined bijection of $\mathbb{R} \to (-2,\infty)$. So, first up to bat is to get another phi function... Hey James, I reposted your post, given the limitation that the forums Tex support has bugs with spaces. I don't know if it was always this bad I might write a oneline perl script to remove spaces or substiture a ~ and replace new lines with a new tex /tex pair and remove \\ at the end of a line. Of course if I do all that, then I could also have perl trivially replace $...$ combos with a tex /tex pair too and then I'd automatically convert Latex to math.eretandre Tex .... Anyhow, I'm reading your equations now; are you satisified with proving $C^\infty$ for the Tetration case? I've seen Walker's proof of $C^\infty$, but I've always been curious about how to do it for other functions. - Sheldon JmsNxn Long Time Fellow Posts: 461 Threads: 85 Joined: Dec 2010 02/10/2021, 08:10 AM (This post was last modified: 02/10/2021, 08:18 AM by JmsNxn.) Hey, Sheldon I had written a page long reply to what you just said; but somehow my computer deleted it and it didn't post (I don't know what happened there). I'm a little upset because I had written a fair amount of math, but in the meantime I'll give you the rundown. I am absolutely certain I can prove that this tetration is $C^\infty$ (I went on to relay how this is really no different than the construction of $\phi$ using infinite compositions). However, I can't quite prove it for hyper-operations; and I'm trying to abstract how I can do it for tetration, so that it works for pentation and the sort. So I don't have an essay-like proof for $C^{\infty}$ nature of $\text{tet}_\phi$. But trust me; it's not much more than the construction of $\phi$. Just a whole load of infinite composition stuff. And I'm busy on generalizing the proof so it works for pentation, hexation, etc... Give me a couple weeks. I'm going to pull more all nighters than I should, and I'll have this. Like how every presidential secretary should respond--give me a week and I'll circle back to your question. Regards, James We'll circle back. EDIT: I also had written about how a perl script won't fix your problems. This forum needs to run on MathJax. Which is the javascript version of LaTeX. For god's sakes Henryk. Lol. JmsNxn Long Time Fellow Posts: 461 Threads: 85 Joined: Dec 2010 02/16/2021, 08:40 AM So I posted a proof of $C^\infty$ before I started working today--I noticed a small expansion error, which I hastily fixed--so I deleted the post. I figured I'd do a couple more run throughs of the result before I posted. It was a dumb mistake towards the end where I got ahead of myself, but everything still works; there was one download on the file, so whoever downloaded it I apologize in advance. (Unless the one download was my own download, not sure how it works here.) I am in the process of rewriting my entire paper to focus on $C^\infty$ hyper-operations; whereby it's a long proof by induction. But the initial step is to prove that this tetration is $C^\infty$. Now I can most definitely show this tetration is $C^\infty$; the trouble I'm having is making the proof as general as possible; so that we can create a proof by induction showing $e \uparrow^k t$ is $C^\infty$. The proof I'm posting now is intended to be abstract because I intend to use it as a template for the inductive process. Now, the proof is a little rough around the edges. I haven't fleshed out everything, but honestly it'd only take more words, not more work. Everything definitely works. This isn't much more than what my initial post in this thread was; but it's far better explained. I'm posting this theorem here, without the full paper; essentially to gauge how well explained it is. If anyone has any questions, or any comments or hangups, I'm beyond happy to answer them (and they'll help because they'll teach me how to better write the paper). Despite my errors at assuming this construction would be analytic; I promise no mistakes were made in this circumstance. Somethings may be unclear though, and if they are, please tell me so I can correct myself. Attached Files just_c_infty-3.pdf (Size: 154.48 KB / Downloads: 61) sheldonison Long Time Fellow Posts: 664 Threads: 23 Joined: Oct 2008 02/21/2021, 01:38 AM (02/16/2021, 08:40 AM)JmsNxn Wrote: So I posted a proof of $C^\infty$ before I started working today... I am in the process of rewriting my entire paper to focus on $C^\infty$ hyper-operations; whereby it's a long proof by induction. But the initial step is to prove that this tetration is $C^\infty$. Now I can most definitely show this tetration is $C^\infty$; the trouble I'm having is making the proof as general as possible; so that we can create a proof by induction showing $e \uparrow^k t$ is $C^\infty$. Hi James, I like your paper. I would suggest generating an infinite sequence of entire $\phi_n$ functions, perhaps defined as follows; this is slightly modified from your approach where this $\phi_2(s)$ = JmsNxn phi(s+1) we could start with $\phi_1(s)=\exp(s)$ $\phi_2(s)=\exp(\phi_2(s-1)+s);\;$ this $\phi_2(s)$ asymptotically approaches exp(s) as $\Re(s)$ gets arbitrarily negative, $\phi_n(s)=\phi_{n-1}(\phi_{n}(s-1)+s);\;$ $\phi_n(s)$ also asymptotically approaches exp(s) as $\Re(s)$ gets arbitrarily negative James has proven that $\phi_2(s)$ is entire, and I think each of these phi functions is also entire, and each $\phi_n(s)$ would probably lead to an $e\uparrow^n(s)\;$ function which is also $C^\infty$ only defined at the real axis; details tbd... - Sheldon tommy1729 Ultimate Fellow Posts: 1,438 Threads: 349 Joined: Feb 2009 02/27/2021, 12:08 AM (This post was last modified: 02/27/2021, 12:09 AM by tommy1729.) (02/21/2021, 01:38 AM)sheldonison Wrote: (02/16/2021, 08:40 AM)JmsNxn Wrote: So I posted a proof of $C^\infty$ before I started working today... I am in the process of rewriting my entire paper to focus on $C^\infty$ hyper-operations; whereby it's a long proof by induction. But the initial step is to prove that this tetration is $C^\infty$. Now I can most definitely show this tetration is $C^\infty$; the trouble I'm having is making the proof as general as possible; so that we can create a proof by induction showing $e \uparrow^k t$ is $C^\infty$. Hi James, I like your paper. I would suggest generating an infinite sequence of entire $\phi_n$ functions, perhaps defined as follows; this is slightly modified from your approach where this $\phi_2(s)$ = JmsNxn phi(s+1) we could start with $\phi_1(s)=\exp(s)$ $\phi_2(s)=\exp(\phi_2(s-1)+s);\;$ this $\phi_2(s)$ asymptotically approaches exp(s) as $\Re(s)$ gets arbitrarily negative, $\phi_n(s)=\phi_{n-1}(\phi_{n}(s-1)+s);\;$ $\phi_n(s)$ also asymptotically approaches exp(s) as $\Re(s)$ gets arbitrarily negative James has proven that $\phi_2(s)$ is entire, and I think each of these phi functions is also entire, and each $\phi_n(s)$ would probably lead to an $e\uparrow^n(s)\;$ function which is also $C^\infty$ only defined at the real axis; details tbd... $\phi_3(s)=\phi_{2}(\phi_{3}(s-1)+s)$ probably grows more like pentation. Notice the similarity to the superfunctions of the previous function in the list. So that would probably fail to get another c^oo solution to tetration but rather a c^oo solution to pentation or higher. Unfortunately probably not analytic either. Generalizing to fractional index n is then probably similar to the classic ' semi-super ' function type questions. So sorry but .. I am not convinced of its usefullness. *** $f_n(s)=f_{n-1}((f_{n}(s-1)+s)/2)$ **could** however converge to f(s) = s + 1 ( the successor function !) for appropriately defined f_1(s). Maybe that could be usefull for some kind of hyperoperator ? However going to negative index n does not seem to give interesting results ( only linear functions ?). Ofcourse many variants of the above can be considered and the question is very vague and open. But it is not certain in what direction we should proceed .. or is it ?? It does not seem simpler than generalizing ackermann , making ackermann analytic etc Regards tommy1729 MphLee Fellow Posts: 175 Threads: 16 Joined: May 2013 02/27/2021, 11:37 AM I agree with Tommy. That kind of functional equation is structurally analogous to double recursion. I wonder if it is possible to adapt, at least theoretically disregarding radii of convergence, the classic and Nixon's limit formulas to that cases. The problem that I see, as Tommy does, that thing going back tho the successor as the limits goes to infinity. There has to be a deep reason for this. In superfunctions/subfunctions and non-integer-superfunctions (aka non-integer ranks) successor is a fixed point of the "operator". The iteration theory of those kinds of (non-linear)operators on functions spaces has to coincide with the rank theory of the underlying function spaces. As different generating laws define different HOF(amilies) and in some cases a generating law is in some cases a kind of operator on functions, ranks theory reduces to a special kind of iteration theory. MathStackExchange account:MphLee Fundamental Law $(\sigma+1)0=\sigma (\sigma+1)$ sheldonison Long Time Fellow Posts: 664 Threads: 23 Joined: Oct 2008 02/27/2021, 09:57 PM (This post was last modified: 03/02/2021, 10:27 PM by sheldonison.) (02/27/2021, 12:08 AM)tommy1729 Wrote: $\phi_3(s)=\phi_{2}(\phi_{3}(s-1)+s)$ probably grows more like pentation. .... So sorry but .. I am not convinced of its usefullness. ... It does not seem simpler than generalizing ackermann , making ackermann analytic etc Regards tommy1729Hey Tommy, I mostly agree with you with one small caveat. Here is my observation. Kneser's tetration function is actually beautifully behaved in the complex plane, or at least as reasonably nicely behaved as any tetration superfunction can be. The same thing can be said for Jame's $\phi_2$ function, except that it is also entire and 2pi*i periodic. Now, all of the "analytic" higher order functions like pentation generated from the lower fixed point of Kneser's Tetration are actually pretty poorly behaved in the complex plane. All those singularities at the negative integers for tetration get reflected in pentation as real(z) grows and they eventually show up arbitrarily close to the real axis. I have experimented with an analytic hexation generated from a complex conjugate pair of fixed points from such an "analytic" pentation, but it too is really poorly behaved in the complex plane. So imho, the higher order analytic Ackermann functions after tetration don't seem as interesting to me as tetration. Contrast this with the family of $\phi_n$ functions all of which seem to be entire and 2pi*i periodic. Since they are 2pi*I periodic with limiting behavior as real(z) grows arbitrarily negative of $\exp(z)$, then each of these functions has a corresponding Schroeder function whose inverse $\Psi^{-1}$ is also entire with $\phi_n(z)=\Psi_n^{-1}(\exp(z))$ $\Psi^{-1}_n(x)=x+\sum_{n=2}^{\infty}a_n\cdot x^n;\;\;$ there is a formal entire inverse Schroeder function for each phi_n function. Moreover, each formal Schroeder function's Taylor series can be generated with surprising ease. For $\Psi^{-1}_2(x)=x+...$ we initialize a function f=x, and we iterate n times to calculate n+1 Taylor series terms as follows, where each iteration gives one additional exact term in the Taylor series of the inverse Schroeder function. $f_1=x; f_n(x)=x\cdot\exp(f_{n-1}(\frac{x}{e}));\;\;\Psi^{-1}_2(x)=\lim_{n\to\infty}f_n(x);\;\;\Psi^{-1}_2(x)=x+\frac{x^2}{e}+...$ Surprisingly, the iteration for $\Psi^{-1}_3(x);\Psi^{-1}_4(x);\Psi^{-1}_5(x);\;$ are only a little bit more complicated but can also easily be coded in a single lines of pari-gp code or mathematical equations as follows: $g_1=x; g_n(x)=\Psi^{-1}_2(x\cdot\exp(g_{n-1}(\frac{x}{e})));\;\;\Psi^{-1}_3(x)=\lim_{n\to\infty}g_n(x);\;\;$this works for Psi_4,5 etc. So I would assert that this family of iterated entire superfunctions is more well behaved than any other family of iterated superfunctions that I am aware of. It is also more accessible and easy to calculate than any other family than I am aware. In that sense, it is also more accessible than Kneser, which is actually pretty difficult to understand and calculate. - Sheldon MphLee Fellow Posts: 175 Threads: 16 Joined: May 2013 03/01/2021, 11:22 PM (This post was last modified: 03/02/2021, 12:57 AM by MphLee.) (02/27/2021, 09:57 PM)sheldonison Wrote: Since they are 2pi*I periodic with limiting behavior as real(z) grows arbitrarily negative of $\exp(z)$, then each of these functions has a corresponding Schroeder function whose inverse $\psi^{-1}$ is also entire with $\phi_n(z)=\psi_n^{-1}(\exp(z))$ $\psi^{-1}_n(x)=x+\sum_{n=2}^{\infty}a_n\cdot x^n;\;\;$ there is a formal entire inverse Schroeder function for each phi_n function. I'm probably missing some key piece of the puzzle (terminology). Are you talking about a kind of inverse Schroeder-like function right? A confortable abuse of name similar to how we can call $\phi_{n+1}$ inverse Abel-like function of $\phi_{n}$? In a strict sense, I don't see how $\psi_{n}$ is a Schroeder function of $\phi_{n}$ or of $\phi_{n-1}$. For this sequence I see this: define $\sigma_{n}:=\psi_n^{-1}$ in your notation. $\sigma_{n+1}(z)=\phi_n(\ln z+\sigma_{n+1}(\frac{z}{e}))$ $\sigma_{n+1}(z)=\sigma_n(z\cdot e^{\sigma_{n+1}(\frac{z}{e})})$ That is "inverse Schroeder-like" in some sense. $\sigma_{n+1}(ez)=\phi_n(1+\ln z+\sigma_{n+1}(z))$ $\sigma_{n+1}(ez)=\sigma_n(ez\cdot e^{\sigma_{n+1}(z)})$ ADD: it is possible to express phi_2 (an offset of JmsNxn's phi) as an infinite composition. $\phi_2(z)=\Omega_{j=0}^\infty \phi_1(z-j+w)\bullet w$ at $w=0$ Is there a similar "closed form" for the other phi_n? Is it too optimistic to declare $\phi_{n+1}(z)=\Omega_{j=0}^\infty \phi_{n-1}(z-j+w)\bullet w$ at $w=0$? MathStackExchange account:MphLee Fundamental Law $(\sigma+1)0=\sigma (\sigma+1)$ « Next Oldest | Next Newest »
Possibly Related Threads... Thread Author Replies Views Last Post On my old fractional calculus approach to hyper-operations JmsNxn 13 443 05/29/2021, 01:13 AM Last Post: MphLee hyper 0 dantheman163 2 4,622 03/09/2021, 10:28 PM Last Post: MphLee Thoughts on hyper-operations of rational but non-integer orders? VSO 2 3,009 09/09/2019, 10:38 PM Last Post: tommy1729 Could there be an "arctic geometry" by raising the rank of all operations? Syzithryx 2 3,378 07/24/2019, 05:59 PM Last Post: Syzithryx Hyper-volume by integration Xorter 0 2,627 04/08/2017, 01:52 PM Last Post: Xorter Hyper operators in computability theory JmsNxn 5 8,255 02/15/2017, 10:07 PM Last Post: MphLee Recursive formula generating bounded hyper-operators JmsNxn 0 2,839 01/17/2017, 05:10 AM Last Post: JmsNxn holomorphic binary operators over naturals; generalized hyper operators JmsNxn 15 25,497 08/22/2016, 12:19 AM Last Post: JmsNxn Intresting ternary operations ? tommy1729 0 2,735 06/11/2015, 08:18 AM Last Post: tommy1729 on constructing hyper operations for bases > eta JmsNxn 1 4,605 04/08/2015, 09:18 PM Last Post: marraco
Users browsing this thread: 1 Guest(s) | 2021-06-16 14:50:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 155, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8825446963310242, "perplexity": 1149.3174762832107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00558.warc.gz"} |
http://answerparty.com/question/answer/why-do-you-need-to-line-up-the-decimal-points-before-comparing-and-ordering-numbers-with-decimals | Question:
# Why do you need to line up the decimal points before comparing and ordering numbers with decimals?
Answer:
## It makes doing calculations, particularly totaling up numbers, a lot easier to do. It also makes it easier to compare the numbers when you line up the decimal points
Tags:
Elementary arithmetic
Numeral systems
Decimal
Fractions
Pi
Number
Hexadecimal
0.999...
Mathematics
Arithmetic
Linguistics
42 | 2014-03-09 09:22:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.887922465801239, "perplexity": 1110.3319031696503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999676283/warc/CC-MAIN-20140305060756-00000-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://ged-testprep.com/question/solve-the-given-equation-for-x-14x--84-4560743370326016/ | Scan QR code or get instant email to install app
Question:
# Solve the given equation for x. $14x = 84$
A x=6.
explanation
To undo the effect of multiplying by 14, divide both sides of the equation by 14. In this way, we will isolate an unknown value x.
$14x = 84$ Original equation.
$14x \div 14=84 \div 14=6$ | 2023-03-24 23:08:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18482111394405365, "perplexity": 902.8865963743654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00007.warc.gz"} |
https://cs.stackexchange.com/questions/124862/prove-that-a-red-black-tree-with-n-internal-nodes-has-height-at-most-2-lgn1 | # Prove that a red-black tree with $n$ internal nodes has height at most $2\lg(n+1)$
I cannot understand the first paragraph of the proof, which comes from the known book Introduction to Algorithms, third-edition, and I consider it has some errors, could anyone help me check about it?
Possible errors:
1. It first prove the case $$\text{height(x)=0},$$ then it says "For inductive steps, consider a node $$x$$ that has positive height".
From my understanding of inductive proof, the base case should be able to trigger the inductive statement. I mean: the "first domino" should trigger the next one, so the statement should be something like non-negative height.
2. It says "each child has a black-height of either $$\text{bh}(x)$$ or $$\text{bh}(x)-1$$", but when applying, only the latter is used: $$(2^{\text{bh}(x)-1}-1)+(2^{\text{bh}(x)-1}-1)+1=2^{\text{bh}(x)}-1$$.
## Paragraph from the book:
1) The base case proves the claim when the height of the subtree rooted at $$x$$ is $$0$$. The inductive step proves the claim for every subtree rooted at $$x$$ of positive height $$h$$ assuming that the claim is true for all subtrees of heights from $$0$$ to $$h-1$$. So the inductive step for $$h=1$$ is able to prove the claim for all subtrees of height $$1$$ using the fact that the claim is true for the base case, i.e., for all subtrees of height $$0$$. This is a standard proof by structural induction.
2) Each subtree $$T_u$$ rooted in a child $$u$$ of $$x$$ has a black-height $$bh(u)$$ of either $$bh(x)$$ or $$bh(x)-1$$ but, in any case, the height of $$T_u$$ is smaller than the height of the subtree $$T_x$$ rooted in $$x$$. This means that the induction hypothesis can be applied, showing that the number of internal nodes is at least:
• $$2^{bh(x)}-1$$ if $$bh(u)=bh(x)$$; or
• $$2^{bh(x)-1}-1$$ if $$bh(u)=bh(x)-1$$.
In any case the number of internal nodes of $$T_u$$ is at least the smaller of the above two quantities, i.e., $$\min\{ 2^{bh(x)}-1, 2^{bh(x)-1}-1 \} = 2^{bh(x)-1}-1$$. This means that the number of internal nodes in $$T_x$$ is at least:
$$2 \cdot ( 2^{bh(x)-1}-1 ) + 1 = 2^{bh(x)}-1.$$ | 2021-03-03 18:15:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 31, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7386617660522461, "perplexity": 120.10593708631293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00005.warc.gz"} |
http://douglaswhitaker.com/tag/statistics/ | # Tag: statistics
Familiarity with statistical computing software - particularly programs as flexible and feature-filled as R and the packages on CRAN - has been a tremendous boon. However, this familiarity has sent me searching the web for a way ask for particular output that is not printed by default. This expectation that the output I want from software is available with the right option or command has led me (more than once) to forget the possibility of simply computing the required output manually.
In particular, I recently needed to compute the RMSEA of the null model for confirmatory factor analysis (CFA). A few months ago, I chose to use Mplus for the CFA because I was familiar with it (moreso than the lavaan R package at least) and it had some estimation methods I needed that other software does not always have implemented (e.g. the WLSMV estimator is not available in JMP 10 with SAS PROC CALIS).
Mplus does not print the RMSEA for the null model (or baseline model, in Mplus parlance) in the standard output, nor does there seem to be an command to request it. Fortunately, this is not an insurmountable problem because the formula for RMSEA is straightforward:
where $X^2$ is the observed Chi-Square test statistic, df is the associated degrees of freedom, and N is the sample size. In the Mplus output, look for "Chi-Square Test of Model Fit for the Baseline Model" for the $X^2$ and df values.
The reason for needing to check the null RMSEA is that incremental fit indices such as CFI and TLI may not be informative if the null RMSEA is less than 0.158 (Kenny, 2014). If you are using the lavaan package, it appears this can be calculated using the nullRMSEA function in the semTools package.
(As an aside, don't let the vintage '90s design fool you: David Kenny's website is a great resource for structural equation modeling. He has the credentials to back up the site, too: Kenny is a Distinguished Professor Emeritus at the University of Connecticut.)
References
Kenny, D. A. (2014). Measuring Model Fit. Retrieved from http://davidakenny.net/cm/fit.htm
I spotted a dot plot while watching TV the other day:
It isn't too frequently that one sees a dot plot on TV, so this is a good opportunity to discuss something students might have encountered. Using this commercial might be a worthwhile topic of discussion in a statistics lesson.
The apparently constructed the dot plot by asking 400 people "How old is the oldest person you've known?" A few more details can be gleaned from the Prudential website and a "behind the scenes" video that was shot.
A few things that can be discussed with students come to mind:
• What can we actually conclude from the dot plot?
• The description of the YouTube video describes this as an "experiment" (as does the narrator in the behind the scenes video). Is this really an experiment?
• What do we know about the sample?
• What happens as we get older in terms of the oldest person "[we]'ve known"? (Children and adults with a wide range of ages are asked to place a sticker.)
There's a 30 second version of the advertisement, too.
I saw this graphic reblogged by NPR on Tumblr (originally posted by Luminous Enchiladas, though I can't be sure of the creator), and I must say that it is impressive.
Olympics vs Mars
There are some pretty substantial problems with this impressively bad graphic.
• Pie charts should only be used when comparing parts to a whole. The $17.5 billion dollars that went to the Olympics and the Curiosity Rover wasn't a priori some whole amount of money. Treating it as "the whole" implies that there was only$17.5 billion dollars from wherever to be spent, and that it was spent only on the Olympics and Mars.
• The pieces of the pie chart aren't labeled with the dollar amounts. Instead, the pieces are labeled with the piece's name which does address a complaint with pie charts (namely that the reader needs to continually look back and forth from the chart to the key). Because there are only two pieces, there is room for including the dollar figures in the chart area. With more complicated charts, this wouldn't be the case.
• This chart uses an unnecessary "3D" effect which obscures the true areas being compared. A flat pie chart would be less misleading.
Additionally, there are some general problems with pie charts which make them inferior to other charts (specifically bar charts):
• Comparing areas is difficult. Cleveland (1985) writes about how area comparisons are subject to bias, and Schmid (1983) specifically describes how, when comparing two circles (e.g. two pie charts of different size used to indicate change over time), the area of the larger circle is underestimated relative to the smaller.
• Comparing angles is difficult. Cleveland (1985) states that ordering the sections of a pie chart is prone to error based on earlier empirical research.
References:
• Cleveland, W. S. (1985). The elements of graphing data. Monterey, Calif: Wadsworth Advanced Books and Software.
• Schmid, C. F. (1983). Statistical graphics: Design principles and practices. New York: Wiley.
The other day I was looking for a package that did the Quadrant Count Ratio (QCR) in R. I couldn't find one, so I whipped up some simple code to do what I needed to do.
qcr <- function(dat){
n <- nrow(dat);
m.x <- mean(dat[,1]); m.y <- mean(dat[,2]);
# in QCR we ignore points that are on the mean lines
# number of points in Quadrants 1 and 3
q13 <- sum(dat[,1] > mean(dat[,1]) & dat[,2] > mean(dat[,2]))+sum(dat[,1] < mean(dat[,1]) & dat[,2] < mean(dat[,2]))
# number of points in Quadrants 2 and 4
q24 <- sum(dat[,1] < mean(dat[,1]) & dat[,2] > mean(dat[,2]))+sum(dat[,1] < mean(dat[,1]) & dat[,2] > mean(dat[,2]))
return((q13-q24)/n)
}
The above assumes dat is an Nx2 array with column 1 serving as X and column 2 serving as Y. This can easily be changed. I also wrote a little function to plot the mean lines:
plot.qcr <- function(dat){
value <- qcr(dat);
plot(dat);
abline(v=mean(dat[,1]),col="blue"); # adds a line for x mean
abline(h=mean(dat[,2]),col="red"); # adds a line for y mean
}
Both of these functions are simple, but I will likely extend and polish them (and then release them as a package). I'd also like to explore what would happen to the QCR if median lines were used instead of mean lines. (This new QCR* would no longer directly motivate Pearson's Product-Moment Correlation, but could have its own set of advantages.) Below is a quick example:
# QCR example
set.seed(1)
dat.x <- c(1:10)
dat.y <- rbinom(10,10,.5)
dat <- cbind(dat.x,dat.y)
qcr(dat)
# [1] 0.6
plot.qcr(dat)
This is the plot:
For more information on the QCR check out this article: Holmes, Peter (2001). “Correlation: From Picture to Formula,” Teaching Statistics, 23(3):67–70.
[Updated on 2013-10-03 to add a link to Wikipedia.]
Ever on the quest for good comics related to statistics, I've started searching for specific terms. Of course, xkcd doesn't fail to deliver with a comic related to Markov chains.
Freestyle rapping is basically applied Markov chains.
"90's Flowchart" - Copyright CC BY-NC 2.5 by Randall Munroe, xkcd.com
keywords: flowchart; Markov chain; probability; | 2017-06-28 07:15:38 | {"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49728405475616455, "perplexity": 2221.611335126559}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128322873.10/warc/CC-MAIN-20170628065139-20170628085139-00366.warc.gz"} |
https://www.law.cornell.edu/cfr/text/40/63.764 | # 40 CFR § 63.764 - General standards.
prev | next
§ 63.764 General standards.
(a) Table 2 of this subpart specifies the provisions of subpart A (General Provisions) of this part that apply and those that do not apply to owners and operators of affected sources subject to this subpart.
(b) All reports required under this subpart shall be sent to the Administrator at the appropriate address listed in § 63.13. Reports may be submitted on electronic media.
(c) Except as specified in paragraph (e) of this section, the owner or operator of an affected source located at an existing or new major source of HAP emissions shall comply with the standards in this subpart as specified in paragraphs (c)(1) through (3) of this section.
(1) For each glycol dehydration unit process vent subject to this subpart, the owner or operator shall comply with the requirements specified in paragraphs (c)(1)(i) through (iii) of this section.
(i) The owner or operator shall comply with the control requirements for glycol dehydration unit process vents specified in § 63.765;
(ii) The owner or operator shall comply with the monitoring requirements specified in § 63.773; and
(iii) The owner or operator shall comply with the recordkeeping and reporting requirements specified in §§ 63.774 and 63.775.
(2) For each storage vessel with the potential for flash emissions subject to this subpart, the owner or operator shall comply with the requirements specified in paragraphs (c)(2)(i) through (iii) of this section.
(i) The control requirements for storage vessels specified in § 63.766;
(ii) The monitoring requirements specified in § 63.773; and
(iii) The recordkeeping and reporting requirements specified in §§ 63.774 and 63.775.
(3) For ancillary equipment (as defined in § 63.761) and compressors at a natural gas processing plant subject to this subpart, the owner or operator shall comply with the requirements for equipment leaks specified in § 63.769.
(d) Except as specified in paragraph (e)(1) of this section, the owner or operator of an affected source located at an existing or new area source of HAP emissions shall comply with the applicable standards specified in paragraph (d) of this section.
(1) Each owner or operator of an area source located within an UA plus offset and UC boundary (as defined in § 63.761) shall comply with the provisions specified in paragraphs (d)(1)(i) through (iii) of this section.
(i) The control requirements for glycol dehydration unit process vents specified in § 63.765;
(ii) The monitoring requirements specified in § 63.773; and
(iii) The recordkeeping and reporting requirements specified in §§ 63.774 and 63.775.
(2) Each owner or operator of an area source not located in a UA plus offset and UC boundary (as defined in § 63.761) shall comply with paragraphs (d)(2)(i) through (iii) of this section.
(i) Determine the optimum glycol circulation rate using the following equation:
${L}_{\mathrm{OPT}}=1.15*3.0\frac{\mathrm{gal}\phantom{\rule{0ex}{0ex}}\mathrm{TEG}}{\mathrm{lb}{\phantom{\rule{0ex}{0ex}}H}_{2}O}*\left(\frac{F*\left(I-O\right)}{24\phantom{\rule{0ex}{0ex}}\mathrm{hr}/\mathrm{day}}\right)$
Where:
LOPT = Optimal circulation rate, gal/hr.
F = Gas flowrate (MMSCF/D).
I = Inlet water content (lb/MMSCF).
O = Outlet water content (lb/MMSCF).
3.0 = The industry accepted rule of thumb for a TEG-to water ratio (gal TEG/lb H2O).
1.15 = Adjustment factor included for a margin of safety.
(ii) Operate the TEG dehydration unit such that the actual glycol circulation rate does not exceed the optimum glycol circulation rate determined in accordance with paragraph (d)(2)(i) of this section. If the TEG dehydration unit is unable to meet the sales gas specification for moisture content using the glycol circulation rate determined in accordance with paragraph (d)(2)(i), the owner or operator must calculate an alternate circulation rate using GRI-GLYCalc TM, Version 3.0 or higher. The owner or operator must document why the TEG dehydration unit must be operated using the alternate circulation rate and submit this documentation with the initial notification in accordance with § 63.775(c)(7).
(iii) Maintain a record of the determination specified in paragraph (d)(2)(ii) in accordance with the requirements in § 63.774(f) and submit the Initial Notification in accordance with the requirements in § 63.775(c)(7). If operating conditions change and a modification to the optimum glycol circulation rate is required, the owner or operator shall prepare a new determination in accordance with paragraph (d)(2)(i) or (ii) of this section and submit the information specified under § 63.775(c)(7)(ii) through (v).
(e) Exemptions.
(1) The owner or operator of an area source is exempt from the requirements of paragraph (d) of this section if the criteria listed in paragraph (e)(1)(i) or (ii) of this section are met, except that the records of the determination of these criteria must be maintained as required in § 63.774(d)(1).
(i) The actual annual average flowrate of natural gas to the glycol dehydration unit is less than 85 thousand standard cubic meters per day, as determined by the procedures specified in § 63.772(b)(1) of this subpart; or
(ii) The actual average emissions of benzene from the glycol dehydration unit process vent to the atmosphere are less than 0.90 megagram per year, as determined by the procedures specified in § 63.772(b)(2) of this subpart.
(2) The owner or operator is exempt from the requirements of paragraph (c)(3) of this section for ancillary equipment (as defined in § 63.761) and compressors at a natural gas processing plant subject to this subpart if the criteria listed in paragraph (e)(2)(i) or (ii) of this section are met, except that the records of the determination of these criteria must be maintained as required in § 63.774(d)(2).
(i) Any ancillary equipment and compressors that contain or contact a fluid (liquid or gas) must have a total VHAP concentration less than 10 percent by weight, as determined by the procedures specified in § 63.772(a); or
(ii) That ancillary equipment and compressors must operate in VHAP service less than 300 hours per calendar year.
(f) Each owner or operator of a major HAP source subject to this subpart is required to apply for a 40 CFR part 70 or part 71 operating permit from the appropriate permitting authority. If the Administrator has approved a State operating permit program under 40 CFR part 70, the permit shall be obtained from the State authority. If a State operating permit program has not been approved, the owner or operator of a source shall apply to the EPA Regional Office pursuant to 40 CFR part 71.
(g)-(h) [Reserved]
(i) In all cases where the provisions of this subpart require an owner or operator to repair leaks by a specified time after the leak is detected, it is a violation of this standard to fail to take action to repair the leak(s) within the specified time. If action is taken to repair the leak(s) within the specified time, failure of that action to successfully repair the leak(s) is not a violation of this standard. However, if the repairs are unsuccessful, and a leak is detected, the owner or operator shall take further action as required by the applicable provisions of this subpart.
(j) At all times the owner or operator must operate and maintain any affected source, including associated air pollution control equipment and monitoring equipment, in a manner consistent with safety and good air pollution control practices for minimizing emissions. Determination of whether such operation and maintenance procedures are being used will be based on information available to the Administrator which may include, but is not limited to, monitoring results, review of operation and maintenance procedures, review of operation and maintenance records, and inspection of the source.
[64 FR 32628, June 17, 1999, as amended at 66 FR 34551, June 29, 2001; 72 FR 38, Jan. 3, 2007; 77 FR 49570, Aug. 16, 2012] | 2023-03-24 20:48:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46536344289779663, "perplexity": 3905.4177383423225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945288.47/warc/CC-MAIN-20230324180032-20230324210032-00210.warc.gz"} |
https://soundwriting.pugetsound.edu/adobe-acrobat-reader.html |
##### Step 2: Do some research!
Put those great research skills to use! (If you need a refresher on how to get started, see Chapter 1.)
When you find a source you want, download it and save it to your research folder or desktop.
If you have installed Adobe Acrobat Reader, your document should be automatically saved as an Adobe Acrobat PDF file. After it's saved, you can then open it, and it should open with Adobe Acrobat.
Yay! Now you can begin exploring all of the annotation capabilities Adobe has to offer.
##### Step 4: Annotate to your heart's content.
There are several different ways to annotate a PDF document in Adobe Acrobat. We'll highlight some of these capabilities below.
Commenting
You can easily comment on certain parts of the source document by selecting the small speech bubble icon on the right side of the toolbar and clicking on the part you want to comment on. A small window text box will appear wherein you may leave notes/questions to yourself about the text, observations about how to use a certain piece of information, or reminders to do more research elsewhere. You can also engage in a dialogue with yourself (or others if you are collaboratively examining a source) by replying to these comments later on.
Highlighting
You can also easily highlight sections of the text using the highlighter function, making it easier for you to go back later and identify significant pieces of information. To do this, click on the highlighter icon on the far right of the toolbar (next to the comment icon) and select the section of the text you'd like to highlight (but try not to over-highlight!). See Section 2.3 for tips on how to annotate effectively).
###### Note
While this highlighting technique works for most PDF files, some files that have been photocopied don't have the right line-spacing to accommodate the highlighter function. However, Adobe is smart; in these cases, an option to highlight manually using a box-select tool should be available.
##### Unlocking More Annotation Tools
To unlock a host of other annotation functions, select the “Tools” tab at the top left of the screen; a screen of icons will appear. Select the “Comment” option at the top left of the screen.
The source document will reappear with a new toolbar below the original. Here you'll notice the commenting and highlighting icons along with a slew of other editing features.
These features include the following:
Underlining
To underline a section of text, select the third icon from the left (“T” with a line underneath) and highlight the section of text you want underlined. A green line will appear beneath the section of text.
Strike-Through
If you'd like to strike-through a section of text (particularly useful if you're editing one of your own pieces or if you want to identify a part of the source document with which you do not agree), select the icon fourth from the left (“T” with a strike-through) and highlight the desired section of text. A red line will strike through this section of text.
Replacing Text
If you'd like to replace text (not so useful for annotating published source documents, but especially useful if you're editing yours or another's paper), select the fifth icon from the left (“T” with a strike-through and a speech bubble) and highlight the section of desired text. A blue line will strike through this section of the original text and a window text box will appear allowing you to write the text you want to replace it with.
Inserting Text
If you'd like to insert text (again, more useful for documents you are actively editing than source documents you are annotating), select the sixth icon from the left (“T” with a subscript carrot) and place your cursor in the place in the text where you'd like to insert new text. A small blue box with a carrot will appear in the text, while a window text box will allow you to write the text you'd like to insert.
If you'd like to add text directly onto the document (useful if you want to more easily view your comments), select the seventh icon from the left (plain “T”) and highlight the area you'd like the text to go. Begin writing and your comment will appear directly onto the document.
Another way to add text directly onto a source document is to create a text box. To do this, select the eighth icon from the left (“T” within a box) and create the text box where you want it on the document. A red text box will appear in which you can write your comment.
Drawing Free-form
You can also draw free-form on the document to emphasize certain sections or (if you're good at drawing using a mouse or touchpad, which I am not) even draw pictures or write comments to annotate the document. To do this, select the ninth icon from the left (pen) and begin to write/draw in red pen anywhere on the document.
(Like I said, I'm not that artistically savvy.)
Erasing
If you'd like to erase something you've drawn, you can also do this by selecting the tenth icon from the left (eraser) and moving it over your drawing to erase it.
If you're more artistically/visually inclined, you can also add pre-formed shapes to your document to draw emphasis to certain sections or have fun while you annotate. To do this, select the thirteenth icon from the left on the toolbar (cluster of shapes), and use the drop-down menu to select which shape you'd like to draw.
You can also change the color, opacity, and line thickness of your drawing tools by selecting paint bucket and line-thickness icons at the far right of the toolbar.
Go wild! (But not too wild.)
Adobe is also really great because it keeps track of each of your comments as you add them and provides a running list of them along the right side of the screen. If you want to jump to a certain comment, simply click on the desired comment in the list and the document will take you to that comment. You can also arrange these comments by page, author, date, type, checkmark status, and color or filter them by reviewer, type, status, and color.
Changing the Page Viewer
There are also useful page-viewing capabilities to the left side of the screen under the “Page Thumbnails,” “Bookmarks,” and “Attachments” icons. “Page Thumbnails” will allow you to see which page of the document you are currently viewing:
Or you can switch to “Bookmarks” to view an outline of the source document and click to go to a certain section:
If there are attachments that were included in the source document, selecting the “Attachments” option will allow you to view them.
Searching the Document Source
Finally, you can search for key words and phrases in the document by selecting the “Find text” icon (magnifying glass) in the upper left of the original toolbar. A window will appear in which you can type in the word or phrase you'd like to find; selecting “previous” or “next” will bring you to each instance in the document where that term was used. | 2018-02-22 12:38:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45322611927986145, "perplexity": 1596.6595686720018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814105.6/warc/CC-MAIN-20180222120939-20180222140939-00425.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php?title=Holomorphic_function&diff=prev&oldid=31004 | # Difference between revisions of "Holomorphic function"
A holomorphic function $f: \mathbb{C} \to \mathbb{C}$ is a differentiable complex function. That is, just as in the real case, $f$ is holomorphic at $z$ if $\lim_{h\to 0} \frac{f(z+h)-f(z)}{h}$ exists. This is much stronger than in the real case since we must allow $h$ to approach zero from any direction in the complex plane.
## Cauchy-Riemann Equations
Let us break $f$ into its real and imaginary components by writing $f(z)=u(x,y)+iv(x,y)$, where $u$ and $v$ are real functions. Then it turns out that $f$ is holomorphic at $z$ iff $u$ and $v$ have continuous partial derivatives and the following equations hold:
• $\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$
• $\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$
These equations are known as the Cauchy-Riemann Equations.
## Analytic Functions
A related notion to that of homolorphicity is that of analyticity. A function $f:\mathbb{C}\to\mathbb{C}$ is said to be analytic at $z$ if $f$ has a convergent power series expansion on some neighborhood of $z$. Amazingly, it turns out that a function is holomorphic at $z$ if and only if it is analytic at $z$. | 2022-12-09 22:49:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650483131408691, "perplexity": 84.79725562499318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00075.warc.gz"} |
https://www.flyingcoloursmaths.co.uk/category/sport/ | # Browsing category sport
## Ask Uncle Colin: Expected goals
Dear Uncle Colin, What on earth is "expected goals" and why is it supposed to be useful? - Just Enjoy Football FFS Hello, JEFF, and thanks for your message! The first part of your question is simple, although with some subtleties. What does 'expected goals' mean? 'Expected goals' is a
## Review: The Numbers Game, by Chris Anderson and David Sally
"'The book that could change football for ever' -- The Times," screams the garish orange front cover. Noted football experts Malcolm Gladwell and Billy Beane shower it with praise. Apparently everything I know about football is wrong. Despite all of the dubious hype, The Numbers Game: Why Everything You Know
## A tennis puzzle
A puzzle that occurred to me watching Wimbledon this week: A tennis match goes to five sets. The number of games one of the players wins in each set forms an arithmetic series. Given that the two players won the same number of games in total, who won the match?
## An ex-Formula 1 driver asks… why does nobody ever ask me my opinion?
Former Formula 1 driver @MBrundleF1 asks: These election opinion polls. I've never been approached, nor has anybody I know, about a poll on anything whatsoever. Who are these people? — Martin Brundle (@MBrundleF1) April 5, 2015 (Obviously, he didn't ask me specifically, but I thought it was a good question.)
## Sport, maths, twitter and hulk-smashing (a rant)
My dear readers, it is not often I become angry. Rage is not an emotion that frequently troubles Flying Colours Towers. A sharp word for a student who is trying to wind me up, perhaps once in a while, but rarely a rant. As the man says, you wouldn't like
## How big a lead can a football team have?
A reader asks: What’s the biggest lead a football team can have in the table after $n$ games? In a typical football league, teams get three points for a win, one for a draw, and none for getting beat. After, for example, one game, if one team wins and all
## Probability and prediction
A reader asks: Is it easier to predict every game in the first round of a tournament1, or to pick the eventual winner? There's a competition run by Sky, which is free to enter; you win £250,000 if you correctly predict the scores in six Premier League games. It's free
## BBC Sport’s anti-smartness bias
“[James McEvoy] is an unashamed geek - he was reading a book on physics, as you do, to see if it could improve his performance.” - Radio 5 swimming commentator “You need some kind of accountancy degree to work out what each of them needs to do in the final
## Sam Warburton’s Dilemma
There are six minutes to play in the last Autumn international, and Australia are leading Wales by 30 points to 26. Australia, however, have just conceded a penalty in front of the posts, leaving the Welsh captain, Sam Warburton, with a dilemma: should he kick at goal (and take a
## Scheduling a Scrabble tournament
Once upon a time, there was a Scrabble tournament. Sixteen of the county's greatest Scrabbleologists descended on the venue... only to find the organiser had lost the fixture list. What the organise could remember was this: there were five rounds, and each player played each of the others exactly once. | 2019-11-18 19:48:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31246787309646606, "perplexity": 4082.7179396627143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669813.71/warc/CC-MAIN-20191118182116-20191118210116-00017.warc.gz"} |
https://scioly.org/forums/viewtopic.php?t=11139&p=351816 | ## Hovercraft B/C
Moderator
Posts: 412
Joined: Fri Dec 06, 2013 9:56 pm
Division: C
State: TX
Location: Katy, Texas
Contact:
### Re: Hovercraft B/C
UTF-8 U+6211 U+662F wrote:
Adi1008 wrote:Suppose I have a bowling ball with a diameter of 25 centimeters. What is the largest mass it can have such that it floats in corn syrup (specific gravity = 1.4)?
Looks good to me. Your turn!
University of Texas at Austin '22
Seven Lakes High School '18
Beckendorff Junior High '14
UTF-8 U+6211 U+662F
Exalted Member
Posts: 750
Joined: Sun Jan 18, 2015 3:42 pm
Division: C
State: PA
Location: (0, 0)
Contact:
### Re: Hovercraft B/C
All right! Given a graph of v vs t, how do you find the displacement traveled? What about the distance?
Moderator
Posts: 412
Joined: Fri Dec 06, 2013 9:56 pm
Division: C
State: TX
Location: Katy, Texas
Contact:
### Re: Hovercraft B/C
UTF-8 U+6211 U+662F wrote:All right! Given a graph of v vs t, how do you find the displacement traveled? What about the distance?
University of Texas at Austin '22
Seven Lakes High School '18
Beckendorff Junior High '14
UTF-8 U+6211 U+662F
Exalted Member
Posts: 750
Joined: Sun Jan 18, 2015 3:42 pm
Division: C
State: PA
Location: (0, 0)
Contact:
### Re: Hovercraft B/C
UTF-8 U+6211 U+662F wrote:All right! Given a graph of v vs t, how do you find the displacement traveled? What about the distance?
Yep!
Moderator
Posts: 412
Joined: Fri Dec 06, 2013 9:56 pm
Division: C
State: TX
Location: Katy, Texas
Contact:
### Re: Hovercraft B/C
UTF-8 U+6211 U+662F wrote:
UTF-8 U+6211 U+662F wrote:All right! Given a graph of v vs t, how do you find the displacement traveled? What about the distance?
Yep!
Suppose you have a contracting star whose new radius is 1/x as big as the old radius.
a. How much faster does the star spin?
b. By what factor does its rotational kinetic energy change?
University of Texas at Austin '22
Seven Lakes High School '18
Beckendorff Junior High '14
MattChina
Member
Posts: 132
Joined: Sun Feb 12, 2017 4:06 pm
Division: B
State: NY
Location: somewhere over the rainbow
Contact:
### Re: Hovercraft B/C
UTF-8 U+6211 U+662F wrote:
Yep!
Suppose you have a contracting star whose new radius is 1/x as big as the old radius.
a. How much faster does the star spin?
b. By what factor does its rotational kinetic energy change?
a. x^2 faster
b. x^2
Physics is my city
2017 events: Hovercraft, Ecology, Optics, Write it do it, Fast Facts, Mystery Design(NY trial)
2018 events: Hovercraft, Ecology, Optics, Thermodynamics, Quiz bowl(NY trial)
Ankles moister than a humid summer day
UTF-8 U+6211 U+662F
Exalted Member
Posts: 750
Joined: Sun Jan 18, 2015 3:42 pm
Division: C
State: PA
Location: (0, 0)
Contact:
### Re: Hovercraft B/C
It's been a while, so I guess I'll ask a question.
Consider a basketball player throwing a ball into the hoop. The ball is 625 g, and the basketball player throws it at 10 m/s at 65 degrees to the horizontal. Neglect air resistance.
1) Find the force that acts on the ball once it leaves the player's hand.
2) A regulation height hoop is 10 feet tall. Find the distance from the hoop he needs to be if he shoots the ball from just above his head and he is 1.8 m tall.
Edit: Wait hovercraft is being replaced next year
Nydauron
Member
Posts: 6
Joined: Wed Mar 21, 2018 3:10 am
Division: C
State: IL
Contact:
### Re: Hovercraft B/C
UTF-8 U+6211 U+662F wrote:It's been a while, so I guess I'll ask a question.
Consider a basketball player throwing a ball into the hoop. The ball is 625 g, and the basketball player throws it at 10 m/s at 65 degrees to the horizontal. Neglect air resistance.
1) Find the force that acts on the ball once it leaves the player's hand.
2) A regulation height hoop is 10 feet tall. Find the distance from the hoop he needs to be if he shoots the ball from just above his head and he is 1.8 m tall.
Edit: Wait hovercraft is being replaced next year
Old Events
Varsity Div C
Next Year Events
Circuit Lab
Mousetrap Vehicle
Sounds of Music
Thermodynamics
UTF-8 U+6211 U+662F
Exalted Member
Posts: 750
Joined: Sun Jan 18, 2015 3:42 pm
Division: C
State: PA
Location: (0, 0)
Contact:
### Re: Hovercraft B/C
Nydauron wrote:
UTF-8 U+6211 U+662F wrote:It's been a while, so I guess I'll ask a question.
Consider a basketball player throwing a ball into the hoop. The ball is 625 g, and the basketball player throws it at 10 m/s at 65 degrees to the horizontal. Neglect air resistance.
1) Find the force that acts on the ball once it leaves the player's hand.
2) A regulation height hoop is 10 feet tall. Find the distance from the hoop he needs to be if he shoots the ball from just above his head and he is 1.8 m tall.
Edit: Wait hovercraft is being replaced next year | 2018-08-21 07:35:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 40, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48617783188819885, "perplexity": 6076.66520740591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218070.73/warc/CC-MAIN-20180821073241-20180821093241-00501.warc.gz"} |
https://math.answers.com/Q/What_are_he_multiples_of_4 | 0
# What are he multiples of 4?
Wiki User
2013-03-04 02:29:41
The multiples of 4 are the values such that they are divisible by 4. For example, 4, 8, 12 and any integer multiplied by 4 to be the part of the multiples of 4.
Wiki User
2013-03-04 02:29:41
🙏
0
🤨
0
😮
0
Study guides
20 cards
➡️
See all cards
1 card
➡️
See all cards
96 cards
➡️
See all cards | 2021-11-30 10:00:37 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8328472971916199, "perplexity": 2579.4392409512516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358966.62/warc/CC-MAIN-20211130080511-20211130110511-00607.warc.gz"} |
https://zbmath.org/?q=an:1212.35426 | ## Non-uniform dependence on initial data for a family of non-linear evolution equations.(English)Zbl 1212.35426
Summary: We show that solutions to the periodic Cauchy problem for a family of non-linear evolution equations, which contains the Camassa-Holm equation, do not depend uniformly continuously on initial data in the Sobolev space $$H^s(\mathbb T)$$, when $$s=1$$ or $$s\geq 2$$.
### MSC:
35Q53 KdV equations (Korteweg-de Vries equations) 35B65 Smoothness and regularity of solutions to PDEs
### Keywords:
periodic Cauchy problem; Camassa-Holm equation | 2022-11-26 12:54:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5214106440544128, "perplexity": 874.248121839145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00837.warc.gz"} |
https://math.stackexchange.com/questions/2471578/mean-value-theorem-question-involving-a-piecewise-function | # Mean Value Theorem question involving a piecewise function:
Could someone give me some help with the following question? I'm not sure how to tackle this question because it involves a piecewise function. Any help is appreciated!
Consider the function f: [-1,1] defined by
$f(x)=(x+1)^2+e^{-1/x^2}, x≠0$
$f(x)=1,\ x=0$
(a) Establish whether f satisfies the hypotheses of the Mean Value Theorem on the given interval
(b) Regardless of the answer to part (a) show that f satisfies the conclusion of the Mean Value Theorem with c=0. Be aware that this doesn't tell us anything about part (a)!
• You need to determine if $f$ remains differentiable on the interval $(-1, 1)$. Is $f$ differentiable at $0$? You'll need to take the derivative at $x=0$ explicitly to find out (which after simplification becomes $\lim_{h \to 0}\frac{h^2 + 2h + e^{-1/h^2}}{h}$). – Chris Oct 14 '17 at 6:15 | 2019-06-27 03:59:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164573550224304, "perplexity": 145.83202240440497}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000613.45/warc/CC-MAIN-20190627035307-20190627061307-00098.warc.gz"} |
http://hackage.haskell.org/package/spake2-0.4.2/docs/Crypto-Spake2.html | spake2-0.4.2: Implementation of the SPAKE2 Password-Authenticated Key Exchange algorithm
Crypto.Spake2
Contents
Description
Say that you and someone else share a secret password, and you want to use this password to arrange some secure channel of communication. You want:
• to know that the other party also knows the secret password (maybe they're an imposter!)
• the password to be secure against offline dictionary attacks
• probably some other things
SPAKE2 is an algorithm for agreeing on a key exchange that meets these criteria. See Simple Password-Based Encrypted Key Exchange Protocols by Michel Abdalla and David Pointcheval for more details.
## How it works
### Preliminaries
Before exchanging, two nodes need to agree on the following, out-of-band:
In general:
• hash algorithm, $$H$$
• group to use, $$G$$
• arbitrary members of group to use for blinding
• a means of converting this password to a scalar of group
For a specific exchange:
• whether the connection is symmetric or asymmetric
• the IDs of the respective sides
• a shared, secret password in bytes
### Protocol
#### How we map the password to a scalar
Use HKDF expansion (see expandData) to expand the password by 16 bytes, using an empty salt, and "SPAKE2 pw" as the info.
Then, use a group-specific mapping from bytes to scalars. Since scalars are normally isomorphic to integers, this will normally be a matter of converting the bytes to an integer using standard deserialization and then turning the integer into a scalar.
#### How we exchange information
See Math for details on the mathematics of the exchange.
#### How python-spake2 works
• Message to other side is prepended with a single character, A, B, or S, to indicate which side it came from
• The hash function for generating the session key has a few interesting properties:
• uses SHA256 for hashing
• does not include password or IDs directly, but rather uses their SHA256 digests as inputs to the hash
• for the symmetric version, it sorts $$X^{\star}$$ and $$Y^{\star}$$, because neither side knows which is which
• By default, the ID of either side is the empty bytestring
## Open questions
• how does endianness come into play?
• what is Shallue-Woestijne-Ulas and why is it relevant?
Synopsis
# Documentation
Shared secret password used to negotiate the connection.
Constructor deliberately not exported, so that once a Password has been created, the actual password cannot be retrieved by other modules.
Construct with makePassword.
Instances
# The SPAKE2 protocol
data Protocol group hashAlgorithm Source #
Everything required for the SPAKE2 protocol.
Both sides must agree on these values for the protocol to work. This mostly means value equality, except for us, where each side must have complementary values.
Construct with makeAsymmetricProtocol or makeSymmetricProtocol.
makeAsymmetricProtocol :: hashAlgorithm -> group -> Element group -> Element group -> SideID -> SideID -> WhichSide -> Protocol group hashAlgorithm Source #
Construct an asymmetric SPAKE2 protocol.
makeSymmetricProtocol :: hashAlgorithm -> group -> Element group -> SideID -> Protocol group hashAlgorithm Source #
Construct a symmetric SPAKE2 protocol.
Arguments
:: (AbelianGroup group, HashAlgorithm hashAlgorithm) => Protocol group hashAlgorithm A Protocol with all the parameters for the exchange. These parameters must be shared by both sides. Construct with makeAsymmetricProtocol or makeSymmetricProtocol. -> Password The password shared between both sides. Construct with makePassword. -> (ByteString -> IO ()) An action to send a message. The ByteString parameter is this side's SPAKE2 element, encoded using the group encoding, prefixed according to the parameters in the Protocol. -> IO (Either error ByteString) An action to receive a message. The ByteString generated ought to be the protocol-prefixed, group-encoded version of the other side's SPAKE2 element. -> IO (Either (MessageError error) ByteString) Either the shared session key or an error indicating we couldn't parse the other side's message.
Perform an entire SPAKE2 exchange.
Given a SPAKE2 protocol that has all of the parameters for this exchange, generate a one-off message from this side and receive a one off message from the other.
Once we are done, return a key shared between both sides for a single session.
Note: as per the SPAKE2 definition, the session key is not guaranteed to actually work. If the other side has failed to authenticate, you will still get a session key. Therefore, you must exchange some other message that has been encrypted using this key in order to confirm that the session key is indeed shared.
Note: the "send" and "receive" actions are performed concurrently. If you have ordering requirements, consider using a TVar or MVar to coordinate, or implementing your own equivalent of spake2Exchange.
If the message received from the other side cannot be parsed, return a MessageError.
Since 0.4.0.
startSpake2 :: (MonadRandom randomly, AbelianGroup group) => Protocol group hashAlgorithm -> Password -> randomly (Spake2Exchange group) Source #
Commence a SPAKE2 exchange.
computeOutboundMessage :: AbelianGroup group => Spake2Exchange group -> Element group Source #
Determine the element (either $$X^{\star}$$ or $$Y^{\star}$$) to send to the other side.
Arguments
:: AbelianGroup group => Spake2Exchange group An initiated SPAKE2 exchange -> Element group The outbound message from the other side (i.e. inbound to us) -> Element group The final piece of key material to generate the session key.
Generate key material, $$K$$, given a message from the other side (either $$Y^{\star}$$ or $$X^{\star}$$).
This key material is the last piece of input required to make the session key, $$SK$$, which should be generated as:
$SK \leftarrow H(A, B, X^{\star}, Y^{\star}, K, pw)$
Where:
• $$H$$ is a hash function
• $$A$$ identifies the initiating side
• $$B$$ identifies the receiving side
• $$X^{star}$$ is the outbound message from the initiating side
• $$Y^{star}$$ is the outbound message from the receiving side
• $$K$$ is the result of this function
• $$pw$$ is the password (this is what makes it SPAKE2, not SPAKE1)
extractElement :: Group group => Protocol group hashAlgorithm -> ByteString -> Either (MessageError error) (Element group) Source #
Extract an element on the group from an incoming message.
Returns a MessageError if we cannot decode the message, or the other side does not appear to be the expected other side.
TODO: Need to protect against reflection attack at some point.
data MessageError e Source #
An error that occurs when interpreting messages from the other side of the exchange.
Instances
Eq e => Eq (MessageError e) Source # Methods(==) :: MessageError e -> MessageError e -> Bool #(/=) :: MessageError e -> MessageError e -> Bool # Show e => Show (MessageError e) Source # MethodsshowsPrec :: Int -> MessageError e -> ShowS #show :: MessageError e -> String #showList :: [MessageError e] -> ShowS #
formatError :: Show e => MessageError e -> Text Source #
Turn a MessageError into human-readable text.
elementToMessage :: Group group => Protocol group hashAlgorithm -> Element group -> ByteString Source #
Turn an element into a message from this side of the protocol.
Arguments
:: (Group group, HashAlgorithm hashAlgorithm) => Protocol group hashAlgorithm The protocol used for this exchange -> Element group The outbound message, generated by this, $$X^{\star}$$, or either side if symmetric -> Element group The inbound message, generated by the other side, $$Y^{\star}$$, or either side if symmetric -> Element group The calculated key material, $$K$$ -> Password The shared secret password -> ByteString A session key to use for further communication
Create a session key based on the output of SPAKE2.
$SK \leftarrow H(A, B, X^{\star}, Y^{\star}, K, pw)$
Including $$pw$$ in the session key is what makes this SPAKE2, not SPAKE1.
Note: In spake2 0.3 and earlier, The $$X^{\star}$$ and $$Y^{\star}$$ were expected to be from side A and side B respectively. Since spake2 0.4, they are the outbound and inbound elements respectively. This fixes an interoperability concern with the Python library, and reduces the burden on the caller. Apologies for the possibly breaking change to any users of older versions of spake2.
newtype SideID Source #
Bytes that identify a side of the protocol
Constructors
SideID FieldsunSideID :: ByteString
Instances
Source # Methods(==) :: SideID -> SideID -> Bool #(/=) :: SideID -> SideID -> Bool # Source # Methods(<) :: SideID -> SideID -> Bool #(<=) :: SideID -> SideID -> Bool #(>) :: SideID -> SideID -> Bool #(>=) :: SideID -> SideID -> Bool #max :: SideID -> SideID -> SideID #min :: SideID -> SideID -> SideID # Source # MethodsshowsPrec :: Int -> SideID -> ShowS #showList :: [SideID] -> ShowS #
data WhichSide Source #
Which side we are.
Constructors
SideA SideB
Instances
Source # Methods Source # MethodsenumFrom :: WhichSide -> [WhichSide] # Source # Methods Source # Methods Source # MethodsshowList :: [WhichSide] -> ShowS # | 2019-10-15 12:46:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2004530429840088, "perplexity": 6883.239731360355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986658566.9/warc/CC-MAIN-20191015104838-20191015132338-00110.warc.gz"} |
https://www.studyadda.com/question-bank/mental-ability_q25/4563/361393 | • # question_answer There are 30 plants of chiku, Guava, Pineapple and mango in a row. There is one pair of mango plants after chiku and Guava and Mango plants are followed by one Chiku and One Pineapple plant and so on. If the row begins with a plant of Chiku, then which of the following will be the last in the row? A) Guava B) Mango C) Chiku D) Pineapple
The order of given plants is as follows: $\therefore$ The required 30th plant = Pineapple | 2020-09-23 12:58:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.808841347694397, "perplexity": 4573.396204208508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400210996.32/warc/CC-MAIN-20200923113029-20200923143029-00482.warc.gz"} |
https://gallery.usgs.gov/centers/casc-sc/science/predicting-future-forage-conditions-elk-and-mule-deer-montana-and-wyoming | # Predicting Future Forage Conditions for Elk and Mule Deer in Montana and Wyoming
## Science Center Objects
Improving the quality of habitat for western big-game species, such as elk and mule deer, was identified as a priority by the Department of the Interior in 2018. Maintaining healthy herds not only supports the ecosystems where these species are found, but also the hunting and wildlife watching communities. For example, in Wyoming, big game hunting contributed over $300 million to the state’s ec... Improving the quality of habitat for western big-game species, such as elk and mule deer, was identified as a priority by the Department of the Interior in 2018. Maintaining healthy herds not only supports the ecosystems where these species are found, but also the hunting and wildlife watching communities. For example, in Wyoming, big game hunting contributed over$300 million to the state’s economy in 2015. Yet as climate conditions change, the quantity, quality, and timing of vegetation available to mule deer, elk, and other ungulates, known as forage, could shift. It’s possible that these changes could have cascading impacts on the behavior and population sizes of many species.
A key strategy used by managers to improve forage availability and adapt to change is the implementation of habitat treatments. These treatments include prescribed fire, forest thinning, and removal of invasive weeds, and are currently being planned to counteract the expected decline in mule deer habitat in the Kemmerer-Cokeville Area of southwestern Wyoming. To ensure that these activities are effective in meeting their goals, it is important for managers to have information on how forage conditions are already changing due to climate variability, and what any potential tradeoffs associated with these techniques may be.
Focusing on Montana and Wyoming, this project aims to meet this need by achieving three objectives. First, researchers will prepare summaries of past and future changes in forage by watershed, herd, and hunting area for both states. These summaries will help managers prioritize areas for management by providing baseline information about the direction, degree, and certainty of change in the quality and timing of forage. Second, researchers will assess changes in forage conditions in aspen, sagebrush, and mixed mountain shrub habitat in southwest Wyoming. They will develop maps of future forage based on scenarios that reflect probabilities of important weather patterns (such as drought), the current distribution of invasive cheatgrass (which decreases forage quality), and expected effects of planned habitat treatments, such as prescribed fire. Lastly, researchers will use these maps to evaluate the effects of treatment options on mule deer migration, fawning, and summer habitat, and on elk calving, migration, and habitat use.
The results of this project will be useful to a broad range of managers, including those with the states of Wyoming and Montana, and with federal agencies such as the U.S. Fish and Wildlife Service, the National Park Service, and the Bureau of Land Management. Successful management of elk and mule deer habitat will support healthy populations and ecosystems, as well as recreational opportunities that feed valuable revenue into local and state economies. | 2021-04-12 15:56:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2190096080303192, "perplexity": 7288.300302529158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067870.12/warc/CC-MAIN-20210412144351-20210412174351-00032.warc.gz"} |
http://www.dalnefre.com/wp/2011/01/playing-the-stooge-with-humus/ | # Playing the Stooge with Humus
Happy New Year and welcome to 2011. With the coming of the new year, I’m happy to announce the availability of a simulator/debugger environment for Humus, currently hosted at http://dalnefre.com/humus/sim/humus.html
Please note that this is a simulator of the Humus language written in JavaScript, so it runs very slowly. It is intended to allow experimentation with the language and eventually will provide debugging and visualization features to help understand execution of Humus programs. Let’s try it out with a few examples!
## Label behavior
We may as well begin with the traditional “Hello, World” program. Copy the following program into the edit panel on the simulator, then click the “Execute” button.
SEND (#Hello, #World) TO println
This should produce the following output (above the “Execute” button):
#Hello, #World
Each time you click “Execute”, the text in the edit panel is executed, so clicking repeatedly will give you multiple lines of output.
Now let’s establish some re-usable components. We will define an actor behavior which adds a label to a message and sends it to a pre-defined customer. Using that definition, we create a couple of label actors and send a message to each one.
LET label_beh(cust, label) = \msg.[ SEND (label, msg) TO cust ]
CREATE R WITH label_beh(println, #Right)
CREATE L WITH label_beh(println, #Left)
SEND #Hello TO R
SEND #World TO L
This should produce the following output:
#Right, #Hello
#Left, #World
Definitions and actor creations are persistent (until the browser is refreshed). Once a behavior has been defined, we can clear the edit panel and write new code that uses previous definitions. We can also send messages to any previously-created actors. For example, the println actor is pre-created by the system to provide a simple means of producing output.
Another pre-created actor is random. We can send a request to random consisting of a customer and a range value. The random actor will send the customer a number from 0 to range − 1. Clear the edit panel and copy the follow code:
SEND (R, 100) TO random
SEND (L, 100) TO random
Click “Execute” a few times and you should see that a new pair of random numbers are generated each time.
## Hot Potato
We can use the random service to help create an actor that forwards each messages it receives to one of two other actors chosen at random.
LET hot_potato_beh(left, right) = \msg.[
SEND (choice, 2) TO random
CREATE choice WITH \n.[
CASE n OF
0 : [ SEND msg TO left ]
1 : [ SEND msg TO right ]
END
]
SEND (SELF, msg) TO println
]
CREATE hot_potato WITH hot_potato_beh(L, R)
SEND 1 TO hot_potato
SEND 2 TO hot_potato
SEND 3 TO hot_potato
Let’s take a closer look at what’s happening here. We define the behavior function hot_potato_beh, which takes a left and right target. When a message arrives, a request is sent to random to generate either a 0 or 1 value. The customer for this request choice is a new actor that receives the random value n, and sends the message msg to either left or right depending on the value of n. Since choice is created in the context of an actor behavior, a new choice actor is created for each message. In parallel, the actor’s identity and the original message msg are sent to println, so we can watch the action.
Next we create a hot_potato actor, using this behavior, that will forward messages to either L or R, our previously defined label actors. Then we send three messages to hot_potato and observe which way the messages are forwarded. Note that the output may not occur in the order you expect. Since the message sends are processed concurrently, the messages may not arrive in the same order they are sent. Keep in mind that Humus is concurrent by default. Sequencing must be arranged explicitly, where it is required.
## Three Stooges
Let’s get a few more actors into the game. We will create the three stooges, Larry, Curly and Moe, each with hot_potato_beh and references to the other two stooges. Then we’ll kick off the fun by sending a message to one of the stooges (we picked Moe).
Caution: Before you execute this code, you will want to take note of the “Halt” button in upper-right corner of the simulator window. This button stops the actor run-time engine. You will need this to interrupt the unbounded flow of messages created by this program.
CREATE Stooges WITH \msg.[
CREATE Larry WITH hot_potato_beh(Curly, Moe)
CREATE Curly WITH hot_potato_beh(Moe, Larry)
CREATE Moe WITH hot_potato_beh(Larry, Curly)
SEND msg TO Moe
]
SEND #Potato TO Stooges
The behavior of Stooges is critically dependent on the concurrent execution of each statement in the block, and the automatic resolution of data dependencies, allowing each actor to refer to the others at creation time.
Now consider what would happen if we sent additional messages to Stooges. Each message would create its own instances of Larry, Curly and Moe; and each message would circulate among a distinct set of actors. We could arrange, instead, for the actors to be created once, and additional messages to be injected into the same set of stooges.
LET hot_potato_beh(left, right) = \msg.[
SEND (choice, 2) TO random
CREATE choice WITH \n.[
CASE n OF
0 : [ SEND msg TO left ]
1 : [ SEND msg TO right ]
END
]
SEND (SELF, msg) TO println
]
CREATE Stooges WITH \msg.[
CREATE Larry WITH hot_potato_beh(Curly, Moe)
CREATE Curly WITH hot_potato_beh(Moe, Larry)
CREATE Moe WITH hot_potato_beh(Larry, Curly)
BECOME \msg.[
SEND msg TO Moe
]
SEND msg TO SELF
]
SEND #Potato TO Stooges
SEND #Tomato TO Stooges
If you’ve “Halt”ed the actor run-time, you’ll have to reload the simulator page to reset the environment. After the reset, there will be no actors or definitions other than those provided by default, like println and random. You will need to provide all of the relevant definitions together, as shown in the code sample above.
The significant change we’ve made to the behavior of Stooges is that it BECOMEs a new behavior after creating Larry, Curly and Moe. The new behavior simply forwards any message it receives directly to Moe. Note that the initial creation behavior was triggered by receiving a message, so that message is re-sent to SELF, where it will be received by the same actor after the BECOME has taken effect. This is a form of lazy initialization. We don’t create the three stooges until there is actually a message for them to handle.
## Incremental Enhancements
It’s not really fair that we always send new messages to Moe first. We should arrange to select our target at random. We can use the same technique that we used to choose between left and right in hot_potato_beh.
LET hot_potato_beh(left, right) = \msg.[
SEND (choice, 2) TO random
CREATE choice WITH \n.[
CASE n OF
0 : [ SEND msg TO left ]
1 : [ SEND msg TO right ]
END
]
SEND (SELF, msg) TO println
]
CREATE Stooges WITH \msg.[
CREATE Larry WITH hot_potato_beh(Curly, Moe)
CREATE Curly WITH hot_potato_beh(Moe, Larry)
CREATE Moe WITH hot_potato_beh(Larry, Curly)
BECOME \msg.[
SEND (choice, 3) TO random
CREATE choice WITH \n.[
CASE n OF
0 : [ SEND msg TO Larry ]
1 : [ SEND msg TO Curly ]
2 : [ SEND msg TO Moe ]
END
]
]
SEND msg TO SELF
]
SEND #Potato TO Stooges
SEND #Tomato TO Stooges
If you’ve been executing these samples along the way, you may have noticed that it can be difficult to keep track of who is throwing what. Let’s enhance the output to be more informative. For each hand-off, we want to print which actor is throwing, what is being thrown, and which actor is catching.
LET hot_potato_beh(left, right) = \msg.[
SEND (choice, 2) TO random
CREATE choice WITH hand_off_beh(SELF, msg, left, right)
]
LET hand_off_beh(source, msg, left, right) = \n.[
LET target = \$(
CASE n OF
0 : left
1 : right
END
)
SEND (source, msg, target) TO println
SEND msg TO target
]
CREATE Stooges WITH \msg.[
CREATE Larry WITH hot_potato_beh(Curly, Moe)
CREATE Curly WITH hot_potato_beh(Moe, Larry)
CREATE Moe WITH hot_potato_beh(Larry, Curly)
BECOME \msg.[
SEND (choice, 3) TO random
CREATE choice WITH \n.[
CASE n OF
0 : [ SEND msg TO Larry ]
1 : [ SEND msg TO Curly ]
2 : [ SEND msg TO Moe ]
END
]
]
SEND msg TO SELF
]
SEND #Potato TO Stooges
SEND #Tomato TO Stooges
SEND #Banana TO Stooges
We’ve extracted the behavior of the hand-off so we can collect all of the information needed in one place. We define target to be the value of either left or right depending on the value of n. Then we can use target to both create our enhanced output and pass along the original message msg. Just for kicks, we’ve also thrown a #Banana into the party.
## Conclusion
With all this produce flying around at random, we’re clearly working with non-deterministic concurrent activities. We could imagine events like these driving an amusing animation showing the three stooges passing these objects back and forth among themselves. As we built up this example, piece by piece, we’ve illustrated some of the interesting characteristics of the Humus language. For full effect, I hope you’ve run these samples in the Humus Simulator/Debugger and observed their behavior directly. | 2017-02-20 13:12:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3598404824733734, "perplexity": 4777.211272389961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170562.59/warc/CC-MAIN-20170219104610-00187-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://math.wikia.com/wiki/Point-set_topology | ## FANDOM
1,097 Pages
Point-set topology is a fundamental branch of topology, sometimes referred to as general topology, which deals with the concepts of topological spaces and the mathematical structures defined on such spaces.
## Topology and open sets
Given a set $X$ , a family of subsets $\tau$ of $X$ is said to be a topology of $X$ if the following three conditions hold:
1. $X,\varnothing\in\tau$ (The empty set and $X$ are both elements of $\tau$)
2. $\{A_i\}_{i\in I}\in\tau\rArr\bigcup_{i\in I}A_i\in\tau$ (Any union of elements of $\tau$ is an element $\tau$)
3. $A,B\in\tau\rArr A\cap B\in\tau$ (Any finite intersection of elements of $\tau$ is an element of $\tau$)
The members of a topology are called open sets of the topology.
## Topological space
A topological space is a set $X$ , known as the underlying set, together with a topology T of $X$ .
## Basis for a topology
A basis for a topology on $X$ is a collection of subsets of $X$ , known as basis elements, such that the following two properties hold:
1. For every $x\in X$ there is at least one basis element $B$ that contains $x$ .
2. If $x$ is an element of the intersection of two basis elements $A,B$ , then there exists a basis element $C$ such that $C\subset A\cap B$ .
Given a basis for a topology, one can define the topology generated by the basis as the collection of all sets $A$ such that for each $x\in A$ there is a basis element $B$ such that $x\in B$ and $B\subset A$.
## Closed sets
A set $C$ is defined to be closed if its complement in $X$ is an open set in the given topology.
## Neighborhoods
A set $N$ is said to be a neighborhood of a point $a$ if it is an open set which contains the point $a$ . In some cases the term neighborhood is used to describe a set which contains an open set containing $a$ .
## Interior and closure
The interior of a subset $A$ of $X$ is defined to be the union of all open sets contained in $A$ .
The closure of a subset $A$ of $X$ is defined as the intersection of all closed sets containing $A$ .
## Limit points
A point $x$ of $X$ is said to be a limit point of a subset A of $X$ if every neighborhood of $x$ intersects A in at least one point other than $x$ .
## Continuous functions
A function $f:X\to Y$ is said to be continuous if for each subset $A$ of $Y$ , the set $f^{-1}(A)$ is an open set of $X$ .
## Homeomorphisms
A bijective function $f:X\to Y$ is said to be a homeomorphism if both $f$ and its inverse, $f^{-1}:Y\to X$ , are continuous.
If there exists a homeomorphism between two topological spaces X and Y, then the spaces are said to be homeomorphic.
Any property that is invariant under homeomorphisms is known as a topological property.
A homeomorphism is also dubbed a topological equivalence among mathematicians. | 2018-03-20 00:09:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 54, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9365842342376709, "perplexity": 78.8748734077747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647244.44/warc/CC-MAIN-20180319234034-20180320014034-00616.warc.gz"} |
https://unacademy.com/lesson/comparision-of-fractions-in-hindi/WQ9IGGRG | to enroll in courses, follow best educators, interact with the community and track your progress.
6 lessons,
1h 12m
Enroll
5
Comparision of Fractions (in Hindi)
1 plays
More
Comparision of fractions using four methods in been discussed.
Piyush Goyal
I am currently a Btech 1 year student pursuing my degree from IIT Mandi in the field of Computer Science.
U
its awsm lecture.....plz continue further sir ...
PRIYANSH DUBEY
7 months ago
Thanks
1. UPTET-2019 14 1 FRACTIONS (Part 3) -Piyush Goyal
2. Comparision of fraction If the denominator is same, then the greater the numerator greater the value of fraction. e.g. 2/3 > 1/3 .If the numerator is same, then the smaller the denominator greater the value of fraction. e.g. 2/4 > 2/5
3. TRICK .If the difference b/w the values of the numerator and the denominator is same, then the greater the denominator greater the value of the fraction. In this case 2-1-1 And 4-3 1 Hence as 3>1 3/4>1/2
4. Comparision of fraction when neithe the numerator nor the denominator is same 1) First make the denominator of all fraction same by multiplying them with a number. 2) This number is the LCM(lowest common multiple) of all the given denominators divided by respective denominator. 3) The numerators are also multiplied with the respective number. 4) Now you can compare the numerator to obtain the result.
5. Problem 4 ardden A rranpe in axendiy oder and 3 2. 7 5 LCM (7,5,9 , 2,5) 630 So 37o (30 5 x 12 G
6. 9 315 2 x 315 3 x126 5 x 126 315 630 . 378 2 these we 2.
7. Comparision of fraction when neithe the numerator nor the denominator is same Alternative method of arranging fractions quickly is to: 1) Roughly calculate the decimal values of fraction. 2) Then they can be compared using number line e.g.2/3 and 5/9 2/3 0.66 5/9 0.55 Hence 5/9 < 2/3
8. Problem 5 "action in ascend; 3 6 C.) , , ,5. 3 96.
9. LCM 8, 6, 3,9) Q 8,6, 3, 7 a 8 x 9 5 x12 6 X12
10. 2 = 2x24-48 3 34 5.-288 -40 72 4 9
11. Problem 6 1 O UPTET -2ou 0) 3 MUs | 2019-10-20 18:56:56 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9404394626617432, "perplexity": 2950.608517220058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00175.warc.gz"} |
https://c4science.ch/w/bioimaging_and_optics_platform_biop/covid-19/ | # Document Deleted
This document has been deleted. You can edit it to put new content here, or use history to revert to an earlier version.
Tags
None
Subscribers
None
Last Author
oburri
Last Edited
Tue, Aug 16, 13:15
# Document Hierarchy
### Event Timeline
oburri created this document.May 1 2020, 10:42
oburri edited the content of this document. (Show Details)
oburri edited the content of this document. (Show Details)
oburri edited the content of this document. (Show Details)
oburri edited the content of this document. (Show Details)
chiarutt edited the content of this document. (Show Details)May 1 2020, 11:59
chiarutt edited the content of this document. (Show Details)May 1 2020, 13:47
chiarutt edited the content of this document. (Show Details)May 4 2020, 14:25
oburri deleted this document.Tue, Aug 16, 13:15 | 2022-08-19 02:19:17 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216120839118958, "perplexity": 10659.656936108717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00267.warc.gz"} |
https://answers.opencv.org/questions/118395/revisions/ | # Revision history [back]
### Opencv(3.10) android- How to draw an object's trajectory
I have a project in college where I have to draw the trajectory of an object based on the color of it. Is being built based on colorBlobDetection. This is part of the code I have so far:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
mRgba = inputFrame.rgba();
if (mIsColorSelected) {
mDetector.process(mRgba);
List<MatOfPoint> contours = mDetector.getContours();
Log.e(TAG, "Contours count: " + contours.size());
Imgproc.drawContours(mRgba, contours, -1, CONTOUR_COLOR);
List<Moments> mu = new ArrayList<Moments>(contours.size());
for (int i = 0; i < contours.size(); i++) {
Moments p = mu.get(i);
int xContour = (int) (p.get_m10() / p.get_m00());
int yContour = (int) (p.get_m01() / p.get_m00());
Point ponto = new Point(xContour, yContour);
//Imgproc.line(mRgba, ponto, ponto, CONTOUR_COLOR, 5, Imgproc.LINE_AA, 0);
}
At the moment I'm picking up the center of each contour.
Through the positions of the contours I want to draw their trajectories when they move
Ps. This 'Imgproc.line' was just to test if the center of each contour is correct. | 2019-11-17 06:49:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25868067145347595, "perplexity": 5575.544760635513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668896.47/warc/CC-MAIN-20191117064703-20191117092703-00090.warc.gz"} |
https://stacks.math.columbia.edu/tag/07YS | • for $(x, A)$ as in (1) and a ring map $A \to B$ setting $y = x|_{\mathop{\mathrm{Spec}}(B)}$ there is a functoriality map $E_ x \to E_ y$ in $D(A)$.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | 2022-06-25 17:20:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9705182313919067, "perplexity": 593.8768058414807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00520.warc.gz"} |
https://kb.osu.edu/dspace/handle/1811/8221 | # THE FAR INFRARED SPECTRUM OF HYDROGEN $PEROXIDE^{*}$
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/8221
Files Size Format View
1963-E-05.jpg 75.45Kb JPEG image
Title: THE FAR INFRARED SPECTRUM OF HYDROGEN $PEROXIDE^{*}$ Creators: Hunt, R. H.; Peters, C. W. Issue Date: 1963 Publisher: Ohio State University Abstract: “The far infra-reel spectrum of $H_{2}O_{2}$ has been obtained in the region $14--690\;cm^{-1}$ with an average resolution of $0.3\;cm^{-1}$. Seven hindered rotation bands have been identified and the first five excited states of the internal rotation found to be; 11:43, 254.2, 370.8, 569.3, 7760.0 (in $cm^{-1}$ above the ground state). A good fit to these levels is obtained, in accordance with the theory of Leacock and Hecht (see above abstract), using the hindering potential $V(x) = 993 \cos x + 636 \cos 2x + 44 \cos 3x$ and the bond parameters of Redington, Olsen, and $Cross^{1}$. This potential has a \emph{cis} harrier of $2460\;cm^{-1}$ a \emph{trans} Earner of $386\;cm^{-1}$ and a potential minimum $111.5^{\circ}$ from the \emph{cis} configuration.” Description: $^{*}$Work supported in part U. S. Air Force. $^{1}$Reddington, Olsen and Cross, J. Chem. Phys. 36, 1311 (1962). Author Institution: Randall Laboratory, University of Michigan URI: http://hdl.handle.net/1811/8221 Other Identifiers: 1963-E-5 | 2016-08-28 02:35:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5214267373085022, "perplexity": 4321.717834797511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982932803.39/warc/CC-MAIN-20160823200852-00214-ip-10-153-172-175.ec2.internal.warc.gz"} |
http://geomblog.blogspot.com/2008/09/fonts.html?showComment=1220404380000 | Monday, September 01, 2008
Fonts !
John Holbo at CT does a not-review of books on fonts (or faces ? I'm confused now). In any case, this is clearly a book I need to get. | 2014-03-08 11:09:43 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804970383644104, "perplexity": 4624.563381964}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654345/warc/CC-MAIN-20140305060734-00047-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://www.coin-or.org/CppAD/Doc/exp_eps_for0.htm | Prev Next Index-> contents reference index search external Up-> CppAD Introduction exp_eps exp_eps_for0 CppAD-> Install Introduction AD ADFun preprocessor multi_thread utility ipopt_solve Example speed Appendix Introduction-> exp_2 exp_eps exp_apx.cpp exp_eps-> exp_eps.hpp exp_eps.cpp exp_eps_for0 exp_eps_for1 exp_eps_rev1 exp_eps_for2 exp_eps_rev2 exp_eps_cppad exp_eps_for0-> exp_eps_for0.cpp Headings-> Mathematical Form Operation Sequence ---..Variable ---..Parameter ---..Index ---..Code ---..Operation ---..Zero Order ---..Sweep Return Value Comparisons Verification Exercises
exp_eps: Operation Sequence and Zero Order Forward Sweep
Mathematical Form
Suppose that we use the algorithm exp_eps.hpp to compute exp_eps(x, epsilon) with x is equal to .5 and epsilon is equal to .2. For this case, the mathematical form for the operation sequence corresponding to the exp_eps is $$f( x , \varepsilon ) = 1 + x + x^2 / 2$$ Note that, for these particular values of x and epsilon , this is the same as the mathematical form for exp_2 .
Operation Sequence
We consider the operation sequence corresponding to the algorithm exp_eps.hpp with the argument x is equal to .5 and epsilon is equal to .2.
Variable
We refer to values that depend on the input variables x and epsilon as variables.
Parameter
We refer to values that do not depend on the input variables x or epsilon as parameters. Operations where the result is a parameter are not included in the zero order sweep below.
Index
The Index column contains the index in the operation sequence of the corresponding atomic operation and variable. A Forward sweep starts with the first operation and ends with the last.
Code
The Code column contains the C++ source code corresponding to the corresponding atomic operation in the sequence.
Operation
The Operation column contains the mathematical function corresponding to each atomic operation in the sequence.
Zero Order
The Zero Order column contains the zero order derivative for the corresponding variable in the operation sequence. Forward mode refers to the fact that these coefficients are computed in the same order as the original algorithm; i.e., in order of increasing index.
Sweep
Index Code Operation Zero Order 1 abs_x = x; $v_1 = x$ $v_1^{(0)} = 0.5$ 2 temp = term * abs_x; $v_2 = 1 * v_1$ $v_2^{(0)} = 0.5$ 3 term = temp / Type(k); $v_3 = v_2 / 1$ $v_3^{(0)} = 0.5$ 4 sum = sum + term; $v_4 = 1 + v_3$ $v_4^{(0)} = 1.5$ 5 temp = term * abs_x; $v_5 = v_3 * v_1$ $v_5^{(0)} = 0.25$ 6 term = temp / Type(k); $v_6 = v_5 / 2$ $v_6^{(0)} = 0.125$ 7 sum = sum + term; $v_7 = v_4 + v_6$ $v_7^{(0)} = 1.625$
Return Value
The return value for this case is $$1.625 = v_7^{(0)} = f ( x^{(0)} , \varepsilon^{(0)} )$$
Comparisons
If x were negative, or if epsilon were a much smaller or much larger value, the results of the following comparisons could be different: if( Type(0) > x ) while(term > epsilon) This in turn would result in a different operation sequence. Thus the operation sequence above only corresponds to exp_eps.hpp for values of x and epsilon within a certain range. Note that there is a neighborhood of $x = 0.5$ for which the comparisons would have the same result and hence the operation sequence would be the same.
Verification
The file exp_eps_for0.cpp contains a routine that verifies the values computed above. It returns true for success and false for failure.
Exercises
1. Suppose that $x^{(0)} = .1$, what is the result of a zero order forward sweep for the operation sequence above; i.e., what are the corresponding values for $v_1^{(0)} , v_2^{(0)} , \ldots , v_7^{(0)}$.
2. Create a modified version of exp_eps_for0.cpp that verifies the values you obtained for the previous exercise.
3. Create and run a main program that reports the result of calling the modified version of exp_eps_for0.cpp in the previous exercise.
Input File: introduction/exp_eps.omh | 2018-01-21 20:28:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8803419470787048, "perplexity": 2301.0916324734117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890874.84/warc/CC-MAIN-20180121195145-20180121215145-00404.warc.gz"} |
http://math.stackexchange.com/questions/222320/proving-there-is-an-interval-where-fx-is-positive/222323 | # Proving there is an interval where $f(x)$ is positive
Let $f(x)$ be a continuous real function s.t $f(x_0) > 0$
Prove: There is some interval of the form $(x_0 -\delta, x_0 + \delta)$ where $f$ is positive.
Proof:
Since $f$ is continuous: $\forall \,{\epsilon > 0}\,\, \exists \,{\delta>0}$ s.t. $|x- x_0|<\delta \implies |f(x) - f(x_0)| < \epsilon$
By contradiction suppose there is no interval $(x_0 - \delta, x_0 + \delta)$ where $f(x)$ is positive. This means that $f(x_0) - \epsilon < f(x) < f(x_0) + \epsilon < 0$. Hence we have a contradiction since $\epsilon$ and $f(x_0)$ are both greater than zero.
1. Is this correct?
2. Could someone provide a non-contradiction proof?
-
You need to quantify your variables, because right now your proof doesn't make sense. – wj32 Oct 27 '12 at 21:45
It is not a proof. For a while you might try to use fewer logical symbols. The idea is simple. We have $f(x_0)=b\gt 0$. If $x$ is close enough to $x_0$, then $f(x)$ is very close to $b$ and therefore positive. More formally, pick $\epsilon=b/2$, say. Then there is a $\delta$ such that if $|x-x_0|\lt \delta$, then $|f(x)-b|\lt b/2$, and therefore by Triangle Inequality $f(x)\gt b-b/2\gt 0$. Note how the formal stuff comes from the geometry. – André Nicolas Oct 27 '12 at 21:55
Your proof by contradiction is incorrect. Specifically, the following statements are incorrect.
This means that $f(x_0) - \epsilon < f(x) < f(x_0) + \epsilon < 0$. Hence we have a contradiction since $\epsilon$ and $f(x_0)$ are both greater than zero.
You can argue by contradiction but what you have is not the right proof.
A direct proof is simple for this case. Choose $\epsilon = f(x_0)$ in your continuity criterion to get your $\delta$.
Now $f(x) > 0$ for $x \in (x_0 - \delta, x_0 + \delta)$
-
Is my proof correct? – CodeKingPlusPlus Oct 27 '12 at 21:38 @CodeKingPlusPlus: No. Where does your $epsilon$ come from? And from $f(x)<0$ and $f(x) show 1 more comment Choose$\epsilon = \frac{f(x_0)}{2}> 0$. Then there exists a$\delta>0$such that for$|x-x_0| < \delta$,$|f(x)-f(x_0)| < \epsilon = \frac{f(x_0)}{2}$. Then$-\frac{f(x_0}{2} < f(x)-f(x_0)$from which we get$0 < \frac{f(x_0)}{2} < f(x)$for all$x$such that$|x-x_0| < \delta$. Alternatively, a proof by contradiction is straightforward as well: Suppose on every interval of the form$I_\delta = (x_0-\delta, x_0+\delta)$, there is some$x \in I_\delta$such that$f(x) \leq 0$. Then choose$\delta = \frac{1}{n}$and let$x_n $be the corresponding$x \in I_{\frac{1}{n}}$. Then clearly$x_n \to x_0$, and since$f$is continuous,$f(x_n) \to f(x_0) >0$which contradicts$f(x_n) \leq 0\$.
- | 2013-05-18 08:32:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9916326403617859, "perplexity": 326.2363177511929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381630/warc/CC-MAIN-20130516092621-00028-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://www.thebarnpinetop.com/phpMyAdmin/doc/html/_sources/import_export.txt | Import and export ================= Import ++++++ To import data, go to the "Import" tab in phpMyAdmin. To import data into a specific database or table, open the database or table before going to the "Import" tab. In addition to the standard Import and Export tab, you can also import an SQL file directly by dragging and dropping it from your local file manager to the phpMyAdmin interface in your web browser. If you are having troubles importing big files, please consult :ref:faq1_16. You can import using following methods: Form based upload Can be used with any supported format, also (b|g)zipped files, e.g., mydump.sql.gz . Form based SQL Query Can be used with valid SQL dumps. Using upload directory You can specify an upload directory on your web server where phpMyAdmin is installed, after uploading your file into this directory you can select this file in the import dialog of phpMyAdmin, see :config:option:$cfg['UploadDir']. phpMyAdmin can import from several various commonly used formats. CSV --- Comma separated values format which is often used by spreadsheets or various other programs for export/import. .. note:: When importing data into a table from a CSV file where the table has an 'auto_increment' field, make the 'auto_increment' value for each record in the CSV field to be '0' (zero). This allows the 'auto_increment' field to populate correctly. It is now possible to import a CSV file at the server or database level. Instead of having to create a table to import the CSV file into, a best-fit structure will be determined for you and the data imported into it, instead. All other features, requirements, and limitations are as before. CSV using LOAD DATA ------------------- Similar to CSV, only using the internal MySQL parser and not the phpMyAdmin one. ESRI Shape File --------------- The ESRI shapefile or simply a shapefile is a popular geospatial vector data format for geographic information systems software. It is developed and regulated by Esri as a (mostly) open specification for data interoperability among Esri and other software products. MediaWiki --------- MediaWiki files, which can be exported by phpMyAdmin (version 4.0 or later), can now also be imported. This is the format used by Wikipedia to display tables. Open Document Spreadsheet (ODS) ------------------------------- OpenDocument workbooks containing one or more spreadsheets can now be directly imported. When importing an ODS speadsheet, the spreadsheet must be named in a specific way in order to make the import as simple as possible. Table name ~~~~~~~~~~ During import, phpMyAdmin uses the sheet name as the table name; you should rename the sheet in your spreadsheet program in order to match your existing table name (or the table you wish to create, though this is less of a concern since you could quickly rename the new table from the Operations tab). Column names ~~~~~~~~~~~~ You should also make the first row of your spreadsheet a header with the names of the columns (this can be accomplished by inserting a new row at the top of your spreadsheet). When on the Import screen, select the checkbox for "The first line of the file contains the table column names;" this way your newly imported data will go to the proper columns. .. note:: Formulas and calculations will NOT be evaluated, rather, their value from the most recent save will be loaded. Please ensure that all values in the spreadsheet are as needed before importing it. SQL --- SQL can be used to make any manipulation on data, it is also useful for restoring backed up data. XML --- XML files exported by phpMyAdmin (version 3.3.0 or later) can now be imported. Structures (databases, tables, views, triggers, etc.) and/or data will be created depending on the contents of the file. The supported xml schemas are not yet documented in this wiki. Export ++++++ phpMyAdmin can export into text files (even compressed) on your local disk (or a special the webserver :config:option:$cfg['SaveDir'] folder) in various commonly used formats: CodeGen ------- NHibernate _ file format. Planned versions: Java, Hibernate, PHP PDO, JSON, etc. So the preliminary name is codegen. CSV --- Comma separated values format which is often used by spreadsheets or various other programs for export/import. CSV for Microsoft Excel ----------------------- This is just preconfigured version of CSV export which can be imported into most English versions of Microsoft Excel. Some localised versions (like "Danish") are expecting ";" instead of "," as field separator. Microsoft Word 2000 ------------------- If you're using Microsoft Word 2000 or newer (or compatible such as OpenOffice.org), you can use this export. JSON ---- JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write and it is easy for machines to parse and generate. .. versionchanged:: 4.7.0 The generated JSON structure has been changed in phpMyAdmin 4.7.0 to produce valid JSON data. The generated JSON is list of objects with following attributes: .. js:data:: type Type of given object, can be one of: header Export header containing comment and phpMyAdmin version. database Start of a database marker, containing name of database. table Table data export. .. js:data:: version Used in header :js:data:type and indicates phpMyAdmin version. .. js:data:: comment Optional textual comment. .. js:data:: name Object name - either table or database based on :js:data:type. .. js:data:: database Database name for table :js:data:type. .. js:data:: data Table content for table :js:data:type. Sample output: .. code-block:: json [ { "comment": "Export to JSON plugin for PHPMyAdmin", "type": "header", "version": "4.7.0-dev" }, { "name": "cars", "type": "database" }, { "data": [ { "car_id": "1", "description": "Green Chrysler 300", "make_id": "5", "mileage": "113688", "price": "13545.00", "transmission": "automatic", "yearmade": "2007" } ], "database": "cars", "name": "cars", "type": "table" }, { "data": [ { "make": "Chrysler", "make_id": "5" } ], "database": "cars", "name": "makes", "type": "table" } ] LaTeX ----- If you want to embed table data or structure in LaTeX, this is right choice for you. LaTeX is a typesetting system that is very suitable for producing scientific and mathematical documents of high typographical quality. It is also suitable for producing all sorts of other documents, from simple letters to complete books. LaTeX uses TeX as its formatting engine. Learn more about TeX and LaTeX on the Comprehensive TeX Archive Network _ also see the short description od TeX _. The output needs to be embedded into a LaTeX document before it can be rendered, for example in following document: .. code-block:: latex \documentclass{article} \title{phpMyAdmin SQL output} \author{} \usepackage{longtable,lscape} \date{} \setlength{\parindent}{0pt} \usepackage[left=2cm,top=2cm,right=2cm,nohead,nofoot]{geometry} \pdfpagewidth 210mm \pdfpageheight 297mm \begin{document} \maketitle % insert phpMyAdmin LaTeX Dump here \end{document} MediaWiki --------- Both tables and databases can be exported in the MediaWiki format, which is used by Wikipedia to display tables. It can export structure, data or both, including table names or headers. OpenDocument Spreadsheet ------------------------ Open standard for spreadsheet data, which is being widely adopted. Many recent spreadsheet programs, such as LibreOffice, OpenOffice, Microsoft Office or Google Docs can handle this format. OpenDocument Text ----------------- New standard for text data which is being widely addopted. Most recent word processors (such as LibreOffice, OpenOffice, Microsoft Word, AbiWord or KWord) can handle this. PDF --- For presentation purposes, non editable PDF might be best choice for you. PHP Array --------- You can generate a php file which will declare a multidimensional array with the contents of the selected table or database. SQL --- Export in SQL can be used to restore your database, thus it is useful for backing up. The option 'Maximal length of created query' seems to be undocumented. But experiments has shown that it splits large extended INSERTS so each one is no bigger than the given number of bytes (or characters?). Thus when importing the file, for large tables you avoid the error "Got a packet bigger than 'max_allowed_packet' bytes". .. seealso:: https://dev.mysql.com/doc/refman/5.7/en/packet-too-large.html Data Options ~~~~~~~~~~~~ **Complete inserts** adds the column names to the SQL dump. This parameter improves the readability and reliability of the dump. Adding the column names increases the size of the dump, but when combined with Extended inserts it's negligible. **Extended inserts** combines multiple rows of data into a single INSERT query. This will significantly decrease filesize for large SQL dumps, increases the INSERT speed when imported, and is generally recommended. .. seealso:: http://www.scriptalicious.com/blog/2009/04/complete-inserts-or-extended-inserts-in-phpmyadmin/ Texy! ----- Texy! _ markup format. You can see example on Texy! demo _. XML --- Easily parsable export for use with custom scripts. .. versionchanged:: 3.3.0 The XML schema used has changed as of version 3.3.0 YAML ---- YAML is a data serialization format which is both human readable and computationally powerful ( ). | 2020-09-27 07:03:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.493878036737442, "perplexity": 12003.833110350335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400265461.58/warc/CC-MAIN-20200927054550-20200927084550-00798.warc.gz"} |
https://www.physicsforums.com/threads/what-is-the-temperature-of-the-bath.92191/ | # What is the temperature of the bath?
#### lw11011
At 50 degrees Celsius, the resistance of a segment of gold wire is 54. When the wire is placed in a liquid bath, the resistance increases to 189. The temperature coefficient is 0.0034 (degrees Celsius)^-1 at 20 degrees Celsius.
What is the temperature of the bath? Answer in units of degrees Celsius.
Related Introductory Physics Homework News on Phys.org
#### Tide
Homework Helper
What have you done so far?
#### lw11011
I tried to use the equation:
p-po = (po)a(T-To)
To= reference temperature
po= resistivity at that temperature
a= temperature coefficient of resistivity
So I did:
189-54 = (54)(0.0034)(T-50)
But I know the answer I got that way is wrong.
#### CarlB
Homework Helper
It sure looks to me like you've got the right equation. Are you sure you solved it correctly?
The form of the equation that I am familiar with is
$$R = \alpha T$$
where T is given in degrees Kelvin. But this reduces to your form in either degrees K or degrees C (or F).
Carl
#### lw11011
When I use the formula I mentioned above, I get T=785.29 degrees Celsius. I don't know if I solved it incorrectly but I did:
189-54=(54)(0.0034)(T-50)
135=(0.1836)(T-50)
735.29=T-50
T= 785.29
#### CarlB
Homework Helper
Sure looks right to me. But that's a mighty hot bath. Maybe a molten salt bath.
Carl
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving | 2019-06-24 09:33:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6830058097839355, "perplexity": 3118.038119765449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999298.86/warc/CC-MAIN-20190624084256-20190624110256-00247.warc.gz"} |
http://mathoverflow.net/questions/95372/what-is-the-usual-topology-of-c-infty-cm | # What is the usual topology of $C^\infty_c(M)$
If $M$ is a smooth paracompact manifold, then what is the usual topology of $C^\infty_c(M)$, i.e., the smooth function with compact support?
-
Crossposted to math.SE: math.stackexchange.com/q/137701/264 – Zev Chonoles Apr 27 '12 at 16:29
I don't know if this is usual, but it should be possible to define a metric by $$d(f,g) = \sum_n \frac{1}{2^{n+A(n)}}\sum_{|\alpha|=n}\frac{\left|\sup_K\frac\partial{\partial x^\alpha}(f-g)\right|}{1 + \left|\sup_K\frac\partial{\partial x^\alpha}(f-g)\right|}$$ where $A(n)$ is the number of $\alpha$ s.t. $|\alpha|=n$. The space should be complete in the induced topology. – Todd Leason Apr 27 '12 at 17:46
Added: $K$ has to be taken to include the support of $f,g$. – Todd Leason Apr 27 '12 at 17:50
Todd: smoothly truncating $e^{-x^2}$ on $\mathbb R$ so as to obtain a sequence of compactly supported functions appropriately should give a Cauchy sequence in that metric which does not converge, no? – Mariano Suárez-Alvarez Apr 27 '12 at 18:38
Topologizing $C_c^\infty(M)\subseteq C^\infty(M)$ with the subspace topology (where $C^\infty(M)$ has the Whitney topology, generated by the seminorms $\left|\sup_K\frac\partial{\partial x^\alpha}f\right|$), makes it a dense subspace; in particular it is not itself complete. So I wouldn't really call this the "usual topology" on $C_c^\infty(M)$. (it would be sort of like saying the usual topology on $C(M)$ is given by the $L^2$ norm).
To me the usual topology is the inductive limit topology $C_c^\infty(M)=\lim_{K\subseteq M}C_c^\infty(K)$ (which Mariano calls the colimit topology). This topology is not metrizable when $M$ is noncompact (since it's not even first-countable), but is "nicer" in the sense that it gives a well-understood dual space, namely the space of distributions on $M$.
In comparison, the dual space of $C^\infty(M)$ with the Whitney topology is the space of compactly supported distributions on $M$. | 2016-07-24 14:59:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9768203496932983, "perplexity": 153.72737993425486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824109.37/warc/CC-MAIN-20160723071024-00119-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://search.r-project.org/CRAN/refmans/confintr/html/ci_cor.html | ci_cor {confintr} R Documentation
## Confidence Interval for Correlation Coefficients
### Description
This function calculates confidence intervals for a population correlation coefficient. For Pearson correlation, "normal" confidence intervals are available (by stats::cor.test). Also bootstrap confidence intervals are supported and are the only option for rank correlations.
### Usage
ci_cor(
x,
y = NULL,
probs = c(0.025, 0.975),
method = c("pearson", "kendall", "spearman"),
type = c("normal", "bootstrap"),
boot_type = c("bca", "perc", "norm", "basic"),
R = 9999,
seed = NULL,
...
)
### Arguments
x A numeric vector or a matrix/data.frame with exactly two numeric columns. y A numeric vector (only used if x is a vector). probs Error probabilites. The default c(0.025, 0.975) gives a symmetric 95% confidence interval. method Type of correlation coefficient, one of "pearson" (default), "kendall", or "spearman". For the latter two, only bootstrap confidence intervals are supported. The names can be abbreviated. type Type of confidence interval. One of "normal" (the default) or "bootstrap" (the only option for rank-correlations). boot_type Type of bootstrap confidence interval ("bca", "perc", "norm", "basic"). Only used for type = "bootstrap". R The number of bootstrap resamples. Only used for type = "bootstrap". seed An integer random seed. Only used for type = "bootstrap". ... Further arguments passed to boot::boot.
### Details
Bootstrap confidence intervals are calculated by the package "boot", see references. The default bootstrap type is "bca" (bias-corrected accelerated) as it enjoys the property of being second order accurate as well as transformation respecting (see Efron, p. 188).
### Value
A list with class cint containing these components:
• parameter: The parameter in question.
• interval: The confidence interval for the parameter.
• estimate: The estimate for the parameter.
• probs: A vector of error probabilities.
• type: The type of the interval.
• info: An additional description text for the interval.
### References
1. Efron, B. and Tibshirani R. J. (1994). An Introduction to the Bootstrap. Chapman & Hall/CRC.
2. Canty, A and Ripley B. (2019). boot: Bootstrap R (S-Plus) Functions.
### Examples
ci_cor(iris[1:2])
ci_cor(iris[1:2], type = "bootstrap", R = 999, seed = 1)
ci_cor(iris[1:2], method = "spearman", type = "bootstrap", R = 999, seed = 1)
ci_cor(iris[1:2], method = "k", type = "bootstrap", R = 999, seed = 1)
[Package confintr version 0.1.2 Index] | 2022-05-27 15:36:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36532729864120483, "perplexity": 10009.892196291443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00331.warc.gz"} |
http://specialfunctionswiki.org/index.php/Bell_numbers | # Bell numbers
The Bell numbers $B_n$ are defined by the formula $$B_n = \displaystyle\sum_{k=0}^n S(n,k),$$ where $S(n,k)$ denotes the Stirling numbers of the second kind. | 2018-04-24 20:40:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.998284101486206, "perplexity": 47.58977404058948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947328.78/warc/CC-MAIN-20180424202213-20180424222213-00439.warc.gz"} |
https://www.math.princeton.edu/events/matchings-and-covers-families-d-intervals-and-their-duals-2017-03-02t200003 | # Matchings and covers in families of d-intervals and their duals
-
Shira Zerbib , University of Michigan
Fine Hall 224
A classical theorem of Gallai is that in any family of closed intervals in the real line, the maximal number of disjoint intervals is equal to the minimal number of points piercing all intervals. Tardos and Kaiser extended this result (appropriately modified) to families of d-intervals'', that is, hypergraphs in which each edge is the union of d intervals. We prove an analogous result for dual d-interval hypergraphs, in which the roles of the points and the edges are reversed. The proof is topological. We also discuss a recent Helly-type result for families of d-intervals. | 2018-06-25 11:40:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5366700291633606, "perplexity": 708.2449506287356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867666.97/warc/CC-MAIN-20180625111632-20180625131632-00042.warc.gz"} |
https://www.sparrho.com/item/magnetic-field-adjustment-device-and-magnetic-field-adjustment-method-for-superconducting-magnet/108308d/ | # Magnetic field adjustment device and magnetic field adjustment method for superconducting magnet
Imported: 10 Mar '17 | Published: 27 Nov '08
Hajime Tamura
USPTO - Utility Patents
## Abstract
A magnetic field adjustment device for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet, each of the magnetic material shim mechanisms including a combined shim tray (14 in FIG. 2) in which a plurality of divided shim trays (11 and 12) and shim tray spacers (13) inserted between the divided shim trays are rectilinearly coupled, and magnetic material shims (101) for magnetic field adjustments as are accommodated in the divided shim trays. The magnetic material shim mechanisms afford a high versatility of magnetic material shim arrangements, whereby a correctable range of a magnetic field uniformity can be widened.
## Description
### BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to a magnetic field adjustment device and a magnetic field adjustment method for a superconducting magnet as are chiefly employed for a magnetic resonance imaging equipment (hereinbelow, abbreviated to MRI equipment).
2. Description of the Related Art
In an MRI equipment, a static magnetic field which has a high magnetic field strength and which is of high spatial uniformity and high temporal stability is needed, and hence, a superconducting magnet is often employed.
It is required of the superconducting magnet for the MRI equipment to generate a magnetic field region which exhibits a very high uniformity of, for example, within 3 ppm, near a magnetic field center. Therefore, the superconducting magnet is designed at a high precision. In actuality, however, a magnetic field uniformity in an actual diagnostic space worsens due to a manufacturing dimensional error in a process for producing the magnet, the influence of a magnetic material existing around a place where the magnet is installed, and so forth.
Therefore, the superconducting magnet includes means for making the fine adjustment (hereinbelow, termed shimming) of a magnetic field. There is a passive shimming method in which magnetic material shims made of a magnetic material of high permeability are used as one means for performing the shimming. This method is such that the magnetic material shims in an appropriate quantity are arranged at an appropriate position, within the generated magnetic field of the superconducting magnet, and that the disorder of the highly uniform magnetic field of the superconducting magnet is corrected by utilizing a magnetic field generated by the magnetization of the magnetic material shims. (Refer to Patent Document 1 being JP-A-1-245154, and Patent Document 2 being JP-A-63-122441.)
The shimming based on a prior-art passive shimming method will be described with reference to FIGS. 13 and 14. FIG. 13 is a cut-away perspective view of a superconducting magnet for an MRI equipment as includes a prior-art magnetic field adjustment device. Referring to FIG. 13, numeral 21 designates the superconducting magnet which has a substantially cylindrical shape. Inside the superconducting magnet 21, a plurality of superconducting coils 22 are disposed substantially concentrically with the cylinder of the superconducting magnet 21, and they generate a static magnetic field in a uniform magnetic field space 23 to which a highly uniform magnetic field is to be outputted. The superconducting coils 22 are coils each of which is fabricated by winding a superconducting wire, and they are accommodated in a low-temperature container 24, together with liquid helium (not shown) which is a coolant required for holding these superconducting coils 22 in a superconducting state. The low-temperature container 24 is configured of a helium tank 25 in which the liquid helium and the superconducting coils are accommodated, a thermal shield 26 which serves to intercept thermal invasion from outside, and a vacuum tank 27 which holds the interior of this low-temperature container 24 in a vacuum state. Usually, a refrigerator (not shown) is connected to the low-temperature container 24 in order to suppress the consumption of the liquid helium.
Numeral 98 designates a magnetic material shim mechanism, which is fixed to the inside cylinder of the superconducting magnet 21. The magnetic material shim mechanism 98 is a mechanism for accommodating magnetic material shims therein so as to arrange them at the periphery of the uniform magnetic field space 23.
FIG. 14 shows the magnetic material shim mechanism 98. The magnetic material shim mechanism 98 is configured of the plurality of magnetic material shims 101, a shim tray 102 which accommodates the magnetic material shims 101 therein and which is fixed to the inside cylinder of the superconducting magnet 21, and shim spacers 103 and lids 104 which fix the magnetic material shims 101 within shim pockets. The shim tray 102 is formed of a nonmagnetic material, and it has a shim pocket structure for arranging the magnetic material shims 101 therein.
The magnetic material shims 101 are thin plates made of a magnetic material, for example, iron, and they have predetermined vertical and lateral dimensions so as to be accommodated in shim pockets. As the magnetic material shims 101, a plurality of sorts having different thicknesses are prepared. The quantity of the magnetic material which is arranged in each shim pocket is determined by the thicknesses and number of the magnetic material shims 101.
The shim tray 102 is formed with slits so that the lids 104 of snug fit type can be fixed. After the accommodation of the magnetic material shim 101 in the shim pocket, the shim spacers 103 made of a nonmagnetic material are packed into the shim pocket, and the lid 104 made of a nonmagnetic material is put on the shim pocket, whereby the magnetic material shims 101 are fixed within the shim pocket. The shim tray 102 has a mechanism for being fixed to the inside cylinder of the superconducting magnet 21, and it is fixed to the inside cylinder of the superconducting magnet 21 after the accommodation of the magnetic material shims 101. In this way, the magnetic material shims 101 are arranged on the magnet inside cylinder so as to correct the disorder of the magnetic field uniformity. The magnetic material shim mechanisms sometimes form part of the structure of inclined magnetic field coils in the MRI equipment.
The shimming which uses the magnetic material shim mechanisms is carried out by the following steps:
Initially, the superconducting magnet 21 is excited in a state where the magnetic material shims 101 are not mounted, and the magnetic fields of the uniform magnetic field space 23 are measured at a large number of points, so as to evaluate the uniformity of the magnetic field which the superconducting magnet 21 generates. In general, magnetic field strengths in a uniform magnetic field region are expressed as Formula (1) by employing a Legendre function expansion, and they are nominated by components based on (m, n) values. A (0, 0) component is a necessary uniform magnetic field component, and all the others are error magnetic field components which are nonuniform within the imaging region.
Subsequently, the arrangement of the magnetic material is designed in order to correct the nonuniformity of the magnetic field. The design is performed by a computation which optimizes the quantities of the magnetic material as is to be arranged in the respective shim pockets, so as to lessen the error magnetic field components to the utmost, on the basis of the measured magnetic fields at the large number of points. The result of the computation is outputted as a table in which the numbers of the magnetic material shims 101 to be attached into the respective shim pockets are listed. An operator attaches the magnetic material shims 101 on the basis of the table. When the attachment of the magnetic material shims 101 has been completed, the superconducting magnet 21 is excited and the magnetic fields of the uniform magnetic field space 21 are measured again, so as to reevaluate the uniformity of the generated magnetic field after the magnetic material shim correction.
Usually, it is difficult to bring the magnetic field uniformity into a target value by one time of execution. Therefore, the operations as stated above are repeated several times, whereby the magnetic field uniformity is gradually enhanced.
$B z ( r , , ) = m = 0 m = 0 n r n P n m ( cos ) { A n m cos ( m ) + B n m sin ( m ) } ( 1 )$
where:
• Bz denotes a magnetic field at coordinates (r, , ), the coordinates (r, , ) are indicated in FIG. 15, and
• Anm and Pnm denote those coefficients of respective components which are determined depending upon a magnetic field shape.
With the prior-art magnetic material shim mechanism as stated above, the shim tray is a unitary molded article. Accordingly, there has been the problem that the positions of the magnetic material shims cannot be finely adjusted, and that the versatility of the magnetic material shim arrangements is low. Therefore, in a case where the uniformity of the magnetic field (hereinbelow, termed rough magnetic field) generated by the superconducting magnet itself is inferior, it has been sometimes impossible to make the uniformity correction by the prior-art magnetic material shim mechanisms. Besides, the shim tray designed optimally for the rough magnetic field characteristic of a specified superconducting magnet type cannot be diverted to the uniformity adjustment of a superconducting magnet type having another rough magnetic field characteristic. It has accordingly been necessary to design the optimal shim tray every superconducting magnet type.
### SUMMARY OF THE INVENTION
This invention has been made in order to solve the problems as stated above, and it has for its object to provide a magnetic field adjustment device and a magnetic field adjustment method for a superconducting magnet as adopt magnetic material shim mechanisms affording a high versatility of magnetic material shim arrangements, whereby the correctable range of a magnetic field uniformity can be widened.
A magnetic field adjustment device according to this invention is a magnetic field adjustment device for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet. Each of the magnetic material shim mechanisms includes a combined shim tray in which a plurality of divided shim trays divided in the axial direction, and at least one shim tray spacer inserted between the divided shim trays are rectilinearly coupled, and magnetic material shims for magnetic field adjustments as are accommodated in the divided shim trays.
Besides, a magnetic field adjustment method according to this invention is a magnetic field adjustment method for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet. The magnetic field adjustment method includes the steps of preparing as each of the magnetic material shim mechanisms, a combined shim tray in which a plurality of divided shim trays for accommodating magnetic material shims for magnetic field adjustments therein, and shim tray spacers inserted between the divided shim trays are rectilinearly coupled, and simultaneously optimizing how to combine the divided shim trays and the shim tray spacers, and mounting quantities of the magnetic material shims, thereby to determine arrangements of the magnetic material shims.
According to this invention, the versatility of the positions of the magnetic material shims is heightened, whereby the correctable range of a magnetic field uniformity based on a passive shimming method can be widened.
Thus, the magnetic field uniformity can be heightened by magnetic material shim corrections, even for the magnet whose working precision is inferior and whose manufacturing dimensional errors are large, and the magnetic field uniformity can be heightened more, for the magnet whose working precision is high and whose rough field is highly uniform.
Besides, magnetic field uniformities can be adjusted using the same magnetic material shim mechanisms, for a plurality of types of magnets which exhibit different rough magnetic field characteristics.
The foregoing and other objects, features, aspects and advantages of this invention will become more apparent from the following detailed description of this invention when taken in conjunction with the accompanying drawings.
### Embodiment 1
Now, Embodiment 1 of this invention will be concretely described with reference to the drawings.
FIG. 1 is a cut-away perspective view showing a magnetic field adjustment device for a superconducting magnet according to Embodiment 1 of this invention, while FIG. 2 is a perspective view of a magnetic material shim mechanism which is used in the magnetic field adjustment device for the superconducting magnet in Embodiment 1.
Referring to FIG. 1, inside the superconducting magnet 21 having a substantially cylindrical shape, a plurality of superconducting coils 22 are disposed substantially concentrically with the cylinder of the superconducting magnet 21, they output a static magnetic field to a uniform magnetic field space 23 to which a highly uniform magnetic field is to be outputted, and they suppress an external leakage magnetic field. The superconducting coils 22 are coils each of which is fabricated by winding a superconducting wire material that contains, for example, niobium titanate (NbTi) as a superconductor. These superconducting coils 22 are accommodated in a low-temperature container 24, together with liquid helium (not shown) which is a coolant necessary for holding them in a superconducting state. The low-temperature container 24 is configured of a helium tank 25 which accommodates the liquid helium and the superconducting coils therein, a thermal shield 26 which serves to intercept thermal invasion from outside, and a vacuum tank 27 which holds the interior of the low-temperature container 24 in a vacuum state. Although not shown, a refrigerator is connected to the low-temperature container 24 in order to suppress the consumption of the liquid helium.
Magnetic material shim mechanisms 28 for correcting the disorder of the highly uniform magnetic field are arranged in the axial direction of the inside periphery of the superconducting magnet 21. The magnetic material shim mechanisms 28 are installed on the cylindrical portion of the superconducting magnet 21 at equal intervals in the number of, for example, 24.
As shown in FIG. 2, each magnetic material shim mechanism 28 is such that magnetic material shims 101 and shim spacers 103 are accommodated in the shim pockets 15 of a combined shim tray 14 which is configured by rectilinearly coupling divided shim trays for end parts, 11, a plurality of divided shim trays 12, and a plurality of shim tray spacers 13, and that lids 104 are put on the shim pockets 15. Among the constituents of each magnetic material shim mechanism 28, the others than the magnetic material shims 101 are fabricated of a nonmagnetic material such as resin.
The divided shim trays for the end parts, 11, the divided shim tray 12 and the shim tray spacer 13 are shown in FIG. 3. Each of the divided shim tray 12 and the shim tray spacer 13 is provided with fitting pawls 131 at one end and fitting holes 132 at the other end. The end at which the fitting holes 132 are open, is hollow. In assembling the divided shim tray 12 and the shim tray spacer 13, the end (left side in the figure) on the side of the fitting pawls 131 is inserted into the hollow part of the end (right side in the figure) on the side of the fitting holes 132, and the fitting pawls 131 are fitted into the fitting holes 132 so as to be fixed.
The divided shim trays for the end parts, 11 are of two sorts. In one of the two sorts, one end is formed with a magnet fixation hole 133, and the other end is formed with fitting holes 132. In the other sort, one end is formed with a magnet fixation hole 133, and the other end is formed with fitting pawls 131. The shim trays for the end parts, 11 are arranged at both the ends of the combined shim tray 14. Here, when the superconducting magnet 21 is excited, large electromagnetic forces act on all the shim trays. Therefore, the whole combined shim tray 14 is screwed and fixed to the inside end parts of the superconducting magnet 21 by using the magnet fixation holes 133, thereby to prevent the shim trays from shifting and moving.
These constituents can be combined and detached as may be needed. The combined shim tray 14 is configured by combining the divided shim trays 12 and the shim tray spacers 13 in an appropriate array so that the magnetic material shims 101 may be located at desired positions.
The length of the shim tray spacer 13 is half of the full length of the divided shim tray 12. In the whole combined shim tray 14, the shim tray spacers 13 are used in the number of 2n (n=1, 2, . . . , or 10), and the number of the divided shim trays 12 is decreased n (n=1, 2, . . . , or 10), whereby the length of the whole combined shim tray 14 is held constant.
By the way, in this embodiment, the fitting structures are utilized for the connections and fixations between the constituents of the combined shim tray 14, but the same functional effects are achieved even by screwing or another fixation method. Besides, in this embodiment, the shim trays 11 are arranged at the end parts of the combined shim tray 14. However, even when the spacers 13 are attached to the end parts, the same functional effects are achieved as long as they can be fixed to the superconducting magnet 21.
There will be described the adjustments of a magnetic field uniformity in Embodiment 1 of this invention.
Initially, the superconducting magnet 21 is excited in a state where the combined shim tray 14 is not mounted, and the magnetic fields of the uniform magnetic field space 23 are measured at a large number of points, so as to evaluate the uniformity of a magnetic field which the superconducting magnet 21 generates. The uniformity of the magnetic field is decomposed into individual components by employing the Legendre function expansion indicated in Formula (1).
Subsequently, the arrangements of the magnetic material shims 101 for correcting the uniformity of the magnetic field are studied. In the arrangements of the magnetic material shims 101, the arrangements and quantities of the magnetic material shims 101 are determined so that the error components decomposed into the respective components may be made small, thereby to enhance the magnetic field uniformity. When the quantities of the magnetic material shims 101 become excessively large, there occur such drawbacks that the magnetic field uniformity changes depending upon an ambient temperature, and that the errors of the designed computational values of magnetic material shim outputs cumulate, so the magnetic field uniformity is difficult of enhancement. Accordingly, the quantities of the magnetic material shims 101 should preferably be as small as possible. In the design here, therefore, those arrangements of the magnetic material shims 101 with which the total quantity of the magnetic material shims 101 becomes small to the utmost, and which satisfy the restrictive condition of the total output value of the individual error components are obtained by an optimization computation.
The individual error components of magnetic fields which the magnetic material shims 101 generate depend upon the positions of the magnetic material shims 101. FIG. 4 is a graph showing how a (0, 2) error component and a (0, 4) error component generated by the magnetic material shims 101 in accordance with axial positions change in a case where the magnetic material shims 101 are mounted in axial symmetry on the inside cylinder of the superconducting magnet 21, having a radius of 0.45 m. Also, the arrangement ranges of a prior-art magnetic material shim mechanism A having shim pockets 31a-36a and a magnetic material shim mechanism B in Embodiment 1, having shim pockets 31b-35b are superposedly shown in the graph.
In a case, for example, where the superconducting magnet 21 generates the (0, 2) error component in a large quantity on a minus side and where this error component is to be corrected, the error output of the superconducting magnet 21 is canceled in such a way that the magnetic material shims 101 are arranged in the axial symmetry within a range of and below 0.15 m in terms of a Z-coordinate, whereby the magnetic material shims 101 generate the pertinent component on a plus side. Besides, in a case where the superconducting magnet 21 generates the (0, 4) error component in a large quantity on the plus side and where this error component is to be corrected, the error output of the superconducting magnet 21 is canceled in such a way that the magnetic material shims 101 are arranged in the axial symmetry within a range of and below 0.14 m in terms of the Z-coordinate, whereby the magnetic material shims 101 generate the pertinent component on the minus side.
In order to correct both the (0, 2) and (0, 4) components by the actual magnetic-material shim arrangements, the magnetic material shims 101 are arranged in large quantities in the shim pockets whose axial coordinates are nearest to zero (the shim pockets 31a and 31b), in both the magnetic material shim mechanisms A and B in FIG. 4.
In the shimming of the superconducting magnet 21 according to this embodiment, the magnetic material shim arrangements which make the error outputs and the used quantity of the magnetic material as small as possible are derived by the optimization computation in the above way, as to the components of m12 and n16 among the (m, n) error components.
With the prior-art magnetic material shim mechanisms A, however, the solution of the arrangements which cancel all the error components of m12 and n16 cannot be obtained, or even when the solution has been obtained, the quantities of the required magnetic material shims 101 become very large in some cases. By way of example, in a case where, on the occasion of FIG. 4, the (0, 2) component of the superconducting electromagnet 21 is substantially zero, while the (0, 4) component is outputted on the minus side, the output of the (0, 2) component of the magnetic material shims 101 needs to be zero, while the output of the (0, 4) component thereof needs to be on the plus side. At this time, with the shimming which employs the prior-art magnetic material shim mechanisms A, the (0, 2) and (0, 4) components are outputted just in the opposite directions whichever of the shim pockets 31a-36a the magnetic material shims may be arranged in, and it is therefore difficult to obtain the shim arrangements which cancel both the components.
In Embodiment 1 of this invention, therefore, the shim tray spacers 13 are inserted between the divided shim trays 12 of the combined shim tray 14, thereby to finely adjust the arrangement range of the magnetic material shims 101. According to the magnetic material shim mechanism B thus combined, the magnetic material shims 101 are arranged in a large quantity in the shim pocket 32b, thereby to make the output of the (0, 2) component zero and to generate the output of the (0, 4) component on the plus side. It is accordingly permitted to cancel the errors generated by the superconducting magnet 21.
The magnetic material shims 101 in the arrangement optimally designed in the above way are attached to the superconducting magnet 21, this superconducting magnet 21 is excited, and the magnetic fields of the uniform magnetic field space 23 are measured again, so as to reevaluate the uniformity of the magnetic field after the magnetic material shim corrections. Usually, it is difficult to bring the magnetic field uniformity into a target range, by one time of execution. Therefore, the operations are repeated several times so as to gradually enhance the magnetic field uniformity.
In this manner, the magnetic field adjustment device according to Embodiment 1 of this invention is a magnetic field adjustment device for a superconducting magnet wherein magnetic material shim mechanisms are arranged in the axial direction of the inside periphery of the cylindrical superconducting magnet, in which each of the magnetic material shim mechanisms includes a combined shim tray that is configured by rectilinearly coupling a plurality of divided shim trays and shim tray spacers inserted between the divided shim trays, and in which magnetic material shims for magnetic field adjustments are accommodated in the divided shim trays, whereby the versatility of the axial positions of the magnetic material shims can be heightened, and the correctable range of a magnetic field uniformity based on the passive shimming method can be widened.
Thus, the magnetic field uniformity can be heightened by magnetic material shim corrections, even for the magnet whose working precision is inferior and whose manufacturing dimensional errors are large, and the magnetic field uniformity can be heightened more, for the magnet whose working precision is high and whose rough field is highly uniform.
Besides, magnetic field uniformities can be adjusted using the same magnetic material shim mechanisms, for a plurality of types of magnets which exhibit different rough magnetic field characteristics.
Incidentally, according to Embodiment 1, the fixation between the magnetic material shim mechanism 28 and the superconducting magnet 21 is done by the screwing, and the fixation method of the combined shim tray 14 is based on the fitting structure. However, the fixations may well be based on the screwing, the fitting structure or any other fixation method as long as they can prevent the magnetic material shim mechanisms 28 from shifting and moving due to the electromagnetic forces which act on the magnetic material shims 101.
Besides, in Embodiment 1, the length of the shim tray spacer 13 is set at of the length of the divided shim tray 11, and the divided shim trays 11 are removed in the number of n (n being an integer) when the shim tray spacers 13 are inserted in the number of 2n. However, the ratio of the lengths may be any other than .
### Embodiment 2
FIG. 5 is a sectional view of a combined shim tray 44 showing Embodiment 2 of this invention.
The combined shim tray 44 in Embodiment 2 is so configured that a plurality of sorts having different axial length dimensions are prepared as divided shim trays for end parts, 41a and 41b, divided shim trays 42a-42c, and shim tray spacers 43a and 43b, respectively, and that magnetic material shims 45a-45c of dimensions conforming to the divided shim trays are set. The constituents of Embodiment 2 except the combined shim tray 44 and the magnetic material shims 45a-45c are the same as in Embodiment 1.
In Embodiment 1 described before, the dimensions of each of the divided shim tray for the end part, 11, the divided shim tray 12, and the shim tray spacer 13 have been of one sort, whereas in Embodiment 2 here, the dimensional sorts of the divided shim trays and the shim tray spacers are increased, and the magnetic material shims are optimized by selecting the shim trays in a combination in which the shimming is performed most easily.
Thus, the versatility of the axial positions of the magnetic material shims can be heightened still more than in Embodiment 1, and the correctable range of a magnetic field uniformity based on the passive shimming method can be widened more.
### Embodiment 3
FIG. 6 is a sectional view of a combined shim tray 54 showing Embodiment 3 of this invention.
The combined shim tray 54 in Embodiment 3 is so configured that a plurality of sorts having different axial length dimensions are prepared as divided shim trays for end parts, 41a and 41b, and divided shim trays 42a-42c, respectively, and that magnetic material shims 45a-45c of dimensions conforming to the divided shim trays are set. The constituents of Embodiment 3 except the combined shim tray 54 and the magnetic material shims 45a-45c are the same as in Embodiments 1 and 2, but Embodiment 3 differs from Embodiments 1 and 2 in the point that any spacer is not used.
In Embodiment 3, the divided shim trays for the end parts, 41a and 41b and the divided shim trays 42a-42c in the several sorts are combined, and the magnetic material shims are optimized by selecting the shim trays in a combination in which the shimming is performed most easily.
Thus, the versatility of the axial positions of the magnetic material shims can be heightened more, and the correctable range of a magnetic field uniformity based on the passive shimming method can be widened more. Moreover, since the number of components becomes smaller in correspondence with the shim tray spacers than in Embodiments 1 and 2, the efficiency of actual operations is enhanced.
### Embodiment 4
FIG. 7 is a flow chart showing Embodiment 4 of this invention.
In Embodiment 1, 2 or 3 described above, one combination of the combined shim tray 14, 44 or 54 is determined, whereupon the arrangements of the magnetic material shims in the individual shim pockets are computed for the optimization. Since, however, the combined shim tray consists of the large number of components, the number of combinations becomes enormous, and it is difficult to make studies comprehending all the combinations. In a case, for example, where a combined shim tray having an axial length of 1200 mm is to be configured by combining divided shim trays each of which has an axial dimension of 120 mm, with shim tray spacers each of which has an axial dimension of 60 mm, the number of combinations considered is as large as 10945. It is, in fact, impossible to manually execute the magnetic-material shim design computations in all the combinations, and even loop computations by a computer are not practicable because a long time is required. In each of Embodiments 1 to 3, therefore, several combinations in which a shimming solution is easily found out are picked out on the basis of experiences and actual results in the past, and only the quantities of the arrangements of the magnetic material shims for the individual shim pockets are computed. With this method, however, the combination of the shim trays has not been optimized, and hence, the magnetic material shim arrangements are not the most efficient ones in a strict sense.
In this embodiment, therefore, it is assumed that the magnetic material shims can be arranged at all the magnetic-material shim arrangement positions considered. On that occasion, the installation range of the magnetic material shims and the exclusion conditions of impossible combinations are imposed as restrictive conditions, whereby the combination of the shim trays and the quantities of the magnetic material shims in the individual shim pockets are simultaneously optimized by one time of optimization computation, so as to correct the magnetic field uniformity. Concretely, the corrections of the magnetic field uniformity in this embodiment are made by the following steps as indicated in FIG. 7:
1. When the divided shim trays and the shim tray spacers to be used have been determined, a table of magnetic field outputs per unit volume is created beforehand, regarding the positions of the shim pockets as are possible in all the combinations of the combined shim tray.
2. The restrictive conditions which exclude the impossible impossible combinations are set on the basis of the dimensions of the divided shim trays and shim tray spacers which are to be used. In FIG. 8, for example, the magnetic material shims can be arranged simultaneously at positions A and B or positions A and C, but they cannot be arranged simultaneously at both the positions B and C. This fact can be formularized as Formula (2) given below. XA, XB and XC in Formula (2) denote the quantities of the magnetic material shims which are respectively arranged at the positions A, B and C. Such formularizations are previously set for all the combinations considered.
XAXB0
XAXC0
XBXC=0(2)
Step 3. Magnetic fields in the uniform magnetic field space are measured at a large number of points.
Step 4. Fitting is performed in conformity with Formula (1), and individual error components are obtained.
Step 5. An optimization computation is executed with mathematical programming such as linear programming, under the following conditions:
Target of optimization: Minimization of magnetic material shim quantity
Variable to be optimized: Magnetic material shim quantity which is mounted in each shim pocket
Parameters of optimization: (1) Magnetic field output of magnetic material shim per unit volume, and (2) Dimensions of divided shim tray and shim tray spacer
Restrictive conditions: (1) Magnetic field uniformity (individual error components) and leakage magnetic field, (2) installation range of magnetic material shims, and (3) exclusion conditions of impossible combinations
As the results of the optimization computation, shim tray combinations and magnetic material shim arrangements are outputted as shown in FIG. 9 by way of example. FIG. 9 concerns one set of magnetic material shim mechanisms. In actuality, the computed results of similar shim tray combinations and magnetic material shim arrangements are outputted for all the 24 sets of magnetic material shim mechanisms which are mounted on one superconducting magnet 21.
Step 6. The shim trays are assembled and arranged in accordance with the results of FIG. 9.
Step 7. The magnetic field uniformity is remeasured.
Step 8. The remeasured value of the magnetic field uniformity is compared with specification values. If the magnetic field uniformity does not satisfies the specification values, the routine is returned to the step 4 so as to make the magnetic field uniformity adjustment again.
Step 9. If the magnetic field uniformity satisfies the specification values, the shiming is ended.
By the way, in order to shorten a job time period and to ensure the reproducibility of the positions of the magnetic material shims, it is sometimes the case that, in attaching the magnetic material shims in the second time of shimming, et seq., only the arrangement quantities of the magnetic material shims are altered with the shim tray combinations in the first time of shimming used as they are. Besides, in order to shorten the job time period, all the 24 sets of magnetic material shim mechanisms are set as having the same configurations.
Not only the arrangements of the magnetic material shims, but also the combined shim tray is optimized by tracing such steps, whereby the enormous number of combinations of the shim trays can be comprehended in the optimization computation, and the enlargement of the correctable range of the magnetic field uniformity based on the passive shimming method and the shortening of the shimming design time period can be attained.
### Embodiment 5
FIG. 10 is a sectional view showing shim trays 64a-64c in Embodiment 5 of this invention.
Each of the shim trays 64a-64c in Embodiment 5 is a molded article having a unitary structure likewise to the prior-art type. The three sorts of shim trays differ in the axial positions of shim pockets from one another.
In each of Embodiments 1-4 described before, one shim tray is configured by combining the three sorts of components; the divided shim trays for the end parts, the divided shim trays, and the shim tray spacers. In contrast, according to Embodiment 5, the plurality of sorts of shim trays molded unitarily are prepared as shown in FIG. 10, and an optimization computation is executed, including which of the unitary type shim trays is to be used.
Referring to FIG. 10, the shim tray 64a is the prior-art shim tray. The -shifted shim tray 64b is a shim tray in which the magnetic-material-shim mounting positions of the prior-art shim tray are shifted of the width of each magnetic material shim. Likewise, the -shifted shim tray 64c is a shim tray in which the magnetic-material-shim mounting positions are shifted of the width of each magnetic material shim.
In optimizing the magnetic material shims, which of the shim trays is to be used is included among the variables of the optimization, whereby the versatility of the axial positions of the magnetic material shims can be heightened still more, and the correctable range of a magnetic field uniformity based on the passive shimming method can be widened more. Moreover, as compared with Embodiments 1-4, Embodiment 5 need not combine and fabricate the shim tray and can shorten the job time period of shimming. Incidentally, only the arrangements of the magnetic material shims may well be optimized using a predetermined shim tray, without optimizing the shim tray which is to be used.
### Embodiment 6
FIG. 11 is a sectional view of a magnetic material shim mechanism showing Embodiment 6 of this invention. Referring to FIG. 11, numeral 84 designates a shim tray, numeral 85 magnetic material shims, and numeral 86 a spacer in a shim pocket. In this embodiment, the magnetic material shims 85 which are smaller than each shim pocket are used, and the spacers in the shim pockets, 86 are employed, whereby the magnetic material shims 85 are fixed at desired positions within the shim pockets.
Thus, the arrangements of the magnetic material shims can be set more finely than in the prior art, the versatility of the optimization of a shim design heightens, and the correctable range of a magnetic field uniformity based on the passive shimming method widens.
By the way, in this embodiment, the specified configuration is applied to the unitary type shim tray, but the same functional effects are attained even when the configuration is applied to any of the combined shim trays and the unitary molding type shim trays shown in Embodiments 1-3 and 5.
### Embodiment 7
FIG. 12 is a top plan view of a combined shim tray 74 showing Embodiment 7 of this invention.
Embodiment 7 has as its feature, that the combined shim tray 74 is assembled using divided shim trays for end parts, 71 shown in FIG. 12. In Embodiments 1-4 described before, for the purpose of fixing the magnetic material shim mechanism to the superconducting magnet, the lengths of the spacer and the divided shim tray which are to be combined need to be regulated so that the full length of the magnetic material shim mechanism may become equal to the length of amounting mechanism on the side of the superconducting magnet. In contrast, according to Embodiment 7, a magnetic material shim mechanism does not require such a regulation.
Now, the magnetic material shim mechanism in this embodiment will be described with reference to FIG. 12. Referring to FIG. 12, the combined shim tray 74 is configured by rectilinearly connecting the divided shim trays for the end parts, 71, divided shim trays 72, and shim tray spacers 73. In the foregoing embodiments, the end parts of the magnetic material shim mechanism are respectively provided with round screw holes for fixing the whole magnetic-material shim mechanism to the superconducting magnet, whereas in this embodiment, such screw holes are slots 75 with which the mounting positions of the magnetic material shim mechanism in the lengthwise direction thereof can be adjusted. Therefore, the magnetic material shim mechanism can be mounted on the superconducting magnet even when its full length has fluctuated within a range of 50 mm.
Thus, it is possible to sharply loosen the restriction that the full length of the magnetic material shim mechanism must be held constant, and the axial positions of magnetic material shims can be continuously changed. Accordingly, the versatility of the optimization of a shim design heightens, and the correctable range of a magnetic field uniformity based on the passive shimming method widens.
By the way, in this embodiment, the lengths of the shim pockets of the divided shim trays for the end parts, 71 and the divided shim trays 72 are all equal, but the divided shim trays whose shim pockets have the different lengths may well be adopted as in Embodiments 1-3, or the specified configuration may well be applied to the shim trays of the unitarily molded articles as in Embodiments 5 and 6.
Various modifications and alterations of this invention will be apparent to those skilled in the art without departing from the scope and spirit of this invention, and it should be understood that this invention is not limited to the illustrative embodiments set forth herein.
## Claims
1. A magnetic field adjustment device for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet;
each of the magnetic material shim mechanisms, comprising a combined shim tray in which a plurality of divided shim trays divided in the axial direction, and at least one shim tray spacer inserted between the divided shim trays are rectilinearly coupled, and magnetic material shims for magnetic field adjustments as are accommodated in the divided shim trays.
each of the magnetic material shim mechanisms, comprising a combined shim tray in which a plurality of divided shim trays divided in the axial direction, and at least one shim tray spacer inserted between the divided shim trays are rectilinearly coupled, and magnetic material shims for magnetic field adjustments as are accommodated in the divided shim trays.
2. The magnetic field adjustment device for a superconducting magnet as defined in claim 1, wherein a plurality of sorts of divided shim trays and shim tray spacers of different axial lengths are combined in said combined shim tray.
3. A magnetic field adjustment device for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet;
each of the magnetic material shim mechanisms, comprising a combined shim tray in which a plurality of sorts of divided shim trays of different axial lengths are rectilinearly coupled, and magnetic material shims for magnetic field adjustments as are accommodated in the divided shim trays.
each of the magnetic material shim mechanisms, comprising a combined shim tray in which a plurality of sorts of divided shim trays of different axial lengths are rectilinearly coupled, and magnetic material shims for magnetic field adjustments as are accommodated in the divided shim trays.
4. A magnetic field adjustment device for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet;
each of the magnetic material shim mechanisms, comprising a plurality of sorts of shim trays in which a plurality of shim pockets for accommodating magnetic material shims therein have different lengthwise positions, wherein the magnetic material shims for magnetic field adjustments are accommodated in the shim pockets.
each of the magnetic material shim mechanisms, comprising a plurality of sorts of shim trays in which a plurality of shim pockets for accommodating magnetic material shims therein have different lengthwise positions, wherein the magnetic material shims for magnetic field adjustments are accommodated in the shim pockets.
5. A magnetic field adjustment device for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet;
each of the magnetic material shim mechanisms, comprising a shim tray which includes a plurality of shim pockets for accommodating magnetic material shims therein, and a spacer within the shim tray, which is arranged within the shim pocket and which partitions an arrangement space of the magnetic material shims.
each of the magnetic material shim mechanisms, comprising a shim tray which includes a plurality of shim pockets for accommodating magnetic material shims therein, and a spacer within the shim tray, which is arranged within the shim pocket and which partitions an arrangement space of the magnetic material shims.
6. The magnetic field adjustment device for a superconducting magnet as defined in claim 1, wherein said combined shim tray includes divided shim trays for end parts, whose mounting positions in a lengthwise direction are adjustable.
7. An MRI equipment comprising a magnetic field adjustment device for a superconducting magnet as defined in claim 1.
8. A magnetic field adjustment method for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet, comprising the steps of:
preparing as the magnetic material shim mechanisms, combined shim trays each of which includes, at least, a plurality of divided shim trays divided in the axial direction and in each of which the divided shim trays are rectilinearly coupled; and
simultaneously optimizing how to combine the divided shim trays in accordance with sorts of the combined shim trays to-be-used, and mounting quantities of magnetic material shims to be accommodated in the individual shim trays, thereby to determine arrangements of the magnetic material shims.
preparing as the magnetic material shim mechanisms, combined shim trays each of which includes, at least, a plurality of divided shim trays divided in the axial direction and in each of which the divided shim trays are rectilinearly coupled; and
simultaneously optimizing how to combine the divided shim trays in accordance with sorts of the combined shim trays to-be-used, and mounting quantities of magnetic material shims to be accommodated in the individual shim trays, thereby to determine arrangements of the magnetic material shims.
9. The magnetic field adjustment method for a superconducting magnet as defined in claim 8, wherein said combined shim tray to-be-used is selected from the group consisting of a first combined shim tray in which a plurality of divided shim trays divided in the axial direction, and shim tray spacers inserted between the divided shim trays are rectilinearly coupled, a second combined shim tray in which a plurality of sorts of divided shim trays and shim tray spacers of different axial lengths are combined, and a third combined shim tray in which a plurality of sorts of divided shim trays of different axial lengths are rectilinearly coupled.
10. A magnetic field adjustment method for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet, comprising the steps of:
preparing as the magnetic material shim mechanisms, a plurality of sorts of shim trays whose lengthwise positions of a plurality of shim pockets for accommodating magnetic material shims therein are different; and
simultaneously optimizing the sorts of the shim trays and arrangements of the magnetic material shims, thereby to determine the arrangements of the magnetic material shims.
preparing as the magnetic material shim mechanisms, a plurality of sorts of shim trays whose lengthwise positions of a plurality of shim pockets for accommodating magnetic material shims therein are different; and
simultaneously optimizing the sorts of the shim trays and arrangements of the magnetic material shims, thereby to determine the arrangements of the magnetic material shims.
11. A magnetic field adjustment method for a superconducting magnet, wherein magnetic material shim mechanisms are arranged in an axial direction of an inside periphery of the cylindrical superconducting magnet, comprising the steps of:
preparing as each of the magnetic material shim mechanisms, a mechanism which includes a shim tray having a plurality of shim pockets for accommodating magnetic material shims therein, and a spacer within the shim tray, which is arranged within the shim pocket and which partitions an arrangement space of the magnetic material shims; and
simultaneously optimizing the spacer within the shim tray and the magnetic material shims, thereby to determine arrangement of the magnetic material shims.
preparing as each of the magnetic material shim mechanisms, a mechanism which includes a shim tray having a plurality of shim pockets for accommodating magnetic material shims therein, and a spacer within the shim tray, which is arranged within the shim pocket and which partitions an arrangement space of the magnetic material shims; and
simultaneously optimizing the spacer within the shim tray and the magnetic material shims, thereby to determine arrangement of the magnetic material shims. | 2020-06-07 08:50:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26941463351249695, "perplexity": 2244.029933923812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348526471.98/warc/CC-MAIN-20200607075929-20200607105929-00479.warc.gz"} |
https://www.gamedev.net/forums/topic/384984-my-window-refuses-to-say-put/ | # [.net] My window refuses to say put
## Recommended Posts
How can I set the location of a Form when it comes up, relative to the top left of the screen? I tried using the Location property, the DesktopLocation property, the Dock, the Top and Left properties, even all at once, but the window still refuses to go where I tell it. Any suggestions?
##### Share on other sites
form.StartPosition = FormStartPosition.Manual;form.Location = new Point(wherever);
Both properties should be settable in your GUI design tool. You can also use FormStartPosition.CenterScreen if that's what you're trying to do, which will get it right whatever the resolution.
##### Share on other sites
wow, it worked, thanks alot!
## Create an account
Register a new account
• ## Partner Spotlight
• ### Forum Statistics
• Total Topics
627642
• Total Posts
2978354
• 10
• 12
• 22
• 13
• 33 | 2017-10-17 11:44:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17595380544662476, "perplexity": 5248.897894454008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187821088.8/warc/CC-MAIN-20171017110249-20171017130249-00564.warc.gz"} |
https://euler.stephan-brumme.com/31/ | << problem 30 - Digit fifth powers Pandigital products - problem 32 >>
# Problem 31: Coin sums
In England the currency is made up of pound, £, and pence, p, and there are eight coins in general circulation:
1p, 2p, 5p, 10p, 20p, 50p, £1 (100p) and £2 (200p).
It is possible to make £2 in the following way:
1x £1 + 1x 50p + 2x 20p + 1x 5p + 1x 2p + 3x 1p
How many different ways can £2 be made using any number of coins?
# My Algorithm
My program creates a table history that contains the number of combinations for a given sum of money:
• its entry history[0] refers to £0
• its entry history[1] refers to £0.01
• its entry history[2] refers to £0.02
• its entry history[3] refers to £0.03
• ...
• its entry history[200] refers to £2.00
There are 8 different coins and therefore each entry of history is a std::vector itself with 8 elements:
it tells how many combinations exist if only the current coin or smaller coins are used.
For example, there is always one way/combination to pay a certain amout if you only have single pennies.
That means, the first element is always 1.
Moreover, each of the next element is at least 1, too, because I said: "current coin or smaller coins".
If we would like to pay £0.01 then history[1] = { 1,1,1,1,1,1,1,1 }.
Now comes the only part that isn't obvious: there is one combination of paying zero pounds, too:
history[0] = { 1,1,1,1,1,1,1,1 }. From now on, everything comes natural, trust me, ...
If we would like to pay £0.02 then there are two ways: pay with two single pennies or a 2p coin.
What we do is:
1. try not to use the current coin (2p in our case), only smaller coins → there is one combination
2. try to use the current coin (2p in our case) → then there are 0.00 £ left which is possible in one way
So far we had history[2] = { 1,?,?,?,?,?,?,? }
Step 1 is the same as history[2][currentCoinId - 1] = history[2][0] = 1.
Step 2 is the same as history[2 - currentCoinValue][currentCoinId] = history[0][1] = 1.
Therefore we have 1+1=2 combinations (as expected:) history[2] = { 1,2,?,?,?,?,?,? }.
The next coin, it's the 5p coin, can't be used because it's bigger than the total of £0.02. In software terms currentCoinValue > total.
Only step 1 applies to all remaining elements: history[2] = { 1,2,2,2,2,2,2,2 }.
What does it mean ? There are 2 ways to pay 0.02 £ with 1p and 2p. And there are still only two ways if you use all coins up to £2.
When the program computes history[200] then the result of the problem is stored in the last element (history[200][7]).
## Modifications by HackerRank
There are multiple test cases. My program computes all combinations up to the input values and stores them in history.
If a test case's input is smaller than something we had before then no computation is required at all, it will become a basic table lookup.
The results may exceed 32 bits and thus I compute mod 10^9+7 whenever possible (as requested by their modified problem statement).
# Interactive test
You can submit your own input to my program and it will be instantly processed at my server:
Number of test cases (1-5):
Input data (separated by spaces or newlines):
Note: please enter in pence, not pound
This is equivalent to
echo "1 10" | ./31
Output:
Note: the original problem's input 200 cannot be entered
because just copying results is a soft skill reserved for idiots.
(this interactive test is still under development, computations will be aborted after one second)
# My code
… was written in C++11 and can be compiled with G++, Clang++, Visual C++. You can download it, too. Or just jump to my GitHub repository.
#include <iostream>
#include <vector>
const unsigned int NumCoins = 8;
// face value of all coins in cents
const unsigned int Coins[NumCoins] = { 1,2,5,10,20,50,100,200 };
// store number of combinations in [x] if coin[x] is allowed:
// [0] => combinations if only pennies are allowed
// [1] => 1 cent and 2 cents are allowed, nothing more
// [2] => 1 cent, 2 cents and 5 cents are allowed, nothing more
// ...
// [6] => all but 2 pounds (= 200 cents) are allowed
// [7] => using all coins if possible
typedef std::vector<unsigned long long> Combinations;
int main()
{
// remember combinations for all prices from 1 cent up to 200 cents (2 pounds)
std::vector<Combinations> history;
unsigned int tests;
std::cin >> tests;
while (tests--)
{
unsigned int total;
std::cin >> total;
// initially we start at zero
// but if there are previous test cases then we can re-use the old results
for (unsigned int cents = history.size(); cents <= total; cents++)
{
// count all combinations of those 8 coins
Combinations ways(NumCoins);
// one combination if using only 1p coins (single pennys)
ways[0] = 1;
// use larger coins, too
for (size_t i = 1; i < ways.size(); i++)
{
// first, pretend not to use that coin (only smaller coins)
ways[i] = ways[i - 1];
// now use that coin once (if possible)
auto currentCoin = Coins[i];
if (cents >= currentCoin)
{
auto remaining = cents - currentCoin;
ways[i] += history[remaining][i];
}
// not needed for the original problem, only for Hackerrank's modified problem
ways[i] %= 1000000007;
}
// store information for future use
history.push_back(ways);
}
// look up combinations
auto result = history[total];
// the last column (allow all coins) contains the desired value
auto combinations = result.back();
combinations %= 1000000007; // for Hackerrank only
std::cout << combinations << std::endl;
}
return 0;
}
This solution contains 12 empty lines, 20 comments and 2 preprocessor commands.
# Benchmark
The correct solution to the original Project Euler problem was found in less than 0.01 seconds on an Intel® Core™ i7-2600K CPU @ 3.40GHz.
(compiled for x86_64 / Linux, GCC flags: -O3 -march=native -fno-exceptions -fno-rtti -std=gnu++11 -DORIGINAL)
See here for a comparison of all solutions.
Note: interactive tests run on a weaker (=slower) computer. Some interactive tests are compiled without -DORIGINAL.
# Changelog
February 23, 2017 submitted solution
# Hackerrank
My code solves 9 out of 9 test cases (score: 100%)
# Difficulty
Project Euler ranks this problem at 5% (out of 100%).
Hackerrank describes this problem as easy.
Note:
Hackerrank has strict execution time limits (typically 2 seconds for C++ code) and often a much wider input range than the original problem.
In my opinion, Hackerrank's modified problems are usually a lot harder to solve. As a rule thumb: brute-force is rarely an option.
# Heatmap
Please click on a problem's number to open my solution to that problem:
green solutions solve the original Project Euler problem and have a perfect score of 100% at Hackerrank, too yellow solutions score less than 100% at Hackerrank (but still solve the original problem easily) gray problems are already solved but I haven't published my solution yet blue solutions are relevant for Project Euler only: there wasn't a Hackerrank version of it (at the time I solved it) or it differed too much orange problems are solved but exceed the time limit of one minute or the memory limit of 256 MByte red problems are not solved yet but I wrote a simulation to approximate the result or verified at least the given example - usually I sketched a few ideas, too black problems are solved but access to the solution is blocked for a few days until the next problem is published [new] the flashing problem is the one I solved most recently
I stopped working on Project Euler problems around the time they released 617.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200
201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300
301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400
401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500
501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600
601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708
The 310 solved problems (that's level 12) had an average difficulty of 32.6% at Project Euler and
I scored 13526 points (out of 15700 possible points, top rank was 17 out of ≈60000 in August 2017) at Hackerrank's Project Euler+.
My username at Project Euler is stephanbrumme while it's stbrumme at Hackerrank.
Look at my progress and performance pages to get more details.
<< problem 30 - Digit fifth powers Pandigital products - problem 32 >>
more about me can be found on my homepage, especially in my coding blog.
some names mentioned on this site may be trademarks of their respective owners.
thanks to the KaTeX team for their great typesetting library ! | 2020-04-07 00:52:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39989572763442993, "perplexity": 245.48397873264975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371662966.69/warc/CC-MAIN-20200406231617-20200407022117-00286.warc.gz"} |
https://livingthing.danmackinlay.name/delays.html | # Delays and reverbs (digital)
In which I think about parameterisations and implementations of audio recurrence for use in music.
A particular nook in the the linear feedback process library.
Weapons of choice:
1. Parameterisations of sparse unitary matrices
2. (Maybe) Composable Markov theory, as seen in Thermodynamics of life
Keywords: multichannel allpass filter.
One obvious parameterisation of unitary matrices is applying givens rotations to the identity matrix; this has $n(n-1)$ parameters.
Note that the recurrent neural networks people have glommed on to this. (ArSB15, MHRB16, JSDP16)
## Things to try
• Generalise reverb to Gerzon m-dimensional MIMO unitary allpass
• Thiran (multi-step, SISO) allpass?
## Refs
ArSB15
Arjovsky, M., Shah, A., & Bengio, Y. (2015) Unitary Evolution Recurrent Neural Networks. arXiv:1511.06464 [Cs, Stat].
DHCS15
De Sena, E., Haciihabiboglu, H., Cvetkovic, Z., & Smith, J. O.(2015) Efficient Synthesis of Room Acoustics via Scattering Delay Networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(9), 1478–1492. DOI.
Hede13
Hedemann, S. R.(2013) Hyperspherical Parameterization of Unitary Matrices. arXiv:1303.5904 [Quant-Ph].
Hend74
Hendeković, J. (1974) On parametrization of orthogonal and unitary matrices with respect to their use in the description of molecules. Chemical Physics Letters, 28(2), 242–245. DOI.
Jarl05
Jarlskog, C. (2005) A recursive parametrization of unitary matrices. Journal of Mathematical Physics, 46(10), 103508. DOI.
JSDP16
Jing, L., Shen, Y., Dubček, T., Peurifoy, J., Skirlo, S., Tegmark, M., & Soljačić, M. (2016) Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNN. arXiv:1612.05231 [Cs, Stat].
MeFa10
Menzer, F., & Faller, C. (2010) Unitary Matrix Design for Diffuse Jot Reverberators.
MHRB16
Mhammedi, Z., Hellicar, A., Rahman, A., & Bailey, J. (2016) Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections. arXiv:1612.00188 [Cs].
ReSa89
Regalia, P., & Sanjit, M. (1989) Kronecker Products, Unitary Matrices and Signal Processing Applications. SIAM Review, 31(4), 586–613. DOI.
Schr61
Schroeder, M. R.(1961) Improved Quasi-Stereophony and “Colorless” Artificial Reverberation. The Journal of the Acoustical Society of America, 33(8), 1061–1064. DOI.
ScLo61
Schroeder, M. R., & Logan, B. (1961) “Colorless” artificial reverberation. Audio, IRE Transactions on, AU-9(6), 209–214. DOI.
TiSu02
Tilma, T., & Sudarshan, E. C. G.(2002) Generalized Euler Angle Paramterization for SU(N). Journal of Physics A: Mathematical and General, 35(48), 10467–10501. DOI.
VaLa12
Valimaki, v., & Laakso, T. I.(2012) Fractional Delay Filters-Design and Applications. In F. Marvasti (Ed.), Nonuniform Sampling: Theory and Practice. Springer Science & Business Media | 2018-07-18 22:22:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7152429223060608, "perplexity": 14473.88907496288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.62/warc/CC-MAIN-20180718213135-20180718233135-00104.warc.gz"} |
https://cstheory.stackexchange.com/questions/41369/sorting-a-programs-instructions-until-it-works | # Sorting a programs instructions until it works
Lets say I have a computer program below.
(define (factorial x)
(if (= x 0)
1
(else (* x (factorial (- x 1)))))
I then take each line of the program and create a random permutation.
1
(if (= x 0)
(else (* x (factorial (- x 1)))))
(define (factorial x)
Now I give you a set of valid values that the program must return. 1,2,6,24
Your task is to sort the permutations of the program until it gives you the result.
Is this a new problem? I haven't found anything remotely like the problem I proposed above.
What is the runtime to solving this problem?
• In almost all cases where your program is more than 5-6 lines, there will only be a handful permutations that compile and return values? As you increase the complexity of the program, the permutations will decrease dramatically too. – Konstantinos Koiliaris Aug 13 '18 at 16:24
• @KonstantinosKoiliaris Yea so worst case runtime would be O(N!) * O(C) where N is the amount of lines and C is the run time of the program. – Joshua Herman Aug 13 '18 at 17:23
• Brute force, you’d have to try all O(N!) permutations, but the run time of the program is not guaranteed to terminate for any random permutation (halting problem). So, wouldn’t that make the problem undecidable? – Konstantinos Koiliaris Aug 13 '18 at 17:26
• This is surely undecidable in most programming languages -- as you can modify any given program P so that the only permutation that is syntactically correct is equivalent to P, and so solving the problem for the modified program could tell you whether running P ever returns, say, 0. Which is undecidable. – Neal Young Aug 17 '18 at 0:32
## 3 Answers
This can be done by running all the $n!$ permutations in parallel and wait for one of them to output $1,2,6,24$ on inputs $1,2,3,4$.
(Of course, that does not guarantee that you found the correct permutation for input 5.)
Specific running time estimates may depend on the programming language being used. For instance, in BASIC each line is supposed to start with a line number which makes this solvable in polynomial time.
• It seems your solution assumes that it is provided as a promise that one of the permutations outputs $1, 2, 6, 24$ on inputs $1, 2, 3, 4$? – C Komus Aug 14 '18 at 8:17
• Why is it decidable? It seems to me that it is only semi-decidable, as some of the permutations may run forever and we won't be able to tell. – Andrej Bauer Aug 15 '18 at 12:42
• I would say semidecidable. “Computable” is too vague and if anything it means decidable (to me at least). – Andrej Bauer Aug 15 '18 at 19:47
• @AndrejBauer fair enough, it's a partial recursive function – Bjørn Kjos-Hanssen Aug 15 '18 at 20:03
• @CKomus right, we're just discussing which words are appropriate for that case – Bjørn Kjos-Hanssen Aug 17 '18 at 16:03
Others have pointed out this is semidecidable. In most programming languages, the problem is NP-hard. In particular, the following problem is NP-hard:
Input: a set of lines of code
Question: does there exist a permutation, so that this yields a syntactically valid program?
If the programming language has two kinds of matched parentheses (e.g., () and []), then the problem is NP-hard, as shown here, as we can restrict to programs that contain only those symbols. Most languages have at least two kinds of parentheses symbols (e.g., () or [] or {}).
(Even in a language as simple as Scheme, it might still be possible to encode two independent types of parenthesis symbols: e.g., let (# and t) be the first pair of symbols, and (if #t and 1) be the other pair of symbols.)
• @NealYoung, you're right; corrected. – D.W. Aug 17 '18 at 1:20
• (Deleting my other comment given the updated post.) Here's another way to show it is NP hard: given a SAT formula, produce a program P that has no syntactically correct permutation (other than the identity) and that outputs "True" if and only if the formula is satisfiable. – Neal Young Aug 17 '18 at 3:27
• @NealYoung, that sounds interesting. I'd be interested to see how to construct such a program. – D.W. Aug 17 '18 at 3:54
• E.g. make a single-line program that checks all possible assignments and returns 0 iff there is a satisfying assignment. – Neal Young Aug 17 '18 at 14:52
At first not all the permutations will lead to a valid program. secondly, the answer depend on how the code is formatted.
next, there could be permutations that produce a program that do not terminate making it impossible to know if the output correspond to the valid values you gave. for instance suppose you have formatted your previous program as follow:
(define (factorial x)
(if (= x 0)
1
(else
(* x (factorial (- x 1)))
))
and consider the following permutation of its lines:
(define (factorial x)
(* x (factorial (- x 1)))
(if (= x 0)
1
(else
))
it is clear that it will not terminate.
As a result it cannot be possible to give the run time needed to solve it in the general case.
• I agree with your conclusion but not your reasoning. (It's true that the obvious approach -- executing each permutation -- doesn't work for the reason you point out, but that does not imply that no approach works. For that you need a more careful argument.) – Neal Young Aug 17 '18 at 0:35
• The difficulty is that if we want to know that a permuted program produces the right output, then we shall execute it and compare its output with the correct one. However this latter is not guaranteed to terminate. – RTK Aug 17 '18 at 7:11
• Sure but you then conclude "it cannot be possible" (by any means, presumably), which does not follow from your argument. – Neal Young Aug 17 '18 at 14:53 | 2019-10-20 08:38:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5201836228370667, "perplexity": 656.6287315126522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00032.warc.gz"} |
https://www.gamedev.net/blogs/entry/1873949-2d-engine-map-editor/ | • entries
78
138
• views
35842
# 2D Engine Map Editor
118 views
So I've chosen to go with the map editor first. I know for a fact this is going to be the hardest thing for me to do because I've always struggled with making anything map wise when I tried in the past. I have no problem setting up any maze with graphics in collision, however I'm not sure on how I'm going setup the map editor to save the files and what I'm really going to be doing. I've brought up a big stack of papers from downstairs to get deigning on. In the past, anything I couldn't do was because of poor design ideas and having no real objective but on the spot programming. When I started C++ programming I started deigning what I wanted either on my tablet in Photoshop or on paper, every thing works out much better this way because of that sense of direction.
I know there are a lot of map editors out there, however I became a programmer to create. Learning to make this map editor will allow me to expand my knowledge of programming. Now that my coffee is ready, time to get to work!
Anyway, why not save it as a .txt file for now?
Just define the types, ex: WATER_ONE = 1, RED = 2 , CAR_TYPE_ONE = 3 so on, and when it saves you'l have a nice file with row after row of numbers and all that is left is too make a loader that reades each uumber at a time and loades the file that goes with it.
for (i = 0, i <= map_height, i++)
{
for (j = 0, j <= map_width, j++)
{
//print the type number to the file
//Then later just create a loader using the same method
//but instead of writing too a file it will read from it
}
}
Im just a noob right now in c++ so there might be a better and more efficent way too do it but thats one idea.
Hey again.
I'm using this way:
BITMAP *Tiles[2];
int MapOne[15][20] = {
{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0,
## Create an account
Register a new account | 2018-12-10 08:23:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31881949305534363, "perplexity": 136.555835425083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823320.11/warc/CC-MAIN-20181210080704-20181210102204-00560.warc.gz"} |
http://mathhelpforum.com/calculus/152040-using-disk-method-find-value-x-print.html | # Using disk method to find a value of x
• July 26th 2010, 02:59 PM
Poptimus
Using disk method to find a value of x
Hey guys this problem has me stumped.
The region bounded by $y = \sqrt{x}$, $y = 0$, and $y = 4$ is revolved about the x-axis.
(a) Find the value of x in the interval [0,4] that divides the solid into two parts of equal volume.
(b) Find the values of x in the interval [0,4] that divide the solid into three parts of equal volume.
I suspect the easiest way would be to use the disk method; however, I'm not sure how to get an x value that would result in 2 or 3 parts of equal volume. I can guess and check for two, but there has to be an easier way.
Thanks
• July 26th 2010, 03:34 PM
skeeter
Quote:
Originally Posted by Poptimus
Hey guys this problem has me stumped.
The region bounded by $y = \sqrt{x}$, $y = 0$, and $y = 4$ is revolved about the x-axis.
(a) Find the value of x in the interval [0,4] that divides the solid into two parts of equal volume.
(b) Find the values of x in the interval [0,4] that divide the solid into three parts of equal volume.
I suspect the easiest way would be to use the disk method; however, I'm not sure how to get an x value that would result in 2 or 3 parts of equal volume. I can guess and check for two, but there has to be an easier way.
Thanks
first, find the overall volume, $V$, using the disk method from $x = 0$ to $x = 4$
then set up the equation ...
$\displaystyle \pi \int_0^a (\sqrt{x})^2 \, dx = \frac{V}{2}$
solve for $a$
set up the equations ...
$\displaystyle \pi \int_0^b (\sqrt{x})^2 \, dx = \frac{V}{3}$
$\displaystyle \pi \int_0^c (\sqrt{x})^2 \, dx = \frac{2V}{3}$
solve for $b$ and $c$
• July 26th 2010, 03:37 PM
drumist
Did you make a typo? Did you mean bound by $y=\sqrt{x}, ~ y=0, ~ x=4$ ?
• July 26th 2010, 03:41 PM
drumist
The inside of skeeter's integrals was not correct, but the important part was how he set up the endpoints. Just make sure you use the correct integrals though.
• July 26th 2010, 03:46 PM
Poptimus
Yeah that was a typo, it should have been $x = 4$. I'll see what I come up with, thanks guys. | 2015-05-04 09:30:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8141701221466064, "perplexity": 263.81403157051943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430453938418.92/warc/CC-MAIN-20150501041858-00093-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.vedantu.com/iit-jee/jee-advanced-hyperbola-important-questions | # JEE Advanced Hyperbola Important Questions
## JEE Advanced Important Questions of Hyperbola
Hyperbola is one of the important topics that come under the conic section in the syllabus of JEE Advanced 2020. The below PDF consists of important questions on Hyperbola for JEE Advanced and the solutions to the same. Hyperbola is a curve that can be defined to be the locus, of the points in the planar region -
• which have a constant positive difference
• between their distances from two fixed points. | 2020-08-04 18:09:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8980127573013306, "perplexity": 558.934094040849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735881.90/warc/CC-MAIN-20200804161521-20200804191521-00249.warc.gz"} |
https://aviation.stackexchange.com/questions/34168/does-a-multi-engine-atp-with-single-engine-commercial-privileges-provide-single | Does a multi-engine ATP with single-engine commercial privileges provide single-engine instrument privileges?
Does a multi-engine ATP with single-engine commercial privileges provide single-engine instrument privileges?
I received my ATP certificate recently, and the back of my FAA certificate reads as follows.
AIRLINE TRANSPORT PILOT
AIRPLANE MULTIENGINE LAND
...
COMMERCIAL PRIVILEGES
AIRPLANE SINGLE ENGINE LAND
...
I left out the specific type ratings and limitations as noted by ....
The ATP supersedes an instrument rating, but the ATP is for the multi-engine category. Shouldn't the certificate reflect an instrument rating in the single-engine category?
The changes to my certificate were completed in IACRA and I have had part of a rating inadvertently removed before using the IACRA system.
• ecfr.gov/cgi-bin/… – vasin1987 Dec 26 '16 at 3:13
• Since the Instrument addon is by category, not class, aka Instrument Airplane, or Instrument Rotorcraft, the ATP Multi-Engine Airplane, counts for Instrument Airplane for the Commercial Single. – slookabill Dec 26 '16 at 3:34
• I agree with slookabill, I don't believe there is any distinction for instrument, you have it or you don't, it's not tied to a class of aircraft like single or multi. – Ron Beyer Dec 26 '16 at 4:51
• @slookabill you should post your comment as an answer. It's short but it's the correct one. – ryan1618 Dec 27 '16 at 0:20
• Note also that you have an ATP Certificate, which gives you instrument privileges. On this certificate, there is a sub-section which states that you have Commercial Privileges for ASEL but that is still part of your ATP Certificate. – Lnafziger May 15 '17 at 17:11
2 Answers
Note this is my interpretation of part 61. Double check with the FSDO for a more legally biding answer.
Since the Instrument addon is by category, not class, aka Instrument Airplane, or Instrument Rotorcraft, the ATP Multi-Engine Airplane, counts for Instrument Airplane for the Commercial Single.
ATP certificates do NOT supersede an instrument rating. Rather an instrument rating is a pre-requisite to obtain an ATPL. See FAR 61.153 subpart d.1.
An instrument-airplane or instrument-rotorcraft helicopter rating is applicable to all classes or types of aircraft in that category. I would think the ATPL would have an addendum to it stating INSTRUMENT-AIRPLANE. I guess the FAA does not list it that way.
• §61.167 (a) 1 ... an atp holder is entitled to the same privileges as someone who holds a commercial certificate and instrument rating, so that's probably why my ticket doesn't say instrument rated anymore. Your response doesn't directly address the question. – ryan1618 Dec 27 '16 at 0:10 | 2021-03-09 08:12:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6444998979568481, "perplexity": 5986.301143171839}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389472.95/warc/CC-MAIN-20210309061538-20210309091538-00245.warc.gz"} |
http://orbi.ulg.ac.be/browse?type=datepublished&sort_by=1&order=DESC&rpp=20&etal=3&value=1998&offset=120 | References of "1998" in Complete repository Arts & humanities Archaeology Art & art history Classical & oriental studies History Languages & linguistics Literature Performing arts Philosophy & ethics Religion & theology Multidisciplinary, general & others Business & economic sciences Accounting & auditing Production, distribution & supply chain management Finance General management & organizational theory Human resources management Management information systems Marketing Strategy & innovation Quantitative methods in economics & management General economics & history of economic thought International economics Macroeconomics & monetary economics Microeconomics Economic systems & public economics Social economics Special economic topics (health, labor, transportation…) Multidisciplinary, general & others Engineering, computing & technology Aerospace & aeronautics engineering Architecture Chemical engineering Civil engineering Computer science Electrical & electronics engineering Energy Geological, petroleum & mining engineering Materials science & engineering Mechanical engineering Multidisciplinary, general & others Human health sciences Alternative medicine Anesthesia & intensive care Cardiovascular & respiratory systems Dentistry & oral medicine Dermatology Endocrinology, metabolism & nutrition Forensic medicine Gastroenterology & hepatology General & internal medicine Geriatrics Hematology Immunology & infectious disease Laboratory medicine & medical technology Neurology Oncology Ophthalmology Orthopedics, rehabilitation & sports medicine Otolaryngology Pediatrics Pharmacy, pharmacology & toxicology Psychiatry Public health, health care sciences & services Radiology, nuclear medicine & imaging Reproductive medicine (gynecology, andrology, obstetrics) Rheumatology Surgery Urology & nephrology Multidisciplinary, general & others Law, criminology & political science Civil law Criminal law & procedure Criminology Economic & commercial law European & international law Judicial law Metalaw, Roman law, history of law & comparative law Political science, public administration & international relations Public law Social law Tax law Multidisciplinary, general & others Life sciences Agriculture & agronomy Anatomy (cytology, histology, embryology...) & physiology Animal production & animal husbandry Aquatic sciences & oceanology Biochemistry, biophysics & molecular biology Biotechnology Entomology & pest control Environmental sciences & ecology Food science Genetics & genetic processes Microbiology Phytobiology (plant sciences, forestry, mycology...) Veterinary medicine & animal health Zoology Multidisciplinary, general & others Physical, chemical, mathematical & earth Sciences Chemistry Earth sciences & physical geography Mathematics Physics Space science, astronomy & astrophysics Multidisciplinary, general & others Social & behavioral sciences, psychology Animal psychology, ethology & psychobiology Anthropology Communication & mass media Education & instruction Human geography & demography Library & information sciences Neurosciences & behavior Regional & inter-regional studies Social work & social policy Sociology & social sciences Social, industrial & organizational psychology Theoretical & cognitive psychology Treatment & clinical psychology Multidisciplinary, general & others Showing results 121 to 140 of 2401 2 3 4 5 6 7 8 9 10 11 12 Positive parity pentaquarks in a Goldstone boson exchange modelStancu, Floarea in Physical Review. D : Particles and Fields (1998), D58We study the stability of the pentaquarks $uudd\overline{Q}$, $uuds\overline{Q}$ and $udss\overline{Q}$ ($Q = c$ or $b$) of positive parity in a constituent quark model based on Goldstone boson exchange ... [more ▼]We study the stability of the pentaquarks $uudd\overline{Q}$, $uuds\overline{Q}$ and $udss\overline{Q}$ ($Q = c$ or $b$) of positive parity in a constituent quark model based on Goldstone boson exchange interaction between quarks. The pentaquark parity is the antiquark parity times that of a quark excited to a p-shell. We show that the Goldstone boson exchange interaction favors these pentaquarks much more than the negative parity ones of the same flavour content but all quarks in the ground state. We find that the nonstrange pentaquarks are stable against strong decays. [less ▲]Detailed reference viewed: 9 (4 ULg) Les nouveaux marqueurs de l'infarctus du myocardeChapelle, Jean-Paul Conference (1998, October 10)Detailed reference viewed: 6 (0 ULg) X-ray analysis of the NMC-A beta-lactamase at 1.64-A resolution, a class A carbapenemase with broad substrate specificitySwaren, Peter; Maveyraud, Laurent; Raquet, Xavier et alin Journal of Biological Chemistry (1998), 273(41), 26714-26721Detailed reference viewed: 14 (0 ULg) Acromegalie : OnderzoekenBeckers, Albert ; Stevenaert, Achille in Medical News (1998), 51Detailed reference viewed: 8 (0 ULg) Acromégalie : Examens complémentairesBeckers, Albert ; Stevenaert, Achille in Medical News (1998), 51Detailed reference viewed: 15 (0 ULg) Canonical variabes of aquatic bryophyte combinations for predicting water trophic level.Vanderpoorten, Alain ; Palm, Rodolphe in Hydrobiologia (1998), 386Detailed reference viewed: 4 (0 ULg) 4-Methoxypyridine N-Oxide: a new regulator for the controlled free radical polymerization of methyl methacrylateDetrembleur, Christophe ; Lecomte, Philippe ; Caille, Jean-Raphaël et alin Macromolecules (1998), 31(20), 7115-7117Detailed reference viewed: 42 (11 ULg) Acromégalie : Diagnostic différentielBeckers, Albert ; Stevenaert, Achille in Medical News (1998), 50Detailed reference viewed: 11 (0 ULg) Acromegalie : Differentiaal-diagnoseBeckers, Albert ; Stevenaert, Achille in Medical News (1998), 50Detailed reference viewed: 15 (0 ULg) Electrochemical synthesis of polypyrrole nanowiresJérôme, Christine ; Jérôme, Robert in Angewandte Chemie (International ed. in English) (1998), 37(18), 2488-2490Through a hole in a poly(ethyl acrylate) (PEA) layer that is electrochemically grafted to the surface of a vitreous carbon electrode-that is the route that must be taken by a growing polypyrrole nanowire ... [more ▼]Through a hole in a poly(ethyl acrylate) (PEA) layer that is electrochemically grafted to the surface of a vitreous carbon electrode-that is the route that must be taken by a growing polypyrrole nanowire in the electropolymerization of pyrrole. Chain growth is controlled by diffusion of the monomer through the DMF-swollen PEA layer, which acts as a template for the formation of nanowires (shown in the picture) with diameters of 400-1000 nm and lengths of up to 300 m. [less ▲]Detailed reference viewed: 26 (1 ULg) Molecular Cloning of a Mutated Hoxb7 Cdna Encoding a Truncated Transactivating Homeodomain-Containing ProteinChariot, Alain ; Senterre-Lesenfants, Sylviane; Sobel, Mark E et alin Journal of Cellular Biochemistry (1998), 71(1), 46-54Homeodomain-containing proteins regulate, as transcription factors, the coordinated expression of genes involved in development, differentiation, and malignant transformation. We report here the molecular ... [more ▼]Homeodomain-containing proteins regulate, as transcription factors, the coordinated expression of genes involved in development, differentiation, and malignant transformation. We report here the molecular cloning of a mutated HOXB7 transcript encoding a truncated homeodomain-containing protein in MCF7 cells. This is a new example of mutation affecting the coding region of a HOX gene. In addition, we detected two HOXB7 transcripts in several breast cell lines and demonstrated that both normal and mutated alleles were expressed at the RNA level in MCF7 cells as well as in a variety of breast tissues and lymphocytes, suggesting that a truncated HOXB7 protein might be expressed in vivo. Using transient co-transfection experiments, we demonstrated that both HOXB7 proteins can activate transcription from a consensus HOX binding sequence in breast cancer cells. Our results provide evidence that HOXB7 protein has transcription factor activity in vivo and that the two last amino acids do not contribute to this property. [less ▲]Detailed reference viewed: 18 (5 ULg) Cyclical Spectral and Photometric Variations of the Apparently Single Wolf-Rayet Star WR 134Morel, Thierry ; Marchenko, S. V.; Eenens, P. R. J. et alin Astrophysics & Space Science (1998), 260Evidence is presented for the existence of a 2.3 day periodicity in the line-profile changes of the apparently singleWolf-Rayet star WR 134. This cyclical variability may be induced either by the presence ... [more ▼]Evidence is presented for the existence of a 2.3 day periodicity in the line-profile changes of the apparently singleWolf-Rayet star WR 134. This cyclical variability may be induced either by the presence of an orbiting collapsed companion, or by the rotational modulation of a largely inhomogeneous outflow. [less ▲]Detailed reference viewed: 6 (4 ULg) Quasar polarization (Hutsemekers+ 1998)Hutsemekers, Damien ; Lamy, H.; Remy, MarcTextual, factual or bibliographical database (1998)Table 2 contains optical (V) polarimetric measurements for 42 optically selected QSOs including 29 broad absorption line QSOs. Table 3 contains a series of spectral indices characterizing the broad ... [more ▼]Table 2 contains optical (V) polarimetric measurements for 42 optically selected QSOs including 29 broad absorption line QSOs. Table 3 contains a series of spectral indices characterizing the broad absorption line QSOs. (2 data files). [less ▲]Detailed reference viewed: 12 (0 ULg) The burden of illness of hypopituitary adults with growth hormone deficiency.Hakkaart-van Roijen, L.; Beckers, Albert ; Stevenaert, Achille et alin PharmacoEconomics (1998), 14(4), 395-403Objective: The negative metabolic and psychosocial consequences of growth hormone deficiency (GHD) in adults are now well established. In the present study, an attempt was made to quantify the burden of ... [more ▼]Objective: The negative metabolic and psychosocial consequences of growth hormone deficiency (GHD) in adults are now well established. In the present study, an attempt was made to quantify the burden of illness, in terms of lost productivity and increased medical consumption, associated with hypopituitarism and untreated GHD. Design and Setting: The study population consisted of 129 Belgian adults with untreated GHD associated with hypopituitarism after pituitary surgery. The Short-Form 36 Health Survey (SF-36) was used to assess health status, and the Health and Labour Questionnaire was used to measure production losses and labour performance. Data on medical consumption were also collected. Main Outcome Measures and Results: Hypopituitary patients reported a lower health status than that of the general population in all but two dimensions of the SF-36 (pain and physical functioning). Nearly 11% of the patients reported being incapacitated for paid employment due to health problems, compared with 4.8% of the general Belgian population. Patients in paid employment reported a mean of 19.8 days of sickness leave per year, which is twice that in the general population. The annual number of visits to general practitioners and specialists was also higher in the patients (9.6 and 6.5 visits, respectively, for the patients compared with corresponding figures of 2.1 and 1.5 for the general Belgian population). The average annual number of days spent in hospital was 3.5 for the patients compared with 2.3 in the general population. The annual healthcare costs and costs due to production losses calculated for hypopituitary patients who had received pituitary surgery amounted to 135 024 Belgian francs (BeF) or $US4340 (1995 values). This compares with the mean annual cost per person for the Belgian population as a whole of BeF68 569 or$US2204. Conclusions: Hypopituitary patients with untreated GHD therefore have a higher cost to society in terms of lost production and medical consumption than the average Belgian population. [less ▲]Detailed reference viewed: 18 (0 ULg) La troponine, nouveau marqueur de l'infarctus du myocardeChapelle, Jean-Paul in Revue Médicale de Liège (1998), 53(10), 619-24The troponin (Tn) complex consists of three protein subunits referred to as TnT, TnI and TnC. Myocardium contains TnI and TnT isoforms which are not present in skeletal muscles and which can be separated ... [more ▼]The troponin (Tn) complex consists of three protein subunits referred to as TnT, TnI and TnC. Myocardium contains TnI and TnT isoforms which are not present in skeletal muscles and which can be separated from the muscular isoforms by immunological techniques. Using commercially available immunoassays, clinical laboratories are able to determine cardiac TnT and TnI (cTnT and cTnI) as quickly and reliably as classical cardiac markers. After acute myocardial infarction, cTnT and cTnI concentrations start to increase in serum in a rather similar way than CK-MB, but return to normal after longer periods of time (approximately one week). Because of their excellent cardiac specificity, Tn subunits appear ideally suited for the differential diagnosis of myocardial and muscular damage, for example in non cardiac surgery patients, in patients with muscular trauma or with chronic muscular diseases, or after intensive physical exercise. cTnT and cTnI may also be used for detecting evidence of minor myocardial damage, and therefore may find new applications in the management of patients with unstable angina. [less ▲]Detailed reference viewed: 66 (2 ULg) Foam-like evolution in polycrystalline systems following successive 'melt and growth' cyclesVandewalle, Nicolas ; Delisse, B.; Ausloos, Marcel et alin Philosophical Magazine B-Physics of condensed matter statistical mechanics electronic optical and magnetic properties (1998), 78(4), 397-408A stochastic model of multigrain growth which allows for successive melting-growth cycles is investigated on a square lattice. Two fundamental constraints are introduced: (i) the melted mass amplitude and ... [more ▼]A stochastic model of multigrain growth which allows for successive melting-growth cycles is investigated on a square lattice. Two fundamental constraints are introduced: (i) the melted mass amplitude and (ii) the number of cycles guide the process. The evolution of the microstructure is found to be quite similar to foam systems, i.e. topological rearrangements are observed together with the increase of the-mean grain area. A drastic crossover between two types of growth regimes is found as a function of the amplitude of the melting-growth cycles. This allows one to envisage the existence of optimal conditions for polycrystal processing. [less ▲]Detailed reference viewed: 11 (0 ULg) Pichia anomala strain K against post-hervest diseasesJijakli, Haissam Scientific conference (1998, October)Detailed reference viewed: 6 (0 ULg) Participation et intérêt politiques de lycéens français, belges et québécois au début des années quatre-vingt-dix : Une analyse plurielle fondée sur la dynamique de construction des univers de référenceFournier, Bernard Doctoral thesis (1998)Although certain recent theoretical developments insist more on the plurality of social and individual realities, the answers to empirical questions such as Are the young involved? or Are they interested ... [more ▼]Although certain recent theoretical developments insist more on the plurality of social and individual realities, the answers to empirical questions such as Are the young involved? or Are they interested in politics? still too often fit in a univocal logic of comprehension where concepts used are thought to be homogeneous, as if they were “subsumptions”. This is why it is coherent to explain phenomena from a linear point of view which supposes a precise vision of the processes of construction of individual worlds of reference. Jean Piaget’s perspective of socialization reminds us that the individual is not a passive being in the dynamics of appropriation of the world, but that he assimilates the social context, accommodates it to what he already understands and, thus, transforms it. Consequently, this opens the way to a form of conceptual relativism, with a plural logic of comprehension where each reality, each concept cannot be defined without being replaced in the context of the singular worlds of reference. Thus, the theoretical consideration of the plural possibilities must also emerge from our interpretations. This research challenge is raised in this thesis by studying a series of profiles of dimensions, where each dimension can be replaced in a context which materializes a certain organization of realities. The complexity thus introduced into the analysis is synthesized by multivariate statistical methods (analysis of multiple correspondences and hierarchical ascending classification) which respect the initial organization of profiles. On the basis of data from an original inquiry among French, Belgian and Québécois high-school students, series of “clusters” of similar individuals are presented to describe, in a plural way, their participation and their political interests at the beginning of the nineties. [less ▲]Detailed reference viewed: 75 (1 ULg) Optimization techniques for parameter identification of material constitutive laws in large deformation processesKLEINERMANN, J. P.; Ponthot, Jean-Philippe ; Hogge, Michel in Journées Samtech (1998, October)Detailed reference viewed: 24 (1 ULg) Procédures particulières de droit judiciaire privé : scellés - inventaire - divorce par consentement mutuelMoreau, Pierre in chronique du 10 octobre 1998 (1998)Detailed reference viewed: 21 (3 ULg) | 2014-10-31 04:16:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48638659715652466, "perplexity": 13690.860926877624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898844.3/warc/CC-MAIN-20141030025818-00174-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://physics.stackexchange.com/questions/10479/is-the-quantum-analog-of-a-probability-distribution-the-wave-function-or-the-den | # Is the quantum analog of a probability distribution the wave function or the density matrix?
Classically, probability distributions are nonnegative real measures over the space of all possible outcomes which add up to 1. What they mean is open to debate between Bayesians, frequentists and ensemble interpretations. A degenerate distribution is the least random distribution with a probability of 1 for a given fixed event, and 0 for everything else.
What is the analog of a classical probability distribution in quantum mechanics? Is it a wave function augmented with the Born interpretation for probabilities, or is it the density matrix? Does a pure density matrix correspond to a degenerate distribution?
-
The point is perhaps that quantum theory states are not analogs of probability distributions -- at least not exactly. I suggest arxiv.org/abs/quant-ph/0101012v4 as an interesting attempt to squeeze the two as close together as they can be (it's 34 pages, but Lucien Hardy is relatively easy reading). I'm not as happy as I'd like with this response, which is why it's a comment, not an Answer. – Peter Morgan May 27 '11 at 12:23
One obligatory reference is that of the Wigner quasi-probability function: en.wikipedia.org/wiki/Wigner_quasi-probability_distribution . This object, which is the Wigner transform of the coordinate representation of the density matrix, is as close as you can get to the classical distribution function. You can even prove that it becomes the classical phase space distribution function in the classical limit. – Olaf May 27 '11 at 13:15
Help my simple confusion. Isn't probability basically just the wave function squared? – Mike Dunlavey Dec 6 '11 at 14:38
It's difficult to give an authoritative answer to this one. But one thing to note is that, formally, density matrices are exactly equivalent to probability distributions in the case where you never transform them to have non-zero elements off the diagonal, i.e. you always work in the same orthonormal basis. It's the ability to measure in different bases that gives density matrices their specifically "quantum" character. – Nathaniel Mar 7 '12 at 18:34
This question is rather broad, but I will try to address the main issue. A state $|\psi\rangle$ is alinear sum of eigenvectors $|n\rangle$ so that $$|\psi\rangle~=~\sum_nc_n|n\rangle.$$ The probabilities maybe directly computed this way from $\langle\psi|\psi\rangle$ $=~1$ as $$\langle\psi|\psi\rangle~=~\sum_{mn}c^*_mc_n\langle m|n\rangle$$ and since $\langle m|n\rangle~=~\delta_{mn}$ and the amplitudes define the probabilities as their modulus squared $c^*_nc_n~=~P_n$ this expectation is then just a sum of the probabilities. The born rule then tells that for an observable ${\cal O}$ diagonal in this basis with ${\cal O}|n\rangle~=~o_n|n\rangle$ the expected value of the observable for $|\psi\rangle$ is then $$\langle\psi|{\cal O}\psi\rangle~=~\sum_{mn}c^*_mc_no_n \langle m| n\rangle~=~\sum_no_nP_n.$$ This is just the sum of the eigenvalues of the observable times their probabilities for being observed.
The density matrix is really a form generalization of this. We consider the outer product of the states as the density operator $${\hat\rho}~=~|\psi\rangle\langle\psi|~=~\sum_{mn}c^*_mc_n|n\rangle\langle m|$$ which is a matrix representation of this operator. The trace of the density matrix, $Tr{\hat\rho}~=~\sum_n\langle n|{\hat\rho}|n\rangle$ then gives the sum of probabilities. I leave it as an exercise to evaluate an observable in the trace $Tr{\cal O}{\hat\rho}$ to derive the Born rule.
The modulus squares of the amplitudes define the probabilities, which are real valued. So the connection between the states and the probabilities is not one to one. One reason people like the density matrix is that there is more of a connection between a linear quantum operator and the probabilities in the trace operation.
-
I'll reproduce my comment from above here.
The point is perhaps that quantum theory states are not analogs of probability distributions -- at least not exactly. I suggest arxiv.org/abs/quant-ph/0101012v4 as an interesting attempt to squeeze the two as close together as they can be (it's 34 pages, but Lucien Hardy is relatively easy reading). I'm not as happy as I'd like with this response, which is why it's a comment, not an Answer.
I decided I was being lazy, and looked at Lucien's paper for what I might take to be its relevance to your Question. Lucien takes quantum pure states to be analogous enough to degenerate probabilities to use the idea. Where that becomes interesting is the way he can then characterize the difference between classical probabilistic states and quantum states, given this starting point. The distinguishing feature is his fifth axiom,
"Axiom 5 Continuity. There exists a continuous reversible transformation on a system between any two pure states of that system."
IMO, this is definitely a curious way to construct things. It has a distinct failing, that it's limited to finite-dimensional Hilbert spaces and probability distributions over a finite set of outcomes, and AFAIK no-one has extended Lucien's analysis to infinite dimensional Hilbert spaces and probability spaces, which somewhat diminishes its interest unless you in any case work only with finite dimensional Hilbert spaces (as you might if you work in quantum information).
The point I'd make about this is that this is an interesting partial analogy, although I do not know that any more directly related-to-experiment use has been made of it. It may well be worth thinking in terms of this partial analogy some more, but my personal assessment has been that this is not something worth hanging my hat on exclusively. On the other hand, it's only if one immerses oneself in a way of thinking in a committed way that new results come from anywhere, and it's because people have made different choices than I make that we get different results.
In any case, I think Lucien's paper is relevant for you. It has, to me, the same feel as the way you have asked your Question.
-
You can make a very strong formal analogy between the density matrix and classical probability distributions. An alternate way to think of random variables from a particular distribution is instead to focus on an algebra of observables. Random variables are closed under products, sums, complex conjugation, and so forth, and expectation (with respect to a given distribution) must be a linear functional. This can be modeled perfectly well with the standard $\langle X \rangle = \operatorname{Tr} (\rho X)$ of quantum mechanics.
Seen this way, classical probability distributions are the special case where all of the operators commute. Since they all commute, they can be simultaneously diagonalized, and thus evrything observable about the distribution is entirely captured by the diagonal elements in this basis. The possible non-commutativity of operators in the quantum case allows for behaviors different than the classical case, of course. (If it didn't, it would be the classical case, rather than analogous to the classical case.) In this picture, wavefunctions are more analogous to classical ontic states, but with the curious feature that these pure state have questions with no definite answers.
-
The density matrix is the quantum analogue of a classical probability density.
If you write quantum mechanics from the start in terms of an algebra of observables and density matrices (rather than wave functions and operators), it looks very much like a direct generalization of classical mechanics, in the sense that almost everything is the same except that commutativity of multiplication is lost. (Indeed, in mathematics, the subject studying the probabilistic aspects of the quantum mechanical formalism is usually called ''noncommutative probability''.)
You can convince yourself of this by looking at my book ''Classical and Quantum Mechanics via Lie algebras'' (http://lanl.arxiv.org/abs/0810.1019), where this approach is followed systematically.
For example (directly relating to your question), in classical probability theory a pure state (with a degenerate distribution in your terminology) is characterized by zero entropy, and in quantum mechanics, a pure state (i.e., one in which the density matrix can be written as $\psi\psi^*$ with a wave function $\psi$ is also characterized by zero entropy. The formula for the entropy is also the same; $S=\langle -\log\rho\rangle$, with expectation taken in the state $\rho$.
Expressed directly in terms of the density matrix, quantum mechanics is governed by the following six axioms and their explanation. (This is taken from the Section ''Postulates for the formal core of quantum mechanics'' of Chapter A1: Fundamental concepts in quantum mechanics of my theoretical physics FAQ.)
Note that the only difference between classical and quantum mechanics in this axiomatic setting is that
• the classical case only works with diagonal operators, where all operations happen pointwise on the diagonal elements. Thus multiplication is commutative, and one can identify operators and functions. In particular, the density mattrix degenerates into a probability density.
• the quantum case allows for noncommutative operators, hence both observable quantities and the density are (usually infinite-dimensional) matrices.
A1. A generic system (e.g., a 'hydrogen molecule') is defined by specifying a Hilbert space $K$ and a (densely defined, self-adjoint) Hermitian linear operator $H$ called the Hamiltonian or the energy.
A2. A particular system (e.g., 'the ion in the ion trap on this particular desk') is characterized by its state $\rho(t)$ at every time $t \in R$ (the set of real numbers). Here $\rho(t)$ is a Hermitian, positive semidefinite, linear trace class operator on $K$ satisfying at all times the condition
$Tr\ \rho(t) = 1$, (normalization)
where $Tr$ denotes the trace.
A3. A system is called closed in a time interval $[t_1,t_2]$ if it satisfies the evolution equation
$d/dt\ \rho(t) = i/\hbar [\rho(t),H] \mbox{ for } t \in [t_1,t_2]$,
and open otherwise. ($\hbar$ is Planck's constant, and is often set to 1.) If nothing else is apparent from the context, a system is assumed to be closed.
A4. Besides the energy $H$, certain other (densely defined, self-adjoint) Hermitian operators (or vectors of such operators) are distinguished as observables. (E.g., the observables for a system of $N$ distinguishable particles conventionally include for each particle several 3-dimensional vectors: the position $x^a$, momentum $p^a$, _orbital_angular_momentum_ $L^a$ and the _spin_vector_ (or Bloch vector) $\sigma^a$ of the particle with label $a$. If $u$ is a 3-vector of unit length then $u \cdot p^a$, $u \cdot L^a$ and $u \cdot \sigma^a$ define the momentum, orbital angular momentum, and spin of particle $a$ in direction $u$.)
A5. For any particular system, and for every vector $X$ of observables with commuting components, one associates a time-dependent monotone linear functional $\langle \cdot\rangle_t$ defining the expectation
$\langle f(X)\rangle_t:=Tr\ \rho(t) f(X)$
of bounded continuous functions $f(X)$ at time $t$. (This is equivalent to a multivariate probability measure $d\mu_t(X)$ on a suitable sigma algebra over the spectrum $spec(X)$ of $X$) defined by
$\int d\mu_t(X) f(X) := Tr\ \rho(t) f(X) =\langle f(X)\rangle _t$.
This sigma algebra is uniquely determined.)
A6. Quantum mechanical predictions consist of predicting properties (typically expectations or conditional probabilities) of the measures defined in Axiom A5, given reasonable assumptions about the states (e.g., ground state, equilibrium state, etc.)
Axiom A6 specifies that the formal content of quantum mechanics is covered exactly by what can be deduced from Axioms A1-A5 without anything else added (except for restrictions defining the specific nature of the states and observables), and hence says that Axioms A1-A5 are complete.
The description of a particular closed system is therefore given by the specification of a particular Hilbert space in A1, the specification of the observable quantities in A4, and the specification of conditions singling out a particular class of states (in A6). Given this, everything else is determined by the theory, and hence is (in principle) predicted by the theory.
The description of an open system involves, in addition, the specification of the details of the dynamical law.
• -
It is wrong to think of a mixed density state as a probability distribution over "ontic" wavefunctions. See the question Are these two quantum systems distinguishable? . Two different ways of preparing a quantum state are given and the "ontic" wavefunctions don't agree, but you know what? The density matrices still agree and we can never tell the difference.
-
In the orthodox view, the wave function is the quantum analogue to a physical state in classical mechanics, and hence a degenerate probability distribution in classical probability. A mixed state is the quantum analogue to a probability distribution on phase space. A density matrix is the usual way to represent a mixed state, nowadays, it was not part of the original formulation of QM but was invented by von Neumann. It differs from a classical proability distribution, just like all quantum notions differ from classical notions, and leads to the formulation of a new, non-classical, probability, called either Quantum Probability or Non-Commutative Probability.
Not everyone will agree with me. Within QIT, and especially within one of Lucien Hardy's early 're-constructions' of QM, see
Quantum Theory from Five Reasonable Axioms. http://arxiv.org/abs/quant-ph/0101012
he takes mixed states or density matrices as the fundamental 'physical states', perhaps without caring whether this is an alteration of the orthodox QM or not. Admittedly, this revision of QM is very commonly accepted by physicists now.
Elsewhere, (just google on Axiomatisation of Physics) I have argued
that he would need to seriously revise even classical probability to make his ideas work, and I doubt it is possible, hence I doubt that his axioms are physically true. This at least shows, whether they are true or not, that they are different from normal QM, even for finite dimensional Hilbert spaces. I do not think very many QIT people understand even classical probability, even for discrete spaces, but then, it is awfully tricky. For the foundations of classical probability, see my own
"The Logical Structure of Physical Probability Assertions", http://arxiv.org/abs/quant-ph/0508059
-
The wavefunction does not have the same ontic status as a classical probability distribution of a Markovian process. Ignore classical probability distributions which can take on any form subject to the axioms of probability. If one just focuses upon Markovian processes, there is no need to specify the entire probability function for every single time step. All one needs to postulate are the existence of genuine randomness generators of the simple sort. All one needs to keep track of at each time step is the actual state of the system. The following state is determined from the Markov transition matrix and the random generator according to the Monte Carlo method. Unfortunately, this can't work in quantum mechanics because of interference. Pusey et al. proved one needs the entire wavefunction. The wavefunction is not analogous to the classical probability distribution of a Markovian process.
-
this does not answer the question – Arnold Neumaier Nov 15 '12 at 14:54
The density matrix has a property not shared by classical probability distributions. If you take the marginal trace of a classical degenerate distribution, you end up with another degenerate distribution, but if you take the partial trace of a pure state, you can easily end up with a mixed state. Such is the CRAZY nature of entanglement. This is the property used by decoherence guys to explain how probabilities emerge from a wavefunction. This does not explain away macroscopic superpositions by any means. A macroscopic superposition with suppressed interference terms is still a superposition. A decohered superposition, but a superposition nonetheless.
-
doesn't answer the question – Arnold Neumaier Nov 15 '12 at 15:07 | 2016-07-01 20:42:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8740620017051697, "perplexity": 366.2125918030178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00118-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/49453/several-questions-around-the-exponential-law | # Several questions around the exponential law
$\mathrm{DISCLAIMER~:~}$I am not interested in working with compactly generated spaces.
This post is related to this one : Exponential Law for based spaces. I learned about the exponential law for topological spaces quite some time ago, and I thought I understood it well until I decided to reprove it today as I have been using it lately.
What confuses me is that I 'seem' to have proven it with weaker conditions than those stated in the textbooks. To be precise, I think I have shown that there is a natural homeomorphism $$\mathrm{Map}(X\times Y,Z)\simeq\mathrm{Map}(X,\mathrm{Map}(Y,Z))$$ where $Z$ is any topological space, $X$ is Hausdorff and $Y$ locally compact $no$ $Hausdorff$ $condition$ $required\dots$ In all textbooks I 'm familiar with, none of which feature a proof of the above fact, the extra assumption is made that $Y$ be Hausdorff. The proof I gave is, I think, the one I learnt in Switzer's book (if I remember right) yet I see no need for Hausdorffness in $Y$.
$\mathrm{QUESTION~1:~}$Is $Y$ Hausdorff really necessary?
Also, the reference I am currently using, Algebraic Topology from the Homotopical Viewpoint [Aguilar, Gitler, Prieto, Springer Universitext], exercice $1.3.4$ asks to show that for $X,Y,Z$ topological spaces with $X$ and $Y$ locally compact Hausdorff spaces, composition $$\mathrm{Map}(X,Y)\times\mathrm{Map}(Y,Z)\rightarrow\mathrm{Map}(X,Z),(f,g)\mapsto g\circ f$$ is continuous. Yet I'm pretty sure all you need is for $Y$ to be locally compact$\dots$
$\mathrm{QUESTION~2:~}$ Are all these extra conditions necessary?
-
Without Hausdorff, please specify what you mean by locally compact: every point has a base of open sets with compact closure, every point has a base of compact (not open) neighbourhoods, or instead of base you want just one open set with compact closure, or one compact neighbourhood? These are only equivalent under Hausdorff-like conditions... – Henno Brandsma Jul 4 '11 at 18:17
I used following definition : every point has a base of compact neighborhoods. – Olivier Bégassat Jul 4 '11 at 18:26
Ok, so not necessarily compact --> locally compact. – Henno Brandsma Jul 4 '11 at 18:31
Yes, this is no longer true (a priori). – Olivier Bégassat Jul 4 '11 at 18:36
Engelking states this homeomorphism for: any space Y, Z Hausdorff, X locally compact (which includes Hausdorff with him). And for any X,Z Hausdorff, any Y we have an embedding (which need not be onto). Also, the composition map is continuous in the compact open topologies for any X,Z, locally compact Y. – Henno Brandsma Jul 4 '11 at 18:45
As to Question 2, let $\Sigma$ be the composition map $\mathrm{Map}(X,Y)\times\mathrm{Map}(Y,Z)\rightarrow\mathrm{Map}(X,Z)$.
All functions spaces have the compact-open topology, with as a subbase all sets of the form $\mathrm{M}(C,U) = \{ f: f[C] \subset U \}$, where $C$ is a compact subset of the domain, and $U$ an open subset of the co-domain.
Let $(f,g)$ be a point in $\Sigma^{-1}[\mathrm{M}(C,U)]$, with $C \subset X$ compact and $U \subset Z$ open, and we want to show it's an interior point. We have by definition $(g \circ f)[C] \subset U$, or $f[C] \subset g^{-1}[U]$. As $g^{-1}[U]$ is open, and $f[C]$ is compact, both in $Y$, and if we assume $Y$ is locally compact (in the sense from the comments), we can find for each $y \in f[C]$ a compact neighbourhood $K_y$ that sits inside $g^{-1}[U]$, and so $f[C]$ is covered by finitely many sets of the form $\mathrm{Int}(K_y)$, say $\mathrm{Int}(K_{y_i})$ for $i=1 \ldots n$ and set $W$ to be the finite union of these. Then $W \subset \mathrm{Int}(\cup_{i=1}^n K_{y_i}) \subset K:=\cup_{i=1}^n K_{y_i} \subset g^{-1}[U]$ and so $\Sigma[\mathrm{M}(C,W) \times \mathrm{M}(K,U) ] \subset \mathrm{M}(C,U)$ and so we are done, as $(f,g)$ is in $\mathrm{M}(C,W) \times \mathrm{M}(K,U)$.
This convinces me that indeed we only need local compactness (in a rather strong sense) on $Y$ to show continuity of $\Sigma$, but no Hausdorffness. | 2014-04-19 18:02:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9186629056930542, "perplexity": 246.91589059534311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://stats.stackexchange.com/questions/36266/predictive-model-standardized-variables | # Predictive model & standardized variables
In a predictive model, I have standardized variables as predictors. Say I have to rescore the model on fresh data at some point in the future: do I use the means/stds as they were when I built the model to center and scale the new data, or do I use the means/stds as they are with the data I'm scoring.
My take is to use the means/stds of the data I'm scoring, since I want the standardized variables to reflect distributions as they are at the time of scoring.
Pros & cons of original means/stds vs. current means/stds?
Thanks.
• What do you mean by rescore? Sep 14 '12 at 0:18
• In the context of this post, 'rescore' means computing the predicted dependent values using new readings of the predictor variables. Sep 14 '12 at 1:23
If one were to fit a model $y= \beta_1 + \beta_2z$ where $z=\frac{x-\bar{x}}{sd(x)}$ and use that model to predict $y$ for some given values of $x$, then use the original $\bar{x}$ and $sd(x)$ to standardize the new $x$ values being used for prediction.
However, if one has many new values of $y$ and $x$ and wants to refit the model then standardize $x$ based on the new values of $\bar{x}$ and $sd(x)$.
• But if the model can't be rebuilt (or re-estimated or refit) because all I have are fresh values for the predictors, no reading on the dependent variable yet: do I understand your answer as taking the mean of 'fresh' predictors to compute $z$, and then $\hat{y}$ ? Sep 14 '12 at 1:32
• @user14075 No, if the model can't be rebuilt, then you have to use the original $\bar{x}$ and $sd(x)$ values. Sep 14 '12 at 1:42
• @user14075 It seems silly to use $x$ values that are scaled one way (i.e., standardized based on new $\bar{x}$ and $sd(x)$) to predict $\hat{y}$ from a model that was built using $x$ values that were scaled differently (i.e., standardized based on the original $\bar{x}$ and $sd(x)$). This would be like using an $x$ value that was measured in say inches to build the model and then using $x$ values measured in centimeters (without transformation to inches) to predict $\hat{y}$ from that model. Sep 14 '12 at 16:50 | 2021-10-18 04:28:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7055344581604004, "perplexity": 525.5658306769259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00695.warc.gz"} |
http://publ.plaidweb.site/manual/238-Command-Line-Interface | # Publ: Command-Line Interface
Last updated:
Publ extends the default flask command with some additional commands, which can be used for various useful things. In addition to this page, you can get detailed help from the command line with:
from within your Publ virtual environment (e.g. poetry run flask publ --help or env/bin/flask publ --help).
For usage of the flask command itself, see the Flask CLI documentation.
## reindex
This command lets you force a database reindex, and will wait until the index finishes. This is useful for deployment scripts and the like.
It takes two arguments:
• --fresh/-f: Start with a fresh index (i.e. remove all the cached data); this is useful if something weird has happened and you want a fresh start (if something weird has happened, please open an issue!)
• --quiet/-q: Don’t show the progress indicator
## token
This command can be used to generate an IndieAuth token for scripting purposes, such as for use with Pushl, or for testing authentication automatically.
For example:
will fetch the https://example.com/feed page with a token identifying the user as https://example.com (as generated by your Publ site).
Note that the session key will need to match between wherever you’re running this and the actual site. If they do not match for some reason (for example, because you’re running in a different configuration) this token will not be valid.
This takes the following arguments:
• --scope/-s: Generate the token with the specified scope (defaults to none)
• --lifetime/-l: How long the token should be valid for, in seconds (default: 3600)
Note that token scopes are not currently used by Publ itself; this is provided largely to make it easier to extend Publ via the Python API, such as implementing a mechanism to automatically post content from external sources. | 2021-09-22 09:02:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33088821172714233, "perplexity": 4139.934369869767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057337.81/warc/CC-MAIN-20210922072047-20210922102047-00101.warc.gz"} |
http://www.wsj.com/video/new-year-fireworks-from-around-the-world/8F773ABA-EFA3-467D-A399-7C232C3BD6DC.html | # New Year's Fireworks From Around the World
12/31/2013 8:02PM
## Remember that in other parts of the world it is already 2014. See fireworks shows from New Year's Eve celebrations around the world. (Photo: AP)
This transcript has been automatically generated and may not be 100% accurate.
I ... the ... the ... the ... the new the ... I ... I ... he ... said it ... will ... I ... the ... I ... I ... the ... I ... I ... I ... I ... I ... I ... I ... I ... I ... I ... I ... the ... the ... he ... I ... so ...
More →
More →
More →
More → | 2015-03-28 23:18:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.902079164981842, "perplexity": 3721.748636255289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297831.4/warc/CC-MAIN-20150323172137-00174-ip-10-168-14-71.ec2.internal.warc.gz"} |
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=DBSHBB_2005_v42n1_101 | REMARKS ON THE KKM PROPERTY FOR OPEN-VALUED MULTIMAPS ON GENERALIZED CONVEX SPACES
Title & Authors
REMARKS ON THE KKM PROPERTY FOR OPEN-VALUED MULTIMAPS ON GENERALIZED CONVEX SPACES
KIM HOONJOO; PARK SEHIE;
Abstract
Let (X, D; $\small{{\Gamma}}$) be a G-convex space and Y a Hausdorff space. Then $\small{U^K_C}$(X, Y) $\small{{\subset}}$ KD(X, Y), where $\small{U^K_C}$ is an admissible class (dup to Park) and KD denotes the class of multimaps having the KKM property for open-valued multimaps. This new result is used to obtain a KKM type theorem, matching theorems, a fixed point theorem, and a coincidence theorem.
Keywords
KKM principle;generalized convex (G-convex) spaces;multimaps having the KKM property;multimaps having the KKM property;
Language
English
Cited by
1.
ELEMENTS OF THE KKM THEORY ON CONVEX SPACES,;
대한수학회지, 2008. vol.45. 1, pp.1-27
2.
COINCIDENCE THEOREMS FOR NONCOMPACT ℜℭ-MAPS IN ABSTRACT CONVEX SPACES WITH APPLICATIONS,;;
대한수학회보, 2012. vol.49. 6, pp.1147-1161
1.
COINCIDENCE THEOREMS FOR NONCOMPACT ℜℭ-MAPS IN ABSTRACT CONVEX SPACES WITH APPLICATIONS, Bulletin of the Korean Mathematical Society, 2012, 49, 6, 1147
2.
Alternative principles and minimax inequalities in G-convex spaces, Applied Mathematics and Mechanics, 2008, 29, 5, 665
3.
Fixed points, coincidence points and maximal elements with applications to generalized equilibrium problems and minimax theory, Nonlinear Analysis: Theory, Methods & Applications, 2009, 70, 1, 393
4.
Fixed Points and Existence Theorems of Maximal Elements with Applications inFC-Spaces, ISRN Applied Mathematics, 2011, 2011, 1
5.
Fixed point theorems for better admissible multimaps on abstract convex spaces, Applied Mathematics-A Journal of Chinese Universities, 2010, 25, 1, 55
References
1.
M. Balaj, Applications of two matching theorems in generalized convex spaces, Nonlinear Anal. Forum 7 (2002), 123-130
2.
S. Park, Foundations of the KKM theory via coincidences of composites of upper semicontinuous maps, J. Korean Math. Soc. 31 (1994), 493-519
3.
S. Park, Coincidence theorems for the better admissible multimaps and their applications, Nonlinear Anal. 30 (1997), 4183-419
4.
S. Park, A unified fixed point theory of multimaps on topological vector spaces, J. Korean Math. Soc. 35 (1998), 803-829. Corrections
5.
S. Park, A unified fixed point theory of multimaps on topological vector spaces, J. Korean Math. Soc. 36 (1999), 829-832
6.
S. Park, Elements of the KKM theory for generalized convex spaces, Korean J. Comput. Appl. Math. 7 (2000), 1-28
7.
S. Park, Remarks on topologies of generalized convex spaces, Nonlinear Funct. Anal. Appl. 5 (2000), 67-79
8.
S. Park, New topological versions of the Fan-Browder fixed point theorem, Nonlinear Anal. 47 (2001), 595-606
9.
S. Park, Fixed point theorems in locally G-convex spaces, Nonlinear Anal. 48 (2002), 869-879
10.
S. Park, Coincidence, almost fixed point, and minimax theorems on generalized convex spaces, J. Nonlinear Convex Anal. 4 (2003), 151-164
11.
S. Park and H. Kim, Coincidences of composites of u.s.c. maps on H-spaces and applications, J. Korean Math. Soc. 32 (1995), 251-264
12.
S. Park and H. Kim, Foundations of the KKM theory on generalized convex spaces, J. Math. Anal. Appl. 209 (1997), 551-571
13.
S. Park and W. Lee, A unified approach to generalized KKM maps in generalized convex spaces, J. Nonlinear Convex Anal. 2 (2001), 157-166 | 2017-07-25 17:05:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 4, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44930189847946167, "perplexity": 3433.5527983174297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425339.22/warc/CC-MAIN-20170725162520-20170725182520-00458.warc.gz"} |
https://www.particleincell.com/2012/html5-dsmc/ | # HTML5 + Javascript DSMC Simulation
Posted on December 26th, 2012
Previous Article :: Next Article
## The Demo
In the previous article we presented a crash-course in the Direct Simulation Monte Carlo (DSMC) method (DSMC is an algorithm for simulating gas flows using computational macroparticles instead of fluid equations as is the case with CFD, Computational Fluid Dynamics). The article also contained an interactive example that demonstrated DSMC by colliding two populations in a single simulation cell. We will now go through the steps of getting this example working. You’ll find the demo right here:
Your browser does not support the <canvas> element. Please use a different browser or click here if you are viewing this in an email.
/***** START OF DSMC CLASS *****/
var dsmcCell = (function() {
var AMU = 1.66053886e-27; /*atomic mass*/
var K = 1.3806488e-23; /*Boltzmann constant*/
var V_c=0.001*0.01*0.001; /*cell volume*/
var sigma_cr_max=1e-18; /*initial guess*/
var Fn = 5e8; /*macroparticle weight*/
var delta_t=2e-5; /*time step length*/
var it; /*current iteration*/
var vdf; /*public array to hold VDF data*/
var part_list = new Array(); /*particle list*/
/*particle class, creates particle with isotropic velocity,
see: http://www.particleincell.com/2012/isotropic-velocity/ */
function Part(vdrift,temp,mass)
{
this.v=[];
/*sample speed*/
var vth = sampleVth(temp,mass);
/*pick a random direction*/
var theta = 2*Math.PI*Math.random();
var R = -1.0+2*Math.random();
var a = Math.sqrt(1-R*R);
this.v[0] = vth*Math.cos(theta)*a;
this.v[1] = vth*Math.sin(theta)*a;
this.v[2] = vth*R + vdrift;
this.mass=mass;
}
/*creates initial particle populations*/
function init()
{
/*clear data, important if restarting*/
part_list = [];
it = 0;
/*first create slow particles*/
for (var i=0;i<1000;i++)
part_list.push(new Part(100,5,32*AMU));
/*and now the fast ones*/
for (var i=0;i<1000;i++)
part_list.push(new Part(300,5,32*AMU));
/*show initial plot*/
vdf = computeVDF(0,450,25);
plotXY(vdf);
}
/* performs a single iteration of DSMC*/
function performDSMC()
{
/*fetch number of particles*/
var np = part_list.length;
/*compute number of groups according to NTC*/
var ng_f = 0.5*np*np*Fn*sigma_cr_max*delta_t/V_c;
var ng = Math.floor(ng_f+0.5); /*number of groups, round*/
var nc = 0; /*number of collisions*/
/*for updating sigma_cr_maxx*/
var sigma_cr_max_temp = 0;
var cr_vec = [];
/*iterate over groups*/
for (var i=0;i<ng;i++)
{
var part1,part2;
/*select first particle*/
part1=part_list[Math.floor(Math.random()*np)];
/*select the second one, making sure the two particles are unique*/
do {part2=part_list[Math.floor(Math.random()*np)];}
while (part1==part2);
/*relative velocity*/
for (var j=0;j<3;j++)
cr_vec[j] = part1.v[j]-part2.v[j];
var cr = mag(cr_vec);
/*eval cross section*/
var sigma = evalSigma(cr);
/*eval sigma_cr*/
var sigma_cr=sigma*cr;
/*update sigma_cr_max*/
if (sigma_cr>sigma_cr_max_temp)
sigma_cr_max_temp=sigma_cr;
/*eval prob*/
var P=sigma_cr/sigma_cr_max;
/*did the collision occur?*/
if (P>Math.random())
{
nc++;
collide(part1,part2);
}
}
/*update sigma_cr_max if we had collisions*/
if (nc)
sigma_cr_max = sigma_cr_max_temp;
/*show diagnostics*/
vdf = computeVDF(0,450,25);
plotXY(vdf);
it++;
/*output more data to the console*/
console.log("it: "+it+"\tng: "+ng+"\tnc: "+nc+"\tscmax: "+sigma_cr_max);
/*run for 500 time steps*/
if (it<500)
setTimeout(performDSMC,10);
}
/*samples from 1D Maxwellian using the Birdsall method*/
function maxw1D()
{
return 0.5*(Math.random()+Math.random()+Math.random()-1.5);
}
/*returns random thermal velocity*/
function sampleVth(temp,mass)
{
var sum=0;
var vth = Math.sqrt(2*K*temp/mass);
for (var i=0;i<3;i++)
{
var m1 = vth*maxw1D();
sum += m1*m1;
}
return Math.sqrt(sum);
}
/*evaluates cross-section using a simple (non-physical) inverse relationship*/
function evalSigma(rel_g)
{
return 1e-18*Math.pow(rel_g,-0.5);
}
/*returns magnitude of a 3 component vector*/
function mag(v)
{
return Math.sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2]);
}
/*performs momentum transfer collision between two particles*/
function collide (part1, part2)
{
/*center of mass velocity*/
var cm = [];
for (var i=0;i<3;i++)
cm[i] = (part1.mass*part1.v[i] + part2.mass*part2.v[i])/
(part1.mass+part2.mass);
/*relative velocity, magnitude remains constant through the collision*/
var cr=[];
for (var i=0;i<3;i++)
cr[i] = part1.v[i]-part2.v[i];
var cr_mag = mag(cr);
/*pick two random angles, per Bird's VHS method*/
var theta = Math.acos(2*Math.random()-1);
var phi = 2*Math.PI*Math.random();
/*perform rotation*/
cr[0] = cr_mag*Math.cos(theta);
cr[1] = cr_mag*Math.sin(theta)*Math.cos(phi);
cr[2] = cr_mag*Math.sin(theta)*Math.sin(phi);
/*post collision velocities*/
for (var i=0;i<3;i++)
{
part1.v[i] = cm[i]+part2.mass/(part1.mass+part2.mass)*cr[i];
part2.v[i] = cm[i]-part1.mass/(part1.mass+part2.mass)*cr[i];
}
}
/* bins velocities into nbins between min and max */
function computeVDF(min, max, nbins)
{
/*pointers for easier access*/
vel = [];
bin = [];
/*set delta range*/
var delta = (max-min)/(nbins-1);
/*shift left by half delta to center bins*/
min -= 0.5*delta;
max -= 0.5*delta;
/*set initial values*/
for (var i=0;i<nbins;i++)
{
vel[i] = i/(nbins-1);
bin[i] = 0;
}
/*compute histogram*/
for (var i=0;i<part_list.length;i++)
{
var part = part_list[i];
var vmag = mag(part.v);
var b = Math.floor((vmag-min)/(delta));
if (b<0 || b>=nbins-1) continue;
bin[b]++;
}
/*normalize values*/
var max_bin=0;
for (var b=0;b<nbins;b++)
if (bin[b]>max_bin) max_bin=bin[b];
for (var b=0;b<nbins;b++)
bin[b]/=max_bin;
return [vel, bin];
}
/*expose public members*/
return {init:init,performDSMC:performDSMC,vdf:vdf};
})();
/**** END OF DSMC CLASS *****/
/*grab canvas and related properties*/
var canvas=document.getElementById("cell");
var ctx = canvas.getContext("2d");
var height=canvas.height;
var width=canvas.width;
/*paint background before we scale the context*/
ctx.fillRect(0,0,width,height);
/*create axis*/
ctx.strokeStyle="black";
ctx.fillStyle="purple";
ctx.lineWidth=3;
setLineDashSafe([10,6]);
ctx.strokeRect(0.05*width,0.05*height,0.9*width,0.9*height);
setLineDashSafe([]);
/*save background*/
var saved_bk = ctx.getImageData(1, 1, width, height);
/*scale to [0,1][0,1] and put origin at left/bottom to simplify graphing*/
ctx.translate(0.05*width,0.95*height);
ctx.scale(0.9*width,-0.9*height);
/*show initial distribution*/
dsmcCell.init();
/*handler for the button click*/
function startSim()
{
dsmcCell.init();
dsmcCell.performDSMC();
}
/*generates XY plot from xy[2][]*/
function plotXY(xy)
{
var x=xy[0],y=xy[1];
/*restore background*/
ctx.putImageData(saved_bk, 1, 1);
/*start plotting*/
ctx.beginPath();
ctx.moveTo(0,0);
for (var i=0;i<x.length;i++)
ctx.lineTo(x[i],y[i]);
ctx.lineTo(1,0);
ctx.lineTo(0,0);
ctx.fill();
}
/*helper function since Firefox does not seem to support setLineDash*/
function setLineDashSafe(style)
{
if (ctx.setLineDash)
ctx.setLineDash(style);
}
## Source Code
First, make sure to download the source code: dsmc0.html (right click and select “save as”)
Modern web browsers are truly powerful tools that allow you to do some amazing things, like run a mesh generator, or show interactive plots. This particular demo relies on three technologies: HTML5 for the page markup and the canvas element, CSS3 to add fancy style with shadows, and Javascript to perform the actual simulation and generate the animated plot. Long gone are days when Javascript was slow! The demo shown here runs 500 time steps with 2000 particles and had to be artificially slowed down to see the evolution in the velocity distribution function.
### HTML / CSS3
Starting with the HTML markup, here is the entire source code in a nutshell:
<!-- Javascript+HTML DSMC Demo, Lubos Brieda, Dec 2012
<canvas id="cell" style="border:1px solid #888;box-shadow:10px 10px 5px #888;" width=600 height=400>
<h2>Your browser does not support the <canvas> element. Please use a different browser or
click <a href="https://www.particleincell.com/2012/dsmc0/">here</a> if you are viewing this in an email.</h2>
</canvas>
<script>
/* --- JAVA SCRIPT HERE --- */
</script>
<input type="button" value="Start the simulation!" style="font-size:1.2em;color:green;" onclick="startSim();">
</form>
Before we get started, notice that the file is not a fully-enclosed HTML document. In other words, it does not have the <html> and <body> tags. This is to allow inserting this snippet into the blog post. If you are developing a stand-alone web-application, you will want to make sure to include these elements as well.
The source code begins with a short notice which serves mainly as a pointer back to this page, in case you forgot where you got the source code from. Next, a canvas element is created. Canvas is one of the several additions in HTML5. You should not have any problem seeing the canvas unless you have a truly archaic browser. One exception however is email. Even modern browsers will not render canvas in email messages. Many people read this blog via the newsletter and as such, we want to include a link for them to click on to display the page outside of the email reader. We also set the dimensions and apply styling to the canvas. The styling consists of 1 pixel thick solid gray border complimented by a drop shadow. In the past, you had to utilize various “*-kit” alternatives to get the shadow working, but this no longer seems to be necessary.
The code ends with a simple form used merely to add a button which starts the simulation. The button is given some styling and we also attach an event handler to it. In between is bunch of Javascript which we will get into later. This Javascript is quite important – without it, your browser would render the following:
### Canvas manipulation with Javascript
Here is a part of the Javascript code:
/*grab canvas and related properties*/
var canvas=document.getElementById("cell");
var ctx = canvas.getContext("2d");
var height=canvas.height;
var width=canvas.width;
/*paint background before we scale the context*/
ctx.fillRect(0,0,width,height);
/*create axis*/
ctx.strokeStyle="black";
ctx.fillStyle="purple";
ctx.lineWidth=3;
setLineDashSafe([10,6]);
ctx.strokeRect(0.05*width,0.05*height,0.9*width,0.9*height);
setLineDashSafe([]);
/*save background*/
var saved_bk = ctx.getImageData(1, 1, width, height);
/*scale to [0,1][0,1] and put origin at left/bottom to simplify graphing*/
ctx.translate(0.05*width,0.95*height);
ctx.scale(0.9*width,-0.9*height);
/*show initial distribution*/
dsmcCell.init();
/*helper function since Firefox does not seem to support setLineDash*/
function setLineDashSafe(style)
{
if (ctx.setLineDash)
ctx.setLineDash(style);
}
To get things rolling, we first add a bit of color to the canvas to make it more visually appealing. We start by grabbing the canvas element and its 2D context. The drawing is actually done not by the canvas but by the context. We also grab the canvas dimensions. We then create a radial gradient transitioning from opaque purple to transparent white. This gradient is centered at point (0,0) which initially happens to be the top-left corner. The radial gradient is specified by providing positions and radii of two circles, here the two dimensions are 1/200th and 1/2 of the width. We fill the entire canvas with this gradient. This gives us the following:
We next create the axis. I got lazy, and instead of painting the x and y axis with tick marks and the appropriate labels, I decided to plot just a dashed rectangle indicating the axis extends. During the testing, I found out that Firefox does not seem to support the setLineDash canvas property and hence I am using a wrapper that sets the dash only if this function is defined. Otherwise the code would crash due to an uncaught extension (I could have also wrapped this call in the try clause). After we plot the rectangle, the canvas should look like this:
We are now done drawing the background. The main difference between the canvas element and SVG (as for instance demonstrated in the Bezier splines article) is that the canvas element is a stateless bitmap. In SVG, we can add and remove elements as needed. But to do an animations with the canvas, we need to repaint the entire frame. Instead of filling the background and replotting the rectangle at each time step, it is much more efficient to save the generated background bitmap. This is done with the
var saved_bk = ctx.getImageData(1, 1, width, height);
command. We will later repaint the background using
/*restore background*/
ctx.putImageData(saved_bk, 1, 1);
Finally, the initial canvas dimensions range from (0,0) in the top right corner to (600,400) in the bottom right. When dealing with XY plots, it may be more natural for the y-axis to go from the bottom to the top. It is also more natural to think of coordinates as ranging from 0 to 1. This scaling is done with
/*scale to [0,1][0,1] and put origin at left/bottom to simplify graphing*/
ctx.translate(0.05*width,0.95*height);
ctx.scale(0.9*width,-0.9*height);
The translate shifts the origin (0,0) to the bottom left corner of the dashed rectangle. The coordinate (1,1) corresponds to the top right corner. The flip over the y-axis is done by scaling the y-direction by a negative number. Also, please note that this scaling is done after we finished drawing the dashed rectangle. The scaling affects everything, not just the position. Namely, it would affect the shape of the gradient as well as the spacing between dashes. Since our canvas is not a square, we would end up with different dash spacing on the horizontal and vertical faces had we applied the transformation earlier. Finally, we call the DSMC init function. This function is described below.
### The XY plot
The actual velocity distribution function is plotted using the following code:
/*generates XY plot from xy[2][]*/
function plotXY(xy)
{
var x=xy[0],y=xy[1];
/*restore background*/
ctx.putImageData(saved_bk, 1, 1);
/*start plotting*/
ctx.beginPath();
ctx.moveTo(0,0);
for (var i=0;i<x.length;i++)
ctx.lineTo(x[i],y[i]);
ctx.lineTo(1,0);
ctx.lineTo(0,0);
ctx.fill();
}
This function takes in as an argument a 2D array in the form xy[2][], with both components assumed to range from 0 to 1. For convenience, this array is split into two 1D arrays. The function then repaints the background using the previously saved bitmap. We then use the beginPath command to start plotting a linear trace. We move to the origin, and then create a bunch of linear segments to all (x,y) tuples from the input data set. At the end, we add a line to the bottom right corner and then back to the origin to create a closed polygon. This polygon is then filled with the fill style, which happens to be purple.
## Javascript DSMC Code
We are now ready to tackle the meat of the project: the actual DSMC simulation. The entire DSMC code is encompassed within a Javascript “class”:
/***** START OF DSMC CLASS *****/
var dsmcCell = (function() {
var AMU = 1.66053886e-27; /*atomic mass*/
var K = 1.3806488e-23; /*Boltzmann constant*/
var V_c=0.001*0.01*0.001; /*cell volume*/
var sigma_cr_max=1e-18; /*initial guess*/
var Fn = 5e8; /*macroparticle weight*/
var delta_t=2e-5; /*time step length*/
var it; /*current iteration*/
var vdf; /*public array to hold VDF data*/
var part_list = new Array(); /*particle list*/
/*particle class, creates particle with isotropic velocity*/
function Part(vdrift,temp,mass)
{
/**** CODE BELOW ****/
}
/*creates initial particle populations*/
function init()
{
/**** CODE BELOW ****/
}
/* performs a single iteration of DSMC*/
function performDSMC()
{
/**** CODE BELOW ****/
}
/*samples from 1D Maxwellian using the Birdsall method*/
function maxw1D()
{
/**** CODE BELOW ****/
}
/*returns random thermal velocity*/
function sampleVth(temp,mass)
{
/**** CODE BELOW ****/
}
/*evaluates cross-section using a simple inverse relationship*/
function evalSigma(rel_g)
{
/**** CODE BELOW ****/
}
/*returns magnitude of a 3 component vector*/
function mag(v)
{
/**** CODE BELOW ****/
}
/*performs momentum transfer collision between two particles*/
function collide (part1, part2)
{
/**** CODE BELOW ****/
}
/* bins velocities into nbins between min and max */
function computeVDF(min, max, nbins)
{
/**** CODE BELOW ****/
}
/*expose public members*/
return {init:init,performDSMC:performDSMC,vdf:vdf};
})();
/**** END OF DSMC CLASS *****/
Although Javascript does not include a direct “class” counterpart to Java or C++, you can define analogous functionality using Javascript functions. What’s important to notice is that only the objects included in the return statement at the end of the code will actually be public – and thus exposed to the rest of the code. The rest of the variables and functions are private to the class. This is true with the variables we define at the beginning. These include some constants, as well as an array that will be used to hold the particles.
### Particle class
We use an inner class to represent each particle. This code is given below:
function Part(vdrift,temp,mass)
{
this.v=[];
/*sample speed*/
var vth = sampleVth(temp,mass);
/*pick a random direction*/
var theta = 2*Math.PI*Math.random();
var R = -1.0+2*Math.random();
var a = Math.sqrt(1-R*R);
this.v[0] = vth*Math.cos(theta)*a;
this.v[1] = vth*Math.sin(theta)*a;
this.v[2] = vth*R + vdrift;
this.mass=mass;
}
/*samples from 1D Maxwellian using the Birdsall method*/
function maxw1D()
{
return 0.5*(Math.random()+Math.random()+Math.random()-1.5);
}
/*returns random thermal velocity*/
function sampleVth(temp,mass)
{
var sum=0;
var vth = Math.sqrt(2*K*temp/mass);
for (var i=0;i<3;i++)
{
var m1 = vth*maxw1D();
sum += m1*m1;
}
return Math.sqrt(sum);
}
The “constructor” takes as inputs drift velocity, gas temperature, and particle mass, and creates a random particle with an isotropic velocity sampled from the Maxwellian at the corresponding thermal velocity. The this.* construct is used to create inner members that hold the particle velocity and mass. Here we use two helper functions to sample the 3D Maxwellian distribution.
### DSMC Init function
The particles are loaded by the init function:
/*creates initial particle populations*/
function init()
{
/*clear data, important if restarting*/
part_list = [];
it = 0;
/*first create slow particles*/
for (var i=0;i<1000;i++)
part_list.push(new Part(100,5,32*AMU));
/*and now the fast ones*/
for (var i=0;i<1000;i++)
part_list.push(new Part(300,5,32*AMU));
/*show initial plot*/
vdf = computeVDF(0,450,25);
plotXY(vdf);
}
The first step here is to clear the list. This is important since we give the user the ability to run the simulation repeatedly. Without this step, the particle list would grow with each click of the “Start Simulation” button! We create two populations, each consisting of 1000 particles. In a real DSMC simulation, you would have a much fewer particles per cell – likely no more than 100 total. However, here we use the higher number to obtain a smoother VDF plot. The particles come in two populations, one with drift velocity of 100 m/s and the other having drift velocity 300 m/s. Both are given initial temperature of 5K to generate a tight beam. This is quite cold, since for the relatively light molecular mass considered here, higher thermal velocities would exceed the drift speed.
In a more involved example, we could get these parameters from a user form, but here for simplicity, they are hardcoded. But feel free to experiment with changing these values on your end. You may also want to change the ratio in particle counts between the two populations, this will influence the final average velocity. At the end of this function, we make a call to compute the VDF and to plot it. This will give us the initial view the user will see until he or she presses the “Start Simulation” button, which is plotted below in Figure 5.
Pressing the button fires the following code:
/*handler for the button click*/
function startSim()
{
dsmcCell.init();
dsmcCell.performDSMC();
}
We first make a call to the init function. This is not needed the first time, in fact, we cheat here a bit in that the populations that will be collided will be slightly different than what the user sees on screen when the page is loaded the first timie. This could be avoided by, for instance, calling init only if the time step (it) is greater than zero – meaning the code had run previously. But for simplicity, this is not done here. We then call performDSMC, which performs a single step of the DSMC method.
### NTC DSMC method
The code is shown below. It implements Bird’s DSMC NTC method.
/* performs a single iteration of DSMC*/
function performDSMC()
{
/*fetch number of particles*/
var np = part_list.length;
/*compute number of groups according to NTC*/
var ng_f = 0.5*np*np*Fn*sigma_cr_max*delta_t/V_c;
var ng = Math.floor(ng_f+0.5); /*number of groups, round*/
var nc = 0; /*number of collisions*/
/*for updating sigma_cr_maxx*/
var sigma_cr_max_temp = 0;
var cr_vec = [];
/*iterate over groups*/
for (var i=0;i<ng;i++)
{
var part1,part2;
/*select first particle*/
part1=part_list[Math.floor(Math.random()*np)];
/*select the second one, making sure the two particles are unique*/
do {part2=part_list[Math.floor(Math.random()*np)];}
while (part1==part2);
/*relative velocity*/
for (var j=0;j<3;j++)
cr_vec[j] = part1.v[j]-part2.v[j];
var cr = mag(cr_vec);
/*eval cross section*/
var sigma = evalSigma(cr);
/*eval sigma_cr*/
var sigma_cr=sigma*cr;
/*update sigma_cr_max*/
if (sigma_cr>sigma_cr_max_temp)
sigma_cr_max_temp=sigma_cr;
/*eval prob*/
var P=sigma_cr/sigma_cr_max;
/*did the collision occur?*/
if (P>Math.random())
{
nc++;
collide(part1,part2);
}
}
/*update sigma_cr_max if we had collisions*/
if (nc)
sigma_cr_max = sigma_cr_max_temp;
/*show diagnostics*/
vdf = computeVDF(0,450,25);
plotXY(vdf);
it++;
/*output more data to the console*/
console.log("it: "+it+"tng: "+ng+"tnc: "+nc+"tscmax: "+sigma_cr_max);
/*run for 500 time steps*/
if (it<500)
setTimeout(performDSMC,10);
}
/*evaluates cross-section using a simple (non-physical) inverse relationship*/
function evalSigma(rel_g)
{
return 1e-18*Math.pow(rel_g,-0.5);
}
/*returns magnitude of a 3 component vector*/
function mag(v)
{
return Math.sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2]);
}
We first estimate the number of collision groups, and then perform that many collision checks. For each check, we select at random two particles, making sure they are unique. We then compute the relative velocity and obtain the cross-section using a simple inverse model. Note: this cross-section model was selected at random and is not supposed to be physically sound! The collision probability is then compared to a random number, and if the collision occurred, we call the collision handler.
### Elastic collision handler
The collision handler is given below. It implements an elastic collision between two particles with isotropic scattering angle.
/*performs momentum transfer collision between two particles*/
function collide (part1, part2)
{
/*center of mass velocity*/
var cm = [];
for (var i=0;i<3;i++)
cm[i] = (part1.mass*part1.v[i] + part2.mass*part2.v[i])/
(part1.mass+part2.mass);
/*relative velocity, magnitude remains constant through the collision*/
var cr=[];
for (var i=0;i<3;i++)
cr[i] = part1.v[i]-part2.v[i];
var cr_mag = mag(cr);
/*pick two random angles, per Bird's VHS method*/
var theta = Math.acos(2*Math.random()-1);
var phi = 2*Math.PI*Math.random();
/*perform rotation*/
cr[0] = cr_mag*Math.cos(theta);
cr[1] = cr_mag*Math.sin(theta)*Math.cos(phi);
cr[2] = cr_mag*Math.sin(theta)*Math.sin(phi);
/*post collision velocities*/
for (var i=0;i<3;i++)
{
part1.v[i] = cm[i]+part2.mass/(part1.mass+part2.mass)*cr[i];
part2.v[i] = cm[i]-part1.mass/(part1.mass+part2.mass)*cr[i];
}
}
The math for this handler is taken from Bird. We first compute the center of mass velocity, as well as the magnitude of the relative velocity (note, this could be passed in to optimize the code). We then sample two angles per Chapter 11 in Bird. These two angles are used to rotate the relative velocity into a new coordinate frame. Finally, the velocities of the two particles are adjusted. As an aside, this model implements isotropic scattering. However, in reality, non-isotropic scattering models tend to be more accurate descriptions of what happens in molecular collisions, especially in the field of plasma ion momentum transfer collisions. Hence in a real simulation, you would want to select the theta angle from the differential cross-section.
### Diagnostics
All that’s needed now is to bin the particle data into a histogram that can be plotted with our plotXY function. This is done via the following function:
/* bins velocities into nbins between min and max */
function computeVDF(min, max, nbins)
{
/*pointers for easier access*/
vel = [];
bin = [];
/*set delta range*/
var delta = (max-min)/(nbins-1);
/*shift left by half delta to center bins*/
min -= 0.5*delta;
max -= 0.5*delta;
/*set initial values*/
for (var i=0;i<nbins;i++)
{
vel[i] = i/(nbins-1);
bin[i] = 0;
}
/*compute histogram*/
for (var i=0;i<part_list.length;i++)
{
var part = part_list[i];
var vmag = mag(part.v);
var b = Math.floor((vmag-min)/(delta));
if (b<0 || b>=nbins-1) continue;
bin[b]++;
}
/*normalize values*/
var max_bin=0;
for (var b=0;b<nbins;b++)
if (bin[b]>max_bin) max_bin=bin[b];
for (var b=0;b<nbins;b++)
bin[b]/=max_bin;
return [vel, bin];
}
This function returns a 2D array consisting of “velocity” and the “bin” count. Both fields are actually normalized. The velocity array is assigned linearly increasing values in the range of 0 to 1. The bin counts are also divided by the maximum bin count, also yielding data in the range of 0 to 1. This normalization is done to simplify the visualization.
In addition, we also output data to the console in the performDSMC function. This is done using the following code:
console.log("it: "+it+"tng: "+ng+"tnc: "+nc+"tscmax: "+sigma_cr_max);
The console is normally hidden, but you will find it under “inspect element” right click menu in Chrome, or “Developer Tools” in Internet Explorer. The console will show you the number of groups, number of collisions, and the sigma_cr_maax parameter determined for each time step.
## Animations with HTML5 Canvas and Javascript
We are now almost done. All that’s needed is the animation part. This is actually quite simple. At the end of each time step, the performDSMC function makes a call to plotXY to show the updated VDF. This code is repeated below:
vdf = computeVDF(0,450,25);
plotXY(vdf);
it++;
/*run for 500 time steps*/
if (it<500)
setTimeout(performDSMC,10);
After the plot is made, the simulation time step is incremented. As long as the time step is less than 500 (the maximum number of time steps), the function then adds a timeout to call itself again in 10 milliseconds.
And that’s all. Feel free to leave a comment if you have any questions or something is not clear.
### See More Interactive Demos
For more interactive demos, make sure to check out the SVG mesh generator, write up on smooth Bezier splines, and the article on HMTL5 plotting.
Subscribe to the newsletter and follow us on Twitter. Send us an email if you have any questions. | 2021-01-27 07:58:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31596839427948, "perplexity": 4385.354689435194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821253.82/warc/CC-MAIN-20210127055122-20210127085122-00703.warc.gz"} |
http://blog.csdn.net/yuhq/article/details/365483 | # Navigator Proxy Auto-Config File Format
## Navigator Proxy Auto-Config File Format
March 1996
(There are several examples and tips in the end of this document)
The proxy autoconfig file is written in JavaScript. The file must define the function:
function FindProxyForURL(url, host)
{
...
}
which will be called by the Navigator in the following way for every URL that is retrieved by it:
ret = FindProxyForURL(url, host);
where:
url
the full URL being accessed.
host
the hostname extracted from the URL. This is only for convenience, it is the exact same string as between :// and the first : or / after that. The port number is not included in this parameter. It can be extracted from the URL when necessary.
ret
(the return value) a string describing the configuration. The format of this string is defined below.
## Saving the Auto-Config FileSetting the MIME Type
1. You should save the JavaScript function to file with a .pac filename extension; for example:
proxy.pac
Note 1: You should save the JavaScript function by itself, not embed it in HTML.
Note 2: The examples in the end of this document are complete, there is no additional syntax needed to save it into a file and use it (of course, the JavaScripts have to be edited to reflect your site's domain name and/or subnets).
2. Next, you should configure your server to map the .pac filename extension to the MIME type:
application/x-ns-proxy-autoconfig
If using a Netscape server, edit the mime.types file in the config directory. If using Apache, CERN or NCSA servers, use the AddType directive.
## Return Value Format
The JavaScript function returns a single string.
If the string is null, no proxies should be used.
The string can contain any number of the following building blocks, separated by a semicolon:
DIRECT
Connections should be made directly, without any proxies.
PROXY host:port
The specified proxy should be used.
SOCKS host:port
The specified SOCKS server should be used.
If there are multiple semicolon-separated settings, the left-most setting will be used, until the Navigator fails to establish the connection to the proxy. In that case the next value will be used, etc.
The Navigator will automatically retry a previously unresponsive proxy after 30 minutes, then after 1 hour from the previous try (always adding an extra 30 minutes).
If all proxies are down, and there was no DIRECT option specified, the Navigator will ask if proxies should be temporarily ignored, and direct connections attempted. The Navigator will ask if proxies should be retried after 20 minutes has passed (then the next time 40 minutes from the previous question, always adding 20 minutes).
#### Examples:
PROXY w3proxy.netscape.com:8080; PROXY mozilla.netscape.com:8081
Primary proxy is w3proxy:8080; if that goes down start using mozilla:8081 until the primary proxy comes up again.
PROXY w3proxy.netscape.com:8080; PROXY mozilla.netscape.com:8081; DIRECT
Same as above, but if both proxies go down, automatically start making direct connections. (In the first example above, Netscape will ask user confirmation about making direct connections; in this third case, there is no user intervention.)
PROXY w3proxy.netscape.com:8080; SOCKS socks:1080
Use SOCKS if the primary proxy goes down.
## Predefined Functions and Environment for the JavaScript Function
• Hostname based conditions:
• Related utility functions:
• URL/hostname based conditions:
• Time based conditions:
• There is one associative array already defined (because a JavaScript currently cannot define them on its own):
• ProxyConfig.bindings
### isPlainHostName(host)
host
the hostname from the URL (excluding port number).
True iff there is no domain name in the hostname (no dots).
#### Examples:
isPlainHostName("www")
is true.
isPlainHostName("www.netscape.com")
is false.
### dnsDomainIs(host, domain)
host
is the hostname from the URL.
domain
is the domain name to test the hostname against.
Returns true iff the domain of hostname matches.
#### Examples:
dnsDomainIs("www.netscape.com", ".netscape.com")
is true.
dnsDomainIs("www", ".netscape.com")
is false.
dnsDomainIs("www.mcom.com", ".netscape.com")
is false.
### localHostOrDomainIs(host, hostdom)
host
the hostname from the URL.
hostdom
fully qualified hostname to match against.
Is true if the hostname matches exactly the specified hostname, or if there is no domain name part in the hostname, but the unqualified hostname matches.
#### Examples:
localHostOrDomainIs("www.netscape.com", "www.netscape.com")
is true (exact match).
localHostOrDomainIs("www", "www.netscape.com")
is true (hostname match, domain not specified).
localHostOrDomainIs("www.mcom.com", "www.netscape.com")
is false (domain name mismatch).
localHostOrDomainIs("home.netscape.com", "www.netscape.com")
is false (hostname mismatch).
### isResolvable(host)
host
is the hostname from the URL.
Tries to resolve the hostname. Returns true if succeeds.
#### Examples:
isResolvable("www.netscape.com")
is true (unless DNS fails to resolve it due to a firewall or some other reason).
isResolvable("bogus.domain.foobar")
is false.
### isInNet(host, pattern, mask)
host
a DNS hostname, or IP address. If a hostname is passed, it will be resoved into an IP address by this function.
pattern
an IP address pattern in the dot-separated format
mask for the IP address pattern informing which parts of the IP address should be matched against. 0 means ignore, 255 means match.
True iff the IP address of the host matches the specified IP address pattern.
Pattern and mask specification is done the same way as for SOCKS configuration.
#### Examples:
isInNet(host, "198.95.249.79", "255.255.255.255")
is true iff the IP address of host matches exactly 198.95.249.79.
isInNet(host, "198.95.0.0", "255.255.0.0")
is true iff the IP address of the host matches 198.95.*.*.
### dnsResolve(host)
host
hostname to resolve
Resolves the given DNS hostname into an IP address, and returns it in the dot separated format as a string.
#### Example:
dnsResolve("home.netscape.com")
returns the string "198.95.249.79".
Returns the IP address of the host that the Navigator is running on, as a string in the dot-separated integer format.
#### Example:
would return the string "198.95.249.79" if you were running the Navigator on that host.
### dnsDomainLevels(host)
host
is the hostname from the URL.
Returns the number (integer) of DNS domain levels (number of dots) in the hostname.
#### Examples:
dnsDomainLevels("www")
returns 0.
dnsDomainLevels("www.netscape.com")
returns 2.
### shExpMatch(str, shexp)
str
is any string to compare (e.g. the URL, or the hostname).
shexp
is a shell expression to compare against.
Returns true if the string matches the specified shell expression.
Actually, currently the patterns are shell expressions, not regular expressions.
#### Examples:
shExpMatch("http://home.netscape.com/people/ari/index.html", "*/ari/*")
is true.
shExpMatch("http://home.netscape.com/people/montulli/index.html", "*/ari/*")
is false.
### weekdayRange(wd1, wd2, gmt)
wd1
and
wd2
are one of the weekday strings:
SUN MON TUE WED THU FRI SAT
gmt
is either the string: GMT or is left out.
Only the first parameter is mandatory. Either the second, the third, or both may be left out.
If only one parameter is present, the function yeilds a true value on the weekday that the parameter represents. If the string "GMT" is specified as a second parameter, times are taken to be in GMT, otherwise in local timezone.
If both wd1 and wd1 are defined, the condition is true if the current weekday is in between those two weekdays. Bounds are inclusive. If the "GMT" parameter is specified, times are taken to be in GMT, otherwise the local timezone is used.
#### Examples:
weekdayRange("MON", "FRI")
true Monday trhough Friday (local timezone).
weekdayRange("MON", "FRI", "GMT")
same as above, but GMT timezone.
weekdayRange("SAT")
true on Saturdays local time.
weekdayRange("SAT", "GMT")
true on Saturdays GMT time.
weekdayRange("FRI", "MON")
true Friday through Monday (note, order does matter!).
### dateRange(day)dateRange(day1, day2)dateRange(mon)dateRange(month1, month2)dateRange(year)dateRange(year1, year2)dateRange(day1, month1, day2, month2)dateRange(month1, year1, month2, year2)dateRange(day1, month1, year1, day2, month2, year2)dateRange(day1, month1, year1, day2, month2, year2, gmt)
day
is the day of month between 1 and 31 (as an integer).
month
is one of the month strings:
JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
year
is the full year number, for example 1995 (but not 95). Integer.
gmt
is either the string "GMT", which makes time comparison occur in GMT timezone; if left unspecified, times are taken to be in the local timezone.
Even though the above examples don't show, the "GMT" parameter can be specified in any of the 9 different call profiles, always as the last parameter.
If only a single value is specified (from each category: day, month, year), the function returns a true value only on days that match that specification. If both values are specified, the result is true between those times, including bounds.
#### Examples:
dateRange(1)
true on the first day of each month, local timezone.
dateRange(1, "GMT")
true on the first day of each month, GMT timezone.
dateRange(1, 15)
true on the first half of each month.
dateRange(24, "DEC")
true on 24th of December each year.
dateRange(24, "DEC", 1995)
true on 24th of December, 1995.
dateRange("JAN", "MAR")
true on the first quarter of the year.
dateRange(1, "JUN", 15, "AUG")
true from June 1st until August 15th, each year (including June 1st and August 15th).
dateRange(1, "JUN", 15, 1995, "AUG", 1995)
true from June 1st, 1995, until August 15th, same year.
dateRange("OCT", 1995, "MAR", 1996)
true from October 1995 until March 1996 (including the entire month of October 1995 and March 1996).
dateRange(1995)
true during the entire year 1995.
dateRange(1995, 1997)
true from beginning of year 1995 until the end of year 1997.
### timeRange(hour)timeRange(hour1, hour2)timeRange(hour1, min1, hour2, min2)timeRange(hour1, min1, sec1, hour2, min2, sec2)timeRange(hour1, min1, sec1, hour2, min2, sec2, gmt)
hour
is the hour from 0 to 23. (0 is midnight, 23 is 11 pm.)
min
minutes from 0 to 59.
sec
seconds from 0 to 59.
gmt
either the string "GMT" for GMT timezone, or not specified, for local timezone. Again, even though the above list doesn't show it, this parameter may be present in each of the different parameter profiles, always as the last parameter.
True during (or between) the specified time(s).
#### Examples:
timerange(12)
true from noon to 1pm.
timerange(12, 13)
same as above.
timerange(12, "GMT")
true from noon to 1pm, in GMT timezone.
timerange(9, 17)
true from 9am to 5pm.
timerange(8, 30, 17, 00)
true from 8:30am to 5:00pm.
timerange(0, 0, 0, 0, 0, 30)
true between midnight and 30 seconds past midnight.
## Example #1: Use proxy for everything except local hosts
This would work in Netscape's environment. All hosts which aren't fully qualified, or the ones that are in local domain, will be connected to directly. Everything else will go through w3proxy:8080. If the proxy goes down, connections become automatically direct.
function FindProxyForURL(url, host)
{
if (isPlainHostName(host) ||
dnsDomainIs(host, ".netscape.com"))
return "DIRECT";
else
return "PROXY w3proxy.netscape.com:8080; DIRECT";
}
Note: This is the simplest and most efficient autoconfig file for cases where there's only one proxy.
## Example #1b: As above, but use proxy for local servers which are outside the firewall
If there are hosts (such as the main Web server) that belong to the local domain but are outside the firewall, and are only reachable through the proxy server, those exceptions can be handled using the localHostOrDomainIs() function:
function FindProxyForURL(url, host)
{
if ((isPlainHostName(host) ||
dnsDomainIs(host, ".netscape.com")) &&
!localHostOrDomainIs(host, "www.netscape.com") &&
!localHostOrDoaminIs(host, "merchant.netscape.com"))
return "DIRECT";
else
return "PROXY w3proxy.netscape.com:8080; DIRECT";
}
The above will use the proxy for everything else except local hosts in the netscape.com domain, with the further exception that hosts www.netscape.com and merchant.netscape.com will go through the proxy.
Note the order of the above exceptions for efficiency: localHostOrDomainIs() functions only get executed for URLs that are in local domain, not for every URL. Be careful to note the parentheses around the or expression before the and expression to achieve the abovementioned efficient behaviour.
## Example #2: Use proxy only if cannot resolve host
This example would work in an environment where internal DNS is set up so that it can only resolve internal host names, and the goal is to use a proxy only for hosts which aren't resolvable:
function FindProxyForURL(url, host)
{
if (isResolvable(host))
return "DIRECT";
else
return "PROXY proxy.mydomain.com:8080";
}
The above requires consulting the DNS every time; it can be grouped smartly with other rules so that DNS is consulted only if other rules do not yield a result:
function FindProxyForURL(url, host)
{
if (isPlainHostName(host) ||
dnsDomainIs(host, ".mydomain.com") ||
isResolvable(host))
return "DIRECT";
else
return "PROXY proxy.mydomain.com:8080";
}
## Example #3: Subnet based decisions
In this example all the hosts in a given subnet are connected to directly, others through the proxy.
function FindProxyForURL(url, host)
{
if (isInNet(host, "198.95.0.0", "255.255.0.0"))
return "DIRECT";
else
return "PROXY proxy.mydomain.com:8080";
}
Again, use of DNS in the above can be minimized by adding redundant rules in the beginning:
function FindProxyForURL(url, host)
{
if (isPlainHostName(host) ||
dnsDomainIs(host, ".mydomain.com") ||
isInNet(host, "198.95.0.0", "255.255.0.0"))
return "DIRECT";
else
return "PROXY proxy.mydomain.com:8080";
}
## Example #4: Load balancing/routing based on URL patterns
This example is more sophisticated. There are four (4) proxy servers; one of them is a hot stand-by for all of the other ones, so if any of the remaining three goes down, the fourth one will take over.
Furthermore, the three remaining proxy servers share the load based on URL patterns, which makes their caching more effective (there is only one copy of any document on the three servers -- as opposed to one copy on each of them). The load is distributed like this:
ProxyPurpose
#1.com domain
#2.edu domain
#3all other domains
#4hot stand-by
All local accesses are desired to be direct. All proxy servers run on the port 8080 (they wouldn't need to). Note how strings can be concatenated by the + operator in JavaScript.
function FindProxyForURL(url, host)
{
if (isPlainHostName(host) || dnsDomainIs(host, ".mydomain.com"))
return "DIRECT";
else if (shExpMatch(host, "*.com"))
return "PROXY proxy1.mydomain.com:8080; " +
"PROXY proxy4.mydomain.com:8080";
else if (shExpMatch(host, "*.edu"))
return "PROXY proxy2.mydomain.com:8080; " +
"PROXY proxy4.mydomain.com:8080";
else
return "PROXY proxy3.mydomain.com:8080; " +
"PROXY proxy4.mydomain.com:8080";
}
## Example #5: Setting a proxy for a specific protocol
Most of the standard JavaScript functionality is available for use in the FindProxyForURL() function. As an example, to set different proxies based on the protocol, the substring() function can be used:
function FindProxyForURL(url, host)
{
if (url.substring(0, 5) == "http:") {
return "PROXY http-proxy.mydomain.com:8080";
}
else if (url.substring(0, 4) == "ftp:") {
return "PROXY ftp-proxy.mydomain.com:8080";
}
else if (url.substring(0, 7) == "gopher:") {
return "PROXY gopher-proxy.mydomain.com:8080";
}
else if (url.substring(0, 6) == "https:" ||
url.substring(0, 6) == "snews:") {
return "PROXY security-proxy.mydomain.com:8080";
}
else {
return "DIRECT";
}
}
Note: The same can be accomplished using the shExpMatch() function described earlier; for example:
...
if (shExpMatch(url, "http:*")) {
return "PROXY http-proxy.mydomain.com:8080;
}
...
## Tips
• The autoconfig file can be output by a CGI script. This is useful e.g. when making the autoconfig file act differently based on the client IP address (the REMOTE_ADDR environment variable in CGI).
• Use of isInNet(), isResolvable() and dnsResolve() functions should be carefully considered, as they require DNS server to be consulted (whereas all other autoconfig related functions are mere string matching functions). If a proxy is used, the proxy will perform its own DNS lookup which would double the impact on the DNS server. Most of the time these functions are not necessary to achieve the desired result.
• 本文已收录于以下专栏:
## Auto Proxy
• 2012年02月13日 14:41
• 187KB
• 下载
## iOS开发笔记--使用Auto Layout中的VFL(Visual format language)--代码实现自动布局
• errvv
• 2016年01月27日 10:16
• 261
## iOS开发笔记--使用Auto Layout中的VFL(Visual format language)--代码实现自动布局
• jeffasd
• 2015年10月12日 16:07
• 352
## Ubuntu - 硬盘分区、格式化、自动挂载配置 | Hard disk add new partition, format, auto mount in ubuntu
【引用】Ubuntu - 硬盘分区、格式化、自动挂载配置 | Hard disk add new partition, format, auto mount in ubuntu 2011-08-...
• ajioy
• 2011年09月10日 15:30
• 3106
## Spring Auto proxy creator example
In last Spring AOP examples – advice, pointcut and advisor, you have to manually create a proxy bean...
• gmemai
• 2015年09月06日 12:34
• 214
## mybatis_auto_file实体类Mapper生成工具
• 2017年12月08日 15:33
• 3.92MB
• 下载
## spring security3.x学习(3)_初探过滤器机制和auto-config用法
举报原因: 您举报文章:Navigator Proxy Auto-Config File Format 色情 政治 抄袭 广告 招聘 骂人 其他 (最多只允许输入30个字) | 2017-12-15 19:59:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21321992576122284, "perplexity": 12199.907171072764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948579564.61/warc/CC-MAIN-20171215192327-20171215214327-00034.warc.gz"} |
https://plainmath.net/81011/general-formula-for-sin-x | # General formula for sin ⁡<!-- --> ( A 1 </msub> + A 2 </
General formula for $\mathrm{sin}\left({A}_{1}+{A}_{2}+\cdots +{A}_{n}\right)$
I was wondering if there could be a general formula for this. Somehow, I managed to get (after some very small experimentation, so I'm not sure of what I'm going to say) that
$\mathrm{sin}\left({A}_{1}+{A}_{2}+\cdots +{A}_{n}\right)=\mathrm{cos}{A}_{1}\mathrm{cos}{A}_{1}\cdots \mathrm{cos}{A}_{n}\left(\sum _{i=1}^{n}\mathrm{tan}{A}_{i}-\sum _{1\le i
And I cannot prove it. Can anyone help me prove or disprove it?
You can still ask an expert for help
## Want to know more about Trigonometry?
• Questions are typically answered in as fast as 30 minutes
Solve your problem for the price of one coffee
• Math expert for every subject
• Pay only if we can solve it
Immanuel Glenn
It's not true for n=5:
$\left(\text{left side}\right)-\left(\text{right side}\right)=\mathrm{sin}\left({A}_{1}\right)\mathrm{sin}\left({A}_{2}\right)\mathrm{sin}\left({A}_{3}\right)\mathrm{sin}\left({A}_{4}\right)\mathrm{sin}\left({A}_{5}\right)$
I suspect that on the right side, for each k with $2k+1\le n$ you need
$\left(-1{\right)}^{k}\sum _{1\le {i}_{1}<\dots <{i}_{2k+1}}\mathrm{tan}\left({A}_{{i}_{1}}\right)\dots \mathrm{tan}\left({A}_{{i}_{2k+1}}\right)$ | 2022-09-28 13:22:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 53, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8378702998161316, "perplexity": 2431.3614424314437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00349.warc.gz"} |
https://edurev.in/studytube/Detailed-Notes-Proportion/039ddcb1-b614-4a9a-b049-c0d17ff0cb67_t | Courses
Detailed Notes: Proportion Notes | EduRev
CAT : Detailed Notes: Proportion Notes | EduRev
The document Detailed Notes: Proportion Notes | EduRev is a part of the CAT Course Quantitative Aptitude (Quant).
All you need of CAT at this link: CAT
Introduction
• When two ratios are equal, the four quantities composing them are said to be proportional
• Thus if a/b = c/d, then a, b, c, d are proportional. This is expressed by saying that a is to b as c is to d, and the proportion is written as a:b::c:d or a:b = c:d.
• The terms a and d are called the extremes while the terms b and c are called the means.
Thus, we can say:
Product of Extremes = Product of Means
Example: What is the value of x in the following expression?
5/8 = x/12
Solution:
⇒ 5/8 = x/12
⇒ x = 60/8 = 7.5
It can be calculated with the help of percentages also. In this question, the percentage increase in the denominator is 50%, so the numerator will also increase by 50%.
Operations in Proportions
If four quantities a, b, c and d form a proportion, many other proportions may be deduced by the properties of fractions. The results of these operations are very useful. These operations are:
• Invertendo: If a/b = c/d then b/a = d/c
• Alternando: If a/b = c/d, then a/c = b/d
• Componendo: If a/b=c/d, then (a+b)/b=(c+d)/d
• Dividendo: If a/b=c/d, then (a-b)/b=(c-d)/d
Componendo and Dividendo: If a/b=c/d then (a + b)/(a – b) = (c + d)/(c – d)
Variation
Two quantities A and B are said to vary with each other if there is some relationship between A and B such that the change in A and B is uniform and guided by some rule.
➢ Some Typical Examples of Variation
• Area (A) of a circle = π.R2, where R is the radius of the circle. Area of a circle depends upon the value of the radius of a circle, or, in other words, we can say that the area of a circle varies as the square of the radius of a circle.
• At a constant temperature, the pressure is inversely proportional to the volume.
• If the speed of any vehicle is constant, then the distance traversed is proportional to the time taken to cover the distance.
Essentially there are two kinds of proportions that two variables can be related by:
1. Direct Proportion
When it is said that A varies directly as B, you should understand the following implications:
(a) Logical Implication: When A increases B increases.
(b) Calculation Implication: If A increases by 10%, B will also increase by 10.
(c) Graphical Implications: The following graph is representative of this situation.
2. Inverse Proportion
When A varies inversely as B, the following implication arises.
(a) Logical Implication: When A increases B decreases.
(b) Calculation Implication: If A decreases by 9.09%, B will increase by 10%.
(c) Graphical Implications: The following graph is representative of this situation.
(d) Equation Implication: The product A X B is constant.
Example 1: The height of a tree varies as the square root of its age (between 5 and 17 years). When the age of a tree is 9 years, its height is 4 feet. What will be the height of the tree at the age of 16?
Solution: Let us assume the height of the tree is H and its age is A years.
So, H ∝ √A, or, H = K x √A Now, 4 = K x √9 K = 4/3
So, height at the age of 16 years = H = K x √A = 4/3 x 4 = 16/3 = 5 feet 4 inches.
Try yourself:Total expenses at a hostel is partly fixed and partly variable. When the number of students is 20, total expense is Rs 15,000 and when the number of students is 30, total expense is Rs 20,000. What will be the expense when the number of students is 40?
Try yourself:Rs. 5783 is divided among Sherry, Berry, and Cherry in such a way that if Rs.28, Rs.37 and Rs.18 be deducted from their respective shares, they have money in the ratio 4:6:9. Find Sherry’s share.
Note: For problems based on this chapter we are always confronted with ratios and proportions between different number of variables. For the above problem we had three variables which were in the ratio of 4 : 6 : 9. When we have such a situation we normally assume the values in the same proportion, using one unknown ‘x’ only (in this example we could take the three values as 4x, 6x and 9x, respectively).Then, the total value is represented by the addition of the three giving rise to a linear equation, which on a solution, will result in the answer to the value of the unknown ‘x’.
• However, the student should realise that this unknown ‘x’ is not needed to solve the problem most of the time.
This is illustrated through the following alternative approach to solving the above problem:
Assume the three values as 4, 6 and 9. Then we have:
(4 + 28) + (6 + 37) + (9 + 18) = 5783
⇒ 19 = 5783–83 = 5700
⇒ 1 = 300
Hence, 4 + 28 = 1228.
While adopting this approach, the student should be careful in being able to distinguish the numbers in bold as pointing out the unknown variable.
Try yourself:If 10 persons can clean 10 floors by 10 mops in 10 days, in how many days can 8 persons clean 8 floors by 8 mops?
Try yourself:Three quantities A, B, C are such that AB = KC, where K is a constant. When A is kept constant, B var- ies directly as C; when B is kept constant, A varies directly C and when C is kept constant, A varies inversely as B.Initially, A was at 5 and A:B:C was 1:3:5. Find the value of A when B equals 9 at constant C.
Try yourself:The ratio of water and milk in a 30-litre mixture is 7 : 3. Find the quantity of water to be added to the mixture in order to make this ratio 6 :1.
This document is the final document of EduRev’s notes of Ratios & proportions chapter. In the next document we have tried to solve the different CAT previous year questions & curated some of the best tests for you to practice - as a CAT aspirant you should be well aware of types of questions and should have a good practice of questions. Hence we suggests you to practice all the questions in the upcoming documents & Tests.
Offer running on EduRev: Apply code STAYHOME200 to get INR 200 off on our premium plan EduRev Infinity!
Quantitative Aptitude (Quant)
116 videos|131 docs|131 tests
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
; | 2021-08-04 12:09:22 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.82941073179245, "perplexity": 851.0779499783422}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154805.72/warc/CC-MAIN-20210804111738-20210804141738-00326.warc.gz"} |
https://physicstravelguide.com/advanced_tools/group_theory/representation_theory | Representation Theory
Intuitive
geometry asks, “Given a geometric object X, what is its group of symmetries?” Representation theory reverses the question to “Given a group G, what objects X does it act on?” and attempts to answer this question by classifying such X up to isomorphism." Source
Concrete
A Lie group is in abstract terms a manifold, which obeys the group axioms. A representation is a special type of map $R$ from this manifold to the linear operators of some vector space. The map must obey the condition $$R(gh) = R(g) R(h),$$ where $g$ and $h$ are elements of the group. This means the map must preserve the product structure of the group and the mathematically notion for such a map is homomorphism.
In practice a representation is a map that maps each element of the abstract group onto a matrix. (Matrices are linear operators over a vector space.)
(There are other representations, where the group elements aren't given as matrices, but in physics matrix representations are most of the time sufficient).
Take note that it is possible to introduce a more general notion, called a realisation of the group. A realisation maps the group elements onto the (not necessarily linear) operators over an arbitrary (no necessary vector-)space. However, in physics we usually only deal with representations, because our physical objects live in vector spaces.
Characterization of Representations
One way to label representations is by using the Casimir Operators.
Another possibility is given by the Weyl character formula. This formula allows to compute the "character" of a group. A "character" is a function that yields a number for each group element. Thus, one can compute which representation one is dealing with by computing this "character" function. If two representations that could be constructed very differently are actually the same their character functions are the same.
For more details, take a look at
Abstract
Mathematically a representation is a homomorphism from the group to the group of automorphisms of something.
Why is it interesting?
In physics, we are usually interested in what a group actually does. A group is an abstract object, but representation theory allows us to derive how a group actually acts on a system.
In addition, representation theory is what allows us to understand elementary particles. For example, by using the tools of representation theory to analyze the Lorentz group (=the fundamental spacetime symmetry group), we learn what kind of elementary particles can exist in nature. Moreover, representation theory is crucial to understand what spin is, which is one of the most important quantum numbers. | 2019-06-16 22:37:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8249378800392151, "perplexity": 302.4293116799758}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998325.55/warc/CC-MAIN-20190616222856-20190617004856-00333.warc.gz"} |
https://beta.mxnet.io/api/ndarray/_autogen/mxnet.ndarray.tile.html | # mxnet.ndarray.tile¶
mxnet.ndarray.tile(data=None, reps=_Null, out=None, name=None, **kwargs)
Repeats the whole array multiple times.
If reps has length d, and input array has dimension of n. There are three cases:
• n=d. Repeat i-th dimension of the input by reps[i] times:
x = [[1, 2],
[3, 4]]
tile(x, reps=(2,3)) = [[ 1., 2., 1., 2., 1., 2.],
[ 3., 4., 3., 4., 3., 4.],
[ 1., 2., 1., 2., 1., 2.],
[ 3., 4., 3., 4., 3., 4.]]
• n>d. reps is promoted to length n by pre-pending 1’s to it. Thus for an input shape (2,3), repos=(2,) is treated as (1,2):
tile(x, reps=(2,)) = [[ 1., 2., 1., 2.],
[ 3., 4., 3., 4.]]
• n<d. The input is promoted to be d-dimensional by prepending new axes. So a shape (2,2) array is promoted to (1,2,2) for 3-D replication:
tile(x, reps=(2,2,3)) = [[[ 1., 2., 1., 2., 1., 2.],
[ 3., 4., 3., 4., 3., 4.],
[ 1., 2., 1., 2., 1., 2.],
[ 3., 4., 3., 4., 3., 4.]],
[[ 1., 2., 1., 2., 1., 2.],
[ 3., 4., 3., 4., 3., 4.],
[ 1., 2., 1., 2., 1., 2.],
[ 3., 4., 3., 4., 3., 4.]]]
Defined in src/operator/tensor/matrix_op.cc:L809
Parameters
• data (NDArray) – Input data array
• reps (Shape(tuple), required) – The number of times for repeating the tensor a. Each dim size of reps must be a positive integer. If reps has length d, the result will have dimension of max(d, a.ndim); If a.ndim < d, a is promoted to be d-dimensional by prepending new axes. If a.ndim > d, reps is promoted to a.ndim by pre-pending 1’s to it.
• out (NDArray, optional) – The output NDArray to hold the result.
Returns
out – The output of this function.
Return type
NDArray or list of NDArrays | 2019-06-25 14:34:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26050662994384766, "perplexity": 4028.2072554972424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999838.27/warc/CC-MAIN-20190625132645-20190625154645-00551.warc.gz"} |
http://nzresearch.org.nz/records?i%5Bcreator%5D=Klette%2C+Reinhard&i%5Bdc_type%5D=Report&locale=en&recordset=research | 133 results for Klette, Reinhard, Report
• ### Surface Area Estimation for Digitized Regular Solids
#### Kenmochi, Yukiko; Klette, Reinhard (2000)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). Regularly gridded data in Euclidean 3-space are assumed to be digitizations of regular solids with respect to a chosen grid resolution. Gauss and Jordan introduced different digitization schemes, and the Gauss center point scheme is used in this paper. The surface area of regular solids can be expressed finitely in terms of standard functions for specific sets only, but it is well defined by triangulations for any regular solid. We consider surface approximations of regularly gridded data characterized to be polyhedrizations of boundaries of these data. The surface area of such a polyhedron is well defined, and it is parameterized by the chosen grid resolution. A surface area measurement technique is multigrid convergent for a class of regular solids iff it holds that for any set in this class the surface areas of approximating polyhedra of the digitized regular solid converge towards the surface area of the regular solid if the grid resolution goes to infinity. Multigrid convergent volume measurements have been studied in mathematics for more than one hundred years, and surface area measurements had been discussed for smooth surfaces. The problem of multigrid convergent surface area measurement came with the advent of computer-based image analysis. The paper proposes a classification scheme of local and global polyhedrization approaches which allows us to classify different surface area measurement techniques with respect to the underlying polyhedrization scheme. It is shown that a local polyhedrization technique such as marching cubes is not multigrid convergent towards the true value even for elementary convex regular solids such as cubes, spheres or cylinders. The paper summarizes work on global polyhedrization techniques with experimental results pointing towards correct multigrid convergence. The class of general ellipsoids is suggested to be a test set for such multigrid convergence studies.
View record details
• ### Digital Straightness
#### Rosenfeld, Azriel; Klette, Reinhard (2001)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). A digital arc is called `straight' if it is the digitization of a straight line segment. Since the concept of digital straightness was introduced in the mid-1970's, dozens of papers on the subject have appeared; many characterizations of digital straight lines have been formulated, and many algorithms for determining whether a digital arc is straight have been defined. This paper reviews the literature on digital straightness and discusses its relationship to other concepts of geometry, the theory of words, and number theory.
View record details
• ### Minimum-Length Polygon of a Simple Cube-Curve in 3D Space
#### Li, Fajie; Klette, Reinhard (2004)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). We consider simple cube-curves in the orthogonal 3D grid of cells. The union of all cells contained in such a curve (also called the tube of this curve) is a polyhedrally bounded set. The curve's length is defined to be that of the minimum-length polygonal curve (MLP) fully contained and complete in the tube of the curve. So far, only a"rubber-band algorithm" is known to compute such a curve approximately. We provide an alternative iterative algorithm for the approximative calculation of the MLP for curves contained in a special class of simple cube-curves (for which we prove the correctness of our alternative algorithm), and the obtained results coincide with those calculated by the rubber-band algorithm.
View record details
• ### On Digitization Effects on Reconstruction of Geometric Properties of regions
#### Klette, Reinhard; Zunic, Jovisa (1999)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). Representations of real regions by corresponding digital pictures cause an inherent loss of information. there are infinitely many different real regions with and identical corresponding digital picture. So, there are limitations in the reconstruction of the originals and their properties from digital pictures. The problem which will be studied here is what is the impact of a digitization process on the efficiency in the reconstruction of the basic geometric properties
View record details
• ### Shape from Shading and Photometric Stereo Methods
#### Klette, Reinhard; Kozera, Ryszard (1998)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). This TR is a review of shading based shape recovery (i.e. shape from shading, and photometric stereo methods). It reports about advances in applied work and about results in theoretical fundamentals.
View record details
• ### Navigation Using Optical Flow Fields: An Application of Dominant Plane Detection
#### Kawamoto, Kazuhiko; Yamada, Daisuke; Klette, Reinhard (2002)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). Video sequences capturing real scenes may be interpreted with respect to a dominant plane which is a planar surface covering 'a majority' of pixels in an image of a video sequence, i.e. that planar surface which is represented in the image by a maximum number of pixels. In this paper, we propose an algorithm for the detection of dominant planes from optical flow fields caused by camera motion in a static scene. We, in particular, intend to adopt this algorithm as a module for obstacle detection in vision-based navigation of autonomous robots.
View record details
• ### The Class of Simple Cube-Curves Whose MLPs Cannot Have Vertices at Grid Points
#### Li, Fajie; Klette, Reinhard (2004)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). We consider simple cube-curves in the orthogonal 3D grid of cells. The union of all cells contained in such a curve (also called the tube of this curve) is a polyhedrally bounded set. The curve's length is defined to be that of the minimum-length polygonal curve (MLP) fully contained and complete in the tube of the curve. So far only one general algorithm called rubber-band algorithm was known for the approximative calculation of such a MLP. There is an open problem which is related to the design of algorithms for calculation a 3D MLP of a cube-curve: Is there a simple cube-curve such that none of the vertices of its 3D MLP is a grid vertex? This paper constructs an example of such a simple cube-curve. We also characterize this class of cube-curves.
View record details
• ### Curves, Hypersurfaces, and Good Pairs of Adjacency Relations
#### Brimkov, Valentin; Klette, Reinhard (2004)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). In this report we propose several equivalent definitions of digital curves and hypersurfaces in arbitrary dimension. The definitions involve properties such as one-dimensionality of curves and (n ? 1)-dimensionality of hypersurfaces that make them discrete analogs of corresponding notions in topology. Thus this work appears to be the first one on digital manifolds where the definitions involve the notion of dimension. In particular, a digital hypersurface in nD is an (n?1)-dimensional object, as it is in the case of continuous hypersurfaces. Relying on the obtained properties of digital hypersurfaces, we propose a uniform approach for studying good pairs defined by separations and obtain a classification of good pairs in arbitrary dimension.
View record details
• ### Shortest Paths in a Cuboidal World
#### Li, Fajie; Klette, Reinhard (2006)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). Since 1987 it is known that the Euclidean shortest path problem is NP-hard. However, if the 3D world is subdivided into cubes, all of the same size, defining obstacles or possible spaces to move in, then the Euclidean shortest path problem has a linear-time solution, if all spaces to move in form a simple cube-curve. The shortest path through a simple cube-curve in the orthogonal 3D grid is a minimum-length polygonal curve (MLP for short). So far only one general and linear (only with respect to measured run times) algorithm, called the {\it rubberband algorithm}, was known for an approximative calculation of an MLP. The algorithm is basically defined by moves of vertices along critical edges (i.e., edges in three cubes of the given cube-curve). A proof, that this algorithm always converges to the correct MLP, and if so, then always (provable) in linear time, was still an open problem so far (the authors had successfully treated only a very special case of simple cube-curves before). In a previous paper, the authors also showed that the original rubberband algorithm required a (minor) correction. This paper finally answers the open problem: by a further modification of the corrected rubberband algorithm, it turns into a provable linear-time algorithm for calculating the MLP of any simple cube-curve. The paper also presents an alternative provable linear-time algorithm for the same task, which is based on moving vertices within faces of cubes. For a disticntion, we call the modified original algorithm now the {\it edge-based rubberband algorithm}, and the second algorithm is the {\it face-based rubberband algorithm}; the time complexity of both is in ${\cal O}(m)$, where $m$ is the number of critical edges of the given simple cube-curve.
View record details
• ### 3D Reconstruction Using Shape from Photometric Stereo and Contours
#### Chen, Chia-Yen; Klette, Reinhard (2003)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). In this work, we further discuss an approach to 3D shape recovery by combining photometric stereo and shape from contours methods. Surfaces recovered by photometric stereo are aligned, adjusted and merged according to a preliminary 3D model obtained by shape from contours. Comparisons are conducted to evaluate the performances of different methods. It has been found that the proposed combination provides more accurate shape recovery than using either photometric stereo or shape from contours alone.
View record details
• ### A Silhouette Based Human Motion Tracking System
#### Rosenhahn, Bodo; Kersting, Uwe; Andrew, Smith; Brox, Thomas; Klette, Reinhard; Seidel, Hans-Peter (2005)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). This paper proposes a system for model based human motion estimation. We start with a human model generation system, which uses a set of input images to automatically generate a free-form surface model of a human upper torso. We subsequently determine joint locations automatically and generate a texture for the surface mesh. Following this, we present morphing and joint transformation techniques to gain more realistic human upper torso models. An advanced model such as this is used in a system for silhouette based human motion estimation. The presented motion estimation system contains silhouette extraction based on level set functions, a correspondence module, which relates image data to model data and a pose estimation module. This system is used for a variety of experiments: Different camera setups (between one to four cameras) are used for the experiments and we estimate the pose configurations of a human upper torso model with 21 degrees of freedom at two frames per second. We also discuss degenerated cases for silhouette based human motion estimation. Next, a comparison of the motion estimation system with a commercial marker based tracking system is performed to gain a quantitative error analysis. The results show the applicability of the system for marker-less human movement analysis. Finally we present experimental results on tracking leg models and show the robustness of our algorithms even for corrupted image data.
View record details
• ### Analysis of Finite Difference Algorithms for Linear Shape from Shading
#### Wei, Tiangong; Klette, Reinhard (2000)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). This paper presents and analyzes four explicit, two implicit and four semi-implicit finite difference algorithms for the linear shape from shading problem. Comparisons of accuracy, solvability, stability and convergence of these schemes indicate that the weighted semi-implicit scheme and the box scheme are better than the other ones because they can be calculated more easily, they are more accurate, faster in convergence and unconditionally stable.
View record details
• ### Wide-Angle Image Acquisition, Analysis and Visualisation
#### Klette, Reinhard; Gimel'farb, Georgy (2001)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). Recent camera technology provides new solutions for wide-angle image acquisition. Multi- or single-line cameras have been designed for spaceborne and airborne scanners to provide high resolution imagery. Line cameras may also work as panorama scanners, and models of these have already been studied in computer vision for a few years. These cameras or models require studies in calibration, registration and epipolar geometry to ensure accurate imaging and stereo analysis. The resulting images or depth maps also allow new approaches in 3D scene visualisation. The paper informs about line camera models and camera hardware, the historic background in photogrammetry and aerial mapping, calibration of line cameras, registration of captured images, epipolar geometry for along-track and panoramic stereo, stereo matching with a focus on dynamic programming, and visualisation. The paper illustrates sketched concepts using a few of the high-resolution aerial and panoramic image data.
View record details
• ### Length Estimation for Curves with Different Samplings
#### Noakes, Lyle; Kozera, Ryszard; Klette, Reinhard (2001)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). This paper looks at the problem of approximating the length of the unknown parametric curve ⋎ [0,1] → IRⁿ from points qᵢ = ⋎ (tᵢ), where the parameters ti are not given. When the tᵢ are uniformly distributed Lagrange interpolation by piecewise polynomials provides efficient length estimates, but in other cases this method can behave very badly [15]. In the present paper we apply this simple algorithm when the tᵢ are sampled in what we call an ε-uniform fashion, where 0 ≤ ε ≤ 1. Convergence of length estimates using Lagrange interpolants is not as rapid as for uniform sampling, but better than for some of the examples of [15]. As a side-issue we also consider the task of approximating ⋎ up to parameterization, and numerical experiments are carried out to investigate sharpness of our theoretical results. The results may be of interest in computer vision, computer graphics, approximation and complexity theory, digital and computational geometry, and digital image analysis.
View record details
• ### Digital Curves in 3D Sapce and a Linear-Time Length Estimation Algorithm
#### Bülow, Thomas; Klette, Reinhard (1999)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). We consider simple digital curves in a 3D orthogonal grid as special polyhedrally bounded sets. These digital curves model digitized curves or arcs in three-dimensional euclidian space. The length of such a simple digital curve is defined to be the length of the minimum-length polygonal curve fully contained and complete in the tube of this digital curve. So far no algorithm was known for the calculation of such a shortest polygonal curve. This paper provides an iterative algorithmic solution, including a presentation of its foundations and of experimental results.
View record details
• ### Height data from gradient fields
#### Klette, Reinhard; Schluns, Karsten (1996-08)
Report
The University of Auckland Library
The paper starts with a review of integration techniques for calculating height maps from dense gradient fields. There exist a few proposals of local integration methods (Coleman/Jain 1982, Healey/Jain 1984, Wu/Li 1988, Rodehorst 1993), and two proposals for global optimization (Horn/Brooks 1986 and Frankot/ Chellappa 1988). Several experimental evaluations of such integration techniques are discussed in this paper. The examined algorithms received high markings on curved objects but low markings on polyhedral shapes. Locally adaptive approaches are suggested to cope with complex shapes in general.
View record details
• ### Multigrid Convergence of Surface Approximations
#### Klette, Reinhard; Wu, Feng (1998)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). This report deals with multigrid approximations of surfaces. Surface area and volume approximations are discussed for regular grids (3D objects), and surface reconstruction for irregular grids (terrain surfaces). Convergence analysis and approximation error calculations are emphasized.
View record details
• ### Topologies on the Planar Orthogonal Grid
#### Klette, Reinhard (2001)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). This paper discusses different topologies on the planar orthogonal grid and shows homeomorphy between cellular models. It also points out that graph-theoretical topologies exist defined by planar extensions of the 4-adjacency graph. All these topologies are potential models for image carriers.
View record details
• ### Dominant Plane Estimation
#### Kawamoto, Kazuhiko; Klette, Reinhard (2001)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). Video sequences capturing real scenes may be interpreted with respect to a dominant plane which is a planar surface covering more than 50% of a frame, or being that planar surface which is represented in the image with the largest number of pixels. This note shows a possible way for estimating the surface normal of this plane if just camera rotation is allowed.
View record details
• ### Albedo Recovery Using a Photometric Stereo Method
#### Chen, Chia-Yen; Klette, Reinhard (2001)
Report
The University of Auckland Library
You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the original CITR web site; http://citr.auckland.ac.nz/techreports/ under terms that include this permission. All other rights are reserved by the author(s). This paper describes a method for the calculation of surface reflectance values via photometric stereo. Experimental results show that surfaces rendered with reflectance values calculated by the proposed method have more realistic appearances than those with constant albedo.
View record details | 2017-03-27 16:30:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21315205097198486, "perplexity": 1914.512426909879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189490.1/warc/CC-MAIN-20170322212949-00190-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/129995/is-it-difficult-to-prove-that-nature-is-chaotic | # Is it difficult to prove that nature is chaotic?
If we have a Markov coding or another symbolic description of a dynamical system it is usually easy to prove that the system is chaotic (in the sense of of Li-York, Devaney, positive entropy of what ever). My impression is that most (interesting) dynamical systems coming of natural science are in fact chaotic. But this is rigourously proved only for a few systems. What is the reason for this? Is it difficult to find a symbolic coding for systems coming from natural science or are we (as mathematicians) not so much interested in these systems?
-
Proving that a mathematical model is (or isn't) chaotic is not quite the same thing as proving that Nature is (or isn't) chaotic. – Gerry Myerson May 8 '13 at 6:09
The proofs of chaotic behaviour tend to rely on hyperbolic behaviour (or at least non-uniformly hyperbolic behaviour). Proving that this holds in many real systems (or even in lots of toy models) is extremely hard / apparently beyond the reach of current technology. – Anthony Quas May 8 '13 at 8:26
Dear Gerry, of course You are right. Please apologize the provoking title. – Jörg Neunhäuserer May 8 '13 at 16:30
Dear Anthony, i understand what You mean. But sometimes I wonder if it is necessary to use the machinery of (non-uniform) hyperbolic dynamics to prove that a systems is chaotic. If you directly construct an appropriate dynamical partition, you may prove that a system is chaotic without showing hyperbolicity (or?). In fact beside toy models I do not know a strategy to find such a partition, but may be there is a (black magic) way. – Jörg Neunhäuserer May 8 '13 at 16:42 | 2014-09-18 06:00:07 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8549787402153015, "perplexity": 513.830809490619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657125654.84/warc/CC-MAIN-20140914011205-00284-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://physics.stackexchange.com/questions/6384/does-dilation-scale-invariance-imply-conformal-invariance | # Does dilation/scale invariance imply conformal invariance?
Why does a quantum field theory invariant under dilations almost always also have to be invariant under proper conformal transformations? To show your favorite dilatation invariant theory is also invariant under proper conformal transformations is seldom straightforward. Integration by parts, introducing Weyl connections and so on and so forth are needed, but yet at the end of the day, it can almost always be done. Why is that?
• You can look for Polchinski's paper "Scale and Conformal Invariance in QFT" which discusses the issue in detail. Maybe there have been some developments since then, but that is a good starting point. – user566 Mar 5 '11 at 18:56
• For 4d field theory it's still an open question whether scale invariance implies conformal invariance. There's been some recent work on this topic by Slava Rychkov and collaborators, see e.g. 1101.5385. – Matt Reece Mar 6 '11 at 23:06
• By the way, given that scale invariance does not imply conformal invariance, maybe the question can be rephrased. – user566 Mar 9 '11 at 18:14
• I'd like to point to a review by Yu Nakayama, available at arxiv.org/abs/1302.0884 – Siva Sep 20 '13 at 7:28
As commented in previous answers, conformal invariance implies scale invariance but the converse is not true in general. In fact, you can have a look at Scale Vs. Conformal Invariance in the AdS/CFT Correspondence. In that paper, authors explicitly construct two non trivial field theories which are scale invariant but not conformally invariant. They proceed by placing some conformal field theories in flat space onto curved backgrounds by means of the AdS/CFT correspondence.
• Thanks for that, somehow I missed this one, it is really interesting. – user566 Mar 9 '11 at 18:14
• I knew about this paper but reading this question, made it came to my mind again (fortunately, because it's very interesting) – xavimol Mar 9 '11 at 19:23
The rule-of-thumb is that 'conformal ⇒ scale', but the converse is not necessarily true (some condition(s) needs to be satisfied) — but, of course, this varies with the dimensionality of the problem you're dealing.
PS: Polchinski's article: Scale and conformal invariance in quantum field theory.
• It's not a rule-of-thumb that conformal implies scale, it's just a fact. The conditions are mostly locality and low-order of derivatives, which is sometimes imposed by unitarity and renormalizability. – Ron Maimon May 4 '12 at 19:43
• @RonMaimon: Conformal invariance requires scale invariance in a Poincare invariant theory simply because of the commutator $[K_\mu,P_\nu]=2i(\eta_{\mu\nu}D-M_{\mu\nu})$. The notation should be obvious. – AndyS Jun 15 '12 at 23:10
• @AndyS: The very existence of D in the conformal group is enough to show conformal implies scale--- it's not a rule of thumb, it's an obvious implication, that's what the comment above was trying to say. You don't need the commutator business to show this, the dilatation is a conformal transformation all by itself. – Ron Maimon Jun 16 '12 at 6:51
• @RonMaimon: What you're saying is not true; you need the commutator in order to prove what you call "an obvious implication". Also, there is a clear distinction between dilatations and special conformal transformations. – AndyS Jun 19 '12 at 3:23
• @AndyS: What I am saying is true, and you are saying nonsense. Dilatations are to conformal invariance as rotations about the z-axis are to rotations. They are a special case. If you have rotational invariance, you have rotational invariance around the z-axis. If you have conformal invariance, you have dilatation invariance. This is not an arguable point, it is not a difficult point, and I don't know why you make the comment above. – Ron Maimon Jun 19 '12 at 15:48
Maybe this does it:
$\begin{array}{rccl} \textrm{Translation:}&P_\mu&=&-i\partial_\mu\\ \textrm{Rotation:}&M_{\mu\nu}&=&i(x_\mu\partial_\nu-x_\nu\partial_\mu)\\ \textrm{Dilation:}&D&=&ix^\mu\partial^\mu\\ \textrm{Special Conformal:}&C_\mu&=&-i(\vec{x}\cdot\vec{x}-2x_\mu\vec{x}\cdot\partial) \end{array}$
Then the commutation relation gives:
$[D,C_\mu] = -iC_\mu$
so $C^\mu$ acts as raising and lowering operators for the eigenvectors of the dilation operator $D$. That is, suppose:
$D|d\rangle = d|d\rangle$
By the commutation relation:
$DC_\mu - C_\mu D = -iC_\mu$
so
$DC_\mu|d\rangle = (C_\mu D -iC_\mu)|d\rangle$
and
$D(C_\mu|d\rangle) = (d-i)(C_\mu|d\rangle)$
But given the dilational eigenvectors, it's possible to define the raising and lowering operators from them alone. And so that defines the $C_\mu$.
P.S. I cribbed this from:
http://web.mit.edu/~mcgreevy/www/fall08/handouts/lecture09.pdf
• Let's say you cited Mr. McGreevy. – user68 Mar 6 '11 at 13:04
• The issue is that it just isn't true that scale invariance implies conformal invariance. The simplest counterexample is some self-interacting Levy field theory. – Ron Maimon May 4 '12 at 19:44
• What are these $|d\rangle$ kets you refer to? Are they fields? And does $d$ correspond to the scaling dimension of the field? I realise this is an old thread, so I would appreciate comments from anyone. – CAF Jul 26 '14 at 16:36 | 2019-08-21 11:56:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.840569019317627, "perplexity": 552.6649392528299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315936.22/warc/CC-MAIN-20190821110541-20190821132541-00515.warc.gz"} |
https://www.bartleby.com/questions-and-answers/problem-4-counts-as-two-problems-let-x1-x2-...-xbe-a-collection-of-independent-discrete-random-varia/ad993b4c-6f3a-4450-9ac3-c999f06dcc97 | Problem 4 (counts as two problems): Let X1, X2, ... , X,be a collection of independent discrete random variables that all take the value 1 with probability p and take the value 0 with probability (1-p). The following set of steps illustrates the Law of Large Numbers at work. a) Compute the mean and the variance of X1 (which is the same for X2, X3, etc.) b) Use your answer to (a) to compute the mean and variance of p = (X1 + X2 + .+ X„), which is the proportion of "ones" observed in the n instances of X;. c) Suppose n= 10,000. Use Chebyshev's inequality to provide an upper bound for the probability that the difference between p and p exceeds 0.05. d) Use calculus to show that if p is a number between 0 and 1, then p(1 – p) < e) Use your answers to (c) and (d) to provide an upper bound, that does not depend on p, for the probability that the difference between p and p exceeds 0.05. f) Interpret this problem in the context of randomly sampling 10,000 people from a large population, asking them a yes-no question, and using the result to make an inference about the whole population. %3D
Question
Want to see this answer and more?
Experts are waiting 24/7 to provide step-by-step solutions in as fast as 30 minutes!*
*Response times may vary by subject and question complexity. Median response time is 34 minutes for paid subscribers and may be longer for promotional offers.
Tagged in
Math
Statistics
Other
Transcribed Image Text
Problem 4 (counts as two problems): Let X1, X2, ... , X,be a collection of independent discrete random variables that all take the value 1 with probability p and take the value 0 with probability (1-p). The following set of steps illustrates the Law of Large Numbers at work. a) Compute the mean and the variance of X1 (which is the same for X2, X3, etc.) b) Use your answer to (a) to compute the mean and variance of p = (X1 + X2 + .+ X„), which is the proportion of "ones" observed in the n instances of X;. c) Suppose n= 10,000. Use Chebyshev's inequality to provide an upper bound for the probability that the difference between p and p exceeds 0.05. d) Use calculus to show that if p is a number between 0 and 1, then p(1 – p) < e) Use your answers to (c) and (d) to provide an upper bound, that does not depend on p, for the probability that the difference between p and p exceeds 0.05. f) Interpret this problem in the context of randomly sampling 10,000 people from a large population, asking them a yes-no question, and using the result to make an inference about the whole population. %3D | 2021-07-31 06:21:43 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8466634750366211, "perplexity": 293.8062841048522}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154053.17/warc/CC-MAIN-20210731043043-20210731073043-00363.warc.gz"} |
https://www.physicsforums.com/threads/scrubbing-the-middle-of-your-back.243684/ | # Scrubbing the middle of your back
1. Jul 6, 2008
### Staff: Mentor
It seems I can never quite get enough scrubbing power to exfoliate the upper middle portion of my back. I have purchased a number of "back scrubbing devices" but I can never seem to make them work well enough.
Since I live alone and have no love life, am I doomed to forever be in need of exfoliation?
I was thinking maybe installing something like a cat scratching post on the shower wall that would allow me to rub my back up against it like a bear scatches his back on a tree. Maybe I will make millions of .
Don't tell me to go to a spa, I'm not into that kind of thing.
Anyone else have a solution?
2. Jul 6, 2008
### rewebster
Stand in line, guys
(I've seen a loofa attached to a 'handled' thin towel--almost rope-like, but still a terry cloth)
3. Jul 6, 2008
### TheStatutoryApe
That actually sounds like a good idea.
I have a fairly long scrub towel that I pullback and forth across my back, one end in each hand. Probably similar to the use of what rewebster discribed.
4. Jul 6, 2008
### Astronuc
Staff Emeritus
5. Jul 6, 2008
### Danger
I make house calls. :!!)
6. Jul 6, 2008
7. Jul 6, 2008
### Crosson
I suggest working on flexibility:
http://www.yogaspirits.com/images/cow%20pose.gif" [Broken]
Last edited by a moderator: May 3, 2017
8. Jul 6, 2008
### Staff: Mentor
I've tried all of the different scrubbers, I can't get enough pressure going for them to be effective.
9. Jul 6, 2008
### rewebster
The very best thing, though, without a doubt, is a shower mate.
"something like a cat scratching post on the shower wall "
have one installed and see if it works
10. Jul 6, 2008
### NeoDevin
Best suggestion so far may be Danger's
11. Jul 6, 2008
### binzing
How would that keep you from needing exfoliation???
12. Jul 6, 2008
### Staff: Mentor
Because a significant other could scrub my back and wouldn't go "eeeewww, are you kidding?"
13. Jul 6, 2008
### Cyrus
ewwwwwww, are you kidding me. :yuck:
14. Jul 6, 2008
### robphy
15. Jul 6, 2008
### binzing
LOL, I've got an idea! Attach a large brush like you'd use to wash a car, or the type you use on horses, to your shower at mid back height.
16. Jul 6, 2008
### Tsu
A large piece of the scratchy side of velcro stuck to the shower wall?
17. Jul 6, 2008
### Staff: Mentor
Oooh, that's it!!!
18. Jul 6, 2008
### Kurdt
Staff Emeritus
I never bother touching my back since I can't imagine it getting that dirty.
19. Jul 6, 2008
### rewebster
well, there goes your house call, Danger
20. Jul 6, 2008
### Astronuc
Staff Emeritus
Better add a no-slip cover to the shower floor.
21. Jul 6, 2008
### Moonbear
Staff Emeritus
I've never considered the need for back exfoliation, just normal washing. Not sure why you can't get enough scrubbing done with one of those loofah's on a stick though. I tried one of them once and thought sandpaper would be more gentle. I just use a regular washcloth...there's only about a spot the size of a quarter that I can't reach in any direction with my hands...I know where that spot is because it's the spot that gets itchy, of course :grumpy:...but just using a washcloth gives enough extra reach to clean everything.
22. Jul 6, 2008
### Staff: Mentor
You forget my improperly healed broken arm.
23. Jul 6, 2008
### Danger
Damn! :grumpy:
24. Jul 6, 2008
### Moonbear
Staff Emeritus
Oh, yes, that I did forget. Don't know HOW I could forget, but I did. Have you tried those loofahs on a stick? Seriously, if exfoliation is what you're after, that really should do the trick. They're brutal! Or if that's too much, you might be able to just wrap the washcloth around the loofah part.
Speaking of reaching spots on one's back, why is it so hard to find a good, sturdy back scratcher anymore? I was thinking you could just wrap a washcloth on one of those, but then remembered I can't even find one of those. All the ones I find are kind of flimsy, no good for getting that one little spot in the middle of the back that gets itchy.
25. Jul 8, 2008
### Ouabache
You and Wolly should get a special award for coming up with such witty subjects :tongue:
Last edited: Jul 8, 2008 | 2018-07-21 00:50:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2487902194261551, "perplexity": 7112.472799664121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592001.81/warc/CC-MAIN-20180720232914-20180721012914-00249.warc.gz"} |
https://solvedlib.com/10-11-let-x-and-y-be-2-independent-random,340031 | # 10) (11) Let X and Y be 2 independent random variables. Suppose X ~ Gamma(0, 38)...
###### Question:
10) (11) Let X and Y be 2 independent random variables. Suppose X ~ Gamma(0, 38) and Y ~ Gamma(a, 2B). Let 2 = 2X +3Y. Determine the probability distribution of Z. (Hint: use the method of moment-generating functions
#### Similar Solved Questions
##### To determine (he number of troul in a lake them , and releases them a conservationist catches 203 trout; lags Later 148 trout are caught; and it i5 found thal 50 of them are tagged Estimate how many there are in the IakeThe approximate number of troul IS (Round to the nearest whole number 4s needed
To determine (he number of troul in a lake them , and releases them a conservationist catches 203 trout; lags Later 148 trout are caught; and it i5 found thal 50 of them are tagged Estimate how many there are in the Iake The approximate number of troul IS (Round to the nearest whole number 4s needed...
##### HWO9 determinant properties: Problem 1 Previous Problem Problem List Next Problempoint) Given the matrix~8 -1find all values of a that make |A| = 0. Give your answer as a comma-separated listValues of a:
HWO9 determinant properties: Problem 1 Previous Problem Problem List Next Problem point) Given the matrix ~8 -1 find all values of a that make |A| = 0. Give your answer as a comma-separated list Values of a:...
##### Wuninorna matal sphere rendaced Choose podundsns Woj p strang 1 ol ungih L, forming penduiun Tuninouind 420 Inc Eucu poijad 041 1 U osclilation? :
Wuninorna matal sphere rendaced Choose podundsns Woj p strang 1 ol ungih L, forming penduiun Tuninouind 420 Inc Eucu poijad 041 1 U osclilation? :...
##### Down rupe on thie sicle ol clifE_ IE Aelecca Speras Rebecca (JUO V) rappels during the descent (2 I/s") the how much tension is in the rope?
down rupe on thie sicle ol clifE_ IE Aelecca Speras Rebecca (JUO V) rappels during the descent (2 I/s") the how much tension is in the rope?...
##### Carbon disulfide is liquid that can be used the production of rayon: It is manufactured from methane and elemental suliur according to the following chemical equation:CH:(lt 45 (4 CSzo +2 H,S How many moles of CS2 can be formed by the complete reaction of 10.6 mol of S?OA mol 0 B.2.65 mol C.4 mol00.10.6 mol 0E. 42.4 mol
Carbon disulfide is liquid that can be used the production of rayon: It is manufactured from methane and elemental suliur according to the following chemical equation: CH:(lt 45 (4 CSzo +2 H,S How many moles of CS2 can be formed by the complete reaction of 10.6 mol of S? OA mol 0 B.2.65 mol C.4 mol...
##### Detertine convergence Or" divergence of the following series:Determine convergence or divergence of the following series:(Yl"' 3"
Detertine convergence Or" divergence of the following series: Determine convergence or divergence of the following series: (Yl"' 3"...
##### Find the gradient vector field for the function flx;y) xyein Sr + 4y), (Your inatructora preterang Eracke- notafionTor vecton ;
Find the gradient vector field for the function flx;y) xyein Sr + 4y), (Your inatructora preterang Eracke- notafion Tor vecton ;...
##### Find all solutions of Ax = b where A =b =73SII of particular solution Xo and vectors in the null-space of A.
Find all solutions of Ax = b where A = b = 73 SII of particular solution Xo and vectors in the null-space of A....
##### Find the remaining five trigonometric functions of 8 given that cOs(e) -3/4,where € in quadrant II.Do not approximate and rationalize the denominator:sinu) =tan(0) =cot(e)csc(0)sec(0) =
Find the remaining five trigonometric functions of 8 given that cOs(e) -3/4,where € in quadrant II. Do not approximate and rationalize the denominator: sinu) = tan(0) = cot(e) csc(0) sec(0) =...
##### 9. Find the critical value of a right tailed t-test where n and.05. 12 10. What...
9. Find the critical value of a right tailed t-test where n and.05. 12 10. What would the decision be if the test value was 1.48 and the critical value was 1.22? 11. What is the symbol for a type I error? 12. When the sample standard deviation is known which of the following tests it used? T-test or...
##### Both the quality of the conversation and the misattribution of arousal on the date influence romantic...
Both the quality of the conversation and the misattribution of arousal on the date influence romantic attraction. Researchers examined the individual and combined effects of both of these factors. Participants went on a date that consisted of either a walk on a paved path in the woods (“standa...
##### A spring of negligible mass is displaced out of its equilibriumposition by 2 cm. Later, the spring is moved out of equilibriumagain by a distance of 4 cm. How many times has the potentialenergy of the spring changed when it has been displaced from 2 cmto 4 cm?
A spring of negligible mass is displaced out of its equilibrium position by 2 cm. Later, the spring is moved out of equilibrium again by a distance of 4 cm. How many times has the potential energy of the spring changed when it has been displaced from 2 cm to 4 cm?...
##### In a 2013 study on heart rate, physical fitness, and mortality,the researchers report high risk of mortality in patients withresting heart rates of 81bpm or higher. Assume the average restingheart rate for adults is 65bpm with a standard deviation of 8 bpm.Answer the following questions:(a) Find the ð‘§ð›¼ corresponding to a patient with a heart rate of81bpm.(b) What is the 𛼠- value?(c) Interpret the 𛼠- value in the context of the problem.(Hint: Mention something a probability of an ev
In a 2013 study on heart rate, physical fitness, and mortality, the researchers report high risk of mortality in patients with resting heart rates of 81bpm or higher. Assume the average resting heart rate for adults is 65bpm with a standard deviation of 8 bpm. Answer the following questions: (a) Fin...
##### QUESION 9the densiry Water I51000 18 m3 isfilled with Ilquid water Assuming AG-kE plasuc tank that hus = volume of 1258NGuenl1285 N18524 1825 N
QUESION 9 the densiry Water I51000 18 m3 isfilled with Ilquid water Assuming AG-kE plasuc tank that hus = volume of 1258N Guenl 1285 N 1852 4 1825 N...
##### Clonex Labs, Inc., uses the weighted-average method in its process costing system. The following data are...
Clonex Labs, Inc., uses the weighted-average method in its process costing system. The following data are available for one department for October Percent Completed Materials Conversion 95% 50% 65% Units 54,000 39,000 Work in process, October 1 Work in process, October 31 The department started 399,...
##### Probkem your financial advisor has recommended two stocks and B The probability of Suppose increasing value over the next year for the Stocks B are respectively 0.6 and 0.85 that the performance of stock independent of the Other, find the probability that Assumingboth stocks will increuse value next ycar; none of the stocks wil increase value next Year; least one stock will increase value next year; only stock will increase value next Vejr
Probkem your financial advisor has recommended two stocks and B The probability of Suppose increasing value over the next year for the Stocks B are respectively 0.6 and 0.85 that the performance of stock independent of the Other, find the probability that Assuming both stocks will increuse value nex...
##### Why is part b wrong? (a) How can a driver steer a car traveling at constant...
Why is part b wrong? (a) How can a driver steer a car traveling at constant speed so that the acceleration is zero? (Assume that the road is level. Select all that apply.) on a circular path on a straight line on a parabolic path on an elliptical path (b) How can a driver steer a car traveling at ...
##### Let $p$ be a prime and $n \in \mathbb{N}$ . Prove that $p^{n}$ is not a perfect number. (Hint: Prove by contradiction.)
Let $p$ be a prime and $n \in \mathbb{N}$ . Prove that $p^{n}$ is not a perfect number. (Hint: Prove by contradiction.)...
##### CHAPTER3 Determine the electric field at the center of the uniformly polarized sphere of Proble If...
CHAPTER3 Determine the electric field at the center of the uniformly polarized sphere of Proble If the bound charges of a polarized dielectric are symmetrically disposed, then a special form of Gauss law may be applicable. The integral form of Eq. by surface S, is (3-9a), btained by integrating both...
##### QUESTION 2: What are three kinds of cell division? What are the products of ( of...
QUESTION 2: What are three kinds of cell division? What are the products of ( of cells and genetic makeup) of each? In what situations are they used? - 3 kind of cell division= Mitosis, meiosis, and binary fission I'm having a hard time answering the second and third part of the question. Pleas... | 2023-03-25 04:04:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6010687947273254, "perplexity": 3369.525360326918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945315.31/warc/CC-MAIN-20230325033306-20230325063306-00076.warc.gz"} |
http://www.global-sci.org/intro/article_detail/ijnam/596.html | Volume 10, Issue 4
Grid Approximation of a Singularly Perturbed Parabolic Equation with Degenerating Convective Term and Discontinuous Right-Hand Side
Int. J. Numer. Anal. Mod., 10 (2013), pp. 795-814.
Published online: 2013-10
Cited by
Export citation
• Abstract
The grid approximation of an initial-boundary value problem is considered for a singularly perturbed parabolic convection-diffusion equation with a convective flux directed from the lateral boundary inside the domain in the case when the convective flux degenerates inside the domain and the right-hand side has the first kind discontinuity on the degeneration line. The high-order derivative in the equation is multiplied by $\varepsilon^2$, where $\varepsilon$ is the perturbation parameter, $\varepsilon\in (0,1]$. For small values of $\varepsilon$, an interior layer appears in a neighbourhood of the set where the right-hand side has the discontinuity. A finite difference scheme based on the standard monotone approximation of the differential equation in the case of uniform grids converges only under the condition $N^{-1} = o(\varepsilon)$, $N^{-1}_0 = o(1)$, where $N +1$ and $N_0+1$ are the numbers of nodes in the space and time meshes, respectively. A finite difference scheme is constructed on a piecewise-uniform grid condensing in a neighbourhood of the interior layer. The solution of this scheme converges $\varepsilon$-uniformly at the rate $\mathcal{O}(N^{-1}lnN+N^{-1}_0)$. Numerical experiments confirm the theoretical results.
• Keywords
parabolic convection-diffusion equation, perturbation parameter, degenerating convective term, discontinuous right-hand side, interior layer, technique of derivation to a priori estimates, piecewise-uniform grids , finite difference scheme, $\varepsilon$-uniform convergence, maximum norm.
65M06, 65N06, 65N12
• BibTex
• RIS
• TXT
@Article{IJNAM-10-795, author = {Clavero , C.Gracia , J. L.Shishkin , G. I. and Shishkina , L. P.}, title = {Grid Approximation of a Singularly Perturbed Parabolic Equation with Degenerating Convective Term and Discontinuous Right-Hand Side}, journal = {International Journal of Numerical Analysis and Modeling}, year = {2013}, volume = {10}, number = {4}, pages = {795--814}, abstract = {
The grid approximation of an initial-boundary value problem is considered for a singularly perturbed parabolic convection-diffusion equation with a convective flux directed from the lateral boundary inside the domain in the case when the convective flux degenerates inside the domain and the right-hand side has the first kind discontinuity on the degeneration line. The high-order derivative in the equation is multiplied by $\varepsilon^2$, where $\varepsilon$ is the perturbation parameter, $\varepsilon\in (0,1]$. For small values of $\varepsilon$, an interior layer appears in a neighbourhood of the set where the right-hand side has the discontinuity. A finite difference scheme based on the standard monotone approximation of the differential equation in the case of uniform grids converges only under the condition $N^{-1} = o(\varepsilon)$, $N^{-1}_0 = o(1)$, where $N +1$ and $N_0+1$ are the numbers of nodes in the space and time meshes, respectively. A finite difference scheme is constructed on a piecewise-uniform grid condensing in a neighbourhood of the interior layer. The solution of this scheme converges $\varepsilon$-uniformly at the rate $\mathcal{O}(N^{-1}lnN+N^{-1}_0)$. Numerical experiments confirm the theoretical results.
}, issn = {2617-8710}, doi = {https://doi.org/}, url = {http://global-sci.org/intro/article_detail/ijnam/596.html} }
TY - JOUR T1 - Grid Approximation of a Singularly Perturbed Parabolic Equation with Degenerating Convective Term and Discontinuous Right-Hand Side AU - Clavero , C. AU - Gracia , J. L. AU - Shishkin , G. I. AU - Shishkina , L. P. JO - International Journal of Numerical Analysis and Modeling VL - 4 SP - 795 EP - 814 PY - 2013 DA - 2013/10 SN - 10 DO - http://doi.org/ UR - https://global-sci.org/intro/article_detail/ijnam/596.html KW - parabolic convection-diffusion equation, perturbation parameter, degenerating convective term, discontinuous right-hand side, interior layer, technique of derivation to a priori estimates, piecewise-uniform grids KW - , finite difference scheme, $\varepsilon$-uniform convergence, maximum norm. AB -
The grid approximation of an initial-boundary value problem is considered for a singularly perturbed parabolic convection-diffusion equation with a convective flux directed from the lateral boundary inside the domain in the case when the convective flux degenerates inside the domain and the right-hand side has the first kind discontinuity on the degeneration line. The high-order derivative in the equation is multiplied by $\varepsilon^2$, where $\varepsilon$ is the perturbation parameter, $\varepsilon\in (0,1]$. For small values of $\varepsilon$, an interior layer appears in a neighbourhood of the set where the right-hand side has the discontinuity. A finite difference scheme based on the standard monotone approximation of the differential equation in the case of uniform grids converges only under the condition $N^{-1} = o(\varepsilon)$, $N^{-1}_0 = o(1)$, where $N +1$ and $N_0+1$ are the numbers of nodes in the space and time meshes, respectively. A finite difference scheme is constructed on a piecewise-uniform grid condensing in a neighbourhood of the interior layer. The solution of this scheme converges $\varepsilon$-uniformly at the rate $\mathcal{O}(N^{-1}lnN+N^{-1}_0)$. Numerical experiments confirm the theoretical results.
C. Clavero, J. L. Gracia, G. I. Shishkin & L. P. Shishkina. (1970). Grid Approximation of a Singularly Perturbed Parabolic Equation with Degenerating Convective Term and Discontinuous Right-Hand Side. International Journal of Numerical Analysis and Modeling. 10 (4). 795-814. doi:
Copy to clipboard
The citation has been copied to your clipboard | 2023-02-05 10:43:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8681203126907349, "perplexity": 633.8472369223463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00493.warc.gz"} |
https://w3.lnf.infn.it/event/the-complex-structure-of-nucleon-form-factors-exploring-the-riemann-surfaces-of-their-ratio/ | • Questo evento è passato.
## The complex structure of nucleon form factors exploring the Riemann surfaces of their ratio
### 15 Dicembre 2021 @ 14:30 - 17:10
After defining the theoretical bases behind the properties of the nucleon form factor, we study the particular case of the Lambda baryon, obtaining for the first time crucial information concerning the ratio $G_E/G_M$, such as determinations for the phase and the presence of space-like zeros.
Join Zoom Meeting
https://infn-it.zoom.us/j/2589495113?pwd=RjRVOEhGYTRqYkRuR09uME10bXA0UT09
Meeting ID: 258 949 5113
Passcode: 8GPN4j
### Dettagli
Data:
15 Dicembre 2021
Ora:
14:30 - 17:10
Categoria Evento:
Target:
Sito web:
https://agenda.infn.it/event/29088/
Aula Salvini | 2022-10-04 10:24:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6714199781417847, "perplexity": 7676.630041158886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00644.warc.gz"} |
https://labs.tib.eu/arxiv/?author=Joseph%20Huehnerhoff | • ### Towards Space-like Photometric Precision from the Ground with Beam-Shaping Diffusers(1710.01790)
Oct. 4, 2017 astro-ph.EP, astro-ph.IM
We demonstrate a path to hitherto unachievable differential photometric precisions from the ground, both in the optical and near-infrared (NIR), using custom-fabricated beam-shaping diffusers produced using specialized nanofabrication techniques. Such diffusers mold the focal plane image of a star into a broad and stable top-hat shape, minimizing photometric errors due to non-uniform pixel response, atmospheric seeing effects, imperfect guiding, and telescope-induced variable aberrations seen in defocusing. This PSF reshaping significantly increases the achievable dynamic range of our observations, increasing our observing efficiency and thus better averages over scintillation. Diffusers work in both collimated and converging beams. We present diffuser-assisted optical observations demonstrating $62^{+26}_{-16}$ppm precision in 30 minute bins on a nearby bright star 16-Cygni A (V=5.95) using the ARC 3.5m telescope---within a factor of $\sim$2 of Kepler's photometric precision on the same star. We also show a transit of WASP-85-Ab (V=11.2) and TRES-3b (V=12.4), where the residuals bin down to $180^{+66}_{-41}$ppm in 30 minute bins for WASP-85-Ab---a factor of $\sim$4 of the precision achieved by the K2 mission on this target---and to 101ppm for TRES-3b. In the NIR, where diffusers may provide even more significant improvements over the current state of the art, our preliminary tests have demonstrated $137^{+64}_{-36}$ppm precision for a $K_S =10.8$ star on the 200" Hale Telescope. These photometric precisions match or surpass the expected photometric precisions of TESS for the same magnitude range. This technology is inexpensive, scalable, easily adaptable, and can have an important and immediate impact on the observations of transits and secondary eclipses of exoplanets.
• We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratio in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially-resolved spectroscopy for thousands of nearby galaxies (median redshift of z = 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between redshifts z = 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGN and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5-meter Sloan Foundation Telescope at Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5-meter du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in July 2016.
• The Sloan Digital Sky Survey (SDSS) has been in operation since 2000 April. This paper presents the tenth public data release (DR10) from its current incarnation, SDSS-III. This data release includes the first spectroscopic data from the Apache Point Observatory Galaxy Evolution Experiment (APOGEE), along with spectroscopic data from the Baryon Oscillation Spectroscopic Survey (BOSS) taken through 2012 July. The APOGEE instrument is a near-infrared R~22,500 300-fiber spectrograph covering 1.514--1.696 microns. The APOGEE survey is studying the chemical abundances and radial velocities of roughly 100,000 red giant star candidates in the bulge, bar, disk, and halo of the Milky Way. DR10 includes 178,397 spectra of 57,454 stars, each typically observed three or more times, from APOGEE. Derived quantities from these spectra (radial velocities, effective temperatures, surface gravities, and metallicities) are also included.DR10 also roughly doubles the number of BOSS spectra over those included in the ninth data release. DR10 includes a total of 1,507,954 BOSS spectra, comprising 927,844 galaxy spectra; 182,009 quasar spectra; and 159,327 stellar spectra, selected over 6373.2 square degrees.
• ### A Study of the Unusual Z Cam Systems IW Andromedae and V513 Cassiopeia(1311.1557)
Nov. 7, 2013 astro-ph.SR
The Z Cam stars IW And and V513 Cas are unusual in having outbursts following their standstills in contrast to the usual Z Cam behavior of quiescence following standstills. In order to gain further understanding of these little-studied systems, we obtained spectra correlated with photometry from the AAVSO throughout a 3-4 month interval in 2011. In addition, time-resolved spectra were obtained in 2012 that provided orbital periods of 3.7 hrs for IW And and 5.2 hrs for V513 Cas. The photometry of V513 Cas revealed a regular pattern of standstills and outbursts with little time at quiescence, while IW And underwent many excursions from quiescence to outburst to short standstills. The spectra of IW And are similar to normal dwarf novae, with strong Balmer emission at quiescence and absorption at outburst. In contrast, V513 Cas shows a much flatter/redder spectrum near outburst with strong HeII emission and prominent emission cores in the Balmer lines. Part of this continuum difference may be due to reddening effects. While our attempts to model the outburst and standstill states of IW And indicate a mass accretion rate near 3E-9 solar masses per year, we could find no obvious reason why these systems behave differently following standstill compared to normal Z Cam stars. | 2021-04-11 07:04:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5983908176422119, "perplexity": 7681.107858805803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00594.warc.gz"} |
https://shelah.logic.at/papers/1020/ | # Sh:1020
• Shelah, S., & Usvyatsov, A. (2019). Minimal stable types in Banach spaces. Adv. Math., 355, 106738, 29.
• Abstract:
We prove existence of wide types in a continuous theory expanding a Banach space, and density of minimal wide types among stable types in such a theory. We show that every minimal wide stable type is “generically” isometric to an \ell_2 space. We conclude with a proof of the following formulation of Henson’s Conjecture: every model of an uncountably categorical theory expanding a Banach space is prime over a spreading model, isometric to the standard basis of a Hilbert space.
• published version (29p)
Bib entry
@article{Sh:1020,
author = {Shelah, Saharon and Usvyatsov, Alexander},
title = {{Minimal stable types in Banach spaces}},
} | 2021-06-18 12:01:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7086175680160522, "perplexity": 2168.7186458893984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487636559.57/warc/CC-MAIN-20210618104405-20210618134405-00603.warc.gz"} |
http://www.sciforums.com/threads/nuclear-india-thread-2.53219/#post-1004036 | Discussion in 'World Events' started by Anomalous, Mar 14, 2006.
1. ### AnomalousBannedBanned
Messages:
1,710
Thoes who wanted have a say on India having 150 Nuclear Missiles can post here.
The original thread was closed so If this one is closed create a new thread,
Dont let SciForums take away your FREEDOM OF EXPRESSION,
I urge all to create hundreds of threads when closed, bring down this form owner to his keens
, Lets get mass Banned.
PS> dont forget to give the thread no, this is thread 2. Lets see howmany threads we will have to create to end this attempt to control of Internet.
3. ### James RJust this guy, you know?Staff Member
Messages:
36,484
Anomalous has been banned for 14 days for spamming links to this thread across multiple threads. This is at least his second offence.
Next time, the ban will be permanent.
5. ### LightRegistered Senior Member
Messages:
2,258
Yea!!!!!! I can hardly wait two weeks for his next offense!
7. ### DiamondHeartsRegistered Senior Member
Messages:
2,557
Can anyone tell me why the old thread was closed? James?
8. ### Wounded Iraqi MuslimBannedBanned
Messages:
11
Becuase SCiForums is A DICTATORSHIP BASED FORUM.
9. ### Wounded Iraqi MuslimBannedBanned
Messages:
11
HEY JAMES R, this is not a private Forum, this public forum, hence Public should decide not U,
BLOODY JAMES R
10. ### James RJust this guy, you know?Staff Member
Messages:
36,484
Diamondhearts:
Actually, I can't remember closing that thread. Maybe it wasn't me. Or, maybe it was and I can't remember why. Probably it was due to the squabbling going on.
Anyway, to give it the benefit of the doubt, I've left the current thread open, despite Anomalous's spamming.
11. ### James RJust this guy, you know?Staff Member
Messages:
36,484
Anomalous/Wounded Iraqi Muslim:
Yes, it is. As are most forums on the internet, I think you'll find. Site owners and moderators have a right to moderate content.
That has nothing to do with it. Break the forum rules, and your thread might be closed. Post racist comments, or obscenities, or other offensive material, which you agreed not to post when you signed up, and your thread might be closed.
But in your specific case, you decided you would spam the forums with posts. In fact, see the first post of the current thread. Even you must be bright enough to know that spamming will not be tolerated. I am not going to chase you around cleaning up your mess; you're not worth the effort.
Also, you may not be aware, but there is a new policy in place that people who come back as sock puppets during a temporary ban will, from now on, have their original accounts permanently banned.
Last edited: Mar 15, 2006
12. ### AsguardKiss my dark sideValued Senior Member
Messages:
23,049
no james it was me
13. ### James RJust this guy, you know?Staff Member
Messages:
36,484
Anomalous:
I have just reviewed the posts you made under the name "Wounded Iraqi Muslim".
That name is obviously offensive and misleading, since you are neither wounded, Iraqi or Muslim. And you have used it almost exclusively to make fun of the religion of Islam.
But, more to the point, you were banned yesterday for spamming more than 20 threads with a link to this current thread. I am sure it would have been many more, had I not caught you at it and stopped you.
A temporary ban aims for the member who has infringed the rules to go away from the forum for a while, take some time out and think about their actions. It is then hoped that they will return with a modified attitude.
Immediately following your temporary banning, what did you do? You chose to create a sock puppet and immediately post another 66 posts, practically all of them insulting either other users or a religion.
You have had ample warnings by personal messaging and on the forum itself. You have previously been banned for a period of 7 days. Even then, you returned with sock puppets during your ban.
Since you display no ability to learn, and no inclination to abide by the rules of this forum, you are no longer wanted here.
I hope you will learn from this experience and apply it constructively in your activities on other internet forums.
14. ### Cottontop3000Death BeckonedRegistered Senior Member
Messages:
2,959
Thank you, James R.
15. ### spidergoatLiddle' Dick TaterValued Senior Member
Messages:
53,966
Do we still update the ban list? I don't wanna be the last one since 2005.
16. ### Michael歌舞伎Valued Senior Member
Messages:
20,285
Well anyway, India’s nuclear weapons program is supported because India stands a good chance of peacefully, and willingly, integrating into the international institutions that the USA/Europe/Japan have setup and implicated since WWII - to ensure their mutual power and influence. And has a LOT to OFFER the international community in return.
At present Russia and China have both forgone trying to create a parallel system and are also looking to join and exploit the current system of international power brokerage. Think of them when you think of Iran.
As to Pakistan, it does have a growing economy based around labor, but they can in no way compete with India and China both terms of raw labor and soon to be educated populous. Pakistan is also stuck with the very old-fashioned idea that Religious Law should be used to rule the people. While that system had a place 5000 years ago it’s been superseded in countries with educated populous. That Pakistan STILL uses this antiquated system is a telltale sign that Pakistan isn’t setting itself up for longer-term success. As such, Pakistan is not given the amount of attention due India. The best I can see Pakistan doing is becoming the labor pool for the Atheistic Chinese in the future.
Back to Iran. Iran gets some attention because it is an OIL resource rich country (well since they plundered the Kurds for their oil resources in the North East). Similar to the communists of the last century they are proposing a new international system. And while they would love that to include Islam they will settle with changing some of the petrol-dollars into Euros. Well expect a negative response by America.
THAT has nothing to do with the Religion Islam. For all the USA could care Iran could be communist or spaghetti-bolognaise-ish!
And a comment on my last post in the locked thread is fine too
Michael
PS: Sorry if any feelings were hurt.
PSS: wounded Iraqi; come on stop ruining this forum. We can all take it and give it, but youre are really getting on my nerves with those sorts of satirical posts.
If you have a point make it.
If not then remember: Sarcasm is the lowest form of wit!
17. ### DiamondHeartsRegistered Senior Member
Messages:
2,557
My ancestors converted to Islam of their own free will. As a matter of fact, we were persecuted by Hindus and my family which was a royal family of Rajput and lost their belongings and we forced to leave our home because of hatred and prejudice against us. We were made from high caste to low caste, and treated horribly. This goes for many people of the region, not only Muslims but Dalits as well who were the oppressed native dark people and Indian Christians.
If you know anything about Pakistan, you would see the beauty and magnificience of our culture which is very, very similar to Indian culture because the modern culture of India was made by the Muslim Mongol kings like Sultan Babar, Aurangzeb, Shah Jahan, etc. The difference is only our love for our religion and the mighty principles of Islam. Hence, the abloishment of Indian caste system and idol worship in our culture. Hazrat Muhammad bin Qasim conquered only Sindh, which he had a dispute with the Raja for its sea pirates who robbed Muslim boats.
You would be right if infact you believed that Islam and Muslims are inherently violent, but you are completely wrong. Islam won by hearts and minds, not the swords. The fact that India is still majority Hindu is a testiment to the fair treatment and plurality of the Muslim kings. You talk of culture and civilization, it was the Muslims who united the disunited Indian subcontinent, Muslims who built the great mosques and shrines like the Taj Mahal. What more do you say that this a proof of culture?
You misunderstood completely. I meant the Muslim sultans were the ones to unify all of India into one body and bring all the cultures together. Muslims are the first to use the word Hindu to refer to the people as one religion, they infact thought of themselves as having different religions prior to this. The Pakistani culture is based highly on the enlightened age brought by the Muslim sultanates, it is always refered in text books as the golden age of India.
You misunderstood, I didn't say that we are strong to show any superiority to anyone. I meant to reply to the post of one of the posters who said Pakistanis are disunited. Nuclear weapons are bad, but if India got them, we had to as well, otherwise India would have destroyed Pakistan by now. And then you would witness a real massacre and genocide like that in Indian regions of Gujarat and Kashmir on a population of 160 million in Pakistan.
The Japanese were forced to surrender on the demands of the Americans. This is hardly fair, Japan has no real relevance in today's politics. Their weakness and servitude is a direct result of defeat by America.
The Buddhist statues issue was not about insulting Buddhism. It was about the Scandinavian foreignors in Afghanistan who funded millions of dollars to rebuild stone statues, while spending nothing on the populace. The Taliban asked them why didn't they help people and spend on poor people if they love Afghani culture rather than spend money fixing some stone statues that dont help anyone. Their point was valid, but when the foreignors disagreed, the Taliban exploded the statues in response to show the importance of people over states. Also many Buddhists weren't offended because Buddha said himself not to make an image of him after he dies. We had this talk before. Many Muslims respect Buddha and believe he was a righteous man.
Truth is hardly that clear, especially coming from a Westerner. Most Indians themselves will agree that Pakistanis share an almost mirror culture to them. What are you trying to prove?
Pakistan holds many ancient monuments. The city of Lahore is 3000 years old. Harappa and Mohen Jaro are in Pakistan. We have a culture and a heritage, we just happened to embrace Islam. Britian also made India, as well as Pakistan, but only after both sides demanded their rights. The religion of Adam and Eve is the oldest religion. That religion Muslims believe it to be Islam.
Pakistan built nuclear weapons from their own efforts, we have a highly structured army, we are the gateway to the MiddleEast, Central Asia, and South Asia. We are a rival state to India prevent the defeat of the freedom movements of Tamils, and Kashmiris. We are the only nation which prevents Indian dominace and control of the other small nations in South Asia who are at odds with India like Sri Lanka and Bangladesh. We are a nation which protects and patrons the rights of mistreated minorities in India like Muslims, Christians, and Dalits. Pakistan is a poor country, but we are honest and loving. We are a 160 million people nation which withstood attack from a nation ten times our size and still retained our strength. We have great diplomatic ties with China, Iran, the Central Asian nations, Turkey, African Muslim nations, Arab nations, and the Malayu nations. We sent one of the biggest peacekeeping missions to Bosnia and Kosovo, and fought hard for their rights. We are the second most populous Muslim nation, the fifth largest in populationwise in the world. Pakistan is strategic to world security and checking the balance of Asia to bring down the probability of war. Pakistan was the first main nation which brought America and China together since the Communist revolution. Pakistan was partly responsible for the defeat of the USSR. We want to agree on a joint signing of the NPT treaty, reducing of arms with India, and also more peace initiatives between India and Pakistan, as well as a UN organized plebiscite vote in Kashmir to end tensions between the two countries. India however does not agree with us, nor does it follow up our goodwill gestures.
Pakistan is one of the most important and vital nations in the world which has always been a uniting factor int he world.
18. ### Michael歌舞伎Valued Senior Member
Messages:
20,285
Are you trying to tell me that the Arab and then Persian/Turk Muslims didn’t decimate the Hindus in over 600 years of war?
http://india.indymedia.org/en/2003/02/3066.shtml
I wonder why this historian says brutality?
Lets get the actual words recorded by Muslim Historians shall we?
Mahmud Khalji of Malwa (1436-69 AD) also destroyed Hindu temples and built mosques on their sites. He heaped many more insults on the Hindus. Ilyas Shah of Bengal (1339-1379 AD) invaded Nepal and destroyed the temple of Svayambhunath at Kathmandu. He also invaded Orissa, demolished many temples, and plundered many places. The Bahmani sultans of Gulbarga and Bidar considered it meritorious to kill a hundred thousand Hindu men, women, and children every year. They demolished and desecrated temples all over South India.
The climax came during the invasion of Timur in 1399 AD. He starts by quoting the Quran in his Tuzk-i-Timuri:
To start with he stormed the fort of Kator on the border of Kashmir. He ordered his soldiers "to kill all the men, to make prisoners of women and children, and to plunder and lay waste all their property". Next, he "directed towers to be built on the mountain of the skulls of those obstinate unbelievers". Soon after, he laid siege to Bhatnir defended by Rajputs. They surrendered after some fight, and were pardoned. But Islam did not bind Timur to keep his word given to the "unbelievers". His Tuzk-i-Timuri records:
At Sarsuti, the next city to be sacked, "all these infidel Hindus were slain, their wives and children were made prisoners and their property and goods became the spoil of the victors". Timur was now moving through (modern day) Haryana, the land of the Jats. He directed his soldiers to "plunder and destroy and kill every one whom they met". And so the soldiers "plundered every village, killed the men, and carried a number of Hindu prisoners, both male and female".
Loni which was captured before he arrived at Delhi was predominantly a Hindu town. But some Muslim inhabitants were also taken prisoners. Timur ordered that "the Musulman prisoners should be separated and saved, but the infidels should all be dispatched to hell with the proselytizing sword".
By now Timur had captured 100,000 Hindus. As he prepared for battle against the Tughlaq army after crossing the Yamuna, his Amirs advised him "that on the great day of battle these 100,000 prisoners could not be left with the baggage, and that it would be entirely opposed to the rules of war to set these idolators and enemies of Islam at liberty". Therefore, "no other course remained but that of making them all food for the sword".
Tuzk-i-Timuri continues:
The Tughlaq army was defeated in the battle that ensued next day. Timur entered Delhi and learnt that a "great number of Hindus with their wives and children, and goods and valuables, had come into the city from all the country round".
He directed his soldiers to seize these Hindus and their property. Tuzk-i-Timuri concludes:
So this is how We Muslims Ruled India for 1000 years came about huh? Yeah, just great. The truth is DiamondHearts, no conquered peoples are happy to convert their religions. Sure, maybe people worshiping trees and walking around square rocks, but certainly not people of 5000 year old civilizations. You are Muslim because your forefathers lost in a series of brutal wars fought for centuries. If not you’d be Buddhist or Hindu.
When you mentioned that your forefathers were lower on the caste system, I thought this was ironic, as that caste system was set up as part of religion and in many Islamic countries non-Muslims are of a lower, if not outlawed, status! Also look at the life the leaders in Islam lead! Look at the hundreds of thousands of poor Muslims in Pakistan! If you dont get it then I dont see how you ever will. I suppose that is the power of religion.
I’m not sure if you can even appreciate that? The irony?
Maybe not?
Michael
Here you go.
Tell me DiamondHearts, from all of your knowledge and insight you have gained by being a Muslim and reading the Quran; What is one enlightened idea that you have gained by being a Muslim?
Some insight into humanity, that hasn’t been expressed a thousand years before by a thousand religions before?
Anything unique?
Or are you just parroting what your parents taught you to think because that’s what their parents taught them to think because some Arabs realized religion can be used to conquer India for its resources and riches?
Messages:
2,883
20. ### DiamondHeartsRegistered Senior Member
Messages:
2,557
This is an INDIAN website, what are you talking about. This is pure propaganda against Pakistan and the Islamic rule of the mughals.
Your entire list of quotation is completely fradulent. This was taken from Hindu extremist propaganda spreading to weaken the influence and honor of Muslim rule of India. Did you even bother reading the comments on these articles?
The quotes you presented were copied from the following Hindu websites:
http://www.hindunet.org/
www.bharatvani.org/
www.hindu-religion.net/
www.audarya-fellowship.com/
You are a fool who knows nothing of the history of India or the history of Pakistan. You have no right to tell me about my history.
Have you even visited any Muslim country? You are living on medievel notions which have no base in reality?
Yes, Pakistan is poor, and so is India. The two countries have a large gap between rich and poor and both countries have high illeteracy rates and make less than a dollar a day for each individual.
This has nothing to due with religion, infact when the Islamic sultanates of the Mughals ruled India, it was an age of learning, wealth, and prosperity for the people of India. It was the colonial invasion and modern constant inference in Muslim lands and third world countries which interferes with the development of these places.
I have gained knoweldge of Allah (swt) and have experienced the truth and justice of my religion and culture. I have seen the superiority of Islamic values over western values. I have seen the perfect system for living life, and have obtained peace in my subservience to Allah (swt). By learning the Seerah of the Holy Prophet Muhammad (s), I have read the beauty and magnificience of the most perfect man to have ever lived and a model to all people who desire to learn righteousness.
This same message was preached by all prophets, some of which are, Hazrat Adam, Noah, Abraham, Ismail, Ishaq, David, Solomon, Moses, Jesus and many more (peace be upon them both).
Allah (swt) sent his final prophet Muhammad (s), descendant of Ismail the son of Abraham (peace to them both), to preach the universal message to all of humanity rather than a select group such as the Bani Isra'il (Children of Israel) like the prophets which were the descendants of Ishaq. Islam is the religion which Allah (swt) has ordained as the true religion, which was preached to all prophets and has ordained to protect this religion from change. All prophets had a miracle and the miracle of the Prophet Muhammad (s) is the blessed Quran which is the ultimate guidance for all humanity.
Are you telling me what you have read from some bigots or preachers?
I have studied on my own and have extensive knowledge on this subject. It's my history and I have read on it, so you cannot present fake quotes and extremist propaganda without me realizing what is false and what is real.
Allah guide you. Peace.
Messages:
2,557
22. ### Cottontop3000Death BeckonedRegistered Senior Member
Messages:
2,959
Put him on ignore, like the rest of us have.
23. ### Michael歌舞伎Valued Senior Member
Messages:
20,285
I bet this decorated square rock could have paid for a few bowls of rice in afganistan? Good thing the Taliban couldn’t get to KSA!!!
Incidentally, square rocks were used to represent Gods all over Arabia for hundreds of years before Islam. Which is just ironic I guess and a little interesting…huh?
Anyway, I was still curious as to Islamic enlightenment? Being a Muslim and having been blessed with accesses to the Qur`an - the very words of Allah, surely you have gained some new insight into the human condition? Some revelation that would have amazed the ancient Hindus and Buddhists ....and enlightened them?
A personal revelation of sorts?
Something?
……..anything?
Michael | 2022-01-18 00:24:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17491388320922852, "perplexity": 5630.150530930963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300658.84/warc/CC-MAIN-20220118002226-20220118032226-00080.warc.gz"} |
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.em/1276784790 | ### The Growth of CM Periods over False Tate Extensions
Daniel Delbourgo and Thomas Ward
Source: Experiment. Math. Volume 19, Issue 2 (2010), 195-210.
#### Abstract
We prove weak forms of Kato's ${\rm K}_1$-congruences for elliptic curves with complex multiplication, subject to two technical hypotheses. We next use "Magma" to calculate the $\mu$-invariant measuring the discrepancy between the "motivic'' and "automorphic'' {$p$-adic} $L$-functions. Via the two-variable main conjecture, one can then estimate growth in this $\mu$-invariant using arithmetic of the $\Z_p^2$-extension.
First Page:
Primary Subjects: 11R23
Secondary Subjects: 11G40, 19B28 | 2013-06-19 16:39:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18881267309188843, "perplexity": 4841.049248228848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708882773/warc/CC-MAIN-20130516125442-00039-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://par.nsf.gov/biblio/10320816-fractionalized-conductivity-emergent-self-duality-near-topological-phase-transitions | Fractionalized conductivity and emergent self-duality near topological phase transitions
Abstract The experimental discovery of the fractional Hall conductivity in two-dimensional electron gases revealed new types of quantum particles, called anyons, which are beyond bosons and fermions as they possess fractionalized exchange statistics. These anyons are usually studied deep inside an insulating topological phase. It is natural to ask whether such fractionalization can be detected more broadly, say near a phase transition from a conventional to a topological phase. To answer this question, we study a strongly correlated quantum phase transition between a topological state, called a $${{\mathbb{Z}}}_{2}$$ Z 2 quantum spin liquid, and a conventional superfluid using large-scale quantum Monte Carlo simulations. Our results show that the universal conductivity at the quantum critical point becomes a simple fraction of its value at the conventional insulator-to-superfluid transition. Moreover, a dynamically self-dual optical conductivity emerges at low temperatures above the transition point, indicating the presence of the elusive vison particles. Our study opens the door for the experimental detection of anyons in a broader regime, and has ramifications in the study of quantum materials, programmable quantum simulators, and ultra-cold atomic gases. In the latter case, we discuss the feasibility of measurements in optical lattices using current techniques.
Authors:
; ; ;
Award ID(s):
Publication Date:
NSF-PAR ID:
10320816
Journal Name:
Nature Communications
Volume:
12
Issue:
1
ISSN:
2041-1723 | 2022-12-07 15:58:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6372236609458923, "perplexity": 1063.0501770841024}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711200.6/warc/CC-MAIN-20221207153419-20221207183419-00134.warc.gz"} |
https://www.esaral.com/q/a-constant-magnetic-field-50717 | # A constant magnetic field
Question:
A constant magnetic field of $1 \mathrm{~T}$ is applied in the $x>0$ region. A metallic circular ring of radius $1 \mathrm{~m}$ is moving with a constant velocity of $1 \mathrm{~m} / \mathrm{s}$ along the $\mathrm{x}$-axis. At $\mathrm{t}=0 \mathrm{~s}$, the centre of $\mathrm{O}$ of the ring is at $\mathrm{x}=-1 \mathrm{~m} .$ What will be the value of the induced emf in the ring at $\mathrm{t}=1 \mathrm{~s}$ ? (Assume the velocity of the ring does not change.)
1. $1 \mathrm{~V}$
2. $2 \pi V$
3. $2 \mathrm{~V}$
4. $0 \mathrm{~V}$
Correct Option: , 3
Solution:
$e m f=B L V$
$=1 \cdot(2 \mathrm{R}) \cdot 1$
$=2 \mathrm{~V}$ | 2023-02-06 00:22:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7667904496192932, "perplexity": 200.29505305122757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500294.64/warc/CC-MAIN-20230205224620-20230206014620-00746.warc.gz"} |
https://studydaddy.com/question/bus-375-week-4-dq-3-ethics-training | QUESTION
BUS 375 Week 4 DQ 3 Ethics Training
This paperwork of BUS 375 Week 4 DQ 3 Ethics Training contains:
Read at least one article on subject of "ethics" and "corporate training" from the University library and then answer the following: Can ethics be taught in a corporate training environment? If so, discuss how you would conduct such training? If not, discuss why and how employees would come to understand the corporate ethical values and act appropriately. Respond to at least two of your classmates' postings.
• @
• 2 orders completed
Tutor has posted answer for $5.19. See answer's preview$5.19 | 2018-04-20 05:08:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19700315594673157, "perplexity": 7784.735896467246}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937114.2/warc/CC-MAIN-20180420042340-20180420062340-00038.warc.gz"} |
https://mathinfocusanswerkey.com/math-in-focus-grade-8-cumulative-review-chapters-3-4-answer-key/ | # Math in Focus Grade 8 Cumulative Review Chapters 3-4 Answer Key
Go through the Math in Focus Grade 8 Workbook Answer Key Cumulative Review Chapters 3-4 to finish your assignments.
## Math in Focus Grade 8 Course 3 A Cumulative Review Chapters 3-4 Answer Key
Concepts and Skills
Solve each equation. Show your work. (Lesson 3.1)
Question 1.
0.2(x + 2) – 2 = 0.4
0.2x + 0.4 – 2 = 0.4
0.2x = 2
x = 2/0.2
x = 10
Question 2.
2(x – 5) – 3(3 – x) = $$\frac{1}{2}$$ (x – 2)
2x – 10 – 9 + 3x = $$\frac{1}{2}$$x – 1
5x – 18 = $$\frac{1}{2}$$x
4 $$\frac{1}{2}$$x = 18
4x = 36
x = 36/4 = 9
Question 3.
$$\frac{x}{3}$$ + $$\frac{3+x}{6}$$ = 3
$$\frac{x}{3}$$ + $$\frac{3+x}{6}$$ = 3
2x + 3 + x = 18
3x = 18 – 3
3x = 15
x = 15/3
x = 5
Question 4.
$$\frac{2(x+3)}{5}$$ – $$\frac{x-1}{2}$$ = 2
$$\frac{2(x+3)}{5}$$ – $$\frac{x-1}{2}$$ = 2
$$\frac{2x+6}{5}$$ – $$\frac{x-1}{2}$$ = 2
4x + 12 – 5x + 5 = 10
-x + 17 = 10
-x = 10 – 17
-x = -7
x = 7
Express each decimal as a fraction, without the use of calculator. (Lesson 3.1)
Question 5.
$$0 . \overline{5}$$
$$\frac{5}{9}$$
Question 6.
$$0 . \overline{8}$$
Answer: $$\frac{8}{9}$$
Question 7.
$$0.2 \overline{7}$$
Answer: $$\frac{5}{18}$$
Question 8.
$$0 . \overline{09}$$
Answer: $$\frac{90}{999}$$
Tell whether each equation has one solution, no solution, or an infinite number of solutions. Show your work. (Lesson 3.2)
Question 9.
3x – 2 = -3($$\frac{2}{3}$$ – x)
3x – 2 = -3($$\frac{2}{3}$$ – x)
3x – = -2 + 3x
3x – 3x + 2 = 0
2
Infinite solutions
Question 10.
3x + 6 = -2($$\frac{3}{2}$$ – x)
3x + 6 = -2($$\frac{3}{2}$$ – x)
3x + 6 = -3 + 2x
3x – 2x = -3 – 6
x = -9
One solution
Question 11.
5(6a – 6) + 40 = 3(10a – 7) + 31
5(6a – 6) + 40 = 3(10a – 7) + 31
30a – 30 + 40 = 30a – 21 + 31
Infinte solution
Question 12.
3x + 7 = -8($$\frac{3}{4}$$ – x)
3x + 7 = -8($$\frac{3}{4}$$ – x)
3x + 7 = -6 + 8x
3x – 8x = -6 – 7
-5x = -13
x = 13/5
One solution
Question 13.
$$\frac{1}{4}$$(2x – 1) = $$\frac{1}{2}$$x + $$\frac{3}{8}$$
Given,
$$\frac{1}{4}$$(2x – 1) = $$\frac{1}{2}$$x + $$\frac{3}{8}$$
$$\frac{1}{4}$$x – $$\frac{1}{4}$$ = $$\frac{1}{2}$$x + $$\frac{3}{8}$$
$$\frac{1}{4}$$x – $$\frac{1}{2}$$x = $$\frac{1}{4}$$ + $$\frac{3}{8}$$
–$$\frac{1}{4}$$x = $$\frac{5}{8}$$
x = –$$\frac{5}{2}$$
No solution
Question 14.
$$\frac{1}{8}$$x + 6 = $$\frac{1}{16}$$(2x – 96)
Given,
$$\frac{1}{8}$$x + 6 = $$\frac{1}{16}$$(2x – 96)
$$\frac{1}{8}$$x + 6 = $$\frac{1}{8}$$x – 6
No solution
Find the value of y when x = 4. (Lesson 3.3)
Question 15.
2x – 1 = $$\frac{1}{2}$$ + y
2x – 1 = $$\frac{1}{2}$$ + y
x = 4
2(4) – 1 = $$\frac{1}{2}$$ + y
7 = $$\frac{1}{2}$$ + y
y = 7 – $$\frac{1}{2}$$
y = 6 $$\frac{1}{2}$$ or 6.5
Question 16.
$$\frac{1}{4}$$(2y – 1) = 0.6 + $$\frac{5x}{8}$$
Given,
$$\frac{1}{4}$$(2y – 1) = 0.6 + $$\frac{5x}{8}$$
X = 4
$$\frac{1}{4}$$(2y – 1) = 0.6 + $$\frac{5(4)}{8}$$
$$\frac{1}{4}$$(2y – 1) = 0.6 + $$\frac{20}{8}$$
0.25 (2y – 1) = 0.6 + 2.5
0.5y – 0.25 = 3.1
0.5y = 3.1 + 0.25
0.5y = 3.35
y = 6.7
Express y in terms of x. Find the value of y when x = 4. (Lesson 3.4)
Question 17.
6(3x + y) = 3
Given equation
6(3x + y) = 3
18x + 6y = 3
Substitute the value of x in the given equation
x = 4
18(4) + 6y = 3
72 + 6y = 3
24 + 2y = 1
2y = -23
y = -23/2
y = -11 1/2
Question 18.
$$\frac{2 x-1}{4}$$ = 3y
Given,
$$\frac{2 x-1}{4}$$ = 3y
Substitute the value of x in the given equation
3y = $$\frac{2 x-1}{4}$$
3y = (2(4) – 1)4
3y = 7/4
y = 7/12
Express x in terms of y. Find the value of x when y = -2. (Lesson 3.4)
Question 19.
($$\frac{2x-y}{5}$$) = 9
Given,
($$\frac{2x-y}{5}$$) = 9
2x – y = 9 × 5
2x – y = 45
Substitute the value of y in the given equation
y = -2
2x – (-2) = 45
2x + 2 = 45
2x = 45 – 2
2x = 43
x = 43/2
x = 21.5
Question 20.
0.75(x + y) = 12
Given,
0.75(x + y) = 12
0.75x + 0.75y = 12
Substitute the value of y in the given equation
y = -2
0.75x + 0.75 (-2) = 12
0.75x – 1.5 = 12
0.75x = 12 – 1.5
0.75x = 10.5
x = 14
Find the slope of the line passing through each pair of points. (Lesson 4.1)
Question 21.
A (1, 2), B (4, 8)
The line passes through the points (1, 2) and (4, 8).
Slope m = $$\frac{8-2}{4-1}$$
= $$\frac{6}{3}$$
= 2
Thus the slope, m = 2
Question 22.
C(1, 4), D(2, 7)
The line passes through the points (1, 4) and (2, 7).
Slope m = $$\frac{7-4}{2-1}$$
= $$\frac{3}{1}$$
= 3
Thus the slope, m = 3
Question 23.
E (0, 0), F (-7, 7)
The line passes through the points (0, 0) and (-7, 7).
Slope m = $$\frac{7-0}{-7-0}$$
= $$\frac{7}{-7}$$
= -1
Thus the slope, m =-1
Question 24.
G (-3, 0), F (0, -6)
The line passes through the points (-3, 0) and (0, -6).
Slope m = $$\frac{-6-0}{0+3}$$
= $$\frac{-6}{3}$$
= -2
Thus the slope, m = -2
Identify the y-intercept. Then calculate the slope using the points indicated. (Lessons 4.1, 4.2)
Question 25.
The line passes through the points (0, 10) and (2.5, 30).
Slope m = $$\frac{30-10}{2.5-0}$$
= $$\frac{20}{2.5}$$
= 8
Thus the slope, m = 8
y-intercept = 10
Question 26.
The line passes through the points (0, 3) and (0, 14).
m = 14/3
y-intercept = 14
For each equation, find the slope and the y-intercept of the graph of the equation. (Lesson 4.3)
Question 27.
y = 7x + 1
Given,
y = 7x + 1
slope, m = 7
y-intercept, b = 1
Question 28.
y = -2x – 5
Given,
y = -2x – 5
slope, m = -2
y-intercept, b = -5
Question 29.
2y = 4x + 6
Given,
2y = 4x + 6
y = 2x + 3
slope, m = 2
y-intercept, b = 3
Question 30.
4y + 3x = 8
Given,
4y + 3x = 8
4y = -3x + 8
y = 3/4 x + 2
slope, m = 3/4
y-intercept, b = 2
Use the given slope and y-intercept of a line to write an equation in slope-intercept form. (Lesson 4.3)
Question 31.
Slope, m = 3
y-intercept, b = 2
Slope, m = 3
y-intercept, b = 2
The equation in slope-intercept is y = mx + b
y = 3x + 2
Question 32.
Slope, m = -1
y-intercept, b = 4
Slope, m = -1
y-intercept, b = 4
The equation in slope-intercept is y = mx + b
y = -1x + 4
Question 33.
Slope, m = 5
y-intercept, b = -2
Slope, m = 5
y-intercept, b = -2
The equation in slope-intercept is y = mx + b
y = 5x – 2
Question 34.
Slope, m = –$$\frac{3}{2}$$
y-intercept, b = -5
Slope, m = –$$\frac{3}{2}$$
y-intercept, b = -5
The equation in slope-intercept is y = mx + b
y = –$$\frac{3}{2}$$x – 5
Solve. Show your work. (Lesson 4.3)
Question 35.
Write an equation of the line parallel to 2y = 4x + 3 that has a y-intercept of 4.
Answer: y = 2x + 4
Question 36.
A line has slope -4 and passes through the point ($$\frac{3}{4}$$, 3). Write an equation of the line.
y = -4x + 6
Question 37.
Write an equation of the line that passes through the point (2, 3) and is parallel to 3y + 2x = 7.
y = –$$\frac{2}{3}$$x + $$\frac{13}{3}$$
Use graph paper. Graph each linear equation. Use 1 grid scale to represent 1 unit on both axes for the interval -5 to 5. (Lesson 4.4)
Question 38.
y = -2x + 8
Question 39.
y = -2 – 3x
Question 40.
y = $$\frac{1}{2}$$x – 3
Solve. Show your work. (Lesson 4.5)
Question 41.
Bobby and Chloe each have a bank account. The balance, y dollars, in each account for x weeks, is shown in the graph.
a) Who saved money and who withdrew money during the 10 weeks?
Answer: Bobby’s savings increased, so he saved money. Chloe’s savings decreased, so she withdrew money.
b) Whose balance changed more over 10 weeks?
Bobby changed by $100, Chloe’s changed by -$50; Bobby’s changed more.
c) Explain what information the coordinates of P give about the situation.
After 5 weeks, Bobby and Chloe have the same amount of money, which is $75. Problem Solving Solve. Show your work. Question 42. The diagram shows a sheet of metal of width y inches. It is bent into a U-shaped gutter that is used to channel rain from a roof. The horizontal section of the gutter shown on the right is 10 inches wide and the heights are in the ratio of 2 : 3. (Chapter 3). a) Let x represents the longer height of the gutter, in inches. Write a linear equation for the width of the sheet of metal, y inches-, in terms of the longer height of the gutter, x inches. Answer: y = 5/3 x + 10 b) The width of the sheet of metal is 30 inches. Calculate the longer height of the gutter. Answer: 12 in Question 43. In a grocery store, each apple costs$0.50, each orange costs $0.40, and each pear costs$0.30. Mrs. Fortney bought y apples, three times as many oranges as apples, and 7 fewer pears than apples. $he spent a total of$19.90 on the fruits. (Chapter 3)
a) Write a linear equation to find the amount spent on each fruit.
Answer: 50y + 120y + 30y – 210 = 1990
200y = 2200
b) Find the total cost spent on apples and pears.
Question 44.
Jack traveled from his home to Denver at an average speed of x miles per hour. He arrived in $$\frac{3}{4}$$ hour and took a 15-minute break. From Denver, he traveled at an average speed of (x + 2) miles per hour and reached his grandmother’s place in 1.5 hours. (Chapter 3)
a) Write a linear equation for the total distance traveled, D miles, in terms of average speed, x miles per hour.
D = $$\frac{3}{4}$$x + $$\frac{3}{2}$$
D = $$\frac{3}{4}$$x + 1.5
D = $$\frac{9}{4}$$x + 3
b) The total distance traveled for the whole journey was 120 miles. Find the average speed for both parts of the journey.
First part: 52 mi/h
Second part: 54 mi/h
Use graph paper. Solve.
Question 45.
Xavier walks into an elevator in the basement of a building. Its control panel displays “0” for the floor number. As Xavier goes up, the numbers increase one by one on the display. The table shows the floor numbers and the distance from ground level. (Chapter 4)
a) Graph the relationship between the distance of the elevator from ground level at different floor numbers. Use 1 grid square to represent 1 unit on the horizontal axis for the x interval 0 to 4, and 1 grid square for 10 units on the vertical axis for the y interval -10 to 30.
b) Find the vertical intercept of the graph and explain what information it gives about the situation. | 2022-05-18 02:26:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37907296419143677, "perplexity": 1788.0261565337814}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521041.0/warc/CC-MAIN-20220518021247-20220518051247-00633.warc.gz"} |
https://www.lmfdb.org/L/rational/2/5550/1.1/c1-0 | ## Results (44 matches)
Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim $\epsilon$ $r$ First zero Origin
2-5550-1.1-c1-0-101 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.60469 Elliptic curve 5550.be Modular form 5550.2.a.be Modular form 5550.2.a.be.1.1 2-5550-1.1-c1-0-102 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.61284$ Elliptic curve 5550.bc Modular form 5550.2.a.bc Modular form 5550.2.a.bc.1.1
2-5550-1.1-c1-0-105 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.64878 Elliptic curve 5550.bf Modular form 5550.2.a.bf Modular form 5550.2.a.bf.1.1 2-5550-1.1-c1-0-106 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.66700$ Elliptic curve 5550.bi Modular form 5550.2.a.bi Modular form 5550.2.a.bi.1.1
2-5550-1.1-c1-0-107 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.69898 Elliptic curve 5550.bj Modular form 5550.2.a.bj Modular form 5550.2.a.bj.1.1 2-5550-1.1-c1-0-108 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.72612$ Elliptic curve 5550.bl Modular form 5550.2.a.bl Modular form 5550.2.a.bl.1.1
2-5550-1.1-c1-0-109 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.73427 Elliptic curve 5550.bh Modular form 5550.2.a.bh Modular form 5550.2.a.bh.1.1 2-5550-1.1-c1-0-112 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.84959$ Elliptic curve 5550.bm Modular form 5550.2.a.bm Modular form 5550.2.a.bm.1.1
2-5550-1.1-c1-0-113 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 2 1.85101 Elliptic curve 5550.a Modular form 5550.2.a.a Modular form 5550.2.a.a.1.1 2-5550-1.1-c1-0-14 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $1$ $0$ $0.602657$ Elliptic curve 5550.m Modular form 5550.2.a.m Modular form 5550.2.a.m.1.1
2-5550-1.1-c1-0-15 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 0 0.608888 Elliptic curve 5550.f Modular form 5550.2.a.f Modular form 5550.2.a.f.1.1 2-5550-1.1-c1-0-16 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $1$ $0$ $0.624104$ Elliptic curve 5550.j Modular form 5550.2.a.j Modular form 5550.2.a.j.1.1
2-5550-1.1-c1-0-21 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 0 0.718969 Elliptic curve 5550.k Modular form 5550.2.a.k Modular form 5550.2.a.k.1.1 2-5550-1.1-c1-0-32 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $1$ $0$ $0.853394$ Elliptic curve 5550.x Modular form 5550.2.a.x Modular form 5550.2.a.x.1.1
2-5550-1.1-c1-0-33 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 0 0.864934 Elliptic curve 5550.bg Modular form 5550.2.a.bg Modular form 5550.2.a.bg.1.1 2-5550-1.1-c1-0-43 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $1$ $0$ $0.929357$ Elliptic curve 5550.r Modular form 5550.2.a.r Modular form 5550.2.a.r.1.1
2-5550-1.1-c1-0-45 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 0 0.945378 Elliptic curve 5550.p Modular form 5550.2.a.p Modular form 5550.2.a.p.1.1 2-5550-1.1-c1-0-47 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $1$ $0$ $0.948181$ Elliptic curve 5550.bk Modular form 5550.2.a.bk Modular form 5550.2.a.bk.1.1
2-5550-1.1-c1-0-49 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 0.972238 Elliptic curve 5550.b Modular form 5550.2.a.b Modular form 5550.2.a.b.1.1 2-5550-1.1-c1-0-51 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $1$ $0$ $1.00485$ Elliptic curve 5550.l Modular form 5550.2.a.l Modular form 5550.2.a.l.1.1
2-5550-1.1-c1-0-56 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 0 1.03747 Elliptic curve 5550.ba Modular form 5550.2.a.ba Modular form 5550.2.a.ba.1.1 2-5550-1.1-c1-0-58 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.05567$ Elliptic curve 5550.c Modular form 5550.2.a.c Modular form 5550.2.a.c.1.1
2-5550-1.1-c1-0-61 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 0 1.08935 Elliptic curve 5550.bp Modular form 5550.2.a.bp Modular form 5550.2.a.bp.1.1 2-5550-1.1-c1-0-64 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $1$ $0$ $1.13957$ Elliptic curve 5550.s Modular form 5550.2.a.s Modular form 5550.2.a.s.1.1
2-5550-1.1-c1-0-65 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 0 1.14030 Elliptic curve 5550.bo Modular form 5550.2.a.bo Modular form 5550.2.a.bo.1.1 2-5550-1.1-c1-0-67 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $1$ $0$ $1.14892$ Elliptic curve 5550.bq Modular form 5550.2.a.bq Modular form 5550.2.a.bq.1.1
2-5550-1.1-c1-0-68 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.16758 Elliptic curve 5550.e Modular form 5550.2.a.e Modular form 5550.2.a.e.1.1 2-5550-1.1-c1-0-70 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $1$ $0$ $1.17258$ Elliptic curve 5550.bd Modular form 5550.2.a.bd Modular form 5550.2.a.bd.1.1
2-5550-1.1-c1-0-72 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 0 1.20634 Elliptic curve 5550.bn Modular form 5550.2.a.bn Modular form 5550.2.a.bn.1.1 2-5550-1.1-c1-0-73 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.21069$ Elliptic curve 5550.d Modular form 5550.2.a.d Modular form 5550.2.a.d.1.1
2-5550-1.1-c1-0-74 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.22827 Elliptic curve 5550.i Modular form 5550.2.a.i Modular form 5550.2.a.i.1.1 2-5550-1.1-c1-0-76 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.24320$ Elliptic curve 5550.g Modular form 5550.2.a.g Modular form 5550.2.a.g.1.1
2-5550-1.1-c1-0-79 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 1 0 1.28074 Elliptic curve 5550.br Modular form 5550.2.a.br Modular form 5550.2.a.br.1.1 2-5550-1.1-c1-0-80 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.28716$ Elliptic curve 5550.h Modular form 5550.2.a.h Modular form 5550.2.a.h.1.1
2-5550-1.1-c1-0-82 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.31460 Elliptic curve 5550.n Modular form 5550.2.a.n Modular form 5550.2.a.n.1.1 2-5550-1.1-c1-0-83 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.32252$ Elliptic curve 5550.v Modular form 5550.2.a.v Modular form 5550.2.a.v.1.1
2-5550-1.1-c1-0-84 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.33313 Elliptic curve 5550.o Modular form 5550.2.a.o Modular form 5550.2.a.o.1.1 2-5550-1.1-c1-0-86 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.35954$ Elliptic curve 5550.y Modular form 5550.2.a.y Modular form 5550.2.a.y.1.1
2-5550-1.1-c1-0-87 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.36049 Elliptic curve 5550.w Modular form 5550.2.a.w Modular form 5550.2.a.w.1.1 2-5550-1.1-c1-0-89 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.39406$ Elliptic curve 5550.q Modular form 5550.2.a.q Modular form 5550.2.a.q.1.1
2-5550-1.1-c1-0-96 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.48756 Elliptic curve 5550.t Modular form 5550.2.a.t Modular form 5550.2.a.t.1.1 2-5550-1.1-c1-0-97 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.53152$ Elliptic curve 5550.bb Modular form 5550.2.a.bb Modular form 5550.2.a.bb.1.1
2-5550-1.1-c1-0-98 $6.65$ $44.3$ $2$ $2 \cdot 3 \cdot 5^{2} \cdot 37$ 1.1 $$1.0 1 -1 1 1.54077 Elliptic curve 5550.u Modular form 5550.2.a.u Modular form 5550.2.a.u.1.1 2-5550-1.1-c1-0-99 6.65 44.3 2 2 \cdot 3 \cdot 5^{2} \cdot 37 1.1$$ $1.0$ $1$ $-1$ $1$ $1.54679$ Elliptic curve 5550.z Modular form 5550.2.a.z Modular form 5550.2.a.z.1.1 | 2021-09-16 19:01:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812163710594177, "perplexity": 1207.5775857102462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053717.37/warc/CC-MAIN-20210916174455-20210916204455-00658.warc.gz"} |
https://mathoverflow.net/questions/290573/around-algebraic-equivalence-of-cycles | # Around algebraic equivalence of cycles
Let $k$ be a finitely generated field, $X$ a smooth projective $k$-variety, $\ell$ a prime number, $\ell\in k^{\times}$, $r\ge 0$ an integer.
The Tate conjecture asserts surjectivity of the cycle class map:
$$c^r_{\ell}(X): Z_r(X)\otimes_{\mathbf{Z}}\mathbf{Q}_{\ell}\to H^{2r}_{\rm et}(X_{k^{\rm sep}},\mathbf{Q}_{\ell}(r))^{\text{Gal}(k^{\rm sep}/k)}$$
$c^r_{\ell}(X)$ factors through the group of $r$-cycles modulo algebraic equivalence, $\text{NS}^r(X)\otimes_{\mathbf{Z}}\mathbf{Q}_{\ell}$. We denote by $\text{ns}_{\ell}^r(X)$ the resulting cycle map.
• It would seem the Tate conjecture would then follow from surjectivity of $\text{ns}^r_{\ell}(X)$. Am I missing something up there? Chow groups of cycles modulo rational equivalence are much larger than Néron Severi groups, so this is saying Tate cycles are expected to all be images of cycles modulo algebraic equivalence. Is this correct?
• It is known (Thm. of the Base) that $\text{NS}^1(X)$ is finitely generated. Is this known for arbitrary $r\ge 0$?
• Is anything known about $\ker(\text{CH}^r(X)\twoheadrightarrow\text{NS}^r(X))$?
A summary of some of these results is given in §19.3 of Fulton's Intersection theory [Ful98]. To address for example the questions you ask:
• If $A \stackrel f\twoheadrightarrow B \stackrel g \to C$ are maps, then the images of $g$ and $gf$ agree. It doesn't matter that $B$ is 'smaller'.
• It is not true that $B^2(X) = \operatorname{CH}^2(X)/\!\sim_{\text{rat}}$ is finitely generated; in fact even $B^2(X) \otimes \mathbb Q$ need not be finite dimensional [Ful98, 19.3.3]; this is due to Clemens [Cle83]. The example is three-dimensional. It was not known at the time of writing of [Ful98] whether the torsion in $B^r(X)$ is finite; I don't know if this is known yet.
• The kernel of $\operatorname{CH}^r(X) \to B^r(X)$ can sometimes be given the structure of an abelian variety, but sometimes it's bigger than that [Ful98, 19.3.4]. Even for $r = \dim(X)$ (i.e. $0$-cycles), this group can be very big [Ful98, 19.3.5].
The notation $B$ for what you call $\operatorname{NS}$ seems to be standard, but possibly not the only standard.
References.
[Cle83] Clemens, Herbert, Homological equivalence, modulo algebraic equivalence, is not finitely generated. Publ. Math. Inst. Hautes Étud. Sci. 58, p. 231-250 (1983). Available online through Numdam. ZBL0529.14002.
[Ful98] Fulton, William, Intersection theory. Second edition. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) 2. Springer-Verlag, Berlin, 1998. ZBL0885.14002.
• I think that I have met a paper with an example of infinite torsion.:) – Mikhail Bondarko Jan 13 '18 at 11:33 | 2020-10-29 21:06:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8764804005622864, "perplexity": 324.3498130926481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107905777.48/warc/CC-MAIN-20201029184716-20201029214716-00326.warc.gz"} |
https://www.physicsforums.com/threads/acceleration-of-a-car.433413/ | # Acceleration of a car?
1. Sep 29, 2010
### ScienceGirl90
1. The problem statement, all variables and given/known data
A car is moving at 50.0 m/s and brakes to a halt in 6.00 seconds.
(1)From the time it starts braking, how far does the car travel before the car comes to a halt if its acceleration is constant?
(2)What is the car's speed 1.35 s after it starts to brake if the cars acceleration is constant?
2. Relevant equations
v=a*t+vo
3. The attempt at a solution
2. Sep 29, 2010
### fss
The equation you listed will get you the acceleration, but for the distance you'll need
$$v_f^2 = v_0^2 + 2ad$$
3. Sep 29, 2010
### ScienceGirl90
Ok thank you! I was able to find the distance for question one but I am still stuck as to what is the car's speed 1.35 s after it starts to brake if the cars acceleration is constant?
4. Sep 29, 2010
### fss
Just use the first formula you gave in your original post. Pay attention to your signs and you'll be golden.
5. Sep 29, 2010
### ScienceGirl90
Got it. Thanks! | 2018-03-21 05:53:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5703871250152588, "perplexity": 1106.990883830883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647576.75/warc/CC-MAIN-20180321043531-20180321063531-00206.warc.gz"} |
http://eprint.iacr.org/2008/030/20080206:171158 | ## Cryptology ePrint Archive: Report 2008/030
Detection of Algebraic Manipulation with Applications to Robust Secret Sharing and Fuzzy Extractors
Ronald Cramer and Yevgeniy Dodis and Serge Fehr and Carles Padró and Daniel Wichs
Abstract: Consider an abstract storage device $\Sigma(\G)$ that can hold a single element $x$ from a fixed, publicly known finite group $\G$. Storage is private in the sense that an adversary does not have read access to $\Sigma(\G)$ at all. However, $\Sigma(\G)$ is non-robust in the sense that the adversary can modify its contents by adding some offset $\Delta \in \G$. Due to the privacy of the storage device, the value $\Delta$ can only depend on an adversary's {\em a priori} knowledge of $x$. We introduce a new primitive called an {\em algebraic manipulation detection} (AMD) code, which encodes a source $s$ into a value $x$ stored on $\Sigma(\G)$ so that any tampering by an adversary will be detected, except with a small error probability $\delta$. We give a nearly optimal construction of AMD codes, which can flexibly accommodate arbitrary choices for the length of the source $s$ and security level $\delta$. We use this construction in two applications:
\begin{itemize} \item We show how to efficiently convert any linear secret sharing scheme into a {\em robust secret sharing scheme}, which ensures that no \emph{unqualified subset} of players can modify their shares and cause the reconstruction of some value $s'\neq s$.
\item We show how how to build nearly optimal {\em robust fuzzy extractors} for several natural metrics. Robust fuzzy extractors enable one to reliably extract and later recover random keys from noisy and non-uniform secrets, such as biometrics, by relying only on {\em non-robust public storage}. In the past, such constructions were known only in the random oracle model, or required the entropy rate of the secret to be greater than half. Our construction relies on a randomly chosen common reference string (CRS) available to all parties. \end{itemize}
Category / Keywords: foundations / Secret Sharing, Fuzzy Extractors, Information Theory, Authentication Codes
Publication Info: This is the full version of a paper accepted to Eurocrypt 2008
Date: received 22 Jan 2008, last revised 6 Feb 2008
Contact author: wichs at cs nyu edu
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Short URL: ia.cr/2008/030
[ Cryptology ePrint archive ] | 2016-08-28 17:11:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392491579055786, "perplexity": 1607.0023800037882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982946797.95/warc/CC-MAIN-20160823200906-00115-ip-10-153-172-175.ec2.internal.warc.gz"} |
http://clay6.com/qa/18527/if-a-and-b-are-square-matrices-of-the-same-order-and-ab-3i-then-a-is-equal- | Browse Questions
# If $A$ and $B$ are square matrices of the same order and $AB=3I$ then $A^{-1}$ is equal to
$(a)\;3B\qquad(b)\;\large\frac{1}{3}$$B\qquad(c)\;3B^{-1}\qquad(d)\;\large\frac{1}{3}$$B^{-1}$
Given
$AB=3I$
Multiply $A^{-1}$ on both the side
$A^{-1}(AB)=A^{-1}(3I)$
$B=3A^{-1}$
$A^{-1}=\large\frac{B}{3}$
Hence (b) is the correct answer.
edited Mar 19, 2014 | 2016-10-27 18:49:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729745745658875, "perplexity": 415.8283118237033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721387.11/warc/CC-MAIN-20161020183841-00031-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://eprints.iisc.ernet.in/7903/ | # Effects of curvature and inertia on the peristaltic transport in a two-fluid system
Usha, Srinivasan and Rao, Adabala Ramachandra (2000) Effects of curvature and inertia on the peristaltic transport in a two-fluid system. In: International Journal of Engineering Science, 38 (12). pp. 1355-1375.
PDF sdarticle.pdf Restricted to Registered users only Download (373Kb) | Request a copy
## Abstract
The peristaltic pumping of a biofluid consisting of two immiscible fluids of di€erent viscosities, one occupying the core and the other the peripheral layers on either side, in a two-dimensional channel is investigated including the effects of curvature and inertia. The flow is proved to be steady in a wave frame provided that the interface and the peristaltic wave have the same period. An asymptotic solution for the low Reynolds number flow is presented in powers of a geometric parameter which is the ratio of the channel width to the wavelength. Velocity and stress balance at the interface at di€erent orders are reduced and transferred to the known zeroth order interface. An expression for the jump in the first order pressure across the interface is obtained through the balance of the normal stress and the solutions are presented up to first order. or some non-zero wall curvature, the trapping in the core splits into three eddies, a larger bolus with two small boluses on either side, for $\mu > 1$, where l is the ratio of the viscosity of the peripheral layer fluid to that of the core. The trapped bolus volume decreases with an increase in curvature for all $\mu$. The inertia force mainly causes asymmetry in the streamline structure as pointed out by the earlier authors. However, the trapped bolus being pushed forward for $\mu < 1$ is a new phenomenon. Another interesting feature for a single fluid in copumping range is the displacement of the trapped bolus near the wall to the downstream side of the wave.
Item Type: Journal Article Copyright of this article belongs to Elsevier. Peristaltic transport;Two-fluid system Division of Physical & Mathematical Sciences > Mathematics 14 Jul 2006 19 Sep 2010 04:30 http://eprints.iisc.ernet.in/id/eprint/7903 | 2015-10-08 18:06:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5785804986953735, "perplexity": 1019.0341404381869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737898933.47/warc/CC-MAIN-20151001221818-00234-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://www.codingame.com/training/hard/a-game-of-go | • 4
## Goal
Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent.
Given a board (where some stones are already added) and a list of moves to be executed next, check whether the moves are valid and (if they are valid) execute them.
The initial board state in the input is always valid.
After executing all moves in the list the expected output is either NOT_VALID if at least one of the moves is invalid, or the new state of the board after all moves were executed.
Rules of Go:
- Two players place stones of their color (by turns)
- If a stone is completely surrounded by stones of the other player (or by the edge of the field) this stone is beaten and is removed from the board
- If stones of the same color are placed next to each other they build a group
- If the group is completely surrounded by stones of the other player (or by the edge of the fields) this group is beaten and all of it's stones are removed from the board
- A stone can be placed on every free field on the board except for:
--- A position in which it would be completely surrounded by the stones of the other player (no suicidal moves allowed). Except this move beats some other stones which leads to it not being surrounded anymore
--- A position that would create the same board that was there before the other player made his move (to prevent an infinite loop of killing stones) which is named KO-rule (see a test case for an example)
- Points (which are not important for this puzzle) are made by empty fields that are surrounded by the stones of a player or by enemy stones that are beaten
For a more detailed description please visit wikipedia:
https://en.wikipedia.org/wiki/Go_(game)#Rules
Input
Line 1: An integer S for the size of the field (the field is a square of SxS)
Line 2: An integer M for the number of moves that are to be made on the board
Next S lines: A line of the board where . is an empty field, B is a field with a black stone and W is a field with a white stone on it (each line contains S characters).
Next M lines: A move that is described by the player color, and the position (seperated by a space character.
Example: B 0 11 means a black stone on the 1st line and the 12th column.
Output
If at least one of the moves in the list is not valid just print: NOT_VALID
If all moves from the input list are valid you have to output the board after the execution of all moves, just like in the input. So a line of output could look like this:
.BW. which means the first and the last field in this line is empty and in between there is a black and a white stone.
Constraints
S is <= 19
S lines contain only '.', 'B' or 'W'.
M is <= 15
A Move input starts with either B or W followed by two integers i and j that define the position of the placed stone (seperated by a space character). The integers to define the positions can be 0 <= i < S (from 0 (included) to the size of the board (not included)).
Example
Input
5
5
.....
.....
.....
.....
.....
B 2 2
W 2 3
B 3 2
W 3 4
B 4 3
Output
.....
.....
..BW.
..B.W
...B.
A higher resolution is required to access the IDE
Join the CodinGame community on Discord to chat about puzzle contributions, challenges, streams, blog articles - all that good stuff!
Online Participants | 2020-05-26 04:35:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2298572063446045, "perplexity": 896.2974483956212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00098.warc.gz"} |
https://www.acmicpc.net/problem/6457 | 시간 제한 메모리 제한 제출 정답 맞은 사람 정답 비율
1 초 128 MB 0 0 0 0.000%
## 문제
Your local computer user's group publishes a quarterly newsletter, and in each issue there is a small Turbo Pascal programming problem to be solved by the membership. Members submit their solutions to the problem to the newsletter editor, and the member submitting the shortest solution to the problem receives a prize.
The length of a program is measured in units. The unit count is determined by counting all occurrences of reserved words, identifiers, constants, left parentheses, left brackets, and the following operators: +, -, *, /, =, <, >, <=, >=, <>, @, {}, and :=. Comments are ignored, as are all other symbols not falling into one of the categories mentioned above. The program with the lowest unit count is declared the winner. Two or more programs with equal unit counts split the prize for the quarter.
In an effort to speed the judging of the contest, your team has been asked to write a program that will determine the length of a series of Pascal programs and print the number of units in each.
## 입력
Input to your program will be a series of Turbo Pascal programs. Each program will be terminated by a line containing tilde characters in the first two columns, followed by the name of the submitting member. Each of these programs will be syntactically correct and use the standard symbols for comments (braces) and subscripts (square brackets).
## 출력
For each program, you are print a separate line containing the name of the submitting member and the unit count of the program. Use a format identical to that of the sample below.
In the Southern California Regional finals, all teams were using Turbo Pascal. Here are some additional notes on Turbo Pascal for those not familiar with the language:
• Identifiers start with an underscore (_) or a letter (upper or lower case) which is followed by zero or more characters that are underscores, letters or digits.
• The delimiter for the beginning and ending of a string constant is the single forward quote ('). Each string is entirely on a single source line (that is a string constant cannot begin on one line and continue on the next). If '' appears within a string then it represents a single ' character that is part of the string. A string constant consisting of a single ' character is, therefore, represented by '''' in a Turbo Pascal program. The empty string is allowed.
• The most general form of a numeric constant is illustrated by the constant 10.56E-15. The 10 is the integral part (1 or more digits) and is always present. The .56 is the decimal part and is optional. The E-15 is the exponent and it is also optional. It begins with an upper or lower case E, which is followed by a sign (+ or -). The sign is optional.
• Turbo Pascal supports hexadecimal integer constants which consist of a \$followed by one or more hex digits ('0' to '9', 'a' to 'f', 'A' to 'F'). For example, \$a9F is a legal integer constant in Turbo Pascal.
• The only comment delimiters that you should recognise are {}, and not (**). Comments do not nest.
• '+' and '-' should be considered as operators wherever possible. For example in x := -3 the '-' and the '3' are separate tokens.
• Subranges of ordinal types can be expressed as lower..upper. For example, 1..10 is a subrange involving the integers from 1 to 10.
• All tokens not mentioned anywhere above consist of a single character.
## 예제 입력 1
PROGRAM SAMPLEINPUT;
VAR
TEMP : RECORD
FIRST, SECOND : REAL;
END;
BEGIN {Ignore this }
TEMP.FIRST := 5.0E-2;
Program by A. N. Onymous contains 29 units. | 2020-06-01 06:02:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20498348772525787, "perplexity": 2039.5824970615702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347414057.54/warc/CC-MAIN-20200601040052-20200601070052-00447.warc.gz"} |
https://www.clutchprep.com/physics/practice-problems/144361/when-the-temperature-of-an-ideal-gas-is-increased-which-of-the-following-also-in | Introduction to Ideal Gasses Video Lessons
Concept
# Problem: When the temperature of an ideal gas is increased, which of the following also increases?A. The mass of the gas atoms.B. The number of gas atoms.C. The average kinetic energy of the gas atoms.D. The average potential energy of gas atoms.
###### FREE Expert Solution
For an ideal gas, the average transitional kinetic energy of any molecule is expressed by:
$\overline{){{\mathbf{K}}}_{\mathbf{a}\mathbf{v}\mathbf{g}}{\mathbf{=}}\frac{\mathbf{3}}{\mathbf{2}}{\mathbf{k}}{\mathbf{T}}}$ per molecule.
or
$\overline{){{\mathbf{K}}}_{{\mathbf{avg}}}{\mathbf{=}}\frac{\mathbf{3}}{\mathbf{2}}{\mathbit{R}}{\mathbf{T}}}$ per mole.
86% (160 ratings)
###### Problem Details
When the temperature of an ideal gas is increased, which of the following also increases?
A. The mass of the gas atoms.
B. The number of gas atoms.
C. The average kinetic energy of the gas atoms.
D. The average potential energy of gas atoms. | 2020-11-28 16:43:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.654802680015564, "perplexity": 787.1498172792237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195687.51/warc/CC-MAIN-20201128155305-20201128185305-00147.warc.gz"} |
https://typeset.io/papers/quasiperiodicity-band-topology-and-moire-graphene-3o2xavbtcx | Open accessJournal Article
# Quasiperiodicity, band topology, and moiré graphene
04 Mar 2021-Physical Review B (American Physical Society (APS))-Vol. 103, Iss: 11, pp 115110
Abstract: A number of moir\'e graphene systems have nearly flat topological bands where electron motion is strongly correlated. Though microscopically these systems are only quasiperiodic, they can typically be treated as translation invariant to an excellent approximation. Here we reconsider this question for magic angle twisted bilayer graphene that is nearly aligned with a hexagonal boron nitride (hBN) substrate. We carefully study the effect of the periodic potential induced by hBN on the low energy physics. The combination of this potential and the moir\'e lattice produced by the twisted graphene generates a quasiperiodic term that depends on the alignment angle between hBN and the moir\'e graphene. We find that the alignment angle has a significant impact on both the band gap near charge neutrality and the behavior of electrical transport. We also introduce and study toy models to illustrate how a quasiperiodic potential can give rise to localization and change in transport properties of topological bands.
##### Citations
More
6 results found
Open accessJournal Article
11 Feb 2021-Physical Review B
Abstract: The quantum anomalous Hall effect sometimes occurs in twisted bilayer graphene when it is nearly aligned with an encapsulating hexagonal boron nitride (hBN) layer. The authors argue that the quantum anomalous Hall effect is likely only when the graphene/graphene and graphene/hBN moir\'e patterns are nearly commensurate. This picture gives rise to the series of Hall windows'' in the twist-angle space illustrated here.
Topics: , Bilayer graphene (67%), Graphene (60%)
16 Citations
Open accessJournal Article
25 Mar 2021-Physical Review B
Abstract: We study the effect of twisting on bilayer $h\text{\ensuremath{-}}\mathrm{BN}$. The effect of lattice relaxation is included; we look at the electronic structure, piezoelectric charges, and spontaneous polarization. We show that the electronic structure without lattice relaxation shows a set of extremely flat in-gap states similar to Landau levels, where the spacing scales with twist angle. With lattice relaxation we still have flat bands, but now the spectrum becomes independent of twist angle for sufficiently small angles. We describe in detail the nature of the bands and we study appropriate continuum models, at the same time explaining the structure of the in-gap states. We find that even though the spectra for both parallel and antiparallel alignment are very similar, the spontaneous polarization effects only occur for parallel alignment. We argue that this suggests a large interlayer hopping between boron and nitrogen.
6 Citations
Open accessJournal Article
15 Jul 2021-Physical Review B
Abstract: The effects of downfolding a Brillouin zone can open gaps and quench the kinetic energy by flattening bands. Quasiperiodic systems are extreme examples of this process, which leads to new phases and critical eigenstates. We analytically and numerically investigate these effects in a two-dimensional topological insulator with a quasiperiodic potential and discover a complex phase diagram. We study the nature of the resulting eigenstate quantum phase transitions; a quasiperiodic potential can make a trivial insulator topological and induce topological insulator-to-metal phase transitions through a unique universality class distinct from random systems. This wealth of critical behavior occurs concomitantly with the quenching of the kinetic energy, resulting in flat topological bands that could serve as a platform to realize the fractional quantum Hall effect without a magnetic field.
6 Citations
Open accessJournal Article
27 Sep 2021-Physical Review B
Abstract: We study the symmetries of twisted trilayer graphene's band structure under various extrinsic perturbations, and analyze the role of long-range electron-electron interactions near the first magic angle. The electronic structure is modified by these interactions in a similar way to twisted bilayer graphene. We analyze electron pairing due to long-wavelength charge fluctuations, which are coupled among themselves via the Coulomb interaction and additionally mediated by longitudinal acoustic phonons. We find superconducting phases with either spin-singlet/valley-triplet or spin-triplet/valley-singlet symmetry, with critical temperatures up to a few Kelvin for realistic choices of parameters.
3 Citations
Open accessPosted Content
Abstract: We study magic angle graphene in the presence of both strain and particle-hole symmetry breaking due to non-local inter-layer tunneling. We perform a self-consistent Hartree-Fock study that incorporates these effects alongside realistic interaction and substrate potentials, and explore a comprehensive set of competing orders including those that break translational symmetry at arbitrary wavevectors. We find that at all non-zero integer fillings very small strains, comparable to those measured in scanning tunneling experiments, stabilize a fundamentally new type of time-reversal symmetric and spatially non-uniform order. This order, which we dub the 'incommensurate Kekule spiral' (IKS) order, spontaneously breaks both the emergent valley-charge conservation and moire translation symmetries, but preserves a modified translation symmetry $\hat{T}'$ -- which simultaneously shifts the spatial coordinates and rotates the $U(1)$ angle which characterizes the spontaneous inter-valley coherence. We discuss the phenomenological and microscopic properties of this order. We argue that our findings are consistent with all experimental observations reported so far, suggesting a unified explanation of the global phase diagram in terms of the IKS order.
Topics: Symmetry breaking (56%),
1 Citations
##### References
More
62 results found
Open accessJournal Article
Abstract: The Hall conductance of a two-dimensional electron gas has been studied in a uniform magnetic field and a periodic substrate potential $U$. The Kubo formula is written in a form that makes apparent the quantization when the Fermi energy lies in a gap. Explicit expressions have been obtained for the Hall conductance for both large and small $\frac{U}{\ensuremath{\hbar}{\ensuremath{\omega}}_{c}}$.
Topics: , Kubo formula (52%), Fermi gas (51%)
3,954 Citations
Open accessJournal Article
Yuan Cao1, Valla Fatemi1, Shiang Fang2, Kenji Watanabe3 +3 moreInstitutions (3)
05 Mar 2018-Nature
Abstract: The behaviour of strongly correlated materials, and in particular unconventional superconductors, has been studied extensively for decades, but is still not well understood. This lack of theoretical understanding has motivated the development of experimental techniques for studying such behaviour, such as using ultracold atom lattices to simulate quantum materials. Here we report the realization of intrinsic unconventional superconductivity-which cannot be explained by weak electron-phonon interactions-in a two-dimensional superlattice created by stacking two sheets of graphene that are twisted relative to each other by a small angle. For twist angles of about 1.1°-the first 'magic' angle-the electronic band structure of this 'twisted bilayer graphene' exhibits flat bands near zero Fermi energy, resulting in correlated insulating states at half-filling. Upon electrostatic doping of the material away from these correlated insulating states, we observe tunable zero-resistance states with a critical temperature of up to 1.7 kelvin. The temperature-carrier-density phase diagram of twisted bilayer graphene is similar to that of copper oxides (or cuprates), and includes dome-shaped regions that correspond to superconductivity. Moreover, quantum oscillations in the longitudinal resistance of the material indicate the presence of small Fermi surfaces near the correlated insulating states, in analogy with underdoped cuprates. The relatively high superconducting critical temperature of twisted bilayer graphene, given such a small Fermi surface (which corresponds to a carrier density of about 1011 per square centimetre), puts it among the superconductors with the strongest pairing strength between electrons. Twisted bilayer graphene is a precisely tunable, purely carbon-based, two-dimensional superconductor. It is therefore an ideal material for investigations of strongly correlated phenomena, which could lead to insights into the physics of high-critical-temperature superconductors and quantum spin liquids.
3,452 Citations
Journal Article
15 Sep 1976-Physical Review B
Abstract: An effective single-band Hamiltonian representing a crystal electron in a uniform magnetic field is constructed from the tight-binding form of a Bloch band by replacing $\ensuremath{\hbar}\stackrel{\ensuremath{\rightarrow}}{\mathrm{k}}$ by the operator $\stackrel{\ensuremath{\rightarrow}}{\mathrm{p}}\ensuremath{-}\frac{e\stackrel{\ensuremath{\rightarrow}}{A}}{c}$. The resultant Schr\"odinger equation becomes a finite-difference equation whose eigenvalues can be computed by a matrix method. The magnetic flux which passes through a lattice cell, divided by a flux quantum, yields a dimensionless parameter whose rationality or irrationality highly influences the nature of the computed spectrum. The graph of the spectrum over a wide range of "rational" fields is plotted. A recursive structure is discovered in the graph, which enables a number of theorems to be proven, bearing particularly on the question of continuity. The recursive structure is not unlike that predicted by Azbel', using a continued fraction for the dimensionless parameter. An iterative algorithm for deriving the clustering pattern of the magnetic subbands is given, which follows from the recursive structure. From this algorithm, the nature of the spectrum at an "irrational" field can be deduced; it is seen to be an uncountable but measure-zero set of points (a Cantor set). Despite these-features, it is shown that the graph is continuous as the magnetic field varies. It is also shown how a spectrum with simplified properties can be derived from the rigorously derived spectrum, by introducing a spread in the field values. This spectrum satisfies all the intuitively desirable properties of a spectrum. The spectrum here presented is shown to agree with that predicted by A. Rauh in a completely different model for crystal electrons in a magnetic field. A new type of magnetic "superlattice" is introduced, constructed so that its unit cell intercepts precisely one quantum of flux. It is shown that this cell represents the periodicity of solutions of the difference equation. It is also shown how this superlattice allows the determination of the wave function at nonlattice sites. Evidence is offered that the wave functions belonging to irrational fields are everywhere defined and are continuous in this model, whereas those belonging to rational fields are only defined on a discrete set of points. A method for investigating these predictions experimentally is sketched.
2,371 Citations
Open accessJournal Article
Yuan Cao1, Valla Fatemi1, Ahmet Demir1, Shiang Fang2 +8 moreInstitutions (3)
05 Mar 2018-Nature
Abstract: A van der Waals heterostructure is a type of metamaterial that consists of vertically stacked two-dimensional building blocks held together by the van der Waals forces between the layers. This design means that the properties of van der Waals heterostructures can be engineered precisely, even more so than those of two-dimensional materials. One such property is the 'twist' angle between different layers in the heterostructure. This angle has a crucial role in the electronic properties of van der Waals heterostructures, but does not have a direct analogue in other types of heterostructure, such as semiconductors grown using molecular beam epitaxy. For small twist angles, the moire pattern that is produced by the lattice misorientation between the two-dimensional layers creates long-range modulation of the stacking order. So far, studies of the effects of the twist angle in van der Waals heterostructures have concentrated mostly on heterostructures consisting of monolayer graphene on top of hexagonal boron nitride, which exhibit relatively weak interlayer interaction owing to the large bandgap in hexagonal boron nitride. Here we study a heterostructure consisting of bilayer graphene, in which the two graphene layers are twisted relative to each other by a certain angle. We show experimentally that, as predicted theoretically, when this angle is close to the 'magic' angle the electronic band structure near zero Fermi energy becomes flat, owing to strong interlayer coupling. These flat bands exhibit insulating states at half-filling, which are not expected in the absence of correlations between electrons. We show that these correlated states at half-filling are consistent with Mott-like insulator states, which can arise from electrons being localized in the superlattice that is induced by the moire pattern. These properties of magic-angle-twisted bilayer graphene heterostructures suggest that these materials could be used to study other exotic many-body quantum phases in two dimensions in the absence of a magnetic field. The accessibility of the flat bands through electrical tunability and the bandwidth tunability through the twist angle could pave the way towards more exotic correlated systems, such as unconventional superconductors and quantum spin liquids.
Topics: Bilayer graphene (60%), van der Waals force (59%), Superlattice (57%) ... show more
2,052 Citations
Open accessJournal Article
Nicola Marzari1, Arash A. Mostofi2, Jonathan R. Yates3, Ivo Souza4 +1 moreInstitutions (5)
Abstract: The electronic ground state of a periodic system is usually described in terms of extended Bloch orbitals, but an alternative representation in terms of localized "Wannier functions" was introduced by Gregory Wannier in 1937. The connection between the Bloch and Wannier representations is realized by families of transformations in a continuous space of unitary matrices, carrying a large degree of arbitrariness. Since 1997, methods have been developed that allow one to iteratively transform the extended Bloch orbitals of a first-principles calculation into a unique set of maximally localized Wannier functions, accomplishing the solid-state equivalent of constructing localized molecular orbitals, or "Boys orbitals" as previously known from the chemistry literature. These developments are reviewed here, and a survey of the applications of these methods is presented. This latter includes a description of their use in analyzing the nature of chemical bonding, or as a local probe of phenomena related to electric polarization and orbital magnetization. Wannier interpolation schemes are also reviewed, by which quantities computed on a coarse reciprocal-space mesh can be used to interpolate onto much finer meshes at low cost, and applications in which Wannier functions are used as efficient basis functions are discussed. Finally the construction and use of Wannier functions outside the context of electronic-structure theory is presented, for cases that include phonon excitations, photonic crystals, and cold-atom optical lattices. | 2022-05-26 09:12:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7442284822463989, "perplexity": 1144.0880811307093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604495.84/warc/CC-MAIN-20220526065603-20220526095603-00726.warc.gz"} |
https://hal.inria.fr/hal-01573292 | # Practical Attacks on HB and HB+ Protocols
Abstract : HB and HB+ are a shared secret-key authentication protocols designed for low-cost devices such as RFID tags. HB+ was proposed by Juels and Weis at Crypto 2005. The security of the protocols relies on the “learning parity with noise” (LPN) problem, which was proven to be NP-hard.The best known attack on LPN by Levieil and Fouque [13] requires sub-exponential number of samples and sub-exponential number of operations, which makes that attack impractical for the RFID scenario (one cannot assume to collect exponentially-many observations of the protocol execution).We present a passive attack on HB protocol in detection-based model which requires only linear (in the length of a secret key) number of samples. Number of performed operations is exponential, but attack is efficient for some real-life values of the parameters, i. e. noise $\frac{1}{8}$ and key length 152-bits. Passive attack on HB can be transformed into active one on HB+.
Keywords :
Document type :
Conference papers
Domain :
Cited literature [15 references]
https://hal.inria.fr/hal-01573292
Contributor : Hal Ifip <>
Submitted on : Wednesday, August 9, 2017 - 10:24:16 AM
Last modification on : Wednesday, August 9, 2017 - 10:25:13 AM
### File
978-3-642-21040-2_17_Chapter.p...
Files produced by the author(s)
### Citation
Zbigniew Gołębiewski, Krzysztof Majcher, Filip Zagórski, Marcin Zawada. Practical Attacks on HB and HB+ Protocols. 5th Workshop on Information Security Theory and Practices (WISTP), Jun 2011, Heraklion, Crete, Greece. pp.244-253, ⟨10.1007/978-3-642-21040-2_17⟩. ⟨hal-01573292⟩
Record views | 2021-06-25 06:28:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3488907217979431, "perplexity": 11482.935419498679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487622113.11/warc/CC-MAIN-20210625054501-20210625084501-00228.warc.gz"} |
http://math.stackexchange.com/questions/87542/roots-of-complex-numbers | # Roots of complex numbers
It is known that exists formula for geting a square root of complex number without use of De Moivre formula. Will be interesting if we can find the cubic roots of complex numbers without using De Moivre formula.
-
It is not that widely taught that one is pretty much stuck when taking the cube root of a complex number. The theorem is called the Casus Irreducibilis
If an irreducible cubic, with rational coefficients, has three real (and thereby irrational) roots, these may only be found by taking the cube roots of complex numbers, which is what one gets with Cardano's formula. Cardano gives the real roots, in brief, by taking the cube root of a complex number and adding the cube root of its complex conjugate. If one insists on separately finding the real and imaginary parts of the cube root of a complex number, one is led to finding roots of cubics with three real roots. So the whole thing is a bit circular, and in the end we must keep Cardano's description, no further "simplification" is possible.
In conclusion, for cube roots of complex numbers, there is not really anything "better" than De Moivre.
-
Put another way: Cardano really isn't that useful in the casus irreduciblis. The more "practical" expressions are the ones that express the three roots in terms of trigonometric or hyperbolic functions. – J. M. Dec 2 '11 at 0:24
Good, the real answer. – André Nicolas Dec 2 '11 at 0:26
Something related... – J. M. Dec 2 '11 at 1:12
The formula for square roots comes from saying simple that if $\alpha + \beta i = (x + iy)^2$, then you can separate imaginary and reals and solve, more or less. If you want more, I can direct you to a write-up I did recently that happened to include that in it.
Further, it is a (not-so-)good exercise to solve for $x$ and $y$ in $\alpha + \beta i = (x + iy)^3$. There will be a time when it looks like there are more than 3 answers, but you must consider the sign of $\alpha\beta$, which is either positive or negative (this is how you limit solutions in the quadratic case, too).
I say it's not so good because it's painful. Cardano's formula is painful. But I suspect you can convince yourself that it can be done by playing with it for a bit.
- | 2016-05-01 16:21:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8802697658538818, "perplexity": 271.48187496526185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116587.6/warc/CC-MAIN-20160428161516-00008-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://homework.zookal.com/questions-and-answers/solve-the-following-montecarlo-simulation-problem-both-with-excel-data-348069248 | 2. Accounting
3. solve the following montecarlo simulation problem both with excel data...
# Question: solve the following montecarlo simulation problem both with excel data...
###### Question details
Solve the following Monte-Carlo simulation problem both with Excel data tables and @Risk. Include all your answers directly in your Excel file using text boxes or comment boxes. Please show formulas used within excel.
NCAA Sweatshirt Problem
An enterprising OU student is trying to decide how many sweatshirts to print for an upcoming NCAA Basketball Tournament game. The final four teams have emerged from the quarterfinal round, and there is a week left until the semi-finals, which are then followed by the finals a few days later. Each sweatshirt costs $10 to produce and sells for$25. However, in 3 weeks, any leftover sweatshirts will be put on sale for half price, $12.50. The student assumes that the demand for her sweatshirts during the next three weeks has the distribution shown in the file NCAASweatshirts-shell.xlsx in F5:G10. The residual demand, after the sweatshirts have been put on sale, has the distribution also shown in the same file in J5:K10. The student, being a profit maximizer, realizes that every sweatshirt sold, even at the sale price, makes a profit. However, she also realizes that any sweatshirts printed and still unsold (even at the sale price) must be given away, resulting in a loss of$10 per discarded shirt. Your job is:
1) Build an Excel based Monte-Carlo simulation model that will produce a probability distribution of profit for a given number of sweatshirts printed. Your simulation model should do 1000 replications.
2) Set the number of sweatshirts printed in your model to 10000. Run your model and find the mean and standard deviation of profit as well as the 5th and 95th percentiles of profit.
3) Construct a 95% confidence interval on the mean profit.
4) Now use your model to explore different numbers of printing quantities and find the:
a. Quantity that maximizes mean profit
b. Quantity that maximizes 5th percentile of profit
c. Quantity that maximizes 95th percentile of profit
d. Do these quantities differ? | 2021-04-23 08:43:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5748950242996216, "perplexity": 2496.1463560205684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039568689.89/warc/CC-MAIN-20210423070953-20210423100953-00465.warc.gz"} |
http://www.mat.univie.ac.at/~kratt/artikel/heine.html | # Determinant evaluations and U(n) extensions of Heine's 2\phi1-transformations
### (6 pages)
Abstract. We give new proofs for U(n) extensions of Heine's three 2\phi1-transformation formulas that were recently discovered in another paper by the authors. These new proofs proceed by combining Heine's original transformation formulas with certain determinant evaluations. As a by-product we are able to generalize two of the U(n) transformation formulas.
The following versions are available: | 2017-12-13 09:19:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932687520980835, "perplexity": 2116.933928837428}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522343.41/warc/CC-MAIN-20171213084839-20171213104839-00390.warc.gz"} |
https://itprospt.com/num/1246718/ktati-the-gurling-principla-ot-modemn-envltonmentl-eclande | 5
# #ktati the gurling principla ot modemn envltonmentl eclande Recycing Sustainabity Entnmenfdl [ustce Bcentrism41) What Ihree things help lo lon pollcy? Scence. econo...
## Question
###### #ktati the gurling principla ot modemn envltonmentl eclande Recycing Sustainabity Entnmenfdl [ustce Bcentrism41) What Ihree things help lo lon pollcy? Scence. economics, and publla Interael Scence economlcs, and alhlas Sdence. economles. and cuttent puradipine Sdence, cconomics, and current cout of Iiving42) Mhich bogeochemical cycle most Involved In Iha Iha problma ol clneata enala aater Cycle njuogen Gycle camen cicle phosphorus cycle#)An drea surface water where Ireshwatar und aek welur mk
#ktati the gurling principla ot modemn envltonmentl eclande Recycing Sustainabity Entnmenfdl [ustce Bcentrism 41) What Ihree things help lo lon pollcy? Scence. economics, and publla Interael Scence economlcs, and alhlas Sdence. economles. and cuttent puradipine Sdence, cconomics, and current cout of Iiving 42) Mhich bogeochemical cycle most Involved In Iha Iha problma ol clneata enala aater Cycle njuogen Gycle camen cicle phosphorus cycle #)An drea surface water where Ireshwatar und aek welur mk caled Matershed estuany shamp Mer 4) The release of maller energy Ihat causes undesirabla Im pecb 0n haanel humans er otheromganisms called enery phion rencensumpion secondary Ireatment 45) Polution that resulls from may sources over lerge area of land Ralrrd(uab point polution nonpoint pollution som Follution water pollutan
#### Similar Solved Questions
##### I7 1 2 ] ~ of the following: G-[ inforualiom il 2 51)'
i7 1 2 ] ~ of the following: G-[ inforualiom il 2 51)'...
##### 8 1 [ 1 2 L ] L 1 1 [ # J 0 E 1 ej| 1 F { [ 1 1 1 1 1 1
8 1 [ 1 2 L ] L 1 1 [ # J 0 E 1 ej| 1 F { [ 1 1 1 1 1 1...
##### 0 -7 ?Sign2 Which statement best illustrates the concept of the interrelationship of living things with the physical environment, as found in the definition of ecology? Hawks and eagles often compete with each other: 2 White-tailed deer shed their antlers_ 3 Algae release oxygen and absorb carbon dioxide from pond water: 4 Frogs produce many eggs in single reproductive cycle:8 Which two organisms represented below are heterotrophic? A 01 A and B 2 Band C 3 C and E 4 D and E
0 - 7 ? Sign 2 Which statement best illustrates the concept of the interrelationship of living things with the physical environment, as found in the definition of ecology? Hawks and eagles often compete with each other: 2 White-tailed deer shed their antlers_ 3 Algae release oxygen and absorb carbo...
##### (10 points) Solve the heat problemuxx + 5 sin(Sx) - 2 sin(3x) , 0 < x < %,u(0,+) = 0, u(z,0) =0u(x,0) = 0u(x,t)Steady State Solution lim;-o u(x,
(10 points) Solve the heat problem uxx + 5 sin(Sx) - 2 sin(3x) , 0 < x < %, u(0,+) = 0, u(z,0) =0 u(x,0) = 0 u(x,t) Steady State Solution lim;-o u(x,...
##### 29. DISCUSSIONIf your unknown were either an alcohol, alkane or a ketone, which two tests would be sufficient to identify your unknown? Note that it is not necessary to identify which type of ketone nor which type of alcohol:KMnO_ and DNPH DNPH and [odoform DNPH and Cerric KMnO; and Cerric Cerric and Chromic No two tests would be sufficient ANSWERED CHANGES ALLOWED
29. DISCUSSION If your unknown were either an alcohol, alkane or a ketone, which two tests would be sufficient to identify your unknown? Note that it is not necessary to identify which type of ketone nor which type of alcohol: KMnO_ and DNPH DNPH and [odoform DNPH and Cerric KMnO; and Cerric Cerric ...
##### 11. You are given the following information:Within each age interval, [x k,x+k+ 1), the force of mortality; Hx + k, is constant_~Ux-kUx+k)/ uxtk0.980.990.960.98Calculate the completed expectation of life over the next 2 years for person_ age X.
11. You are given the following information: Within each age interval, [x k,x+k+ 1), the force of mortality; Hx + k, is constant_ ~Ux-k Ux+k)/ uxtk 0.98 0.99 0.96 0.98 Calculate the completed expectation of life over the next 2 years for person_ age X....
##### An acorn sits at the bottom of a 1.2m-deep pool, filled with water: When you stand at the edge of the pool and look down at 45" angle, you see the acorn: If your eyes are 1.Sm above the surface of the water; what is the horizontal distance from YOu to the acorn?
An acorn sits at the bottom of a 1.2m-deep pool, filled with water: When you stand at the edge of the pool and look down at 45" angle, you see the acorn: If your eyes are 1.Sm above the surface of the water; what is the horizontal distance from YOu to the acorn?...
##### List of Equations1) W; = FcosOAx = Fil Ax2) WNET = (Fi cOs0 + Fzcos8z + FscosOs + ~JAx = (Flll + Fzi + Fyj + JAx 3) Ek = mv/24) WNET = mv}n2 - mva/25) Epg mghwhere g = 9.8 mls?Elastic Potential energy: EpE kxn2where k is the Elastic constant of the Spring (deformed body)7) EMEcH = Ek EpWPUSH WTENSION WERICTION EMr - EMo WPUsH WTENSION mvjn2 mgho mvzn2 mghr + fAx
List of Equations 1) W; = FcosOAx = Fil Ax 2) WNET = (Fi cOs0 + Fzcos8z + FscosOs + ~JAx = (Flll + Fzi + Fyj + JAx 3) Ek = mv/2 4) WNET = mv}n2 - mva/2 5) Epg mgh where g = 9.8 mls? Elastic Potential energy: EpE kxn2 where k is the Elastic constant of the Spring (deformed body) 7) EMEcH = Ek Ep WP...
##### Block of mass 2.00 kg is held stationary against = compressed spring atthe bottom of a ramp. When released, while being propelled by strong wind, the block = can juSE reach point Aat the top off the ramp, shown in the figure: The magnitude of the work done by friction is 220 ], the work done by propulsion is 62.0 and the height h= 12.0 m. If the spring compression is 0.80 m, what is the spring constant?MM
block of mass 2.00 kg is held stationary against = compressed spring atthe bottom of a ramp. When released, while being propelled by strong wind, the block = can juSE reach point Aat the top off the ramp, shown in the figure: The magnitude of the work done by friction is 220 ], the work done by prop...
##### 2 (10 points) Find the complete splution 1 zhe equations: ipliown asrndiner15+1n2+249+32+4=3 21+1n+24+42+6373 01 +012 ~0-1-47}b) (5 points) Is the system71-4814481consistent? Why or why noz
2 (10 points) Find the complete splution 1 zhe equations: ipliown asrndiner 15+1n2+249+32+4=3 21+1n+24+42+6373 01 +012 ~0-1-47} b) (5 points) Is the system 71-481 4481 consistent? Why or why noz...
##### Q#3) The demand and supply functions of a good are given by 15pts) P =-4 Qd + 120 P = 1 /3 Qs + 29 where P Q D and Q S denote the price, quantity demanded and quantity supplied respectively: (a) Calculate the equilibrium price and quantity: Show your answer graphically (b) Calculate the new equilibrium price and quantity after the imposition of fixed tax of S13 per good. Who pays the tax?
Q#3) The demand and supply functions of a good are given by 15pts) P =-4 Qd + 120 P = 1 /3 Qs + 29 where P Q D and Q S denote the price, quantity demanded and quantity supplied respectively: (a) Calculate the equilibrium price and quantity: Show your answer graphically (b) Calculate the new equilibr...
##### Use appropriate identities to find the exact value of each expression. $$\cos (7 \pi / 12)$$
Use appropriate identities to find the exact value of each expression. $$\cos (7 \pi / 12)$$...
##### Calculate. $$\int \frac{d x}{x+1}$$
Calculate. $$\int \frac{d x}{x+1}$$...
##### Suppose strain of Escherichia coli is identified as an Hfr strain: In a conjugational experiment between the donor Hfr strain and a recipient E coli strain that is F_, what would you expect to result?A) The recipient to become an Hfr strainB) The recipient to become F+tThe recipient to remain F_D) The donor to become F_
Suppose strain of Escherichia coli is identified as an Hfr strain: In a conjugational experiment between the donor Hfr strain and a recipient E coli strain that is F_, what would you expect to result? A) The recipient to become an Hfr strain B) The recipient to become F+t The recipient to remain F_ ...
##### Solve each system by substitution. See Example 5. $$\left\{\begin{array}{l} {x-6 y=9-5 x-2 y} \\ {\frac{x}{2}-\frac{3}{4}=2 y} \end{array}\right.$$
Solve each system by substitution. See Example 5. $$\left\{\begin{array}{l} {x-6 y=9-5 x-2 y} \\ {\frac{x}{2}-\frac{3}{4}=2 y} \end{array}\right.$$... | 2023-03-20 15:16:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6016285419464111, "perplexity": 11600.382080872268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00290.warc.gz"} |
http://mathhelpforum.com/statistics/226922-standard-deviation-help.html | # Math Help - Standard Deviation Help
1. ## Standard Deviation Help
What is the standard deviation of a normal distribution whose mean is 122 if a score of 170 corresponds to a z-score of 1.9?
2. ## Re: Standard Deviation Help
Originally Posted by khalyar
What is the standard deviation of a normal distribution whose mean is 122 if a score of 170 corresponds to a z-score of 1.9?
z-score = $\dfrac{x-\mu}{\sigma}$ where $\mu$ is the mean, and $\sigma$ is the standard deviation
$1.9 = \dfrac{170 - 122}{\sigma}$
solve for $\sigma$ | 2014-10-26 09:49:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9577785134315491, "perplexity": 452.94864568223807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119662145.58/warc/CC-MAIN-20141024030102-00013-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://hal.inria.fr/inria-00126462v2 | # A GMP-based implementation of Schönhage-Strassen's large integer multiplication algorithm
1 CACAO - Curves, Algebra, Computer Arithmetic, and so On
INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract : Schönhage-Strassen's algorithm is one of the best known algorithms for multiplying large integers. Implementing it efficiently is of utmost importance, since many other algorithms rely on it as a subroutine. We present here an improved implementation, based on the one distributed within the GMP library. The following ideas and techniques were used or tried: faster arithmetic modulo $2^n+1$, improved cache locality, Mersenne transforms, Chinese Remainder Reconstruction, the $\sqrt{2}$ trick, Harley's and Granlund's tricks, improved tuning.
Document type :
Conference papers
Complete list of metadatas
Cited literature [22 references]
https://hal.inria.fr/inria-00126462
Contributor : Pierrick Gaudry <>
Submitted on : Wednesday, May 23, 2007 - 7:41:16 AM
Last modification on : Thursday, January 11, 2018 - 6:21:04 AM
Long-term archiving on: Tuesday, September 21, 2010 - 1:13:00 PM
### File
fft.final.pdf
Files produced by the author(s)
### Citation
Pierrick Gaudry, Alexander Kruppa, Paul Zimmermann. A GMP-based implementation of Schönhage-Strassen's large integer multiplication algorithm. ISSAC 2007, Jul 2007, Waterloo, Ontario, Canada. pp.167-174, ⟨10.1145/1277548.1277572⟩. ⟨inria-00126462v2⟩
Record views | 2020-01-18 04:17:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1863275170326233, "perplexity": 12560.034720565533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591763.20/warc/CC-MAIN-20200118023429-20200118051429-00126.warc.gz"} |
https://mathoverflow.net/questions/347599/exponential-object-in-the-category-of-simple-undirected-graphs | # Exponential object in the category of simple, undirected graphs
Let $$G_i = (V_i, E_i)$$ be simple, undirected graphs for $$i=1,2$$. A graph homomorphism is a map $$f:V_1\to V_2$$ such that $$\{f(v), f(w)\}\in E_2$$ whenever $$\{v,w\}\in E_1$$.
By $$\text{Hom}(G_1, G_2)$$ we denote the collection of graph homomorphisms from $$G_1$$ to $$G_2$$. Note that it is possible that $$\text{Hom}(G_1, G_2)=\emptyset$$.
This paper is supposed to describe how we can make $$\text{Hom}(G_1, G_2)$$ into a graph - but I can't flesh out a criterion for: when do $$f, g \in \text{Hom}(G_1, G_2)$$ form an edge?
• I don't read the paper, but how about considering free groupoids $\mathcal{G}_1$, $\mathcal{G}_2$ generated by $G_1$, $G_2$ and then taking appropriate subgraph of the underlying graph of the category $\textbf{Cat}(\mathcal{G}_1,\mathcal{G}_2)$?
– Slup
Dec 4, 2019 at 9:12
• Good point - I think that this exactly amounts to what is described in the answer below. Dec 5, 2019 at 13:00
The title of your question asks about "exponential object[s] in the category of simple, undirected graphs". I can tell you what they are.
(The body of your question asks about a construction in a specific paper, which I haven't read, so I can't answer that question directly. But of course, exponentials are unique when they exist.)
So: in the category of simple graphs that you mention, the exponentials $$\mathrm{HOM}(G_1, G_2)$$ are as follows. A vertex of $$\mathrm{HOM}(G_1, G_2)$$ is a function from the set of vertices of $$G_1$$ to the set of vertices of $$G_2$$. Two vertices $$\phi, \psi$$ of $$\mathrm{HOM}(G_1, G_2)$$ are adjacent iff whenever $$x$$ and $$y$$ are adjacent vertices of $$G_1$$, then $$\phi(x)$$ and $$\psi(y)$$ are adjacent vertices of $$G_2$$.
I'd recommend Godsil and Royle's book Algebraic Graph Theory. This blog post also says more about the cartesian closed category of simple graphs.
• Thanks for the explanation and the useful link to the blog post you wrote! Dec 4, 2019 at 16:31
• As a point of clarification: In your answer you say that vertices are graph homomorphisms, but the post you link says that a vertex is any function on the underlying sets. Sep 4, 2020 at 8:51
• Oops! Corrected now. Thanks. Sep 18, 2020 at 18:57
• This graph is also not simple in the standard meaning of this term in graph theory, as those maps which are graph homomorphisms have loops on them. Sep 18, 2020 at 23:24
Let $$\textbf{Graph}$$ be the category of (undirected) graphs. Consider a graph $$I$$ that consists of a single edge. Note that for every graph $$G$$ there exists a bijection $$\mathrm{Hom}_{\textbf{Graph}}(I,G) \cong \mbox{ the set of edges of }G$$ natural in $$G$$. In other words the functor sending each graph to its set of edges is represented by $$I$$.
Let $$pt$$ be a graph with exactly one vertex. Then we have another natural bijection $$\mathrm{Hom}_{\textbf{Graph}}(pt,G) \cong \mbox{ the set of vertices of }G$$
Suppose that $$i_0,i_1:pt\rightarrow I$$ are two distinct morphisms (endpoints of $$I$$).
Now pick graphs $$G_1, G_2$$ and let $$\textbf{Hom}(G_1,G_2)$$ be their exponential object (we assume that it exists). Then
$$\mathrm{Hom}_{\textbf{Graph}}(G_1\times I,G_2) \cong \mathrm{Hom}_{\textbf{Graph}}\left(I,\textbf{Hom}(G_1,G_2)\right) \cong \mbox{ the set of edges of }\textbf{Hom}(G_1,G_2)$$
So each edge in $$\textbf{Hom}(G_1,G_2)$$ corresponds uniquely to a morphism of graphs $$f:G_1\times I \rightarrow G_2$$. Now you can also verify that the edge corresponding to $$f$$ has precisely $$f\cdot i_0,f\cdot i_1:G_1\cong G_1\times pt\rightarrow G_2$$ as its endpoints.
This I think recovers the answer given by Tom Leinster above.
• Thanks for this nice exposition @Slup! Dec 6, 2019 at 9:00
• @DominicvanderZypen I am happy that you find it useful.
– Slup
Dec 6, 2019 at 9:08 | 2022-12-02 20:51:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 40, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8539295792579651, "perplexity": 189.69800576344613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710916.40/warc/CC-MAIN-20221202183117-20221202213117-00131.warc.gz"} |
https://techutils.in/blog/2018/11/12/stackbounty-regression-random-forest-missing-data-data-imputation-predicting-spendings-overall-and-spendings-for-subcategories/ | # #StackBounty: #regression #random-forest #missing-data #data-imputation Predicting spendings overall and spendings for subcategories
### Bounty: 50
I have a Dataset containing information about spendings of customers in various shops. There are 10 spending-variables related to some categories (like spendings on clothing, spendings on hardware, spendings on service) and one variable which is spendings_overall. Spendings_overall should be the sum of the 10 single subcategorie spendings.
There are some additional variables describing the customers (age, sex, customergroup, …)
The Problem: Participants hat the possibility to say “i don’t know the amount i’ve spent any more, but i know that i have spent something”.
So in some cases all 10 subcategorie-spendings and the overall spendings variables might be not NA. In some cases some of the variable might be NA and in some cases all of the variables could be NA.
My goal is to do Data-Imputation, but i have no idea how to deal with the constraint of spendings_subcategorie_1 + spendings_subcategorie 2 + … + spedings_subcategorie_10 = spendings_overall.
Usually i would try to hit the missings values with missForest, but i don’t think, that there is any possibility to include the constraint i need (or at least i have no idea how to do so).
So i would like to ask which approaches i could try for the given problem.
Any hints and tips are very welcome.
Unfortunately i cant share the original data, but the dataframe looks like this:
``````set.seed(123)
data_spendings = data.frame(matrix(rep(NA, 140), ncol = 14))
names(data_spendings) = c("age", "sex", "customergroup", "spendings_overall", paste0("spendings_subcat_", 1:10))
data_spendings$$age = round(rnorm(10, 50, 20)) # participants age data_spendings$$sex = sample(c("male", "female"), 10, replace = T) # participants gender
data_spendings$$customergroup = sample(c(1:5), 10, replace = T) # grouping of customers, depending on crs data data_spendings[5:14] = matrix(rnorm(100, 100, 20), ncol = 10) # spendings on 10 different subcategories (like spendings_clothing, spendings_hardware, spendings_service etc.) data_spendings$$spendings_overall = rowSums(data_spendings[5:14]) # overall spendings of the person (which should be the sum of the single subcategorie spendings)
# Problem: People had the option to say "i know i spent something, but i can't remember how much it was"
cant_remebers = rep(FALSE, NROW(data_spendings)*11)
cant_remebers[sample(1:length(cant_remebers), round(length(cant_remebers)) *0.3)] = TRUE # approximately 30% of the spendings cant be remembered
data_spendings[4:14][matrix(cant_remebers, ncol = 11, byrow = T)] = NA
data_spendings
``````
Get this bounty!!!
This site uses Akismet to reduce spam. Learn how your comment data is processed. | 2018-12-17 02:51:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5203194618225098, "perplexity": 2890.684995202839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828056.99/warc/CC-MAIN-20181217020710-20181217042710-00486.warc.gz"} |
https://derive-it.com/tag/unit-vectors/ | Tag: unit vectors
Writing Unit Vectors for a Cartesian Coordinate System in Terms of Unit Vectors for a Spherical Coordinate SystemWriting Unit Vectors for a Cartesian Coordinate System in Terms of Unit Vectors for a Spherical Coordinate System
Objective of this Post The | 2022-05-22 11:30:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104682207107544, "perplexity": 374.63953760919605}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00088.warc.gz"} |
https://hal.inria.fr/hal-01092563 | # Scaling limits of random graph models at criticality: Universality and the basin of attraction of the Erdős-Rényi random graph
Abstract : Over the last few years a wide array of random graph models have been postulated to understand properties of empirically observed networks. Most of these models come with a parameter t (usually related to edge density) and a (model dependent) critical time $t_c$ which specifies when a giant component emerges. There is evidence to support that for a wide class of models, under moment conditions, the nature of this emergence is universal and looks like the classical Erdős-Rényi random graph, in the sense of the critical scaling window and (a) the sizes of the components in this window (all maximal component sizes scaling like $n^{2/3}$) and (b) the structure of components (rescaled by $n^{−1/3}$) converge to random fractals related to the continuum random tree. Till date, (a) has been proven for a number of models using different techniques while (b) has been proven for only two models, the classical Erdős-Rényi random graph and the rank-1 inhomogeneous random graph. The aim of this paper is to develop a general program for proving such results. The program requires three main ingredients: (i) in the critical scaling window, components merge approximately like the multiplicative coalescent (ii) scaling exponents of susceptibility functions are the same as the Erdős-Rényi random graph and (iii) macroscopic averaging of expected distances between random points in the same component in the barely subcritical regime. We show that these apply to a number of fundamental random graph models including the configuration model, inhomogeneous random graphs modulated via a finite kernel and bounded size rules. Thus these models all belong to the domain of attraction of the classical Erdős-Rényi random graph. As a by product we also get results for component sizes at criticality for a general class of inhomogeneous random graphs.
Type de document :
Pré-publication, Document de travail
2014
Domaine :
https://hal.inria.fr/hal-01092563
Contributeur : Nicolas Broutin <>
Soumis le : mardi 9 décembre 2014 - 07:28:53
Dernière modification le : vendredi 25 mai 2018 - 12:02:03
### Identifiants
• HAL Id : hal-01092563, version 1
• ARXIV : 1411.3417
### Citation
Shankar Bhamidi, Nicolas Broutin, Sanchayan Sen, Xuan Wang. Scaling limits of random graph models at criticality: Universality and the basin of attraction of the Erdős-Rényi random graph. 2014. 〈hal-01092563〉
### Métriques
Consultations de la notice | 2018-09-21 20:10:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5220319628715515, "perplexity": 856.7592033767506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157503.43/warc/CC-MAIN-20180921190509-20180921210909-00035.warc.gz"} |
https://www.physicsforums.com/threads/irrational-demonstration.638708/ | # Irrational demonstration
1. Sep 25, 2012
### inverse
Demonstrate that $\sqrt{2}+\sqrt{3}$ is irrational.
Thanks
2. Sep 25, 2012
### Eval
This feels like homework. However, I will give a proof just to make sure I still can:
For purposes of contradiction, assume $\sqrt{2}+\sqrt{3}$ is rational. That is, there are some integers n and m (m≠0) such that $\sqrt{2}+\sqrt{3}=\frac{n}{m}$ and gcd(n,m)=1 (in other words, n and m are relatively prime). We have, then, that:
$\sqrt{2}+\sqrt{3}=\frac{n}{m}$
$m(\sqrt{2}+\sqrt{3})=n$
$m(\sqrt{2}+\sqrt{3})(\sqrt{2}-\sqrt{3})=n(\sqrt{2}-\sqrt{3})$
$m(4+9)=n(\sqrt{2}-\sqrt{3})$
$13m=n(\sqrt{2}-\sqrt{3})$
Since 13 is prime and m does not divide n, 13|n and $m|(\sqrt{2}-\sqrt{3})$. This implies, then, that $\sqrt{2}-\sqrt{3}$ is in fact an integer. Disregarding the fact that this is not the case, we still have, then, that $l=\sqrt{2}-\sqrt{3}$ for some l, but $\sqrt{2}+\sqrt{3}=l+2\sqrt{3}$. Since $\sqrt{12}$ is irrational and l is rational, their sum is irrational, further contradicting the original assumption.
To use the latter contradiction, you would need to further show that $\sqrt{12}$ is irrational, so I would stick with the former contradiction (that $\sqrt{2}-\sqrt{3}$ is an integer)
3. Sep 25, 2012
### inverse
I fail to understand how I show from the expression $\sqrt{2}-\sqrt{3}$that must comply something for m and n is irrational.
4. Sep 25, 2012
### Norwegian
If Eval had done the first part correctly, he/she would have arrived at the otherwise immediate
√3 - √2 = m/n. Adding this equation to √3 + √2 = n/m, you obtain that √3 is rational, which is a contradiction.
Another way is just squaring both sides of √3 + √2 = n/m, giving √6 rational, and again a contradiction.
5. Sep 25, 2012
### micromass
Staff Emeritus
Moved to homework. Please do not give out full solutions.
6. Sep 25, 2012
### inverse
It is not to homwork, is to pass an exam.
Thanks
7. Sep 25, 2012
### micromass
Staff Emeritus
It is a homework-type question. It still belongs in homework and the homework rules apply.
8. Sep 25, 2012
### inverse
I have proposed a model for solving methodology see, because if I have a reference (Worked) can not practice more similar exercises.
9. Sep 26, 2012
### Eval
This post contains incorrect info (after the quote, of course)
To avoid misinforming the person asking the question, I would like to note that a conjugate is not an inverse. A conjugate of (a+b) is (a-b) whereas the inverse of (a+b) is (a-b)/(a2-b2). Applying this:
If √3 + √2 = n/m, then 1/(√3 + √2) = m/n. Then, multiplying the top and bottom of the lefthand side by its conjugate, we get (√3 - √2)/(92+22) = (√3 - √2)/13 = m/n, not √3 - √2 = m/n. Then you can add this to (√3 + √2)/13 = n/(13m) to get that (√3)/13 = m/n+n/13m = (13m2+n2)/(13nm), so √3 = (13m2+n2)/(nm) implying that √3 is the ratio of two integers, which is a contradiction. This is no more immediate than the proof I gave :P
Also, squaring both sides of √3 + √2 = n/m would imply that √6 is rational after several more steps and is probably the easiest method. However, in all the methods given by Norwegian, I would like to point out that you would further have to show that √3 and √6 are also irrational, respectively, so you will have some more work to do.
The easiest way to prove this would be to assume that they are rational (so the ration of two integers).
Last edited: Sep 26, 2012
10. Sep 26, 2012
### Norwegian
Hi Eval,
I invite you to reconsider the above. In particular, I want you to observe the elementary identity (√3 + √2)(√3 - √2)=1 (which by the way shows that conjugation equals inverse in this case).
11. Sep 26, 2012
### Eval
Ah, right, sorry. It is not exactly an identity, but yes, you are right. It is because I was still thinking (a+b) as opposed to (sqrt(a)+sqrt(b)). The identitiy there is a-b, which in thios case happens to be 1 (which is extremely useful).
Thanks for the catch, I'll edit my previous post so that I don't mislead people. Blame it on the lack of sleep I have had, my brain is failing me :/ | 2017-08-17 16:09:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8845322132110596, "perplexity": 1240.4744112976148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103579.21/warc/CC-MAIN-20170817151157-20170817171157-00038.warc.gz"} |
https://codegolf.stackexchange.com/questions/182496/relevant-part-for-a-badminton-serve | # Relevant Part for a Badminton Serve
## Introduction:
I saw there was only one other badminton related challenge right now. Since I play badminton myself (for the past 13 years now), I figured I'd add some badminton-related challenges. Here the second (first one can be found here):
## Challenge:
• A serve will always be done diagonally over the net.
• You must always serve after the line that's parallel and nearest to the net.
• The area in which you are allowed to serve differs depending on whether it's a single (1 vs 1) or double/mix (2 vs 2).
• For singles (1 vs 1), the blue area in the picture below is where you are allowed to serve. So this is including the part at the back, but excluding the parts at the side.
• For doubles/mix (2 vs 2), the green area in the picture below is where you are allowed to server. So this is excluding the part at the back, but including the parts at the side.
• You may not stand on the lines when serving. But the shuttle will still be inside if they land on top of a line.
Here the layout of a badminton field:
## Challenge rules:
Input:
You will be given two inputs:
• Something to indicate whether we're playing a single or double/mix (i.e. a boolean)
• Something to indicate which block you're serving from (i.e. [1,2,3,4] or ['A','B','C','D'] as used in the picture above).
Output:
Only the relevant lines for the current serve (including the net), including an F to indicate where you serve from, and multiple T to indicate where you will potentially serve to.
Although in reality you're allowed to serve from and to anywhere in the designated areas, we assume a person that will serve will always stands in the corner of the serve area closes to the middle of the net, which is where you'll place the F. And they will serve to any of the four corners of the area where they have to serve to, which is where you'll place the Ts.
As ASCII-art, the entire badminton field would be the following (the numbers are added so you don't have to count them yourself):
2 15 15 2
+--+---------------+---------------+--+
| | | | | 1
+--+---------------+---------------+--+
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | 9
| | | | |
| | | | |
| | | | |
| | | | |
+--+---------------+---------------+--+
| | | | | 2
| | | | |
O=====================================O 37 times '='
| | | | |
| | | | | 2
+--+---------------+---------------+--+
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | 9
| | | | |
| | | | |
| | | | |
| | | | |
+--+---------------+---------------+--+
| | | | | 1
+--+---------------+---------------+--+
Examples:
Here two examples for outputting only the relevant parts of the serve:
Input: Single and serve block A
Output:
T---------------T
| |
+---------------+
| |
| |
| |
| |
| |
| |
| |
| |
| |
T---------------T
| |
| |
O=====================================O
| |
| |
+---------------+
| F|
| |
| |
| |
| |
| |
| |
| |
| |
+---------------+
| |
+---------------+
As you can see, the F is added in the corner within the block, but the T are replacing the + in the ASCI-art output.
Input: Double and serve block C
Output:
+--+---------------+
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | F|
+--+---------------+
| | |
| | |
O=====================================O
| | |
| | |
T---------------+--T
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
T---------------+--T
## Challenge rules:
• Leading and trailing newlines are optional (including the leading and trailing two empty lines when the input is single). Trailing spaces are optional as well. Leading spaces are mandatory however.
• Any four reasonable distinct inputs to indicate which block we're serving from are allowed (for integers, stay within the [-999,999] range); as well as any two reasonable distinct inputs to indicate whether its a single or double/mix (please note this relevant forbidden loophole, though). Please state the I/O you've used in your answer!
• You are allowed to use a lowercase f and t (or mixed case) instead of F and T.
• You are allowed to return a list of lines or matrix of characters instead of returning or printing a single output-string.
## General rules:
• This is , so shortest answer in bytes wins.
Don't let code-golf languages discourage you from posting answers with non-codegolfing languages. Try to come up with an as short as possible answer for 'any' programming language.
• Standard rules apply for your answer with default I/O rules, so you are allowed to use STDIN/STDOUT, functions/method with the proper parameters and return-type, full programs. Your call.
• Default Loopholes are forbidden.
• Ah, badminton. The one game I’ve always wanted to play but never got around to because forgot/couldn’t find players to play with – Quintec Apr 1 at 11:59
• @Quintec Feel free to come visit our club in The Netherlands for a free evening ;p – Kevin Cruijssen Apr 1 at 12:20
• Deal, if you pay for my plane ticket and hotel? :) – Quintec Apr 1 at 13:13
• @Quintec If you win, I'll pay the flight ticket back, haha xD – Kevin Cruijssen Apr 1 at 13:21
• @MagicOctopusUrn Yes, there are some professional badminton players from The Netherlands. Not sure what rank they have on the world list tbh, I don't watch badminton that often (and usually it's only 5 minutes on the sport news if mentioned at all anyway.. all other time is wasted with soccer). And no, as top player you might barely make an income I think. Definitely not millions. – Kevin Cruijssen Apr 2 at 6:59
# Python 2, 285 284 bytes
R=str.replace
s,q=input()
A=' '*19
l='| '[s]+' |'+A[4:]+'|'+A
r=['T--+',' T'][s]+'-'*15+'T'+A
h=[r]+[l,R(r,*'T+')]*s+[l]*8+[l[:18]+'F'+'|'+A,r,l,l,'O'+'='*37+'O']
h+=[R(l[::-1],*'T+')for l in h[-2::-1]]
h[9+2*s]=R(h[9+2*s],*'F ')
for l in[l[::q%2*2-1]for l in h[::q/2*2-1]]:print l
Try it online!
Takes input as 0/1 (or False/True) for game type (Double/Single),
and 0-3 for serving block (0,1,2,3 = C,D,A,B)
• That was fast! Nice answer. – Kevin Cruijssen Apr 1 at 8:20
• @Yeah, I kinda tried it from sandbox last week :P – TFeld Apr 1 at 8:21
# Charcoal, 81 bytes
NθF⮌Eθ⁺¹⁶׳ιF✂541⊖θURι±×³Iκ×=¹⁸O⟲O↙⁴J¹±³FF²F²«J×ι±⁺¹²×³θ⁺²×⁻¹⁵׳θκT»F№ABη‖↑F№ACη‖
Try it online! Link is to verbose version of code. Takes the first input as 1 or 2 for singles or doubles, second input as one of ABCD as in the question. Explanation:
F⮌Eθ⁺¹⁶׳ιF✂541⊖θURι±×³Iκ
Loop over the relevant widths and the heights of the D court and draw the rectangles.
×=¹⁸O⟲O↙⁴
Draw the net and apply rotational symmetry to add the A court.
J¹±³F
Add the F to the D court.
F²F²«J×ι±⁺¹²×³θ⁺²×⁻¹⁵׳θκT»
Add the Ts to the relevant places in the A court.
F№ABη‖↑F№ACη‖
Reflect the output as necessary to serve from the correct court.
• @KevinCruijssen Sorry for overlooking that, should be fixed now, thanks. – Neil Apr 2 at 8:32
# JavaScript (ES7), 216 ... 205 201 199 bytes
Takes input as (block)(double), where block is either $$\-2\$$ (top right), $$\-1\$$ (bottom left), $$\1\$$ (bottom right) or $$\2\$$ (top left) and double is a Boolean value.
b=>d=>(g=x=>y<31?+-| =OTF
[X=x-19,Y=y-15,p=X*Y*b<0,q=Y>0^b&1,X*=X,Y*=Y,i=x*24%35>2|~16>>Y%62%6&2,x<39?Y?p*X|(d?Y:X-87)>169?3:i?X-1|Y-16|q?i:7:q*(d?X-87:Y)%169&&6:x%38?4:5:++y&&8]+g(-~x%40):'')(y=0)
Try it online!
Formatted version
### How?
We iterate from $$\y=0\$$ to $$\y=30\$$ and from from $$\x=0\$$ to $$\x=39\$$ for each value of $$\y\$$.
We first define $$\X=x-19\$$ and $$\Y=y-15\$$.
The variables p = X * Y * b < 0 and q = Y > 0 ^ b & 1 are used to determine what do draw in each quarter according to the block $$\b\$$.
From now on, both $$\X\$$ and $$\Y\$$ are squared in order to easily test absolute positions within each quarter of the field.
The expression x * 24 % 35 > 2 yields false if $$\x\$$ belongs to $$\\{0, 3, 19, 35, 38\}\$$ (the positions of the vertical lines) or true otherwise.
Try it online!
The expression ~16 >> Y % 62 % 6 & 2 yields $$\0\$$ if $$\y\$$ belongs to $$\\{0, 2, 12, 18, 28, 30\}\$$ (the positions of the horizontal lines, excluding the net) or $$\2\$$ otherwise.
Try it online!
The variable $$\i\$$ is defined as the result of a bitwise OR between the two values above, and is therefore interpreted as:
• 3: space
• 2: |
• 1: -
• 0: + or T
The expression (d ? Y : X - 87) > 169 is used to crop the field according to the game type $$\d\$$ (single or double). The similar expression (d ? X - 87 : Y) % 169 is used to draw the T's at the appropriate positions.
• *Opens TIO and starts verifying the output* Looks good; all eight outputs are correct, as expected. *Looks at actual code* Huh.. wth is going on¿.. :S Looking forward to that explanation later on, @Arnauld. An unexpected amount of arithmetic, ternary, and bitwise calculations for an ASCII-art challenge. xD – Kevin Cruijssen Apr 1 at 17:29
• @KevinCruijssen Actually, I wish my formulas were even more freaky so that I could get this under 200 bytes, which was my initial target. ;) But my approach is probably too much optimized towards drawing the full field and not enough towards taking the parameters into account at a reasonable byte cost. – Arnauld Apr 2 at 8:51
• Probably yeah, since I allowed any input-value in the range [-999,999] for the four distinct inputs, so perhaps you could somehow use that to your advantage to golf some bytes. It would mean to partially start over, which is perhaps not be worth the effort. Unfortunately I can't give you any golfing tips to help you under 200 bytes; I can only wish you good luck in your attempts. ;p – Kevin Cruijssen Apr 2 at 9:02
• @KevinCruijssen Done. :) – Arnauld Apr 2 at 10:17
# Jelly, 108 99 bytes
“¢¥Þ‘Ṭ+þ³ḤN+“¢¤€‘¤ṬḤ;Ø0¤×3R¤¦€³+0x39¤µ‘03³?‘;20¤¦€1,-2¦;565DWx“¢%¢‘¤;UṚ\$ị“|-+TO= ””F21¦€³Ḥ_⁵¤¦UṚƭ⁴¡
Try it online!
I’m sure this can be better golfed.
Dyadic link with left argument 0 or 1 for singles/doubles and right argument 0,1,2,3 for different serve quadrants. Returns a list of strings
Thanks to @KevinCruijssen for saving a byte!
• I don't know Jelly, so I probably say something stupid here, but with “|-+TO= ”“F”, can't the “F” be golfed to another type of string for single characters? In 05AB1E for example, there are builtins for strings of size 1 ('), 2 („), or 3 (…), so it could be 'F. Don't know if Jelly has something similar, or if you have another reason for it to be “|-+TO= ”“F”? – Kevin Cruijssen Apr 2 at 8:51
• @KevinCruijssen Thanks, and nice challenge. I don’t think so. There are two character literals (with ⁾), but not one. I could use a number 7 and add the F to the lookup, but it’s the same number of characters because of the need to follow the 7 with a 21 which therefore needs a space to separate the two. – Nick Kennedy Apr 2 at 13:13
• Well, as I said, I don't know Jelly. Thought it might have some builtins for 1 or 2-character strings as well, but if you say not I believe you. :) – Kevin Cruijssen Apr 2 at 13:15
• @KevinCruijssen I’m happy for someone else to jump in - still learning! – Nick Kennedy Apr 2 at 13:15
• @KevinCruijssen I completely missed that ” could be used for a single character literal - oops! Thanks for saving a byte. – Nick Kennedy Apr 10 at 21:53 | 2019-10-23 05:27:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4095498025417328, "perplexity": 1665.403698513041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829458.93/warc/CC-MAIN-20191023043257-20191023070757-00039.warc.gz"} |
https://proxies-free.com/tag/lvert/ | ## norms – What are the functions such that \$ lVert f + g rVert_p^p = lVert f rVert_p^p + lVert g rVert_p^p\$?
Let $$1 leq p leq 2$$. I am looking for a characterization of the couples $$(f,g)$$ of functions $$f,g in L_p(mathbb{R})$$ such that
$$lVert f + g rVert_p^p = lVert f rVert_p^p + lVert g rVert_p^p.$$
For $$p = 2$$, this relation is satisfied if and only if $$langle f, g rangle = 0$$. For $$p = 1$$, it has been shown in this post that the condition is equivalent to $$f g geq 0$$ almost everywhere.
For a general $$p$$, the relation is clearly satisfied as soon as the product $$fg=0$$ almost everywhere. Is this latter sufficient condition also necessary?
If not, then what if we reinforce the condition with
$$lVert alpha f + beta g rVert_p^p = lVert alpha f rVert_p^p + lVert beta g rVert_p^p$$
for any $$alpha, beta in mathbb{R}$$?
## formal languages – Using pumping lemma to prove that \$L = { a^ib^j mid lvert i – j rvert le 2 } \$ is irregular
Given the following language:
$$L = { a^ib^j mid lvert i – j rvert le 2 }$$
I am trying to prove that it is not regular. On the one hand my intuition tells me that the language is non-regular as there is no way of tracking $$a^{i’s}$$ and $$b^{j’s}$$.
However, when I try to prove that it is irregular using the pumping lemma I have trouble finding which word I should use to arrive at a contradiction.
Any suggestions?
## functional analysis – If \$f_nrightarrow f\$ in \$L^2\$ and \$lVert f_nrVert_inftyle C\$ then why is \$fin L^infty\$?
This exercise from the $$L^p$$-theory section of my measure theory course is giving me some trouble.
Let $$(f_n)_nsubset L^2cap L^infty$$ be a sequence such that $$f_nrightarrow fin L^2$$ in $$L^2$$ and $$lVert f_nrVert_inftyle C$$ for all $$n$$. I want to show that
1. $$fin L^infty$$ and
2. $$f_nrightarrow f$$ in $$L^p$$ for any $$2le p
My most promising attempt at 1. was
$$lVert frVert_infty=lVert f-f_n+f_nrVert_inftyle lVert f-f_nrVert_infty+C,$$
but how can we bound $$lVert f-f_nrVert_infty$$?
I’m having similar problems with 2., except of course there we want to bound $$lVert f-f_nrVert_p$$.
## If \$d\$ is a metric in \$X\$ and \$A subset X\$, does \$lvert d(x,A) – d(y,A) rvert le d(x,y) phantom{5} forall x, y in X\$?
I´m trying to prove that if $$d$$ is a metric in $$X$$ and $$A subset X$$, then:
$$lvert d(x,A) – d(y,A) rvert le d(x,y), phantom{5} forall x, y in X$$
The question seems very simple but I´m having problems to solve it. Any suggestions?
## complex analysis – Analytic continuation of a function on the right half-plane to a region enclosing the circle \${ lvert z rvert = 3}\$
I am trying to solve the following question:
Show that there exists an analytic function $$f$$ in the open right half-plane such that $$(f(z))^2 + 2f(z) equiv z^2$$. Show that your function $$f$$ can be continued analytically to a region
containing the set $${z in mathbb{C} , lvert z rvert = 3}$$.
I can solve the first part. Since $$z^2+1$$ is a non-vanishing holomorphic function in the open right half-plane, which is simply connected, the function admits an analytic square root $$h(z)$$ in this region; it then suffices to choose $$h(z)-1$$ as our required function $$f(z)$$.
I don’t know how to proceed with the analytic continuation part though. Any suggestions?
## Functional analysis – Uniform binding to \$ lVert chi _ { {u_n = 0 }} rVert_ {W ^ {s, p} ( Omega)} \$ for a limited sequence \$ u_n \$ in \$ H ^ 1_0 ( Omega) \$?
Suppose I have a sequence $$u_n to u$$ in the $$H ^ 1_0 ( Omega)$$ on a smooth and limited domain. For some $$p> 1$$ and $$s in (0, frac 12)$$it is possible to estimate the norm of the characteristic function of the zero level set $$u_n$$, $$lVert chi _ { {u_n = 0 }} rVert_ {W ^ {s, p} ( Omega)}$$
in relation to norms of $$u_n$$? In particular, I am looking for a uniform that is bound for the expression above.
We know that it belongs, for example $$W ^ { epsilon, 2} ( Omega)$$ to the $$epsilon < frac 12$$I just want to know if it can be consistently limited.
## Why is \$ lvert e ^ {i * Im (s) * log (n)} rvert = 1 \$
Where $$n in mathbb {N}$$ and $$s in mathbb {C}$$,
## For which \$ alpha geq 1 \$ is \$ left lVert x right rVert ^ alpha \$ differentiable?
To let $$left lVert cdot right rVert_ infty$$ to be a norm $$mathbb {R} ^ n$$,
How can you find out for whom? $$alpha geq 1$$ the picture $$f$$ With $$f (x): = left lVert x right rVert ^ alpha$$ is completely differentiable in $$0$$?
## Why are \$(lVert .rVert_p) \$-norms equivalent in infinite direct sum?
If for i$$in mathbb{N}$$ the space $$X_i$$ is a Banach space, Why are the spaces
$$(oplus _{i=1}^{infty} X_i)_{lVert . rVert_p}$$ for
$$1leq p$$ equivalent? | 2021-07-24 20:32:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 61, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9078060984611511, "perplexity": 606.4000064898162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150308.48/warc/CC-MAIN-20210724191957-20210724221957-00102.warc.gz"} |
http://legisquebec.gouv.qc.ca/en/showversion/cr/T-0.1,%20r.%202?code=se:350_51_1r3&pointInTime=20201120 | ### T-0.1, r. 2 - Regulation respecting the Québec sales tax
350.51.1R3. Where the person referred to in the first paragraph of section 350.51.1 of the Act is a registrant and makes a supply in connection with a group event pursuant to a written agreement relating to the supply, the prescribed information is the following:
(1) the information required under subparagraphs 4, 5, 7 and 8 of the first paragraph of section 350.51R7.2;
(2) a unique reference number entered on the written agreement by the person;
(3) the estimated value of the consideration payable in respect of the supply;
(4) the date or dates of the group event;
(5) the estimated maximum number of persons attending the event;
(6) a row of 42 equal signs (=) immediately preceding the information required under subparagraphs 7 to 12;
(7) mention that the event is a group event;
(8) the information required under subparagraphs 13 and 14 of the first paragraph of section 350.51R5;
(9) the information required under subparagraphs 9 and 10 of the first paragraph of section 350.51.1R2;
(10) the information required under paragraphs 1 and 2 of section 350.51.1R1;
(11) the information required under subparagraph 12 of the first paragraph of section 350.51.1R2; and
(12) a row of 42 equal signs (=) immediately following the information required under subparagraphs 6 to 11.
The information required under subparagraphs 6 to 12 of the first paragraph are generated in that order by the device referred to in section 350.52.1 of the Act.
O.C. 586-2015, s. 7. | 2021-01-24 07:18:04 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.934938371181488, "perplexity": 2790.4475175932894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703547333.68/warc/CC-MAIN-20210124044618-20210124074618-00104.warc.gz"} |