http://math.stackexchange.com/questions/158594/my-first-course-in-algebraic-geometry-two-simple-questions
# My first course in algebraic geometry: two simple questions

I'm attending my first course in algebraic geometry, and my professor has chosen an approach which is a middle way between basic algebraic geometry done in $\mathbb A^n_k$ and the approach with schemes, substantially as in Milne's notes. I have some simple questions for you, and I hope that the answers will not involve scheme theory:

1) Let $(X,\mathcal O_X)$ be an affine variety, so it is isomorphic as a ringed space to $(V,\mathcal O_V)$, where $V$ is an affine algebraic set and $\mathcal O_V$ is the sheaf of regular functions. When one says "take the covering of $X$ with standard open sets", does it mean that we consider the covering of $X$ by those open sets of $X$ which are homeomorphic to the standard open sets $D(f)\subseteq V$?

2) In class we defined a quasi-coherent sheaf on $(V,\mathcal O_V)$ (we are in $\mathbb A^n_k$) as the sheaf $\widetilde M$ uniquely associated to the assignment $\widetilde M(D(f))=M_f$, where $M$ is a $\Gamma(V,\mathcal O_V)$-module. When we talk about a quasi-coherent sheaf defined on an abstract affine variety $(X,\mathcal O_X)$, so isomorphic to $(V,\mathcal O_V)$, do we mean a sheaf $\mathcal F$ on $X$ together with an isomorphism $\mathcal F\cong f_{\ast}\widetilde M$ (where $f$ is the isomorphism between $V$ and $X$)?

- (1) No. It is not just homeomorphic but identified with the corresponding standard open set in $V$ by a fixed isomorphism $X \cong V$. (2) Yes. You can check that this notion is independent of the choice of $V$ and of the choice of isomorphism $X \cong V$. – Zhen Lin Jun 15 '12 at 10:59

1) Standard open sets are defined for every locally ringed space. If $f \in \Gamma(X,\mathcal{O}_X)$, then $X_f$ (sometimes also called $D(f)$) is by definition the set of all $x \in X$ such that $f_x \notin \mathfrak{m}_x$, where $\mathfrak{m}_x$ is the maximal ideal of the local ring $\mathcal{O}_{X,x}$. Equivalently, $f(x) \neq 0$ in the residue field $k(x) = \mathcal{O}_{X,x}/\mathfrak{m}_x$. This is also the reason why $X_f$ is often called the "locus where $f$ doesn't vanish" or "where $f$ is invertible". It is an easy exercise to show that $X_f$ is in fact open, and that we have the standard identities $X_1 = X$ and $X_f \cap X_g = X_{fg}$. When $X$ is an affine algebraic set, this coincides with the locus where $f$ doesn't vanish in the usual sense (and this is probably what you meant by $D(f) \subseteq V$).

2) Again, quasi-coherent sheaves make sense on arbitrary ringed spaces. And it is a very bad idea to give definitions only for algebraic sets $\subseteq \mathbb{A}^n$ and try to extend them via chosen isomorphisms! You should work with intrinsic geometric objects instead, and (locally) ringed spaces provide a nice framework for that. So let's use this language. A quasi-coherent module on a ringed space $X$ is just a module $M$ (i.e. what most people call a sheaf of modules) on $X$ such that locally on $X$ there is a presentation $\mathcal{O}^{\oplus I} \to \mathcal{O}^{\oplus J} \to M \to 0$. To be more precise: there is an open covering $X = \cup_i X_i$ such that for each $i$ there is an exact sequence (which, of course, does not belong to the data) $\mathcal{O}|_{X_i}^{\oplus I} \to \mathcal{O}|_{X_i}^{\oplus J} \to M|_{X_i} \to 0$. Quasi-coherent modules constitute a (tensor) category $\mathrm{Qcoh}(X)$, which is by the way a very interesting and deep invariant of $X$, especially when $X$ is a variety.
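For concreteness, a standard example of such a presentation: on $X = \mathbb{A}^1_k$, multiplication by the coordinate $x$ defines a morphism $\mathcal{O}_X \to \mathcal{O}_X$, and its cokernel $\mathcal{F}$, i.e.
$$\mathcal{O}_X \xrightarrow{\ \cdot x\ } \mathcal{O}_X \to \mathcal{F} \to 0,$$
is quasi-coherent by definition. It is the skyscraper sheaf at the origin with stalk $k$: on the standard open set $X_x$ the map $\cdot x$ is invertible, so $\mathcal{F}|_{X_x} = 0$, while the stalk at $0$ is $k[x]_{(x)}/(x) \cong k$.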
How do we construct quasi-coherent modules on a ringed space $X$? Well, pick a $\Gamma(X,\mathcal{O}_X)$-module $M$. Then I claim that we can construct a quasi-coherent module $\tilde{M}$ on $X$ as follows: choose a presentation $\Gamma(X,\mathcal{O}_X)^{\oplus I} \to \Gamma(X,\mathcal{O}_X)^{\oplus J} \to M \to 0.$ Represent the morphism on the left as a "relation matrix" consisting of elements of $\Gamma(X,\mathcal{O}_X)$. Now, every such global section corresponds to a homomorphism $\mathcal{O}_X \to \mathcal{O}_X$. Thus we can produce a matrix consisting of endomorphisms of $\mathcal{O}_X$, and thus a morphism $\mathcal{O}_X^{\oplus I} \to \mathcal{O}_X^{\oplus J}$. Define $\tilde{M}$ to be the cokernel. By definition, this is quasi-coherent! This already produces lots of examples; in fact all of them if $X$ is an affine variety, but only a few if $X$ is projective.

To give a more concise definition which does not depend on the presentation: just define $\tilde{M}$ to be the sheaf associated to the presheaf $U \mapsto \Gamma(U,\mathcal{O}_X) \otimes_{\Gamma(X,\mathcal{O}_X)} M$. This definition easily implies a more conceptual characterization of the functor $M \mapsto \tilde{M}$ from $\Gamma(X,\mathcal{O}_X)$-modules to quasi-coherent modules on $X$: it is left adjoint to the global section functor! In fact, everything you want to know about $\tilde{M}$ already follows from this adjunction. You may forget about the details of the construction; you just have to remember $\hom(\tilde{M},F) \cong \hom(M,\Gamma(X,F))$, which actually holds for every module $F$ on $X$.

So what happens when $X$ is an affine variety? Then the sets $X_f$ constitute a basis for the topology of $X$, and we have $\Gamma(X_f,\mathcal{O}_X) = \Gamma(X,\mathcal{O}_X)_f$. Namely, this is well known if $X \subseteq \mathbb{A}^n$, and it then generalizes immediately to affine varieties, which are isomorphic as ringed spaces to such concrete varieties. Let $M$ be a $\Gamma(X,\mathcal{O}_X)$-module. Now it turns out that the presheaf defined above is actually a sheaf! This comes down to the following: if $f_1,\dotsc,f_n \in \Gamma(X,\mathcal{O}_X)$ generate the unit ideal (i.e. the corresponding sets $X_{f_i}$ cover $X$), then the canonical sequence $$0 \to M \to \prod_{i} M_{f_i} \to \prod_{i,j} M_{f_i f_j}$$ is exact. Everyone should do this proof themselves instead of looking it up in the standard sources, because I think it is quite enlightening, and in fact purely geometric if you think of $f_1,\dotsc,f_n$ as a partition of unity.

Anyway, this tells us that we don't need associated sheaves in the definition of $\tilde{M}$. Thus, by definition, on the open subset $X_f$ it is given by $$\Gamma(X_f,\tilde{M}) = \Gamma(X_f,\mathcal{O}_X) \otimes_{\Gamma(X,\mathcal{O}_X)} M = \Gamma(X,\mathcal{O}_X)_f \otimes_{\Gamma(X,\mathcal{O}_X)} M = M_f.$$

So this describes some quasi-coherent sheaves on affine varieties. In fact, one can show that every quasi-coherent sheaf on an affine variety $X$ has the form $\tilde{M}$. Namely, one shows that for every such sheaf $F$ the canonical counit morphism of the adjunction mentioned above, $\widetilde{\Gamma(X,F)} \to F$, is an isomorphism. Again, this is a very nice exercise. After some thought you will see that this is just another application of the exact sequence above.
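As an illustration of the partition-of-unity remark, here is the injectivity step of that exact sequence (a sketch of the standard argument): suppose $m \in M$ maps to $0$ in every $M_{f_i}$, i.e. $f_i^N m = 0$ for some exponent $N$ (there are finitely many $i$, so a single $N$ suffices). Since $f_1,\dotsc,f_n$ generate the unit ideal, so do the powers $f_1^N,\dotsc,f_n^N$, and we may write $1 = \sum_i a_i f_i^N$. Then
$$m = 1 \cdot m = \sum_i a_i f_i^N m = 0.$$
Exactness in the middle is a similar, slightly longer computation with common denominators.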
So, putting everything together, this provides, for every affine variety, an equivalence of categories $$\mathrm{Qcoh}(X) \cong \mathrm{Mod}(\Gamma(X,\mathcal{O}_X)).$$ By the way, if you define $\tilde{M}$ on an affine variety by $\Gamma(X_f,\tilde{M}) = M_f$ and extend it via projective limits to arbitrary open subsets, then you would still want to know that this is a sheaf. And again this comes down to the exact sequence above; you cannot get around it. I don't like this approach because it is somewhat clumsy: you don't get the general picture, and it doesn't produce a formula for $\tilde{M}(U)$ for arbitrary $U$. Therefore I've chosen the rather abstract but hopefully concise approach above. Of course nothing here is new; you can find all of this in EGA I, the Stacks Project, etc.

- Can you give me a hint to prove that $X_f$ is open in $X$? – Galoisfan Jun 26 '12 at 14:14

- Hint: $x \in X_f$ iff $f_x$ has an inverse in the stalk $\mathcal{O}_{X,x}$. – Martin Brandenburg Jun 26 '12 at 15:08
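Expanding that hint into a full argument (standard, but spelled out): if $x \in X_f$, then $f_x$ is invertible in $\mathcal{O}_{X,x}$, and the inverse germ is represented by some $g \in \mathcal{O}_X(U)$ on an open neighborhood $U$ of $x$. The sections $fg$ and $1$ of $\mathcal{O}_X(U)$ have the same germ at $x$, so they agree on a smaller neighborhood $U' \subseteq U$. But then $f$ has an inverse in $\mathcal{O}_{X,y}$ for every $y \in U'$, i.e. $U' \subseteq X_f$. Hence $X_f$ is open.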
https://physics.meta.stackexchange.com/questions/7196/maybe-anonymous-users-should-not-be-allowed-edits
# Maybe anonymous users should not be allowed edits?

This suggested edit erased the first line in my answer and replaced it with

I think that gravity has an effect on any particle in our universe. Even the photons are attracted by the back holes (despite the fact that some physicists consider photons as having no mass !) But in quantic physics, particles may adopt one these two behaviours: being "real objects" i.e with a mass or being a wave! So what is the mass ? Does the mass really exist ??

Edit suggested by anonymous user.

It was approved by one reviewer, and I just managed to catch it and reject it. Why are anonymous users allowed to mess things up? At least if an anonymous user does the editing, the person who wrote the entry should be primarily responsible for approval or rejection.

I am making a feature request: a veto for the original author of the question/answer edited by an anonymous user. In this case there had already been one approval, and the edit would have made the answer really stupid.

Edit in response to comment by @user36790. Rolling back is an option that corrects things, IF the original author reads the site often. We have many good answers by very good physicists who have stopped looking in every day, or even every month, and some not at all. A distortion of their answers might go unnoticed, get approval by ignoramuses, and destroy the integrity of the site. Moderators cannot be reading everything, as postings go through the first page very fast.

• Possible duplicate of Why do we allow anonymous users to suggest edits? – ACuriousMind Nov 3 '15 at 16:28
• As that other question is a discussion and this is a feature request, I'm inclined to think they're not quite duplicates. – user10851 Nov 3 '15 at 16:37
• Let me reiterate that if anyone sees an edit being approved which really should not be, please bring it to the mods' attention via a flag. (I'm already looking into this one.) – David Z Nov 3 '15 at 16:39
• Effectively, it seems like a duplicate, since a feature request usually belongs on the main meta.SE site rather than here. – Qmechanic Nov 3 '15 at 17:00
• @Qmechanic I do not contribute there, and the tag is here too. I suppose the moderators can transfer the feature request. In any case the answers in the duplicate do not address the same problem. I am not only asking but also suggesting a change. If, when an anonymous user edits, only the author is alerted, this would solve the problem. – anna v Nov 3 '15 at 17:10
• "If, when an anonymous user edits, only the author is alerted, this would solve the problem" sounds like an interesting fix; I'd be curious to see what the SE team thinks of it. – Kyle Kanos Nov 3 '15 at 17:25
• Also, it seems the proposed dupe is asking if anonymous edits are useful, whereas this one is asking for a new way to deal with those edits. – Kyle Kanos Nov 3 '15 at 17:27
• I've seen this problem quite often. Don't know who accepted the edit; probably he was a bit reluctant or probably he mistakenly pressed the accept button. But even if the edit has been accepted, you can roll back to your previous unedited version. Anonymous users rarely & barely suggest a good edit, after all. – user36790 Nov 4 '15 at 3:31
• "Maybe anonymous users should not be allowed edits", or maybe we could do something to improve the reviewing standards! That can be a corollary, if I'm right? – 299792458 Nov 4 '15 at 12:55
• I don't think users should be allowed to edit somebody else's answer. I wouldn't dream of doing so.
– John Duffield Nov 8 '15 at 14:40

Single cases and anecdotal evidence are tricky bases to work with here. We notice the ugly suggested edits, but who's to say there isn't a large body of reasonable suggestions from drive-by anonymous users? To sort this out, I've written two SEDE queries.

The obvious observation is that anonymous edits do have a much higher rejection rate than edit suggestions from registered users - 56% approved for anonymous suggestions vs. 87% for registered users, over the latest 500 edits of each. They are also much less common: as Frequency of anonymous suggested edits shows, about one in every 10 suggested edits in the entire site history is an anonymous suggestion.

I feel that overall this is an acceptable state of affairs, and that the ~45% of bad apples in the anonymous-suggestions apple cart is well handled by the combination of a review queue and a notification for the OP. The thing to do is for people to dig into the actual edit suggestions by anonymous users and form an opinion on whether they're generally salvageable or mostly just rubbish that occasionally passes the bar, and whether there are hidden gems in there that make the whole pile worth it. (I'll leave that bit to others - writing SQL is more fun than looking at a hundred edit reviews.)

I will also note that in this particular case the problem is not that the suggested edit was bad, but that the review was bad: this was one bad edit that might have got through the review queue when it shouldn't have. (It's speculation whether someone else would have let it through had anna not intervened; I tend to think it wouldn't have got through, with other reviewers stepping in and the review queue working like it should, but it's a moot question now.) This particular bad edit happened to come from an anonymous user, but I think that's a red herring unless one can rule out the existence of similar bad-edits-that-slipped-the-net from registered users. Unfortunately those are really hard to find, and getting reliable statistics will be even harder. However, I don't see why the review queue should function less well for anonymous suggestions than for registered-user ones - if anything, the anonymous source ought to make reviewers more wary of the edit.

Let's make this decision based on the actual overall quality of the edit submissions we get, rather than anecdotal impressions from a few ugly ones.

• You have not addressed my latest edit: what if some anonymous "crackpot" subtly changes posts of former users who will never look at the alert? These are very good posts from good physicists, enriching this site, and they would become useless as a physics database. – anna v Nov 6 '15 at 15:33
• @annav I see that as covered by the second-to-last paragraph. I think the problem is bad reviews and not anonymous edits (unless one can argue convincingly that anonymous edits are more likely to get bad reviews, but I just don't see why this would be the case). If we have a bad-reviews problem, then that's what we need to address (and there's a huge toolbox, developed for StackOverflow, to do just that). – Emilio Pisanty Nov 6 '15 at 15:42
• On the other hand, I think that the first response should not be to completely ban anonymous edits, but to increase the number of required reviews to three or four approvals.
This is sustainable under current conditions (that queue is never under stress, and anonymous edits are ~10% of the total), it can probably be justified well (given the much lower accept rate on anonymous edits, plus your valid concerns about valuable users no longer visiting the site), it exponentially decreases the chance of bad edits getting approved, and it can probably be implemented easily. Why not ask for that? – Emilio Pisanty Nov 6 '15 at 15:47
• Well, maybe for anonymous edits only; I think it works fine with registered users. On the other hand, why would it not be just as easy to implement that, for anonymous edits, the author should do the accept? In this way, if an old good answer is changed, nothing will happen to the answer. If it were a bad answer needing a correction, a registered user would probably have intervened. – anna v Nov 6 '15 at 16:56
• Again, I still don't see why anonymous edits would be more susceptible to a bad review, and you haven't addressed that. Please step back and think in terms of the broader picture: you got a bad anonymous edit, but you're not necessarily seeing the bad-edit-plus-bad-review cases from registered users if they are there. The problem isn't bad edits, it's bad reviews. – Emilio Pisanty Nov 6 '15 at 16:59
• From a practical standpoint, this sort of feature request is much more likely to get traction with the SE team if it's simply altering a few lines of code (demanding three reviews instead of two if the edit is anonymous), which works within the current review mechanisms and doesn't break functionality (the possibility for any visitor to improve this answer), as against something that does break current functionality. I think the team will be quite likely to roll with it if we ask for it - but you're not (yet) actually asking for that. – Emilio Pisanty Nov 6 '15 at 17:07
• The probability that somebody with his/her own theory will try to push it on a site that is becoming a database (often in Google searches a physics.SE answer comes up) is much higher in anonymity. No repercussions. Whereas a registered user has given some details and can be tagged. That is why I think the anonymity is insidious. – anna v Nov 6 '15 at 19:02
• Again, I still don't see why anonymous edits would be more susceptible to a bad review, and you haven't addressed that. We have mechanisms that work well for this, and it's at most a matter of fine-tuning them. – Emilio Pisanty Nov 6 '15 at 21:23
• The review is a gate. Let us suppose that they are equally susceptible to pass the gate. I believe that the percentage of bad edits that will pass review will be much higher for anonymous edits, because of the 56% vs. 87% that you found. Anyway, I am just raising a question. Maybe nothing will ever happen. It is all a matter of probabilities. One can wait until it happens and is caught to do something about it. – anna v Nov 7 '15 at 4:04
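A quick numerical sketch of the "exponentially decreases" point made in the comments (my own illustration; it assumes reviewers act independently, each approving a bad edit with some fixed probability p, which is of course a simplification, and p is not measured from the SEDE data):

```python
# Chance that a bad suggested edit collects all k required approvals,
# assuming independent reviewers who each approve it with probability p.
def bad_edit_pass_rate(p: float, k: int) -> float:
    return p ** k

# Illustrative numbers only: with p = 0.3, going from 2 to 3 required
# approvals cuts the pass rate from 9% to 2.7%.
for k in (2, 3, 4):
    print(f"{k} required approvals: {bad_edit_pass_rate(0.3, k):.1%}")
```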
https://cob.silverchair.com/jeb/article/211/1/35/17430/Development-partly-determines-the-aerobic
Previous studies suggest that genetic factors and acclimation can account for differences in aerobic performance (V̇O2max) between high and low altitude populations of small mammals. However, it remains unclear to what extent development at different oxygen partial pressures (PO2) can affect aerobic performance during adulthood. Here we compared the effects of development at contrasting altitudes versus effects of acclimation during adulthood on V̇O2max. Two groups of deer mice were born and raised for 5 weeks at one of two altitudes (340 and 3800 m above sea level). Then, a subset of each group was acclimated to the opposite altitude for 8 weeks. We measured V̇O2max for each individual in hypoxia (PO2=13.5 kPa, 14% O2 at 3800 m) and normoxia (PO2=20.4 kPa, 21% O2 at 340 m) to control for PO2 effects. At 5 weeks of age, high altitude born mice attained significantly higher V̇O2max than low altitude born mice (37.1% higher in hypoxia and 72.1% higher in normoxia). Subsequently, deer mice acclimated for 8 weeks to high altitude had significantly higher V̇O2max regardless of their birth site (21.0% and 72.9% difference in hypoxia and normoxia, respectively). A significant development × acclimation site interaction comparing V̇O2max in hypoxia and normoxia at 13 weeks of age suggests that acclimation effects depend on development altitude. Thus, reversible plasticity during adulthood cannot fully compensate for developmental effects on aerobic performance. We also found that differences in aerobic performance in previous studies may have been underestimated if animals from contrasting altitudes were measured at different PO2.

Physiological ecologists have always been interested in how organisms accommodate extreme environmental conditions such as cold (Almeida-Val et al., 1994; Guderley and St-Pierre, 1996), high salinity (Tay and Garside, 1975) and hypoxia (Hochachka, 1988; Singer, 1999; West, 1991). Recently, much attention has focused on how humans and other mammals cope with the rigors of montane and other high altitude environments (e.g. Hochachka et al., 1982; Curran et al., 1998; Hammond et al., 1999; Hammond et al., 2001; Hammond et al., 2004; Hayes, 1989a; Hayes, 1989b; Hayes and Shonkwiler, 1996; Hayes and O'Connor, 1999; Rezende et al., 2001; Rezende et al., 2005; Sheafor, 2003). In particular, small mammals living at high altitude face several important challenges because of their size. Energy demands increase because they are active at low ambient temperatures, but hypoxia limits aerobic metabolism, so it is harder to fuel these needs. Additionally, primary productivity is often low at high altitude, so there are fewer resources to fuel higher demands (Hammond et al., 2004).

Studies comparing populations from high and low altitudes have reported a variety of physiological differences that were initially interpreted as adaptations (in a Darwinian sense) to different altitudes and oxygen partial pressures (PO2). However, subsequent studies have shown that chronic exposure to high altitude (i.e. acclimation or acclimatization) can result in important physiological responses (e.g. McClelland et al., 1998; McClelland et al., 2001), suggesting that differences between populations might be partly determined by phenotypic plasticity. Although some studies have attempted to control for acclimatory effects when comparing aerobic performance across populations or species inhabiting different altitudes (e.g.
Rezende et al., 2001; Hammond et al., 2001), the role of development as a source of variation has not been previously addressed (but see Chappell et al., 2007). Disturbances of the developmental process, whether genetic, environmental or phylogenetic, may not be reversible (developmental canalization) and can result in significant variability within a species (Spicer and Gaston, 1999; Dzialowski et al., 2002; Spicer and Burggren, 2003). Therefore, we tested whether developmental effects can contribute to variation in aerobic metabolism during adulthood. We focused on maximum aerobic performance during exercise (V̇O2max) because of the large role aerobic exercise plays in an organism's everyday life, particularly at high altitude (Pough, 1980; Hayes and Shonkwiler, 1996).

We used the North American deer mouse (Peromyscus maniculatus Le Conte) as a study model for several reasons. Deer mice inhabit a wide altitudinal range, from below sea level to above 4000 m (Hock, 1961), and are North America's most widespread mammal. They also display an array of polymorphisms in the α-chains of their hemoglobin that are geographically correlated with altitude (Snyder, 1981; Snyder et al., 1988), influence blood oxygen affinity, and differentially affect aerobic metabolism at low and high altitude (Chappell and Snyder, 1984; Chappell et al., 1988). Finally, field studies at high altitude suggest that natural selection favors high aerobic capacity during thermogenesis in P. maniculatus (Hayes and O'Connor, 1999), hence our results are certainly relevant in an evolutionary context.

Fig. 1. Experimental design employed in this study. Testing conditions are described within each column; sample sizes and results are summarized in Table 1.

We quantified whether effects of development at high altitude exist and to what extent variation in aerobic performance might be associated with developmental canalization. To this end, we estimated and compared effects associated with two sources of phenotypic plasticity: developmental plasticity (during in utero development and growth) and reversible plasticity during adulthood. If developmental canalization and physiological heterokairy [changes in the timing and/or onset of a particular physiological system during development (Blacker et al., 2004)] are partly responsible for variation in aerobic performance during adulthood, we expect that reversible acclimation cannot fully compensate for the effects of developmental acclimation.

### Experimental animals and design

For this study we used individuals of P. m. sonoriensis from a colony that is 4–7 generations removed from the wild (from a founder population trapped near Mt Barcroft, about 3800 m elevation, in eastern California). The breeding program in the colony was managed to maximize outcrossing and there was no intentional selection. To obtain two groups of mice born at different altitudes (high versus low), breeding pairs were established at the Barcroft Laboratory of the University of California's White Mountain Research Station (elevation 3800 m; 14% O2 or PO2=13.5 kPa) and at Riverside (elevation 340 m; 21% O2 or PO2=20.4 kPa). From these breeding pairs, we obtained 20 offspring born at low altitude and 19 born at high altitude. At 5 weeks of age, half of the offspring of each group were moved and acclimated to the opposite altitude for an additional 8 weeks.
Transportation between Riverside and Barcroft took approximately 6–7 h in an air-conditioned vehicle; the majority of the ride was over smooth highway and care was taken not to cause undue stress to the animals during this time. Therefore, we ultimately had four treatment groups (Fig. 1; Table 1): (1) low born/low measured; (2) low born/high measured; (3) high born/high measured; and (4) high born/low measured.

Table 1. Sample size, body mass and aerobic performance means (± s.e.m.) for each of the two groups at 5 weeks of age and the four groups at 13 weeks of age. See Fig. 1 for experimental design. Body mass is given in g, and V̇O2max is given in ml O2 min–1. Aerobic performance values are body mass corrected.

At 5 weeks of age:

| Birth site | N | Body mass | V̇O2max hypoxia | V̇O2max normoxia |
|---|---|---|---|---|
| Low | 20 | 18.03±0.74 | 3.31±0.11 | 3.98±0.28 |
| High | 19 | 17.38±0.50 | 4.54±0.11 | 6.85±0.28 |

At 13 weeks of age:

| Birth site | Acclimation site | N | Body mass | V̇O2max hypoxia | V̇O2max normoxia |
|---|---|---|---|---|---|
| Low | Low | 10 | 22.78±0.86 | 3.98±0.17 | 4.05±0.41 |
| Low | High | 10 | 21.86±0.86 | 5.32±0.17 | 8.64±0.40 |
| High | Low | 6 | 21.73±1.11 | 4.31±0.22 | 4.89±0.52 |
| High | High | 13 | 21.86±0.75 | 4.69±0.16 | 6.71±0.37 |

All mice were housed individually in plastic shoebox cages (27 cm × 21 cm × 14 cm) at 23–25°C. They were given ad lib food (23% protein, 4.5% fat, 6% fiber, 8% ash and 2.5% minerals), water and bedding. In the lab, they were maintained on a photoperiod cycle that approximates the natural cycle at Barcroft during the summer months (i.e. ∼14 h:10 h L:D in mid-July).

### V̇O2max during exercise

Maximum V̇O2 was determined using open flow respirometry by running mice in an enclosed motorized treadmill. The treadmill's working section (the portion of the total enclosed gas volume that the mouse was constrained to) was 6 cm wide, 7 cm high and 13 cm long, and the enclosed total gas volume was approximately 970 ml. We used a flow rate of approximately 2100 ml min–1, standard temperature and pressure (stp) of dry air. Gas flow was regulated with Tylan and Applied Materials mass flow controllers (Santa Clarita, CA, USA) upstream from the treadmill. Approximately 100 ml min–1 of excurrent gas was scrubbed of CO2 and water vapor (using soda lime and Drierite, respectively) and routed through the oxygen sensors. Changes in O2 concentration were measured with Ametek/Applied Electrochemistry S-3A analyzers (Naperville, IL, USA) and recorded on a Macintosh computer with a National Instruments A–D converter (Austin, TX, USA) using custom-made data acquisition software (http://warthog.ucr.edu). We calculated V̇O2 (in ml min–1) as:

$$\dot{V}_{\mathrm{O}_2} = \dot{V} \times (F\mathrm{I}_{\mathrm{O}_2} - F\mathrm{E}_{\mathrm{O}_2}) / (1 - F\mathrm{E}_{\mathrm{O}_2}),$$

where $\dot{V}$ is the flow rate (ml min–1; stp) and $F\mathrm{I}_{\mathrm{O}_2}$ and $F\mathrm{E}_{\mathrm{O}_2}$ are the fractional oxygen concentrations of incurrent and excurrent gases, respectively.

To begin a test, after measuring body mass, we placed a mouse in the treadmill's enclosed chamber and allowed a 3–5 min adjustment period before starting the tread at low speed (approximately 0.1 m s–1).
We continued to increase speed in increments of 0.1 m s–1 every 30 s until the mouse could either no longer maintain position on the tread or V̇O2 did not increase with increasing speed, at which time the tread was stopped. At the end of the exercise we confirmed that V̇O2 fell rapidly; all mice showed signs of exhaustion but none were injured. Tests lasted a total of 6–20 min. Reference readings of incurrent gas were obtained at the beginning and end of the trial. Due to the treadmill's relatively large volume, we applied the 'instantaneous' correction (Bartholomew et al., 1981) to compensate for mixing characteristics and to resolve short-term changes in metabolic rate. The effective volume of the treadmill respirometry chamber, calculated from washout curves, was 903 ml. We computed V̇O2max as the highest instantaneous V̇O2 averaged over continuous 1 min intervals (Chappell et al., 1995).

Measurements of aerobic performance for each individual were carried out at the end of the developmental period (5 weeks of age) and after acclimation (13 weeks of age), at two different PO2 to obtain comparable measurements simulating high and low altitudes. In Riverside, measurements were performed with ambient air (normoxia, PO2=20.4 kPa) and employing a gas mixture of 14% O2 and 86% N2 (hypoxia, PO2=13.5 kPa). Similar PO2 values were obtained in Barcroft employing ambient air (hypoxia) and a mixture of 32% O2 and 68% N2 (normoxia). Despite the mix appearing hyperoxic, these testing conditions approximate the 'normoxic' conditions encountered in Riverside. Ambient PO2 in Riverside (340 m above sea level) is ∼20.4 kPa, but at Barcroft (PO2 ∼13.5 kPa), a PO2 of 20.4 kPa can only be achieved by exposing the animal to a fractional O2 content of 0.32; in this instance, barometric pressure must be taken into account to know the true amount of oxygen available to the animal. From here on, we will refer to measurements made at 20.4 kPa as normoxic and measurements made at 13.5 kPa as hypoxic.

### Analyses and statistics

We initially assessed how body mass and aerobic performance of our mice changed with age, controlling for effects of developmental and acclimatory altitudes (below). This was carried out using repeated-measures ANOVA comparing body mass and aerobic performance obtained at 5 weeks versus 13 weeks of age. Because aerobic performance was measured at two different PO2, we performed separate repeated-measures ANOVAs for hypoxia and normoxia. In addition, we performed pairwise Pearson correlations between residuals controlling for development and acclimation site (and for mass differences in the case of V̇O2max) to determine whether body mass and aerobic performance were repeatable across ages and different PO2.

Subsequently, several analyses were performed to disentangle the effects of development and acclimation. To estimate developmental effects on aerobic performance at 5 weeks of age, we compared the aerobic performance of mice born at high versus low altitude with an analysis of covariance (ANCOVA), with birth altitude as the main effect and body mass as a covariate. Because mice were tested twice at different PO2, separate ANCOVAs were performed for measurements in hypoxia and normoxia. To determine whether responses to different PO2 varied as a function of birth site, we employed a repeated-measures ANOVA comparing the aerobic performance of each individual in hypoxia versus normoxia, with birth site as a between-subject factor.
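A minimal sketch of the V̇O2max computation described above (an illustration under stated assumptions, not the authors' code, which was custom-made; the variable names and the exact form of the mixing correction are mine):

```python
import numpy as np

def vo2(flow_stp, fio2, feo2):
    """V̇O2 (ml O2 min^-1) = V̇ x (FIO2 - FEO2) / (1 - FEO2), valid when
    excurrent gas is scrubbed of CO2 and water vapor before the O2 sensor."""
    return flow_stp * (fio2 - feo2) / (1.0 - feo2)

def instantaneous(vo2_trace, dt_s, v_eff_ml=903.0, flow_stp=2100.0):
    """One common form of the 'instantaneous' (effective-volume) correction:
    add (V_eff / flow) * dV̇O2/dt to undo chamber mixing. The paper's
    washout-derived effective volume was 903 ml, flow ~2100 ml min^-1."""
    dvdt = np.gradient(vo2_trace, dt_s / 60.0)  # derivative per minute
    return vo2_trace + (v_eff_ml / flow_stp) * dvdt

def vo2max(vo2_inst, dt_s):
    """Highest instantaneous V̇O2 averaged over continuous 1 min intervals."""
    n = max(1, int(round(60.0 / dt_s)))         # samples per minute
    return float(np.convolve(vo2_inst, np.ones(n) / n, mode="valid").max())

# PO2 bookkeeping consistent with the text: PO2 = FO2 x P_barometric. At
# Barcroft, ambient PO2 ~13.5 kPa with ~21% O2 implies P_bar ~64 kPa, so a
# 32% O2 mix gives 0.32 * 64 ~ 20.4 kPa, matching Riverside's ambient air.
```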
We then compared aerobic performance obtained at 13 weeks of age (5 weeks of development followed by 8 weeks of acclimation) for the same individuals, to partition the effects of development versus acclimation on V̇O2max. We employed an ANCOVA with both birth altitude and acclimation altitude as main effects and body mass as a covariate (analyses were performed separately for hypoxia and normoxia). To determine whether individuals within groups showed different responses to PO2 during aerobic performance measurements as a function of development and/or acclimation, we performed a second repeated-measures ANOVA with birth site and acclimation site as between-subject factors. Unless stated otherwise, F values are from these statistical tests and we used an alpha value of 0.05 for statistical significance.

### Repeatability across PO2 and 8 weeks of acclimation

To test whether a trait is repeatable is to ask (1) whether the individual changed over time and (2) whether that individual maintained its relative rank in the population (with regard to that trait) over time. Aerobic performance measured in normoxia and hypoxia were significantly correlated at both 5 and 13 weeks of age in models controlling for site of birth and/or acclimation altitude, with and without body mass as a covariate (Table 2). After correcting for site of birth and acclimation altitude, body mass was significantly repeatable between 5 and 13 weeks of age (r=0.71, P<0.0001). Raw V̇O2max measured in hypoxia, but not in normoxia, was significantly correlated after 8 weeks of acclimation (P=0.001 and P=0.28, respectively; Table 2). After accounting for body mass differences, V̇O2max in hypoxia and normoxia was not repeatable over the 8 weeks of acclimation (Table 2).

Table 2. Repeatability of aerobic performance and body mass across different PO2 or 8 weeks of acclimation

| Comparison | Raw r | Raw P | Mass-corrected r | Mass-corrected P |
|---|---|---|---|---|
| PO2 (hypoxia versus normoxia): V̇O2max at 5 weeks | 0.66 | <0.0001 | 0.49 | 0.002 |
| PO2 (hypoxia versus normoxia): V̇O2max at 13 weeks | 0.59 | <0.0001 | 0.38 | 0.02 |
| Age (5 versus 13 weeks): body mass | 0.71 | <0.0001 | – | – |
| Age (5 versus 13 weeks): V̇O2max in hypoxia | 0.53 | 0.001 | 0.20 | 0.23 |
| Age (5 versus 13 weeks): V̇O2max in normoxia | 0.18 | 0.28 | –0.14 | 0.41 |

Results were estimated as Pearson product-moment correlations between ANOVA residuals controlling only for site of birth and/or acclimation altitude (raw values), or residuals including mass as an additional covariate (mass-corrected values).

### Effects of birth altitude and PO2 at 5 weeks of age

In hypoxia (PO2=13.5 kPa), mice born at high altitude had a 37% higher V̇O2max than those born at low altitude (F1,36=64.1, P<0.0001; Table 1). When tested in normoxia (PO2=20.4 kPa), mice born at high altitude had a 72% higher V̇O2max than mice born at low altitude (F1,36=50.7, P<0.0001; Table 1). Accordingly, the repeated-measures ANOVA shows that animals born at high altitude attained a significantly higher V̇O2max than mice born at low altitude, regardless of PO2 (between-subject effect, F1,37=36.87, P<0.0001). Nonetheless, PO2 effects were more pronounced in animals born at high altitude (Fig. 2), which is supported by the significant PO2 × site of birth interaction in this analysis (F1,37=70.64, P<0.0001).
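A sketch of the residual-correlation repeatability computation described in the Methods (illustrative only; the variable names and design-matrix coding are assumptions, not the authors' code):

```python
import numpy as np
from scipy import stats

def residuals(y, covariates):
    """Residuals of y after ordinary least squares on the covariates
    (an intercept column is added automatically)."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# vo2_5wk, vo2_13wk: one V̇O2max value per individual at each age.
# covs_5, covs_13: 0/1 codes for birth and acclimation site, plus body
# mass for the mass-corrected version. Repeatability is then the Pearson
# correlation between the two residual vectors:
# r, p = stats.pearsonr(residuals(vo2_5wk, covs_5),
#                       residuals(vo2_13wk, covs_13))
```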
Fig. 2. Aerobic performance at 5 weeks of age measured in hypoxia and normoxia. Asterisks represent statistical significance within birth altitude; lower-case letters represent statistical significance in hypoxia and upper-case letters represent statistical significance in normoxia. Means are body mass-corrected values from ANCOVA and error bars are s.e.m.

### Effects of birth versus acclimation altitude at 13 weeks of age

Regular ANCOVAs comparing V̇O2max as a function of birth site and acclimation altitude show that high altitude acclimated mice attained significantly higher V̇O2max than animals acclimated at low altitude in hypoxia and normoxia (F1,33=22.4, P<0.0001 and F1,33=56.47, P<0.0001, respectively; Fig. 3). Although the main effects of birth site were not significant in these analyses (P>0.21 in both cases), there was a significant birth altitude × acclimation altitude interaction for V̇O2max in both hypoxia and normoxia (F1,33=6.82, P=0.013 and F1,33=10.5, P=0.003, respectively). Among animals acclimated to low altitude, those born at low altitude attained lower V̇O2max than those born at high altitude, whereas the opposite pattern was observed among mice acclimated to high altitude. In other words, mice acclimated to their 'native' altitude had consistently lower V̇O2max than those that were switched to the opposite altitude at 5 weeks of age (Fig. 3).

Repeated-measures ANOVAs testing for PO2 effects showed that mice acclimated to high altitude, regardless of birth site, increased aerobic performance by 52% in normoxia compared with measurements in hypoxia (acclimation, F1,34=30.44, P<0.0001 and birth site, F1,34=1.75, P=0.2). The relative increase in aerobic performance as a function of PO2 was significantly higher in animals acclimated at high altitude (PO2 × acclimation altitude, F1,34=25.11, P<0.001; Fig. 3). Separate repeated-measures analyses within each treatment support this conclusion: PO2 effects were significant only in the high born/high acclimated and low born/high acclimated groups (F1,11=23.27, P=0.001 and F1,9=26.61, P=0.001, respectively).

### Age effects between 5 and 13 weeks

Body mass increased about 4 g during the 8 week duration of the acclimation (within-individual effect, F1,36=166.13, P<0.0001). Neither birth altitude nor acclimation altitude significantly affected growth rate (F1,36=0.34, P=0.56 and F1,36=0.32, P=0.58, respectively), and there were no significant interactions (P>0.45 in all cases; Fig. 4).

Fig. 3. Top panel, aerobic performance at 13 weeks of age measured in hypoxia and normoxia. Mice were pooled by acclimation altitude because this was the only significant main effect (see Results). Asterisks represent statistical significance within acclimation altitude; lower-case letters represent statistical significance in hypoxia and upper-case letters represent statistical significance in normoxia. Means are body mass-corrected values from ANCOVA and error bars are s.e.m. Bottom panels, adjusted means ± s.e.m. from ANCOVAs performed separately for trials in hypoxia (left) and normoxia (right). The birth site × acclimation altitude interaction was significant in both ANCOVAs (see Results).
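As an arithmetic cross-check of the 21.0% and 72.9% acclimation differences quoted in the abstract (my own illustration, using sample-size-weighted means of the Table 1 groups at 13 weeks; the weighting scheme is an assumption):

```python
# N-weighted pooled means of the 13-week groups from Table 1.
def pooled(pairs):  # pairs of (N, group mean)
    return sum(n * m for n, m in pairs) / sum(n for n, _ in pairs)

hyp_high = pooled([(10, 5.32), (13, 4.69)])  # acclimated to high altitude
hyp_low  = pooled([(10, 3.98), (6, 4.31)])   # acclimated to low altitude
nor_high = pooled([(10, 8.64), (13, 6.71)])
nor_low  = pooled([(10, 4.05), (6, 4.89)])

print(f"hypoxia:  {hyp_high / hyp_low - 1:.1%}")   # ~21.0%
print(f"normoxia: {nor_high / nor_low - 1:.1%}")   # ~72.9%
```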
As for aerobic performance, V̇O2max measured in hypoxia changed significantly during the 8 weeks of acclimation (within-individual effect, F1,36=28.2, P<0.0001; Table 1). Acclimation altitude was a significant between-individual effect, with V̇O2max being significantly higher in mice acclimated at high altitude regardless of their birth site (F1,36=4.84, P=0.03). Nonetheless, all interactions were statistically significant (P<0.005 for age × birth altitude, age × acclimation altitude, and age × birth × acclimation altitude), showing that the magnitude of change during the 8 weeks of acclimation depends on birth altitude, acclimation altitude and the interaction between both (Fig. 4). Results were qualitatively identical for measurements in normoxia.

We found no evidence that aerobic performance is repeatable between 5 and 13 weeks of age, after controlling for body mass and birth and acclimation altitude (Table 2). This finding is in contrast to reports of high aerobic performance repeatability in deer mice (Hayes and Chappell, 1990) over the course of 3 months. Chappell et al. (Chappell et al., 1995) reported high exercise aerobic performance repeatability in Belding's ground squirrels (Spermophilus beldingi) over short periods (2 h) and over months or years in adult animals, but noted no repeatability when animals were tested as juveniles and later as adults. This measurement period was significantly longer than ours (1–2 years versus 8 weeks), but we do point out that physiological traits may not be repeatable across ontogenetic stages (but see Nespolo and Franco, 2007). Conversely, V̇O2max measured at the same age in different PO2 was significantly repeatable, suggesting that the physiological basis underlying this trait remains relatively the same in a given time period (despite the testing PO2).

### PO2 effects on aerobic performance

Several studies have compared V̇O2max between high versus low altitudes employing measurements at ambient PO2 (e.g. Hammond et al., 2002; Calbet et al., 2003; Chappell et al., 2007). However, this approach does not fully control for PO2 effects during measurements: whereas mice from high altitudes were measured in a hypoxic atmosphere, animals from low altitudes would have been measured in normoxia. Thus, it remains unclear how phenotypic (anatomical and physiological) responses to chronic exposure to different PO2 affected aerobic performance, because acute effects of ambient PO2 were not accounted for. Our results suggest, for instance, that not accounting for PO2 may underestimate the degree of physiological accommodation following high altitude acclimation.
Whereas differences in aerobic performance between groups controlling for PO2 ranged between 21% and 73%, comparisons between high altitude mice measured in hypoxia versus low altitude mice measured in normoxia resulted in differences of around 14% (see middle two bars in Fig. 2 and Fig. 3, top panel). As such, most studies in humans (Gonzalez et al., 1998; Calbet et al., 2003; Ventura et al., 2003) and in small mammals like deer mice (Hammond et al., 2002; Chappell et al., 2007) report significantly higher aerobic performance in high PO2 environments than in low PO2 environments. These results, however, might not reflect the acclimatory physiological responses to chronic exposure to different PO2. This study controls for differences in PO2 across different altitudes by testing animals in multiple PO2 environments, and using this approach we were able to focus on the physiological accommodations made at high altitude and how they affect aerobic performance.

Fig. 4. Changes in body mass (top panel) and aerobic performance measured in hypoxia (middle panel) and normoxia (bottom panel) during the 8 week period between the first and second measurements. Open symbols represent low-born animals and filled symbols represent high-born animals. Means ± s.e.m. are adjusted estimates controlling for within-individual effects.

### Developmental effects versus reversible plasticity during adulthood

We have shown that deer mice residing at 3800 m have a higher V̇O2max, both early (Fig. 2) and later in life (Fig. 3). However, in terms of aerobic performance, does development at high altitude manifest itself differently from acclimation as a low-born adult? Apparently, it does. Our results from regular ANCOVAs and repeated-measures ANOVA show that aerobic performance during adulthood is partly determined by site of development and birth. In this context, main effects of birth site were not significant, suggesting that being born at high or low altitude per se does not determine whether mice will have a high or a low V̇O2max during adulthood. However, the significant interaction term between birth site and acclimation highlights that the outcome of acclimation to different altitudes depends on where mice were born and raised (Figs 3 and 4). These results suggest that developmental canalization partly accounts for aerobic performance during adulthood, but additional studies are necessary to disentangle which factors underlie the patterns described here.

Mice born at low altitude apparently have a greater flexibility to increase V̇O2max when acclimated to high altitude (Fig. 4), which was quite unexpected and apparently counterintuitive. This result demonstrates that high PO2 during in utero development and growth might ultimately enable animals born at low altitude to attain an increased V̇O2max compared with highlander natives following acclimation to high altitude. In this context, it is worth emphasizing that responses associated with developmental canalization are not necessarily beneficial or adaptive in a Darwinian sense (see Wilson and Franklin, 2002).
Instead, they might reflect constraints associated with growing in a more restrictive environment, as might be the case at higher altitudes with lower PO2. Interestingly, mice born at high altitude cannot decrease V̇O2max following low altitude acclimation to levels comparable with those of animals born at low altitude. This result reflects 'developmental canalization' in a more traditional sense (Wilson and Franklin, 2002), suggesting that development at high altitude leads to a constrained degree of plasticity in aerobic performance during adulthood.

To our knowledge, this is the first study to report significant developmental effects on aerobic performance during adulthood. Future studies with similar experimental designs may help in elucidating the physiological basis underlying our results. Aerobic performance is a complex trait that depends on a variety of subordinate traits in the O2 cascade, hence it is possible that developmental effects may be detected in subordinate organs as we have reported here for whole-individual V̇O2max. Additional research should primarily focus on traits that are known to be affected by acclimation at different altitudes. For example, Hammond et al. (Hammond et al., 1999; Hammond et al., 2001) reported that both heart and lung mass were ∼17% higher in deer mice born at and acclimated to high altitude when compared with deer mice maintained at low altitude. Additionally, after acclimation to 3800 m, deer mice increased hematocrit by ∼9% (Hammond et al., 1999; Hammond et al., 2001) (G.A.R. and K.A.H., unpublished data). All else being equal, mice with larger cardiopulmonary organs and higher hematocrits should have a higher aerobic performance (Bishop, 1997; Rezende et al., 2006). Thus, it would be worth addressing how subordinate traits at different levels in the O2 cascade might be affected by different environmental conditions during development (Burggren and Crossley, 2002). For instance, it is possible that mice develop larger hearts when developing in normoxia, and this might ultimately explain why mice from low altitude attained the highest V̇O2max following acclimation to high altitude.

### Summary and perspectives

Deer mice perform better in normoxic PO2 than they do in hypoxic PO2, which is consistent with previous results in this species. Here, we show this trend is consistent regardless of the altitude at which mice reside. This result is important because it illustrates that previous studies, which have cited decreased aerobic performance at high altitude, might be reporting the confounding effects of decreased PO2, not real changes in the functional machinery that ultimately determines individual aerobic performance. Our data also suggest that mice that underwent gestational development at high altitude might have experienced early, rapid growth of the organs and organ systems that contribute to aerobic performance, and this was manifested functionally as a high aerobic performance at 5 weeks of age. Low-born mice acclimated to high altitude late in life were able to generate a high aerobic performance, especially in normoxia. High-born mice did not experience an increase in aerobic performance in response to 8 additional weeks of exposure to high altitude.
In this context, although acclimation during adulthood was able to partly compensate for differences attained during development (see also Chappell et al., 2007), we have detected significant and apparently irreversible effects associated with in utero development and growth at a given altitude. Therefore, differences between populations inhabiting high and low altitudes can now be attributed to at least three sources of variation: genetic variation, phenotypic plasticity during adulthood and developmental effects. Future studies should therefore address which subordinate traits in the O2 cascade are more susceptible to canalization during development, which traits are 'hard-wired' and not open to modification by the environment (Spicer and Gaston, 1999), and how physiological heterokairy of different subordinate traits might ultimately translate into differences in aerobic performance during adulthood.

M. A. Chappell assisted with open flow respirometry set-up, and C. Miller offered logistical assistance. S. Red, A. L. Sport and P. Addison provided husbandry in our absence from the White Mountain Research Station. Finally, we thank E. Hice and J. Urrutia in the UCR Biology machine shop for constructing the treadmill. All animal work was approved by the UCR Animal Care and Use Committee. This work was funded by National Science Foundation grant no. IBN0073229 to K.A.H. and M. A. Chappell. G.A.R. was supported by a minigrant from the University of California White Mountain Research Station.

Almeida-Val, V. M. F., Buck, L. T. and Hochachka, P. W. (1994). Substrate and acute temperature effects of turtle heart and liver mitochondria. Am. J. Physiol. 266, R858-R862.

Bartholomew, G. A., Vleck, D. and Vleck, C. M. (1981). Instantaneous measurements of oxygen consumption during pre-flight warm-up and post-flight cooling in sphingid and saturnid moths. J. Exp. Biol. 90, 17-32.

Bishop, C. M. (1997). Heart mass and the maximum cardiac output of birds and mammals: implications for estimating the maximum aerobic power input of flying animals. Philos. Trans. R. Soc. Lond. B Biol. Sci. 352, 447-456.

Blacker, H. A., Orgeig, S. and Daniels, C. B. (2004). Hypoxic control of the development of the surfactant system in the chicken: evidence for physiological heterokairy. Am. J. Physiol. 287, R403-R410.

Burggren, W. W. and Crossley, D. A., II (2002). Comparative cardiovascular development: improving the conceptual framework. Comp. Biochem. Physiol. 132A, 661-674.

Calbet, J. A. L., Boushel, R., Radegran, G., Sondergaard, H., Wagner, P. D. and Saltin, B. (2003). Why is VO2max after altitude acclimatization still reduced despite normalization of arterial O2 content? Am. J. Physiol. 284, R304-R316.

Chappell, M. A. and Snyder, L. R. G. (1984). Biochemical and physiological correlates of deer mouse alpha-chain hemoglobin polymorphisms. Proc. Natl. Acad. Sci. USA 81, 5484-5488.

Chappell, M. A., Hayes, J. P. and Snyder, L. R. G. (1988). Hemoglobin polymorphisms in deer mice (Peromyscus maniculatus): physiology of beta-globin variants and alpha-globin recombinants. Evolution 42, 681-688.

Chappell, M. A., Bachman, G. C. and Odell, J. P. (1995). Repeatability of maximal aerobic performance in Belding's ground squirrels, Spermophilus beldingi. Funct. Ecol. 9, 498-504.

Chappell, M. A., Hammond, K. A., Cardullo, R. A., Russell, G. A., Rezende, E. L. and Miller, C. (2007). Deer mouse aerobic performance across altitudes: effects of developmental history and temperature acclimation.
Physiol. Biochem. Zool. 80, 652-662.

Curran, L. S., Zhuang, J., Droma, T. and Moore, L. G. (1998). Superior exercise performance in lifelong Tibetan residents of 4,400 m compared with Tibetan residents of 3,658 m. Am. J. Phys. Anthropol. 105, 21-31.

Dzialowski, E. M., von Plettenberg, D., Elmonoufy, N. A. and Burggren, W. W. (2002). Chronic hypoxia alters the physiological and morphological trajectories of developing chicken embryos. Comp. Biochem. Physiol. 131A, 713-724.

Gonzalez, N. C., Clancy, R. L., Moue, Y. and Richalet, J. P. (1998). Increasing maximal heart rate increases maximal O2 uptake in rats acclimatized to simulated altitude. J. Appl. Physiol. 84, 164-168.

Guderley, H. and St-Pierre, J. (1996). Phenotypic plasticity and evolutionary adaptations of mitochondria to temperature. In Animals and Temperature: Phenotypic and Evolutionary Adaptation (ed. I. A. Johnston and A. F. Bennett), pp. 127-152. Cambridge: Cambridge University Press.

Hammond, K. A., Roth, J., Janes, D. N. and Dohm, M. R. (1999). Morphological and physiological responses to altitude in deer mice Peromyscus maniculatus. Physiol. Biochem. Zool. 72, 613-622.

Hammond, K. A., Szewczak, J. and Krol, E. (2001). Effects of altitude and temperature on organ phenotypic plasticity along an altitudinal gradient. J. Exp. Biol. 204, 1991-2000.

Hammond, K. A., Chappell, M. A. and Kristan, D. M. (2002). Developmental plasticity in aerobic performance in deer mice (Peromyscus maniculatus). Comp. Biochem. Physiol. 133A, 213-224.

Hammond, K. A., Chmura, C. A., Russell, G. A. and Ortiz, S. (2004). Genetic and phenotypic responses of small mammals to life at high altitudes. Integr. Comp. Biol. 44, 564.

Hayes, J. P. (1989a). Altitudinal and seasonal effects on aerobic metabolism of deer mice. J. Comp. Physiol. B 159, 453-459.

Hayes, J. P. (1989b). Field and maximal metabolic rates of deer mice (Peromyscus maniculatus) at low and high altitudes. Physiol. Zool. 62, 732-744.

Hayes, J. P. and Chappell, M. A. (1990). Individual consistency of maximal oxygen consumption in deer mice. Funct. Ecol. 4, 495-503.

Hayes, J. P. and O'Connor, C. S. (1999). Natural selection on thermogenic capacity of high-altitude deer mice. Evolution 53, 1280-1287.

Hayes, J. P. and Shonkwiler, J. S. (1996). Altitudinal effects on water fluxes of deer mice: a physiological application of structural equation modeling with latent variables. Physiol. Zool. 69, 509-531.

Hochachka, P. W. (1988). Metabolic responses to reduced O2 availability. In Hypoxia: The Tolerable Limits (ed. J. R. Sutton, C. S. Houston and G. Coates), pp. 41-48. Indianapolis: Benchmark Press.

Hochachka, P. W., Stanley, C., Merkt, J. and Sumar-Kalinowski, J. (1982). Metabolic meaning of elevated levels of oxidative enzymes in high altitude adapted animals: an interpretive hypothesis. Respir. Physiol. 52, 303-313.

Hock, R. (1961). Effect of altitude on endurance running of Peromyscus maniculatus. J. Appl. Physiol. 16, 435-438.

McClelland, G. B., Hochachka, P. W. and Weber, J.-M. (1998). Carbohydrate utilization during exercise after high-altitude acclimation: a new perspective. Proc. Natl. Acad. Sci. USA 95, 10288-10293.

McClelland, G. B., Hochachka, P. W., Reidy, S. P. and Weber, J.-M. (2001). High altitude acclimation increases the triacylglycerol/fatty acid cycle at rest and during exercise. Am. J. Physiol. 281, E537-E544.

Nespolo, R. F. and Franco, M. (2007). Whole-animal metabolic rate is a repeatable trait: a meta-analysis. J. Exp. Biol. 210, 2000-2005.
210, 2000-2005.
Pough, F. H. (1980). The advantages of ectothermy for tetrapods. Am. Nat. 115, 92-112.
Rezende, E. L., Silva-Duran, I., Fernando Novoa, F. and Rosenmann, M. (2001). Does thermal history affect metabolic plasticity? A study in three Phyllotis species along an altitudinal gradient. J. Therm. Biol. 26, 103-108.
Rezende, E. L., Gomes, F. R., Ghalambor, C. K., Russell, G. A. and Chappell, M. A. (2005). An evolutionary framework to study physiological adaptation to high altitudes. Rev. Chil. Hist. Nat. 78, 323-336.
Rezende, E. L., Gomes, F. R., Malisch, J. L., Chappell, M. A. and Garland, T., Jr (2006). Maximal oxygen consumption in relation to subordinate traits in lines of house mice selectively bred for high voluntary wheel running. J. Appl. Physiol. 101, 477-485.
Sheafor, B. A. (2003). Metabolic enzyme activities across an altitudinal gradient: an examination of pikas (genus Ochotona). J. Exp. Biol. 206, 1241-1249.
Singer, D. (1999). Neonatal tolerance to hypoxia: a comparative-physiological approach. Comp. Biochem. Physiol. 123A, 221-234.
Snyder, L. R. G. (1981). Deer mouse hemoglobins: is there genetic adaptation to high altitude? BioScience 31, 299-304.
Snyder, L. R. G., Hayes, J. P. and Chappell, M. A. (1988). Alpha-chain hemoglobin polymorphisms are correlated with altitude in the deer mouse, Peromyscus maniculatus. Evolution 42, 689-697.
Spicer, J. I. and Burggren, W. W. (2003). Development of physiological regulatory systems: altering the timing of crucial events. Zoology 106, 91-99.
Spicer, J. I. and Gaston, K. J. (1999). Physiological Diversity and its Ecological Implications. Oxford: Blackwell Science.
Tay, K. L. and Garside, E. T. (1975). Some embryogenic responses of mummichog, Fundulus heteroclitus (L.) (Cyprinodontidae), to continuous incubation in various combinations of temperature and salinity. Can. J. Zool. 53, 920-933.
Ventura, N., Hoppeler, H., Seiler, R., Binggeli, A., Mullis, P. and Vogt, M. (2003). The response of trained athletes to six weeks of endurance training in hypoxia or normoxia. Int. J. Sports Med. 24, 166-172.
West, J. B. (1991). Acclimatization and adaptation: organ to cell. In Response and Adaptation to Hypoxia: Organ to Organelle (ed. S. Lahiri, N. S. Cherniack and R. S. Fitzgerald), pp. 177-190. Oxford: Oxford University Press.
Wilson, R. S. and Franklin, C. E. (2002). Testing the beneficial acclimation hypothesis. Trends Ecol. Evol. 17, 66-70.
2023-02-04 00:10:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3999153971672058, "perplexity": 9616.858747789856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500076.87/warc/CC-MAIN-20230203221113-20230204011113-00852.warc.gz"}
http://abeaco.org.br/75w2h/bfhn2.php?39a793=vcov-function-in-r-package
# vcov function in r package

Descriptive statistics for ED patients with homicidal ideation appear in Table 1. r(430); I wonder why it happens and if there is a way to make it converge, such as changing the starting values or other techniques. But of course, instead of doing all the calculus, you can use the deltamethod function of R's msm package. All object classes which are returned by model fitting functions should provide a coef method or use the default one. The sandwich package is designed for obtaining covariance matrix estimators of parameter estimates in statistical models where certain model assumptions have been violated. The thing is that when the data is analyzed in Stata, Stata fits the model and corrects for clustered SEs on 32,915 observations, but R fits the same model and corrects for clustered SEs on 34,576 observations. The former (back-compatible) behavior is given by vcov(*, complete = FALSE). The purpose of this page is to introduce estimation of standard errors using the delta method. As R doesn't have this function built in, we will need an additional package in order to find a confidence interval in R. There are several packages that have functionality which can help us with calculating confidence intervals in R. white.adjust: logical or character. R packages are an ideal way to package and distribute R code and data for re-use by others. So, before you can use a package, you have to load it into R by using the library() function. Overview: .vcov.aliased() is an auxiliary function useful for vcov method implementations which have to deal with singular model fits encoded via NA coefficients: it augments a vcov matrix vc by NA rows and columns where needed, i.e., when some entries of aliased are true and vc is of smaller dimension than length(aliased). Details: complete = TRUE makes the vcov() methods more consistent with the coef() methods in the case of singular designs. The cluster-robust standard errors were computed using the sandwich package. Most users first see the packages of functions distributed with R or from CRAN. Hi all, I am hoping this is just a minor problem: I am trying to implement a best subsets regression procedure on some ecological datasets using the regsubsets function in the leaps package. The dataset contains 43 predictor variables plus the response (logcount), all in a …

This function does not use the iterative procedure proposed by M. Goulard and M. Voltz (Math. Geol., 24(3): 269-286; reproduced in Goovaerts' 1997 book) but uses simply two steps: first, each variogram model is fitted to a direct or cross variogram; next, each of the partial sill coefficients ... Try eval on variables that are not part of data, e.g. for something like y ~ log(x); Inverse Gaussian with log link; tests using testthat; parallelization in Rcpp with omp. Skip wasted object summary steps computed by base R when computing covariance matrices and standard errors of common model objects. The hypothesis matrix can be supplied as a numeric matrix (or vector), the rows of which specify linear combinations of the model coefficients, which are tested equal to the corresponding entries in the right-hand side. A function for extracting the covariance matrix from x is supplied, e.g., vcovHC; there is also a convenience interface to hccm (instead of using the argument vcov). "For the question and the answer to be on topic here, they need to be expressed in a way that is understandable to non-R users. Mathematical notation and/or English descriptions would be good choices." (whuber, Mar 29 '14 at 20:14). Documenting data is like documenting a function, with a few minor differences. Examples include manual calculation of standard errors via the delta method and then confirmation using the function deltamethod, so that the reader may understand the calculations and know how to use deltamethod. vcov-methods: Methods for Function 'vcov' in Package 'stats4'. Extract the approximate variance-covariance matrix from "mle" objects; signature(object = "mle") extracts the estimated variance-covariance matrix for the estimated parameters (if any). The miles per gallon value (mpg) of a car can also depend on it besides the value of horsepower ("hp"). Usage: useful tools for documenting functions within R packages. Most functions written in R can be accessed in a similar manner to MATLAB. If you are unsure about how user-written functions work, please see my posts about them, here (How to write and debug an R function) and here (3 ways that functions can improve your R code). The package supports parallelisation, thereby making it easier to work with large datasets. It is done by using the aov() function followed by the anova() function to compare the multiple regressions. So if we look at the simple $2 \times 2$ variance-covariance matrix in our simple reg using vcov, we see ... The covariance matrix can be specified as a matrix or as a function yielding a covariance matrix. Objects in data/ are always effectively exported (they use a slightly different mechanism than NAMESPACE, but the details are not important). For illustrations see below. (Note that the method is for coef and not coefficients.)
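Several of the fragments above refer to extracting a fitted model's variance-covariance matrix with vcov(). As a minimal runnable sketch (the model formula below is an illustration of my own, using the built-in mtcars data set that is mentioned later on this page):

```r
# Minimal sketch: vcov() on a fitted lm, and its link to summary() standard errors.
# Uses only base R and the built-in mtcars data.
fit <- lm(mpg ~ hp + wt, data = mtcars)

V <- vcov(fit)   # variance-covariance matrix of the coefficient estimates
print(V)

# Square roots of the diagonal reproduce the "Std. Error" column of summary():
all.equal(sqrt(diag(V)),
          summary(fit)$coefficients[, "Std. Error"])
```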
To specify a covariance matrix vcov to be used, there are three possibilities: 1. It is pre-computed and supplied in argument vcov. 2. A function for extracting the covariance matrix from x is supplied, e.g., vcovHC or vcovHAC from package sandwich. 3. vcov is set to NULL; then it is assumed that a vcov method exists, such that vcov(x) yields a covariance matrix. Classes with methods for this function include: lm, mlm, glm, nls, summary.lm, summary.glm, negbin, polr, rlm (in package MASS), multinom (in package nnet), gls, lme (in package nlme), coxph and survreg (in package survival). coeftest(m1, vcov=function(x) vcovHAC(x, order.by=...)): please suggest what should be the argument of order.by and whether that will give me the desired result ... coeftest(pm1, vcov=vcovHC): please refer to the package vignette for 'plm' to check what it does exactly. Ever wondered how to estimate Fama-MacBeth or cluster-robust standard errors in R? The theoretical background, exemplified for the linear regression model, is described below and in Zeileis (2004). 14.1.1 Documenting datasets. The function meatHC is the real work horse for estimating the meat of HC sandwich estimators; the default vcovHC method is a wrapper calling sandwich and bread. See Zeileis (2006) for more implementation details. Consider the R built-in data set mtcars. coeftest is a generic function for performing ... I would like to retrieve the proportions in each class for the two groups. First, let's define the data matrix, which is essentially a matrix with n rows and k columns. Walkthrough: the formula variables must be labeled x1, x2 and so on. The R function regsubsets() [leaps package] can be used to identify different best models of different sizes. Developing Packages with RStudio: overview. A function then saves the results into a data frame, which, after some processing, is read into texreg to display/save the output. coeftest(p, vcov=hccm(p)) will give you the results of the tests using this matrix. Calculate confidence interval for sample from dataset in R; Part 1. Unfortunately, stats:::summary.lm wastes precious time computing other summary statistics about your model that you may not care about. Two functions are exported from the package, cluster.vcov() and cluster.boot(). a function for estimating the covariance matrix of the regression coefficients, e.g., hccm, or an estimated covariance matrix for model. The R Stats Package: documentation for package 'stats' version 4.1.0. See this short, easy-to-read blog post on writing R packages, as well as the roxygen2 introductory vignette. The generic function coeftest currently has a default method (which works in particular for "lm" and "glm" objects) and a method for objects of class "breakpointsfull". An almost-as-famous alternative to the famous Maximum Likelihood Estimation is the Method of Moments. Installing the Rmisc package. I settled on using the mitools package (to combine the imputation results just using the lm function). Thus, I assume your variable/column Pol_Constitution suffers from linear dependence.

fit.intercept: a boolean controlling whether we add a column of ones to the data, or fit the ... Value: vcov() is a generic function, and functions with names beginning in vcov. will be methods for this function. The function lht also dispatches to linear.hypothesis. The basic lm workflow that several of these snippets quote, restored to its original line-by-line layout:

# Multiple Linear Regression Example
fit <- lm(y ~ x1 + x2 + x3, data=mydata)
summary(fit)              # show results
# Other useful functions
coefficients(fit)         # model coefficients
confint(fit, level=0.95)  # CIs for model parameters
fitted(fit)               # predicted values
residuals(fit)            # residuals
anova(fit)                # anova table
vcov(fit)                 # covariance matrix for model parameters
influence(fit)            # regression diagnostics

The degrees of freedom df determine whether a normal approximation is used or a t distribution with df degrees of freedom: if df is a finite positive number, a t test with df degrees of freedom is used, and if this is NULL a z test is performed. Creating a new R package is pretty simple with RStudio. In this combination, coefficients for linearly dependent columns are silently dropped in coeftest's output. summary.gamlss() now has an argument "save" for saving the output, thanks to Wilmar Igl; gamlssML(): a bug with the vcov.gamlssML() function is fixed; also "nlminb" is now the default maximisation procedure rather than "optim". R packages are (after a short learning phase) a comfortable way to maintain collections of R functions and data sets. >>> Get the cluster-adjusted variance-covariance matrix. In R there is the usual parallel, but also some oddities to be aware of. Usually, it can show the source code after you input the command and press enter. The default method assumes that a coef method exists. The default method tries to extract vcov and nobs and simply computes their product. ii) The tp() function within lms() and quantSheets() has changed name and been modified slightly; iii) the vcov.gamlss() function has its warnings changed and allows for the case where the inverse of the Hessian (R) fails to be recalculated […] Methods: signature(object = "ANY"), generic function: see vcov. If you've visited the CRAN repository of R packages lately, you might have noticed that the number of available packages has now topped a dizzying 12,550. For example, if you are usually working with data frames, probably you will have heard about dplyr or data.table, two of the most popular R packages.
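The snippets above repeatedly pair coeftest() from lmtest with sandwich covariance estimators such as vcovHC and hccm. A minimal runnable sketch of that pattern (the mtcars model is a placeholder of my own; the packages and functions are the real ones quoted above):

```r
# Heteroscedasticity-robust coefficient tests with lmtest + sandwich.
# install.packages(c("lmtest", "sandwich")) if needed.
library(lmtest)
library(sandwich)

fit <- lm(mpg ~ hp + wt, data = mtcars)

# Classical inference (assumes homoscedastic errors):
coeftest(fit)

# Robust inference: pass a pre-computed "sandwich" covariance matrix ...
coeftest(fit, vcov = vcovHC(fit, type = "HC1"))

# ... or pass the estimator itself as a function, as in coeftest(pm1, vcov = vcovHC):
coeftest(fit, vcov = vcovHC)
```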
The function has three parameters: g is a formula object representing the transformation g(x). vcov.nlminb: Hello all, I am trying to get the variance-covariance (VCOV) matrix of the parameter estimates produced from the nlminb minimizing function, using vcov.nlminb, but it seems to have been expunged from the MASS library. RStudio includes a variety of tools that make developing R packages easier and more productive, including a Build pane with package development commands and a view of build output and errors. Figure 5.3 is an example of using the effect() function to plot the partial effect of a quadratic independent variable. Furthermore, some generic tools for inference in parametric models are provided. Dear R Help, I wonder the way to show the source code of the [vcov] command. R/vcov.R defines the following functions: se, Vcov, Vcov.lm, Vcov.glm. vcov: Variance-Covariance Matrices and Standard Errors. Hi, so I was trying to replicate results from one of the papers in JDE. A correlation matrix is a table of correlation coefficients for a set of variables used to determine if a relationship exists between the variables. An object of class "coeftest" is essentially a coefficient matrix with columns containing the estimates, associated standard errors, test statistics and p values. vcov(reg) ... (reg): we need to use the coeftest function, which is a part of the lmtest package. I am fitting a multinomial logit model in R by using the multinom() function in the nnet package.

In this post I show you how to calculate and visualize a correlation matrix using R. LazyData: yes. Depends: R (>= 3.0.0), stats, zoo. lmtest: Testing Linear Regression Models, a collection of tests, data sets, and examples for diagnostic checking in linear regression models. That is, stats:::vcov.lm first summarizes your model, then extracts the covariance matrix from this object. Best wishes. vcov( estResult ); vcov( estResult, logSigma = FALSE ); logLik( estResult ). coef.summary.censReg: Coefficients of Censored Regression Models and their Statistical Properties. This function returns the estimated coefficients of censored regression models as well as their standard errors, z-values, and P-values. First, I'll show how to write a function to obtain clustered standard errors. The logit link function in the package gamlss.dist is amended so it does not call the R .C function. Instead of documenting the data directly, you document the name of the dataset and save it in R/. The inversion of the variance-covariance matrix is the Fisher information matrix. They increase the power of R by improving existing base R functionalities, or by adding new ones. Package car has a function hccm that gives you the heteroscedasticity-corrected covariance matrix (there is a similar function in package sandwich also).
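Finally, the delta method mentioned at the top of this page is available through msm::deltamethod(), whose formula variables must be labeled x1, x2, and so on, exactly as one of the snippets above notes. A hedged sketch (the transformation, a ratio of two slopes from the hypothetical mtcars fit used earlier, is chosen purely for illustration):

```r
# Delta-method standard error for a nonlinear function of coefficients.
# install.packages("msm") if needed.
library(msm)

fit <- lm(mpg ~ hp + wt, data = mtcars)

b <- coef(fit)   # estimates: x1 = intercept, x2 = hp, x3 = wt
V <- vcov(fit)   # their variance-covariance matrix

# Approximate SE of g(b) = b_hp / b_wt via a first-order Taylor expansion:
deltamethod(~ x2 / x3, mean = b, cov = V)
```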
2021-01-15 20:29:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5461427569389343, "perplexity": 2417.560007022697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703496947.2/warc/CC-MAIN-20210115194851-20210115224851-00664.warc.gz"}
https://dmoj.ca/problem/ioi94p1
## IOI '94 P1 - The Triangle

Points: 7 (partial)
Time limit: 0.6s
Memory limit: 16M
Allowed languages: Ada, Assembly, Awk, Brain****, C, C#, C++, COBOL, CommonLisp, D, Dart, F#, Forth, Fortran, Go, Groovy, Haskell, Intercal, Java, JS, Kotlin, Lisp, Lua, Nim, ObjC, OCaml, Octave, Pascal, Perl, PHP, Pike, Prolog, Python, Racket, Ruby, Rust, Scala, Scheme, Sed, Swift, TCL, Text, Turing, VB, Zig

7
3 8
8 1 0
2 7 4 4
4 5 2 6 5

(Figure 1)

Figure 1 shows a number triangle. Write a program that calculates the highest sum of numbers passed on a route that starts at the top and ends somewhere on the base.

• Each step can go either diagonally down to the left or diagonally down to the right.
• The number of rows in the triangle is $> 1$ but $\le 100$.
• The numbers in the triangle, all integers, are between $0$ and $99$.

#### Input Specification

The first line of input will contain an integer $N$, the number of rows. The $i$th of the next $N$ lines will contain $i$ space-separated integers, denoting the values of the triangle.

#### Output Specification

The highest sum as required by the problem statement.

#### Sample Input

5
7
3 8
8 1 0
2 7 4 4
4 5 2 6 5

#### Sample Output

30
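A note not in the official statement: the intended approach is the textbook bottom-up dynamic program. Writing $T(i,j)$ for the number in row $i$, position $j$, and $S(i,j)$ for the best achievable sum of a route from $(i,j)$ down to the base:

$$S(N,j) = T(N,j), \qquad S(i,j) = T(i,j) + \max\bigl(S(i+1,j),\; S(i+1,j+1)\bigr) \quad \text{for } i < N,$$

and the answer is $S(1,1)$, computable in $O(N^2)$ time. For the sample triangle this yields $7 + 3 + 8 + 7 + 5 = 30$, matching the expected output.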
2020-10-28 08:55:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19370374083518982, "perplexity": 10491.456786750192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107897022.61/warc/CC-MAIN-20201028073614-20201028103614-00296.warc.gz"}
https://kobiecaodnowa.pl/coal/Jul_180.html
## Solution: The football is kicked over the goalpost with an ...

The football is kicked over the goalpost with an initial velocity of 80 ft/s as shown. Determine the point B (x,y) where it strikes the bleachers.

## The football is kicked over the goalpost with an initial ...

2011-4-4 · A football is kicked with an initial velocity of 25 m/s at an angle of 45 degrees with the horizontal. Determine the time of flight, the horizontal distance, and the peak height of the football.

## The football is to be kicked over the goalpost, which ...

Transcribed image text: The football is to be kicked over the goalpost, which is 15 ft high. If its initial speed is v_A = 80 ft/s, determine if it makes it over the goalpost, and if so, by how much, h.

## Football kicked from 63 yards away from goalpost that is ...

Football kicked from 63 yards away from a goalpost that is 10 feet above the ground. If the initial speed of the football is 15 m/s, and it's kicked at a 45-degree angle, will it clear the uprights?

## SOLUTION: A football is kicked at 40 yards away from a ...

A football is kicked at 40 yards away from a goal post that is 10 feet high. Its path is modeled by y = -0.03x^2 + 1.6x, where x is the horizontal distance in yards traveled by the football and y is the corresponding height above the ground in feet. Does the football go over the goal post? How far above or below the goal post is the football?

## A History of the Football Goalpost | Harrod Sport

2018-5-3 · From the number of substitutions, to goal-line technology and ABBA penalty shootouts, football is always evolving. Even equipment that might be taken for granted, like goalposts, has changed dramatically over the years.

## Kicking a field goal: should you move to the center? | WIRED

2011-1-5 · There are 20 seconds left on the clock. Your team is down by 2 points such that a field goal would win it. The ball is spotted on the hash mark at the 15 yard line and it is first down.

## Answered: football kicked from 63 yards away from ...

Solution for football kicked from 63 yards away from a goalpost that is 10 feet above the ground. If the initial speed of the football is 15 m/s, and it's kicked ...

## Football Flow of the Game — Goalposte

Then the offense has 4 attempts or "downs" to advance 10 yards down the field, by throwing or running with the football. If they succeed, they begin again on a 1st down with another 4 downs to advance another 10 yards; if not, the ball is turned over to the other team. A ball is stopped or down when the player in control ...

## GOALPOST | Definition of GOALPOST by Oxford Dictionary

'The ball came off the goalpost and Torsten Frings' arm stopped it going over the line.' 'Facilities will include a platform for skateboarders, and goalposts and basketball hoops.' 'Goals are scored when the ball is kicked through goalposts or carried into the end zone.'

## Field Goal! The Science Behind a Perfect Football Kick ...

2020-11-20 · Build your goalpost. Use household materials like cardboard and duct tape to build a goalpost that is roughly 0.3 m wide, like the one shown in Figure 5. The exact size of your goalpost does not matter for this science project; just make sure it is a reasonably sized target for your toy football.

## A History of the Football Goalpost | Harrod Sport

2018-5-3 · While the shape and structure of football goalposts has remained the same for over 130 years, the desire to test new innovations and try to improve the game has resulted in a consistent evolution. Modern crossbars, for example, now curve up slightly to counteract the pull of gravity, which could sometimes see the centre pulled downwards over time.

## fall on the ice break foot Jack play football kick the ...

fall on the ice break foot; Jack play football kick the goalpost break leg; Bob cycle fall off the bike injure hand; Ann skate fall over break arm; Alice dance trip over the carpet sprain ankle. Make dialogues as in the example: Example: You: Hi, Tina!

## SOLVED: In U.S. football, after a touchdown the team ...

In U.S. football, after a touchdown the team has the opportunity to earn one more point by kicking the ball over the bar between the goal posts. The bar is 10.0 $\mathrm{ft}$ above the ground, and the ball is kicked from ground level, 36.0 $\mathrm{ft}$ horizontally from the bar (Fig. P3.62).

## SOLVED: A placekicker is about to kick a field goal ...

A placekicker is about to kick a field goal. The ball is 26.9 ${m}$ from the goalpost. The ball is kicked with an initial velocity of 19.8 ${m}/{s}$ at an angle $\theta$ above the ground. Between what two angles, $\theta_{1}$ and $\theta_{2}$, will the ball clear the $2.74$-${m}$ crossbar?

## NFHS soccer rules - Kickology

2021-8-5 · ART. 2 . . . At the moment of the kickoff, all players shall be in their team's half of the field. Players opposing the kicker shall be at least 10 yards from the ball until it is kicked. ART. 3 . . . The ball shall be kicked while it is stationary on the ground in the center of the field of play.

## A placekicker is about to kick a field goal. The ball is ...

In a soccer match, the goalkeeper stands on the midpoint of her goal line. She kicks the ball 25 m at an angle of 35 deg to the goal line. Her teammate takes the pass and kicks it 40 m farther, parallel to the sideline. The resultant vector is 62.2 m if the ... Physics: A football is kicked at an angle of 60 degrees above the horizontal.

## SOLUTION: from the center of the 20yd (60ft) line, a ...

Question 211525: from the center of the 20 yd (60 ft) line, a football player attempts to make a field goal by kicking the ball directly toward the goal posts, which are 90 ft away. The goal-post crossbar is 10 ft above the ground. The ball reaches its highest altitude of 32 ft at a point 48 ft from where it was kicked.

## The Garden and The Goalpost — Lisa Hentrich

2021-6-20 · For over 40 years now, Tom has been mowing the grass around that old monument to Craig's football career. Only recently has he taken an interest in planting the little vegetable garden that now grows next to it during summer months — an interest that was inspired, ironically, by Craig.

## Person Scored Amazing Goal By Kicking Ball On Goalpost ...

2021-8-4 · This person attempted to show a trick. They kicked a football towards the goalpost and hit the corner of the post. The person scored an amazing goal as the ... Person Witnesses Magnificent Cloud Floating In Sky Over the Green Mountains From Their Porch. Date Added: 18 ...

## NFL to penalize goalpost dunk next season - ESPN

2014-3-25 · According to the league's vice president of officiating, Dean Blandino, players will no longer be allowed to dunk the ball over the goalpost after touchdowns.

## Football Glossary - First Base Sports

field goal: a place kick that goes over the crossbar and between the uprights of the goalpost, earning the team that kicked it 3 points. field position: the location of a team on the field relative to the two goal lines; good field position for a team is near its opponent's goal line, while bad field position is ...

## Football Ground Measurement | Field Length | Dimensions ...

2018-7-27 · The Football Association's (FA) recommendations of which goalpost size is suitable for each age or team size: 11-a-side: the 11-a-side game for players aged 14 and over recommends a football goal size with the dimensions 24ft x 8ft (7.32m x 2.44m). This is the goal post ...

## A football is kicked off the ground at an initial upward ...

A football is kicked from the ground with a speed of 23 m/s at an angle of 42 degrees above the horizontal. a) How long is the football in the air? b) How far does it travel? Calculus: Suppose you throw an object from a great height, so that it reaches very nearly terminal velocity by the time it hits the ground.
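Several of these excerpts pose the same introductory projectile-motion exercise. As a worked check (not taken from any of the quoted sources) of the 25 m/s, 45-degree kick above, with $g \approx 9.8\ \text{m/s}^2$ and air resistance neglected:

$$t = \frac{2v_0\sin\theta}{g} = \frac{2(25)\sin 45^\circ}{9.8} \approx 3.6\ \text{s}, \qquad R = \frac{v_0^2\sin 2\theta}{g} = \frac{(25)^2\sin 90^\circ}{9.8} \approx 63.8\ \text{m}, \qquad h_{\max} = \frac{(v_0\sin\theta)^2}{2g} \approx 15.9\ \text{m}.$$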
2021-09-20 23:34:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3873796761035919, "perplexity": 1785.6544381759538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057119.85/warc/CC-MAIN-20210920221430-20210921011430-00425.warc.gz"}
https://physics.stackexchange.com/questions/linked/72368
19 questions linked to/from Why are most metals gray/silver? 295 views ### Why do mirrors look “gray”? [duplicate] Or any other silver surface? Perfect silver "colour" just seems to be "reflecting gray", so something that absorbs all wavelengths by the same fraction and reflects everything else specularly. When I ... 67 views ### How is gold (the element) gold (the color)? [duplicate] This question is related to the inquiry here: What enables protons to give new properties to an atom every time one is added? and here: What causes atoms to have their specific colors? In general, ... 7k views ### Why are gold mirrors yellow? Why are golden mirrors yellow? Do they add a yellow component to the spectrum or absorb non-yellow components? If they absorb, then why are they used in telescopes being imperfect? If they add a ... 2k views ### What is the source for Osmium's colour? The majority of metals are known for appearing grey to our eyes, cf. e.g. Why are most metals gray/silver? But the main exceptions to this are the "famous" group eleven metals, where their distinctive ... 7k views ### Is there 100% reflective mirror…i just want one I have heard there are none right now...but i saw something that said something about 100% reflection. Forgotten completely. sorry. I want it, i think it'd be very amazing to save sunlight in it from ... 3k views ### Why do most metals appear silver in color with gold being an exception from a scattering and EM viewpoint? Related: Why are most metals gray/silver? After reading Johannes’ impressive answer to Ali Abbasinasab question of why do most metals appear silver in color with the exception of gold (and copper), ... 933 views ### Principle of Reflection on atomic level This well-observed phenomenon has, besides several others, always been a fascination to me. We are well aware of several theories, experiments, and practical applications of this well-known phenomenon,... 1k views ### What about a surface determines its color? Light falls on a surface. Some wavelengths get absorbed. The other are reflected. The reflected ones are the colors that we perceive to be of the surface. What is the property that determines, what ... 659 views ### Why are metals opaque? [closed] Why are metals opaque? Is it due to the free electrons in a metal or a material's intrinsic properties? 559 views ### Why does silver have such a strong UV resonance compared to other metals? Related: Why did high quality mirrors use aluminum coatings instead of silver? After reading Chris White’s and LDC3’s comments in the above related link, it got me wondering about silver’s atomic ... 543 views ### Electromagnetic waves in a perfect conductor [closed] What happens when an electromagnetic wave strikes a perfect conductor at normal incidence? Is the wave transmitted or reflected through the conductor? 182 views ### Why are some elemental materials grey? How does grey occur in elemental materials such as metals? I believe that grey arises from the simultaneous reflection and absorption of all colors of the spectrum (in different atoms of course), as ... 611 views ### Can electrons reflect light? Lately, I have been watching sparks while connecting my electronic devices and I can notice that electricity is kind of blue, and theoretically it's blue because it reflects blue wavelengths?? And ... Looking at reflectance spectrum of silver, given e.g. here, we can say that more than about $95\%$ of light is reflected by silver. Similarly for white notebook paper, the spectrum is given here, and ...
2019-12-15 18:11:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6069566607475281, "perplexity": 2549.6946964206245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541309137.92/warc/CC-MAIN-20191215173718-20191215201718-00046.warc.gz"}
https://quant.stackexchange.com/questions/28197/is-there-a-relation-between-total-futures-and-the-amount-of-production
# Is there a relation between total futures and the amount of production?

I have a multipart question about futures and production. Let's take corn as an example. We add up the total 1-year futures of corn; call this weight $A$ kg. Next, we can get a reasonable estimate of how much the world corn production will be in 1 year; call this $B$ kg.

(1) Is there a relation between $A$ and $B$?

(2) Can non-producers of a good sell futures? If yes, suppose non-corn farmers sell corn futures. Then $A$ may appear greater than $B$.

(3) Then how does the supply and demand work? I.e., will this cause the corn price to go down in 1 year? My confusion is that the supply and demand curves I have seen only assume $B$ and not $A$. Can the demand depend on $A$?

(4) When $A>B$, what happens if everybody who bought the futures wants their corn?

PS: I am a newbie, so if this sounds dumb please explain that too. Thanks.

It is not a dumb question, but it is very confusing at a basic level. The Open Interest in futures (what you call A) has nothing at all to do with the total production (what you call B). The futures market is just a derivative market, where side bets are made as to what the PRICE of the crop will be. You cannot use it to infer what the PRODUCTION will be. Of course speculators (non-producers) can both sell and buy futures. What consumers will eat, however, is physical corn (i.e. B), not A. A is not edible. A only tells you how many people are interested in making side bets on the price of corn (for speculative or hedging purposes). Keep in mind also that for every seller of futures there is a buyer (i.e. futures are in zero net supply: there are A futures long but also A futures short, netting out to zero).
2022-05-27 18:43:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5075244903564453, "perplexity": 872.7797945765352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662675072.99/warc/CC-MAIN-20220527174336-20220527204336-00100.warc.gz"}
https://www.storyofmathematics.com/parallel-planes
# Parallel Planes – Explanation & Examples

Parallel planes are planes that never intersect. Need some refresher? You may want to review the basics of parallel lines and planes first. Those concepts extend naturally to parallel planes. In the next sections, we'll learn how to:

• define parallel planes
• find parallel planes from a figure
• verify whether two planes (from equations) are parallel

## What are parallel planes?

As mentioned in the first section, when two planes lie in the same direction but do not meet, we call them parallel planes. The figure above shows an example of two parallel planes. Observe how the two extend in the same direction, but these planes will never meet. What do we call planes that intersect? Yes, you guessed it right. Planes that are not parallel and intersect along a line are called intersecting planes.

### What are some real-world examples of parallel planes?

• Our homes' ceilings and floors are great examples of parallel planes. They extend within the same space (our home), but these two planes will never meet.
• The steps on our stairs are also examples of parallel planes. Each step extends along the same direction as one another, but these steps will never intersect each other.
• Two bookshelves facing each other are another great example of parallel planes.

We've now learned about parallel planes, so it's time for us to practice looking for parallel planes in three-dimensional figures.

## How to determine if planes are parallel?

To identify parallel planes, we have to ensure that the planes we're comparing are lying within the same space. Look for a reference plane and find a second plane that is facing opposite it. The rectangular prism shown above contains multiple pairs of parallel planes. To find one pair, we can start with Plane $\boldsymbol{ABCD}$. Find the surface lying in the same direction, but opposite to it, and that will be Plane $\boldsymbol{HEFG}$. Since both planes are within the same prism, we can say that the two are parallel to each other, or Plane $\boldsymbol{ABCD}$ || Plane $\boldsymbol{HEFG}$.

### How to check if the equations of two planes are parallel?

• In coordinate geometry, two planes with equations of the form $Ax + By + Cz = D$ are parallel exactly when their normal vectors $\langle A, B, C \rangle$ are scalar multiples of each other (equivalently, when the cross product of the two normal vectors is the zero vector).
• Given two equations, $A_1x + B_1y + C_1z = D_1$ and $A_2x + B_2y + C_2z = D_2$, the two planes are parallel when the ratios of each pair of coefficients are equal:

$\dfrac{A_1}{A_2}=\dfrac{B_1}{B_2}=\dfrac{C_1}{C_2}$

(If the constants $D_1$ and $D_2$ are also in this same ratio, the two equations describe one and the same plane rather than two distinct parallel planes.)

For now, let's focus on the parallel planes' fundamental definitions, and let's practice identifying parallel planes in three-dimensional figures.

### Summary of parallel planes definition and properties

Before we start checking our new knowledge on parallel planes, let's make sure we summarize everything that we know so far:

• Parallel planes lie within the same space.
• These planes can never meet.
• We can apply the transitive property to parallel planes.
• Equations represent parallel planes when the ratios of their terms' coefficients are equal.

Example 1

Which of the following is not true about parallel planes?

1. They share the same space.
2. They lie in the same direction.
3. Their intersection is a line.
4. They will never meet.

Solution

Go back to the definition of parallel planes: they share the same space and will never meet. Planes that do meet are called intersecting planes; when they do, they intersect through a line. This means that parallel planes will never intersect at a line, so the third statement is the one that is not true.
Example 2

Which of the following are examples of parallel planes?

1. A writing pad's cover and its page.
2. The surfaces of a triangular tent.
3. A library's ceiling and floor.
4. The corner of a room.

Solution

Let's discuss each example shown and see if they satisfy the conditions of parallel planes:

• The writing pad's cover and pages share a common side and are glued or stapled there, so they stick together. This means that they are not parallel planes.
• A triangular tent will not have surfaces that lie along the same direction, and each pair of planes shares a common side.
• The corner of a room shares a common side, so its walls do not represent parallel planes.

However, the library's ceiling and floor are in the same direction and space, but they will never meet. This means that the third option is the correct answer.

Example 3

Take a snippet or a screenshot of the problem and construct a second plane so that you now have a pair of parallel planes.

Solution

Since parallel planes extend along the same direction, draw either a plane above or below the given one. Make sure that the two planes will never meet so that they satisfy the conditions of parallel planes. The figure shown above is an example. Yours might look different, but as long as they meet the conditions, they are valid answers.

Example 4

List down three pairs of parallel planes you can find in the figure shown below.

Solution

A rectangular prism has six faces, so it makes sense that it has three pairs of parallel planes. Let's start with Plane $\boldsymbol{ABCD}$; the surface lying opposite it is Plane $\boldsymbol{HEFG}$. This means that these two planes are parallel. We can do the same for the two remaining pairs. Plane $\boldsymbol{HABG}$ and Plane $\boldsymbol{ECDF}$ are facing opposite each other. The third pair, Plane $\boldsymbol{ACEH}$ and Plane $\boldsymbol{BDFG}$, are also facing opposite each other. Hence, we have the following pairs of parallel planes:

• Plane $\boldsymbol{ABCD}$ || Plane $\boldsymbol{HEFG}$
• Plane $\boldsymbol{HABG}$ || Plane $\boldsymbol{ECDF}$
• Plane $\boldsymbol{ACEH}$ || Plane $\boldsymbol{BDFG}$

Example 5

Which of the following pairs of planes are parallel to each other?

1. $\boldsymbol{PQWV}$ and $\boldsymbol{PQRS}$
2. $\boldsymbol{VWUT}$ and $\boldsymbol{RSUT}$
3. $\boldsymbol{VWUT}$ and $\boldsymbol{PQRS}$
4. $\boldsymbol{PVTR}$ and $\boldsymbol{RSUT}$

Solution

Recall that parallel planes neither intersect nor share a common side. Observe the three pairs: $\boldsymbol{PQWV}$ and $\boldsymbol{PQRS}$, $\boldsymbol{VWUT}$ and $\boldsymbol{RSUT}$, as well as $\boldsymbol{PVTR}$ and $\boldsymbol{RSUT}$.

• $\boldsymbol{PQWV}$ and $\boldsymbol{PQRS}$ intersect each other at their common side, $\boldsymbol{PQ}$.
• $\boldsymbol{VWUT}$ and $\boldsymbol{RSUT}$ intersect each other at their common side, $\boldsymbol{UT}$.
• $\boldsymbol{PVTR}$ and $\boldsymbol{RSUT}$ intersect each other at their common side, $\boldsymbol{RT}$.

This means that the only possible pair of parallel planes is $\boldsymbol{VWUT}$ and $\boldsymbol{PQRS}$. We can also see that the two planes face opposite each other, confirming that this is the correct option.

Example 6

Does the figure below contain any parallel planes? Name one pair and describe its shape.

Solution

The bases lie in the same direction and will never meet. They also do not share a common side. This means that the figure contains a pair of parallel planes. Planes AGFJI and BHEDC each contain five sides.
Pentagons are polygons that contain five sides, so the parallel planes are parallel pentagons.

Example 7

Determine whether the planes $4x – 5y + 2z = 5$ and $8x – 10y + 4z = 12$ are parallel.

Solution

Recall that two planes are parallel when the ratios of their coefficients share the relationship shown below.

$\dfrac{A_1}{A_2}=\dfrac{B_1}{B_2}=\dfrac{C_1}{C_2}$

Substitute the coefficients and find their respective ratios:

• $A_1 = 4$, $A_2 = 8$, $\dfrac{A_1}{A_2}=\dfrac{1}{2}$
• $B_1 = -5$, $B_2 = -10$, $\dfrac{B_1}{B_2}=\dfrac{1}{2}$
• $C_1 = 2$, $C_2 = 4$, $\dfrac{C_1}{C_2}=\dfrac{1}{2}$

We can see that the three ratios are all equal to $\dfrac{1}{2}$, so the two planes are parallel.

Example 8

What must be the value of $a$ so that the planes shown below are parallel?

$3x – 4y + z = 4$
$6x – (a + 2) y + 2z = 9$

Solution

Recall that two planes are parallel when the ratios of their coefficients share the relationship shown below.

$\dfrac{A_1}{A_2}=\dfrac{B_1}{B_2}=\dfrac{C_1}{C_2}$

Substitute the coefficients and find their respective ratios:

• $A_1 = 3$, $A_2 = 6$, $\dfrac{A_1}{A_2}=\dfrac{1}{2}$
• $B_1 = -4$, $B_2 = -(a + 2)$, $\dfrac{B_1}{B_2}=\dfrac{4}{a + 2}$
• $C_1 = 1$, $C_2 = 2$, $\dfrac{C_1}{C_2}=\dfrac{1}{2}$

For the planes to be parallel, the three ratios must be equal. This means that $\dfrac{4}{a + 2}$ must be equal to $\dfrac{1}{2}$. Equate the two and solve for $a$.

$\dfrac{4}{a + 2} = \dfrac{1}{2}$

Cross-multiply and simplify the expressions on both sides of the equation.

\begin{aligned}4(2)&=1(a+2)\\8 &= a + 2\\ a &= 8 – 2\\ a &= 6\end{aligned}

This means that $a$ must be $6$ for the two planes to be parallel.

### Practice Questions

1. True or False? Parallel planes can intersect with each other.
2. Which of the following are examples of parallel planes?
a. The corners of a billiard table.
b. The first and second levels of a cat tower.
c. The front and back surfaces of a wallet.
d. A hardbound book's cover and its page.
3. List down three pairs of parallel planes you can find in the figure shown below.
4. Take a snippet or a screenshot of the problem and construct a second plane so that you now have a pair of parallel planes.
5. Which of the following pairs of planes are parallel to each other?
a. $\boldsymbol{CDGH}$ and $\boldsymbol{EGHF}$
b. $\boldsymbol{ABCD}$ and $\boldsymbol{EFGH}$
c. $\boldsymbol{AEGC}$ and $\boldsymbol{PQRS}$
d. $\boldsymbol{PVTR}$ and $\boldsymbol{RSUT}$
6. Does the figure below contain any parallel planes? Name one pair and describe its shape.
7. Determine whether the planes $8x – 7y + 4z = 12$ and $32x – 28y + 16z = 56$ are parallel.
8. What must be the value of $n$ so that the planes shown below are parallel?
$48x – 24y + 30z = 90$
$8x – (5n + 4) y + 5z = 15$
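The coefficient-ratio test used in Examples 7 and 8 is equivalent to checking that the planes' normal vectors are scalar multiples of each other; as a quick recheck of Example 7 in that form:

$$\mathbf{n}_1 = \langle 4, -5, 2 \rangle, \qquad \mathbf{n}_2 = \langle 8, -10, 4 \rangle = 2\,\mathbf{n}_1,$$

so the planes are parallel, and they are distinct (not the same plane) because the constants do not share the ratio: $D_2 = 12 \neq 2D_1 = 10$.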
2021-11-30 21:16:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6281068921089172, "perplexity": 521.0381652835908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359073.63/warc/CC-MAIN-20211130201935-20211130231935-00626.warc.gz"}
https://www.centralbanking.com/central-banking/news/2135594/iceland-receives-final-nordic-cash-injection
# Iceland receives final Nordic cash injection The Central Bank of Iceland has received the last tranche of bilateral loans from its Nordic partners, as part of an International Monetary Fund (IMF)-supported economic programme, but has agreed to extend an agreement with Poland to draw down a loan at a later date.
2020-10-28 06:12:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21515271067619324, "perplexity": 9795.4061180573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107896778.71/warc/CC-MAIN-20201028044037-20201028074037-00375.warc.gz"}
http://www.physicsforums.com/showpost.php?p=564473&postcount=5
There's a nice mathematical explanation for the existence of antiparticles. It involves involuted associative algebras. For example, the complex numbers form such an algebra, and the complex scalar field, which is an element of such an algebra, describes, after quantization, both particles and antiparticles. See if the electromagnetic field $A_{\mu}(x)$ and the gravity field (well, any one of the 3 possible fields describing it) could form such an algebra. Daniel.
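For reference, a compact statement of the structure the post appeals to (this is the standard definition, not taken from the post): an involutive (or $*$-) algebra is an associative algebra $A$ over $\mathbb{C}$ equipped with a map ${}^* : A \to A$ satisfying

$(a + b)^* = a^* + b^*, \qquad (\lambda a)^* = \bar{\lambda}\, a^*, \qquad (ab)^* = b^* a^*, \qquad (a^*)^* = a.$

Complex conjugation $z^* = \bar{z}$ makes $\mathbb{C}$ itself such an algebra. Roughly, for a quantized complex scalar field the corresponding involution $\phi \mapsto \phi^\dagger$ exchanges the operators that annihilate particles with those that create antiparticles, which is the sense in which one field describes both.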
2014-03-09 17:54:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5081397891044617, "perplexity": 2459.7584017920503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010048333/warc/CC-MAIN-20140305090048-00028-ip-10-183-142-35.ec2.internal.warc.gz"}
https://gamedev.stackexchange.com/questions/136712/locking-a-rotation-on-an-axisyaw-pitch-roll-based-on-a-parental-transform
# Locking a rotation on an axis (yaw/pitch/roll) based on a parent transform

I am currently attempting to lock the rotation of a non-rigidbody transform around an axis. It is parented to a game object that is rotating in 3D space. I want the original transform to have the same rotation as its parent, but locked on a certain axis. To clarify my question, I will provide some code and an example.

Example: There is a 3D model with a plane attached to the left upper arm subsegment (see hierarchy) whose initial normal is aligned with the global x-axis <1,0,0>, i.e., the initial right vector of the subsegment. In the context of this post, what I would like is to align the normal of the plane with the subsegment's right vector while keeping the plane's up axis locked. For instance, I do not want the yaw component of the transform to change. Trying the following code does not produce the intended effect.

Vector3 eulerRepresentation = parent.transform.rotation.eulerAngles;
eulerRepresentation.y = 0;
Quaternion fromEuler = Quaternion.Euler(eulerRepresentation);
transform.rotation = fromEuler;

This is very likely because Euler-angle representations of a rotation are not unique; for example, (180, 0, 180) represents the same rotation as (0, 180, 0). When inspecting the Euler values, it is exactly as I suspected: there doesn't seem to be a correlation between the object's yaw and the object's euler.y value. How would I go about locking either yaw, pitch, or roll?

• Perhaps consider locking the rotation using a Transform.LookAt, then modify the lookat vector to suit your needs. Hint: transform.position + parent.transform.forward will give you a point to look at, but then you'd have to specify the up vector yourself to suit your roll. – Morten Feb 2 '17 at 14:58

I'd solve this using LookRotation. We can generalize Unity's helper method to let us point any particular local axis we want at any particular global axis, like so:

Quaternion GeneralizedLookRotation(
    Vector3 localExactAxis,
    Vector3 globalExactAxis,
    Vector3 localApproximateAxis,
    Vector3 globalApproximateAxis)
{
    // Rotate our chosen local axes into a standard orientation.
    Quaternion standardize = Quaternion.Inverse(
        Quaternion.LookRotation(localExactAxis, localApproximateAxis)
    );

    // Rotate the standard orientation to point to the chosen global axes.
    Quaternion turn = Quaternion.LookRotation(globalExactAxis, globalApproximateAxis);

    // Chain both operations to rotate the local axes to the global axes.
    return turn * standardize;
}

This function will take a vector in local coordinates, localExactAxis, and rotate it to point exactly along globalExactAxis in world coordinates. That still leaves us with one degree of freedom (twisting around globalExactAxis), so we also provide a localApproximateAxis which should map as close as possible to globalApproximateAxis. Here's what this looks like:

The left is normal parenting behaviour. In the other two, I added a LateUpdate() method (so that it runs after any animations or update scripts have rotated the parent) that adjusts the child rotation like so:

if (parentExact) // Middle version
{
    // Point my right vector exactly along my parent's forward,
    // and my up vector as close as possible to world up.
    transform.rotation = GeneralizedLookRotation(
        Vector3.right, transform.parent.forward,
        Vector3.up, Vector3.up
    );
    // (Effectively, yaw & pitch with parent, but don't roll)
}
else // Rightmost version
{
    // Point my up vector exactly along world up,
    // and my right vector as close as possible to my parent's forward.
    transform.rotation = GeneralizedLookRotation(
        Vector3.up, Vector3.up,
        Vector3.right, transform.parent.forward
    );
    // (Effectively, yaw with parent, but don't pitch/roll)
}

There are a few approaches here.

Freeze the axis in the Unity Inspector

Select the object you are rotating (I am assuming a rigidbody) and go into the inspector. Find the constraints property and check off the axis you want to freeze. That will not allow rotation or movement (whichever you check off, since both are clickable) of your object on that axis.

Make a float to insert and lock the axis

Create a float:

float lockAxis = 0;

Add it into your Update():

void Update()
{
    // Keep the current x rotation, but pin the y and z angles to lockAxis.
    transform.rotation = Quaternion.Euler(transform.rotation.eulerAngles.x, lockAxis, lockAxis);
}

This sets your chosen axes to the same value every frame, so they are effectively locked.

Freeze the axis using eulerAngles

transform.eulerAngles can be assigned a Vector3:

Vector3 eulerAngles = new Vector3(0, 0, 0);
transform.eulerAngles = eulerAngles;

However, you cannot directly assign a quaternion to it; convert first:

Quaternion rotation = new Quaternion(0, 0, 10, 10);
transform.eulerAngles = rotation.eulerAngles;

In contrast, transform.rotation can be assigned a Quaternion:

Quaternion rotation = new Quaternion(0, 0, 10, 10);
transform.rotation = rotation;

However, you cannot directly assign a Vector3 to it; convert first:

Vector3 eulerAngles = new Vector3(0, 0, 0);
transform.rotation = Quaternion.Euler(eulerAngles);

Use quaternion roll/pitch/yaw equations

roll  = Mathf.Atan2(2*y*w + 2*x*z, 1 - 2*y*y - 2*z*z);
pitch = Mathf.Atan2(2*x*w + 2*y*z, 1 - 2*x*x - 2*z*z);
yaw   = Mathf.Asin(2*x*y + 2*z*w);

You can use those if you directly want the roll, pitch, or yaw; that's the formula for each of them for one particular axis convention. Also make sure that you don't end up with a gimbal lock :)

• Unfortunately the suggested solution doesn't appear to be working well. I think the only solution I might have is to find an appropriate look-at vector and lock the up vector according to the parent's up vector. The issue with using Euler angles appears to be the same as my initial issue. If I am correct, this is because of the non-uniqueness of Euler angles, and the solution doesn't lie with Euler angles but rather with vectors. I'll update my question soon to represent what I'm truly trying to accomplish (part of it is in fact locking an axis). – fryBender Feb 2 '17 at 16:23
• Sorry it didn't correct it. But let us know when you've updated it or made any progress and I can see what else I can do to help then. – n_plum Feb 2 '17 at 17:07
• @fryBender go to the source I listed under "Freeze the axis using eulerAngles". I might be able to help solve that issue with duality. – n_plum Feb 2 '17 at 17:49
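The construction in the accepted answer can also be checked outside Unity. Below is a numpy sketch of the same math (our own hypothetical helper names, not Unity API): a look-rotation matrix whose z-axis is the "exact" direction, composed as turn * inverse(standardize), exactly as in GeneralizedLookRotation.

# Numpy sketch of the GeneralizedLookRotation math (hypothetical helpers).
import numpy as np

def look_rotation(forward, up):
    """Rotation matrix whose z-axis is `forward` and whose y-axis is as
    close as possible to `up` (mirrors Unity's Quaternion.LookRotation)."""
    z = forward / np.linalg.norm(forward)
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

def generalized_look_rotation(local_exact, global_exact, local_approx, global_approx):
    # Inverse of a rotation matrix is its transpose.
    standardize = look_rotation(local_exact, local_approx).T
    turn = look_rotation(global_exact, global_approx)
    return turn @ standardize

R = generalized_look_rotation(
    np.array([1.0, 0, 0]), np.array([0, 0, 1.0]),   # local right -> a chosen "parent forward"
    np.array([0, 1.0, 0]), np.array([0, 1.0, 0]))   # keep up as close to world up as possible
print(R @ np.array([1.0, 0, 0]))  # [0, 0, 1]: local right lands exactly on the target axis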
2021-08-03 16:33:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38773101568222046, "perplexity": 1997.5953416179173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00002.warc.gz"}
https://topoptionsedvs.netlify.app/oliveros3161xyky/online-chi-square-test-calculator-449.html
## Online chi square test calculator

To calculate the expected numbers, a constant multiplier for each sample is obtained by dividing the total of the sample by the grand total for both samples. Related tools calculate the probability density function and the lower and upper cumulative distribution functions of the chi-square distribution; contingency tables, cross-tabs, and chi-square tests; regression, correlation, least-squares curve fitting, and non-parametric correlation; and analysis of survival data.

A typical McNemar's test calculator reports: a) the test statistic and p-values (1 tail and 2 tails) of McNemar's test; b) the odds ratio. Click the button "Reset" for another new calculation.

The chi square calculator will help you conduct the goodness of fit test. You can use chi square tables (the original source is no longer online) to look up critical values. For a 2x2 calculator (12 Aug 2013): enter the four counts in the four boxes and select 'Calculate'. If the minimum expected number is at least 1, you will be given the result. Related calculators cover the chi-square (2 by 2), the t-test and its p-value, the odds ratio (OR) and risk ratio (RR), and formulas for the chi-square (df = 1; 2 by 2 contingency table) and sample size.

## A Chi-Square Test calculator for a 2x2 table

This simple chi-square calculator tests for association between two categorical variables - for example, sex. Note that the chi-square test is more commonly used in a very different situation: to analyze a contingency table, which is appropriate when you wish to compare two groups. An interactive calculation tool for chi-square tests of goodness of fit and independence is also available (Kristopher J. Preacher, Vanderbilt). Fisher's test is often the better choice, as it always gives the exact p-value, while the chi-square test only calculates an approximate p-value. Are the groups different by random chance? The chi-square test helps us decide. An online chi-square calculator handles large tables, from 2x2 and 3x5 up to 10x10 or more; the accurate chi-square p-value is one step away. Another calculator conducts a chi-square test of independence: first indicate the number of columns and rows for the cross tabulation.

This test is used to determine if two categorical variables are independent. To calculate a chi-square test in Excel, you must first create a contingency table. Some tools let the student enter the observed and expected values and then compute the test statistic and p-value using the chi-square distribution (12 Jan 2020). Other pages cover the difference between the chi-square test and ANOVA, the formula for the chi-square test of independence, the requirements for the chi-square test, how to calculate a chi-square statistic (8 May 2017), and how to calculate a one-way chi-square test (19 Dec 2019), which tests the null hypothesis that the categorical data has the given frequencies.

The test statistic for McNemar's test is $\chi^2 = \frac{(b-c)^2}{b+c}$, where the test statistic has a chi-square distribution with $df = 1$ degree of freedom. If the nominal variables you are analyzing are not paired, you should use a chi-square test for independence instead. The McNemar chi square test calculator is an online statistics calculator which helps to compare paired or correlated proportions of data; here $a, b, c, d$ are the McNemar cell counts and $n$ is their sum.

This is also where a chi-square calculator for goodness of fit applies: the chi-square test for goodness of fit tests whether an observed frequency distribution of a nominal variable matches an expected frequency distribution. A visual, interactive 2x2 chi-squared test can likewise compare the success rates of two groups.
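Both calculations described above are easy to reproduce in code. Here is a short Python sketch (assuming scipy is available; the data are made up for illustration, not taken from any of the calculators mentioned):

# 2x2 chi-square test of independence: expected counts come from
# row_total * column_total / grand_total, as described above.
from scipy.stats import chi2, chi2_contingency

observed = [[20, 30],
            [40, 10]]
stat, p, dof, expected = chi2_contingency(observed)
print(stat, p, dof, expected)

# McNemar's test for paired proportions: chi^2 = (b - c)^2 / (b + c)
# with df = 1, where b and c are the discordant-pair counts.
b, c = 15, 5
mcnemar_stat = (b - c) ** 2 / (b + c)
p_mcnemar = chi2.sf(mcnemar_stat, df=1)
print(mcnemar_stat, p_mcnemar)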
2022-01-17 03:42:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7585411667823792, "perplexity": 1735.7041746208984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300289.37/warc/CC-MAIN-20220117031001-20220117061001-00545.warc.gz"}
https://mathematica.stackexchange.com/questions/132691/sort-does-not-sort-0-pi-2
# Sort does not sort {0, -Pi/2} [duplicate]

Why is this list returned unchanged by Sort?

Sort[{0, -Pi/2}]

{0, -Pi/2}

While this list is returned reversed?

Sort[{0, -1}]

{-1, 0}

Mathematica 11.0.0.0, macOS

• also see (2729) – WReach Dec 3 '16 at 20:41

The Wolfram documentation says:

Sort by default orders integers, rational, and approximate real numbers by their numerical values.

It also says:

Sort usually orders expressions by putting shorter ones first, and then comparing parts in a depth-first manner.

The phrase -Pi/2 is an expression, not an atomic number, so this second rule applies. If we convert each expression to atomic numerical values first...

Sort[{0 // N, -Pi/2 // N}]

{-1.5708, 0.}

...then Sort behaves as expected. You can force it to sort by numerical value by explicitly specifying the method used to compare entries for sorting:

Sort[{0, -Pi/2}, Less]

{-Pi/2, 0}

• This is no bug; see Sort bug in Mathematica 10? – corey979 Dec 3 '16 at 20:04
• @corey979 - Thanks, edited my answer accordingly. – Myridium Dec 3 '16 at 20:19
• Thanks to all who replied. I don't personally think Sort ought to treat -Pi/2 as anything but a number by default, but now that I understand what is going on I will just always explicitly specify a sorting method such as Less. – Ralph Dratman Dec 6 '16 at 10:56
• SortBy[list, N] can be used as well. – J. M. is away Dec 10 '16 at 2:51
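The same pitfall (structural ordering vs. numeric ordering of symbolic expressions) has a Python analogue. A small sketch with sympy (assumed available; this mirrors SortBy[list, N] rather than any Mathematica internals):

# Sort symbolic expressions by an explicit numeric key instead of
# relying on any default, structure-based ordering.
import sympy as sp

exprs = [sp.Integer(0), -sp.pi / 2]
print(sorted(exprs, key=lambda e: float(e)))  # [-pi/2, 0]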
2019-06-20 10:12:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26992204785346985, "perplexity": 3586.931852518654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999200.89/warc/CC-MAIN-20190620085246-20190620111246-00119.warc.gz"}
https://socratic.org/questions/how-do-you-factor-5y-2-26y-5
# How do you factor 5y^2-26y+5?

May 28, 2015

Use a version of the AC method...

$A = 5$, $B = -26$, $C = 5$.

$A C = 5 \times 5 = 25$

Since the sign of the constant term is $+$ and the middle coefficient is negative, look for a pair of factors of $A C = 25$ whose magnitudes add to give $26$, both taken with negative sign. $1$ and $25$ work. Use this pair to split the middle term into two, then factor by grouping...

$5 {y}^{2} - 26 y + 5 = 5 {y}^{2} - 25 y - y + 5$

$= \left(5 {y}^{2} - 25 y\right) - \left(y - 5\right)$

$= 5 y \left(y - 5\right) - \left(y - 5\right)$

$= \left(5 y - 1\right) \left(y - 5\right)$
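The result is easy to verify mechanically. A quick Python check with sympy (assumed available; an independent verification, not part of the original answer):

import sympy as sp

y = sp.symbols('y')
print(sp.factor(5*y**2 - 26*y + 5))   # (y - 5)*(5*y - 1)
print(sp.expand((5*y - 1)*(y - 5)))   # 5*y**2 - 26*y + 5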
2020-01-19 07:48:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8274771571159363, "perplexity": 528.9603818891677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594333.5/warc/CC-MAIN-20200119064802-20200119092802-00487.warc.gz"}
http://jhealthscope.com/en/articles/88454.html
# Removal of Malathion by Sodium Alginate/Biosilicate/Magnetite Nanocomposite as a Novel Adsorbent: Kinetics, Isotherms, and Thermodynamic Study

AUTHORS

Mehdi Hosseini 1 , * , Hossein Kamani 2 , Ali Esrafili 1 , Mojtaba Yegane Badi 1 , Mitra Gholami 1 , **

1 Department of Environmental Health Engineering, Research Center for Environmental Health Technology, School of Public Health, Iran University of Medical Sciences, Tehran, Iran

2 Department of Environmental Health, Health Promotion Research Center, Zahedan University of Medical Sciences, Zahedan, Iran

How to Cite: Hosseini M, Kamani H, Esrafili A, Yegane Badi M, Gholami M. Removal of Malathion by Sodium Alginate/Biosilicate/Magnetite Nanocomposite as a Novel Adsorbent: Kinetics, Isotherms, and Thermodynamic Study, Health Scope. Online ahead of Print; 8(4):e88454. doi: 10.5812/jhealthscope.88454.

ARTICLE INFORMATION

Health Scope: 8 (4); e88454
Published Online: October 29, 2019
Article Type: Research Article
Revised: August 1, 2019
Accepted: February 18, 2019

##### Abstract

Background: Organophosphorus pesticides are among the most widely consumed poisons in agriculture. Drinking water containing excessive amounts of these poisons therefore contributes to adverse health and hygiene outcomes in humans.

Methods: In this study, a new sodium alginate/biosilicate/magnetite (SABM) nanocomposite made by the precipitation method was used to remove Malathion from aqueous solutions. The properties of SABM were analyzed using XRD, SEM, EDX, and FTIR techniques. The possible impact of several parameters such as contact time, pH, initial Malathion concentration, temperature, and SABM dosage on the adsorption process was investigated. Equilibrium isotherm and kinetic models were employed to evaluate the fit to the experimental data.

Results: The highest removal (94.82%) by SABM was obtained at an optimum pH of 7, a contact time of 120 minutes, an adsorbent dosage of 4 g/L, a Malathion concentration of 10 mg/L, and a temperature of 318 K. The adsorption process followed the Freundlich isotherm model (R2 = 0.999), which implies that the adsorption of Malathion molecules onto SABM is mainly multilayer.

Conclusions: The results of this study showed that SABM had good removal efficiency and a low processing cost, and produced no substances harmful to the environment, which makes it a promising adsorbent for removing Malathion from aqueous environments.

## Keywords

Malathion Removal Sodium Alginate Biosilicate Nanocomposite

Copyright © 2019, Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/) which permits copying and redistributing the material for noncommercial use, provided the original work is properly cited.

### 1. Background

In the last few decades, pesticide contamination of water resources has emerged as a worldwide ecological concern. These compounds have also been detected in surface water and in bed sediments, as documented in the literature. Their concentrations in aqueous sources vary widely, and much higher concentrations have been reported in effluents from farmland. Organophosphorus pesticides are the most common pesticides in the world. Unfortunately, their uncontrolled consumption in many parts of the world has contributed to their overabundance in the environment (1, 2).
The pesticides used in agriculture can find their way into surface water bodies through irrigation and precipitation, which results in pollution of these waters (3). One of these pesticides is Malathion [diethyl 2-[(dimethoxyphosphorothioyl)sulfanyl]butanedioate], which is a frequently used pesticide. Pesticides are widely used to increase the productivity of agricultural products as well as to control diseases transmitted by arthropods (4). Organophosphorus pesticides such as Malathion are considered a serious threat to human health due to their inhibition of cholinesterase activity and the resulting central nervous system disorders (5). Malathion may persist in water with a half-life of months or even years. The World Health Organization (WHO) has set a guideline value for this pesticide in drinking water of 0.1 µg L-1 (6). Malathion has a high solubility in water, and its removal by conventional treatment processes such as sand filtration and coagulation is difficult (7). Results from previous studies indicate that various methods have been employed to eliminate these pesticides, such as photocatalytic degradation (8), biological oxidation (9), advanced oxidation processes (AOPs) (10), and adsorption (11, 12). Among these techniques, adsorption is a desirable method for removing pesticides: it is simple, reliable, safe, economical, effective, and environmentally friendly (13-15).

### 2. Objectives

The purpose of this study was to determine the efficiency of the SABM nanocomposite in the removal of Malathion from aqueous environments. The possible impact of several parameters such as contact time, pH, initial Malathion concentration, temperature, and adsorbent dosage on the sorption process was investigated. The Langmuir and Freundlich adsorption isotherms were used to evaluate the adsorption capacity of SABM.

### 3. Methods

#### 3.1. Chemicals

In this study, sodium hydroxide (NaOH), hydrochloric acid (HCl), acetic acid (CH3CO2H), ferric chloride (FeCl3.6H2O), ferrous sulfate (FeSO4.7H2O), sodium triphosphate (Na5P3O10), ammonia solution, and sodium alginate were from Merck, and diatomite was from Sigma-Aldrich. Also, Malathion (95.0% active ingredient) was purchased from Sigma-Aldrich. Deionized water was used to make the solutions needed for the experiments.

#### 3.2. Preparation of the Adsorbent

There are two main steps in the preparation of the adsorbent used in this study: first, making the magnetite nanoparticles; second, preparing the SABM nanocomposite. These two steps are explained in the following.

#### 3.3. Preparation of Magnetite Nanoparticles

A chemical coprecipitation method was used to prepare the magnetite nanoparticles. In this method, FeSO4.7H2O and FeCl3.6H2O were first dissolved in a 1:1 ratio at a concentration of 3.2 M in deionized water. The mixture was stirred in the presence of nitrogen gas at 80°C for 30 minutes. Then ammonia solution with a purity of 25.0% was added to the mixture to reach pH 10, and the mixture was stirred under nitrogen gas for another 60 minutes. The resulting magnetite nanoparticles were separated from the solution using a magnet and then washed several times with ethanol and deionized water. The washed nanoparticles were dried at 70°C for 24 hours (25).

#### 3.4. Preparation of Sodium Alginate/Biosilicate/Magnetite Nanocomposite Adsorbent

To prepare the nanocomposite, first 1 g of sodium alginate was added to 100 mL of acetic acid solution (1 M) and mixed for 2 hours.
Then 1 g of biosilicate and 1 g of magnetite were added to the solution, which was stirred at a fixed speed. To remove all bubbles and obtain a bubble-free mixture, the resulting mixture was left to stand for 10 hours. Next, a 100 mL mixture of NaOH (15.0%) and ethanol (95.0%) at a ratio of 4:1 was prepared; the sodium alginate/biosilicate/magnetite solution was then added dropwise, and the solution was stored for 24 hours to form granular particles. The granular particles were then washed with distilled water and dried at ambient temperature to constant weight. Finally, the dried material was chopped and passed through a sieve to obtain the nanocomposite in a suitable size (26). In the following, the sodium alginate/biosilicate/magnetite nanocomposite adsorbent is denoted by the abbreviation SABM.

#### 3.5. Characterization of SABM Adsorbent

A scanning electron microscope (SEM, JEOL model JSM-T330) equipped with energy-dispersive X-ray spectroscopy (EDX), operated under stable vacuum, was used to determine the surface morphology and composition of the prepared SABM adsorbent. The crystal structure and purity of the SABM adsorbent particles were characterized by X-ray diffraction (XRD) patterns, obtained on a Bruker D8 Advance X-ray diffractometer with Cu Kα radiation. The diffraction images were recorded at 40 mA and 40 kV in the 2θ range of 10° - 80°. Also, to identify the SABM adsorbent functional groups involved in the adsorption process, a Fourier transform infrared (FTIR) absorption spectrophotometer (JASCO, FT/IR-6300, Japan) was used with the KBr disc method. The FTIR absorption spectra were recorded in the range of 400 to 4000 cm-1.

#### 3.6. The pH at Point of Zero Charge (pHZPC)

To determine the pH at the point of zero charge (pHZPC) of the SABM adsorbent, the following steps were conducted. First, a sufficient amount of 0.1 M NaNO3 solution was poured into 250 mL flasks and the pH was adjusted between 2 and 11 with either 1 M HCl or NaOH. The volume of solution in each flask was brought to 100 mL by adding NaNO3 solution of the same known concentration; meanwhile, the initial pH values of the solutions were accurately recorded. In the next step, 0.15 g of SABM adsorbent was added to each of the flasks and placed on a shaker at 200 rpm for 24 hours. Finally, the SABM adsorbent was separated from the suspensions, and the pH values of the solutions (final pH) were recorded. The pHZPC was obtained by plotting the initial pH values versus the final pH values (27).

#### 3.7. Adsorption Experiments

This study was conducted in a batch system, varying one factor at a time. The effect of parameters including contact time, initial pH of the solution, SABM adsorbent dose, initial concentration of Malathion, and solution temperature on the adsorption of Malathion onto the SABM adsorbent was investigated. Also, adsorption kinetics and isotherms were studied. The experiments were carried out in the following steps. In the first stage, a 100 mL suspension containing Malathion (5.0 mg L-1) and SABM adsorbent (0.5, 1, 1.5, 2, and 2.5 g L-1) was poured into 250 mL conical flasks, the initial pH values were adjusted to 3, 5, 7, 9, and 11 using NaOH and HCl solutions, and the flasks were placed on a thermoshaker at 200 rpm and 25°C for 120 minutes. The pH was measured with a pH meter (Aqualytic AL15).
To investigate the effect of Malathion concentration, experiments were carried out at various concentrations (5, 25, 50, and 100 mg L-1) at pH 7 and a SABM adsorbent dose of 2 g L-1 at 25°C. Finally, the effect of temperature on the adsorption process was studied at various solution temperatures (25, 35, and 45°C) at pH 7, an adsorbent dose of 2 g L-1, and a Malathion concentration of 5 mg L-1. The temperature was controlled with an incubator shaker.

#### 3.8. Analysis

At the end of each experiment, a magnet (1.3 T) was used to remove the SABM adsorbent from the suspension, and the residual Malathion in the solution was measured with a UV-Vis spectrophotometer (DR 6000) at a λmax of 236 nm. The removal efficiency of the adsorption process was calculated using Equation 1:

$R(\%) = \dfrac{C_0 - C_t}{C_0} \times 100$

Where C0 and Ct are the initial and final concentrations of Malathion, respectively. The adsorption capacity was calculated using Equation 2:

$q_t = \dfrac{(C_0 - C_t)V}{m}$

Where C0, Ct, V, and m are the initial and final concentrations of Malathion, the volume of solution (L), and the mass of adsorbent particles (g), respectively.

### 4. Results and Discussions

#### 4.1. Characterization of SABM

The scanning electron micrographs of SABM, sodium alginate, biosilica, and magnetite are shown in Figure 1. As shown in Figure 1D, the porosity of the SABM adsorbent is much greater than that of the other materials. Such a porosity level enhances the capacity and efficiency of Malathion adsorption onto SABM. The elemental analysis of the SABM composition is shown in Figure 2. As can be seen, sodium, oxygen, iron, silicon, and aluminum are present in the adsorbent structure. Moreover, the results revealed that silica can prevent the oxidation of the iron nanoparticles by the acid used in the adsorbent synthesis. These findings point to the suitable composition of the materials applied in the synthesis of SABM.

Figure 1. SEM of adsorbent particles: A, sodium alginate; B, biosilica; C, magnetite; and D, SABM

Figure 2. EDX analytical results of SABM

The XRD pattern of SABM is shown in Figure 3. In the magnetite pattern, the diffraction peaks at 2θ of 30.6, 36.04, 43.6, 54.2, 57.6, and 63.25 are related to the crystalline planes (220, 311, 400, 422, 511, and 440) and agree with the Fe3O4 cubic phase (JCPDS card No. 19-0629) (22). Also, there are peaks in the SABM pattern indicating the presence of Fe3O4 in the SABM compound. As can be seen in Figure 3, the peaks obtained for the biosilica are in accordance with the pure silica phase (JCPDS ICDD File Card # 00-001-0647) and are quite obvious in the SABM pattern. Moreover, as shown in Figure 3, the intensity of the peaks in the SABM composite is reduced relative to Fe3O4 and biosilica, which can be related to the combination of these two substances with alginate, because alginate has an amorphous nature that affects the pattern of SABM (26).

FTIR spectroscopy is a powerful, well-developed method for determining the structure and identity of chemical species. It is mainly used to identify organic compounds because of the complexity of their spectra (5, 28). The FTIR spectra of sodium alginate, magnetite, biosilica, and SABM (before and after the adsorption of Malathion onto the SABM) are depicted in Figure 4.
Some obvious changes take place in the spectrum of SABM in comparison with the pristine sodium alginate spectrum and bare magnetite. Considering Figure 4, the bands at 1626 and 1453 cm-1 correspond to carboxylate anions (COO-). Owing to the polysaccharide character of the alginate, the band at 1093 cm-1 (C-O-C asymmetric stretching) is visible. The strong, broad band at 3442 cm-1 is related to the stretching vibration of O-H groups (18). In the magnetite spectrum (Figure 4), four major peaks are notable: the band at 3450 cm-1 relates to the stretching vibration of O-H groups, and the other three bands (635, 582, and 474 cm-1) relate to Fe-O vibrational bands (22). Comparison between the two spectra of the SABM adsorbent (before and after the adsorption of Malathion) showed that the intensity of the peaks at 3422, 2924, 2366, 1453, 1093, 793, 627, and 454 cm-1 was reduced after the adsorption of Malathion on the SABM, which indicates the involvement of these functional groups in the adsorption process and confirms that the magnetite nanoparticles were successfully coated with sodium alginate.

Figure 4. FTIR spectra of sodium alginate, magnetite, biosilica, and SABM

#### 4.2. Effect of Contact Time

To investigate the time dependence of the adsorption, the process was carried out under fixed conditions for 4 hours. Figure 5 shows the effect of contact time on the adsorption process. As shown in Figure 5, the Malathion removal efficiency increased rapidly within the first 10 minutes (20.0%) and then rose steadily until 120 minutes, where equilibrium was established at a removal efficiency of 92.1%.

#### 4.3. Effect of pH

The effect of pH on the removal of Malathion by the SABM adsorbent is shown in Figure 6A, for pH values of 3, 5, 7, 9, and 11, an adsorbent dose of 1 g L-1, a Malathion concentration of 5 mg L-1, a temperature of 25°C, and a contact time of 120 minutes. As can be seen, the highest removal efficiency of Malathion occurred at pH 7 and the lowest at pH 11 (Figure 6). With an increase of pH from 3 to 7, the removal efficiency of Malathion increased, but it decreased as the pH rose from 7 to 9 and 11. Similar results have been reported by Kumar et al. (29) and by Zhang et al. (30) for the removal of Malathion using agricultural and commercial adsorbents. Concerning the effect of pH on the adsorption process, determining the pHZPC is important for interpreting the results. Based on Figure 6B, the pHZPC of the SABM adsorbent was 9.6. When the pH value is higher than the pHZPC, the charge of the adsorbent is negative, and when it is lower than the pHZPC, the charge of the adsorbent is positive (15). Owing to the presence of electronegative centers (S and P) in the Malathion structure, and given the SABM adsorbent pHZPC of 9.6 (30), Malathion is adsorbed onto the SABM adsorbent better at acidic and neutral pH than at alkaline pH values.

Figure 6. Effects of pH on the removal of Malathion by the SABM (A) and pHZPC (B)

#### 4.4. Effect of the Adsorbent Dose

The effect of various adsorbent doses (0.5, 1, 1.5, 2, and 2.5 g L-1) on the Malathion adsorption onto the SABM adsorbent is shown in Figure 7A, where the pH and concentration of Malathion were 7 and 5 mg L-1, respectively, and the temperature was 25°C.
As can be seen in Figure 7A, an increase in the SABM dose enhanced the removal efficiency, with the lowest and highest removal efficiencies at SABM doses of 0.5 and 2.5 g L-1, respectively. Similar results have been observed previously (7, 29). With increasing mass of adsorbent, the number of active sites available to adsorb the pollutant increases, which led to an increase in the removal efficiency of Malathion onto the SABM adsorbent (27).

Figure 7. Effects of adsorbent dose (A) and initial concentration of Malathion (B) on the removal of Malathion by the SABM

#### 4.5. Effect of the Initial Concentration of Malathion

One of the most important and influential factors in the adsorption process is the initial concentration of the pollutant. Therefore, the effect of different initial concentrations (5, 25, 50, and 100 mg L-1) of Malathion on the removal efficiency was investigated, with the pH and SABM dose at 7 and 2 g L-1, respectively, and the temperature at 25°C; the results are presented in Figure 7B. As can be seen, when the initial Malathion concentration increased from 5 to 100 mg L-1, the removal efficiency decreased from 92.1% to 45.5%. This result agrees with the previous study by Kumar et al. (29), in which Malathion was adsorbed onto both agricultural and commercial adsorbents. This occurs because the dose of SABM is constant while the concentration of Malathion increases: with an increase in Malathion concentration, the active sites and surface area of the SABM become inadequate (29).

#### 4.6. The Effect of Temperature

The results of the temperature effect on the removal efficiency of Malathion by the SABM adsorbent are shown in Figure 8A. As can be seen, an increase in temperature led to an increase in the adsorption of Malathion (Figure 8A). The highest removal efficiency of Malathion was at 45°C (92.0%) and the lowest at 25°C (85.0%). Hence, the adsorption process is endothermic in nature. This can occur due to increased movement of the molecules out of the solution phase and their penetration into the pores of the SABM adsorbent (31, 32).

Figure 8. Effects of temperature (A), Langmuir isotherm (B), and pseudo-second-order kinetics (C) for the adsorption of Malathion onto the SABM

#### 4.7. Adsorption Isotherms

To investigate the distribution of adsorbate molecules on the adsorbent at equilibrium, adsorption isotherms were employed. In this study, the relationship between the concentration of Malathion in solution and the adsorbed amount was described by the Freundlich and Langmuir isotherms (15, 31). The Langmuir isotherm model states that the distribution of solute molecules on the adsorbent surface follows a monolayer pattern: once a solute molecule occupies an active site on the adsorbent, no further adsorption can occur at that site (32, 33). The linear form of the Langmuir isotherm model is expressed via Equation 3:

$\dfrac{C_e}{q_e} = \dfrac{1}{K_a q_m} + \dfrac{C_e}{q_m}$

The separation factor (RL), a dimensionless parameter, is defined via Equation 4:

$R_L = \dfrac{1}{1 + K_a C_0}$

Where Ce, qe, qm, and Ka are the equilibrium concentration of Malathion in solution (mg L-1), the amount of adsorbed Malathion (mg g-1), the maximum monolayer adsorption capacity (mg g-1), and the energy of adsorption (L mg-1), respectively; qm and Ka are calculated from the plot of Ce/qe versus Ce (15, 34).
In Table 1, the calculated Langmuir constants for modeling the SABM adsorbent are presented. The plot of Ce/qe versus Ce for the Langmuir isotherm model (not provided here) gives a poor coefficient of determination (R2 = 0.852) for the adsorption process. As can be seen in Table 1, the Langmuir constants qm, RL, and Ka are 36.76 mg g-1, 0.011, and 17.24 L mg-1, respectively. This qm for the SABM adsorbent is higher than that reported in the previous study by Darvishi Cheshmeh Soltani et al. (26) on the adsorption of a textile dye using a bio-silica/chitosan nanocomposite. Also, a study of Malathion removal by agricultural and commercial adsorbents carried out by Kumar et al. (29) reported qm = 25 mg g-1, which is lower than the value found in this study. A comparison of the qm of SABM for Malathion removal with other similar nanomaterial sorbents under similar experimental conditions is shown in Table 3. In this study, RL (the dimensionless separation factor, which lies between 0 and 1 for a favorable equilibrium (15)) is in the favorable range. Therefore, it can be concluded that the adsorption of Malathion onto the SABM adsorbent has a favorable equilibrium.

Table 1. Freundlich, Langmuir and Temkin Isotherm Parameters for the Adsorption of Malathion onto SABM Adsorbent

Langmuir isotherm: R2 = 0.852; Ka = 17.24 L mg-1; RL = 0.011; qm = 36.76 mg g-1
Freundlich isotherm: R2 = 0.9995; n = 1.6; Kf = 44.59
Temkin isotherm: R2 = 0.7636; kt = 618 L mg-1; b1 = 11.895

Table 2. Various Parameters of Kinetic Models for the Malathion Adsorption onto the SABM

qe, experimental: 24.8 mg/g
Pseudo-first order: R2 = 0.788; k1 = 0.0053 1/min; qe, calculated = 0.9813 mg/g
Pseudo-second order: R2 = 0.9994; k2 = 0.023 g/(mg.min); qe, calculated = 25 mg/g
Intra-particle diffusion: R2 = 0.85; C = 9.65 mg/g; Kid = 0.22 mg/(g.min1/2)

The Freundlich isotherm model was used to describe multilayer adsorption of the adsorbate on the adsorbent surface. It assumes that adsorption occurs on heterogeneous surfaces, and it can be expressed in linear form via Equation 5 (27):

$\log q_e = \log K_f + \dfrac{1}{n} \log C_e$

Where the Freundlich isotherm constants Kf and n are the extent of adsorption (mg g-1) and the adsorption intensity of the system, calculated from the plot of log qe versus log Ce. Figure 8B shows the Freundlich isotherm fit for the adsorption of Malathion onto the SABM adsorbent; the coefficient of determination (R2 = 0.9959) indicates that the adsorption process is well described by the Freundlich isotherm. The Freundlich constants Kf and n given in Table 1 are 44.59 and 1.6, respectively. The high value of the Kf constant represents a very large extent of adsorption of Malathion onto the SABM adsorbent. Also, the value of n is larger than 1, indicating a favorable adsorption system and a multilayer physical process in the adsorption of Malathion by the SABM adsorbent (35).

The Temkin model is employed to investigate the heat of adsorption (adsorption energy) and adsorbent-adsorbate interactions. This isotherm assumes that the adsorption energy of all the molecules in a layer decreases linearly with monolayer sorption on the active sites as a result of adsorbent-adsorbate interactions. The linear form of the Temkin model is given as follows (28, 36):

$q_e = B_1 \ln k_t + B_1 \ln C_e$

Where $B_1 = RT/b_1$ denotes the Temkin constant (J/mol), R is the universal gas constant (8.314 J/mol.K), T is the absolute temperature (K), and kt and b1 represent the equilibrium binding constant (L/g) and the adsorption heat (kJ/mol), respectively.
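To illustrate how the linearized isotherm fits described above are computed in practice, here is a short numpy sketch. The equilibrium data below are hypothetical placeholders, not the paper's measurements, and the volume and mass are invented for the example.

# Linearized isotherm fits (Equations 2, 3, and 5) on hypothetical data.
import numpy as np

C0 = np.array([5.0, 25.0, 50.0, 100.0])   # initial Malathion, mg/L (hypothetical)
Ce = np.array([0.4, 4.8, 15.0, 54.5])     # equilibrium Malathion, mg/L (hypothetical)
V, m = 0.1, 0.2                           # solution volume (L) and adsorbent mass (g)
qe = (C0 - Ce) * V / m                    # Equation 2: adsorbed amount, mg/g

# Freundlich: log qe = log Kf + (1/n) log Ce  -> fit a line to log-log data
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
Kf, n = 10**intercept, 1/slope
print(f"Freundlich: Kf = {Kf:.2f}, n = {n:.2f}")

# Langmuir: Ce/qe = 1/(Ka*qm) + Ce/qm  -> fit Ce/qe against Ce
slope_L, intercept_L = np.polyfit(Ce, Ce / qe, 1)
qm, Ka = 1/slope_L, slope_L/intercept_L
print(f"Langmuir: qm = {qm:.2f} mg/g, Ka = {Ka:.3f} L/mg")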
Based on the data obtained, the magnitude of the b1 value showed the fast removal of Malathion at the initial stage, and the smallness of the kt value implied weak bonding of the Malathion molecules onto the composite.

#### 4.8. Adsorption Kinetics

To study the mechanism of Malathion adsorption onto the SABM adsorbent, the transient behavior of the adsorption process was investigated using the pseudo-first-order and pseudo-second-order kinetics, which are explained as follows.

The pseudo-first-order kinetic. The linear equation of the pseudo-first-order kinetic is shown in Equation 6 (37):

$\log(q_e - q_t) = \log q_e - \dfrac{k_1}{2.303} t$

Where qe, qt, and k1 refer to the amount of adsorbed Malathion at equilibrium (mg g-1), the amount of adsorbed Malathion at time t, and the equilibrium rate constant (min-1) of the pseudo-first-order kinetic, respectively. The k1 is obtained from the plot of log (qe - qt) versus t; the pseudo-first-order kinetic fit for the SABM adsorbent had a very poor coefficient of determination (R2 = 0.7884) (not shown). The calculated pseudo-first-order kinetic constants are provided in Table 2. As shown in Table 2, the calculated equilibrium adsorption capacity qe (Cal) was lower than the experimental qe (Exp) value, which indicates the inapplicability of this model.

The pseudo-second-order equation. The linear form of the pseudo-second-order kinetic is given in Equation 7 (27):

$\dfrac{t}{q_t} = \dfrac{1}{k_2 q_e^2} + \dfrac{1}{q_e} t$

By which the rate constant (k2) and the equilibrium adsorption capacity (qe) were calculated by plotting t/qt versus t (Figure 8C); a numerical sketch of this fit appears below. The initial adsorption rate (h) was calculated at zero time by Equation 8:

$h = k_2 q_e^2$

Additionally, the intraparticle diffusion model is conveniently employed to recognize the diffusion mechanism. The model can be written as follows (38, 39):

$q_t = k_{id} t^{1/2} + C$

Where kid (mg/(g.min1/2)) is the intraparticle diffusion rate constant, and C is the intercept, representing the thickness of the boundary layer (mg/g); the effect of this layer depends on the value of the intercept. As shown in Figure 8C, the pseudo-second-order kinetic fit for the adsorption of Malathion onto the SABM adsorbent has a good coefficient of determination (R2 = 0.9994). Similar behavior has been observed by Naushad et al. (31) for the removal of Malathion using Amberlyst-15 resin. All the parameters of this kinetic model are given in Table 2. According to Table 2, the calculated equilibrium adsorption capacity qe (Cal) (25 mg g-1) was close to the experimental qe (Exp) value (24.8 mg g-1), which indicates the applicability of this kinetic model to the adsorption process. Also, h is 14.37 mg g-1 min-1, which indicates a high initial adsorption rate. Based on the intra-particle diffusion model, the high value of the C parameter (9.65) indicates that the boundary layer effect was also responsible for the adsorption. The multi-linearity of the q versus t0.5 plot and/or deviation of the plots from the origin further confirms that the adsorption process is complex and that some other mechanisms along with intraparticle diffusion control the process steps, as reported previously by Jerold et al. (40).

Table 3. Adsorption Capacities of Various Adsorbents for the Uptake of Malathion

Montmorillonite: 7.95 mg/g (35)
Amberlyst-15 cation exchange resin: 12.12 mg/g (31)
Powdered activated carbon: 21.74 mg/g (29)
De-Acidite FF-IP resin: 16.39 mg/g (32)
SABM nanocomposite: 36.86 mg/g (this work)

Abbreviation: SABM, sodium alginate/biosilicate/magnetite.

#### 4.9. Thermodynamic Studies

The thermodynamic study was performed to reach a better understanding of the adsorptive behavior of Malathion toward the nanocomposite.
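Before the thermodynamic estimates, here is the numerical sketch of the pseudo-second-order fit referenced in section 4.8 (hypothetical time-series data, not the paper's raw measurements; numpy assumed available):

# Pseudo-second-order fit (Equations 7 and 8) on hypothetical kinetic data.
import numpy as np

t  = np.array([10, 20, 40, 60, 90, 120], dtype=float)   # contact time, minutes
qt = np.array([13.0, 17.5, 21.0, 22.8, 24.0, 24.6])     # adsorbed amount, mg/g

# t/qt = 1/(k2*qe^2) + t/qe is a line in t: slope = 1/qe, intercept = 1/(k2*qe^2)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_cal = 1 / slope
k2 = slope**2 / intercept     # from intercept = 1/(k2*qe^2)
h = 1 / intercept             # initial adsorption rate h = k2*qe^2
print(f"qe = {qe_cal:.1f} mg/g, k2 = {k2:.4f} g/(mg*min), h = {h:.2f} mg/(g*min)")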
The free energy change (ΔG0) (kJ mol-1), enthalpy change (ΔH0) (kJ mol-1), and entropy change (ΔS0) (kJ mol-1 K-1) for the adsorption of Malathion were calculated by Equations 9 and 10 (26):

$\Delta G^0 = -RT \ln K_D$

$\ln K_D = \dfrac{\Delta S^0}{R} - \dfrac{\Delta H^0}{RT}$

The thermodynamic parameters of Malathion adsorption on SABM are listed in Table 4. As represented in Table 4, the values of ΔH0 and ΔS0 are positive, and the standard free energy (ΔG0) is negative. The positive ΔH0 value indicates that the sorption process was endothermic. In other words, the positivity of this parameter states that an increase in temperature has a positive effect on the adsorption of Malathion, and the adsorption of this pollutant is more favorable at higher temperatures. Furthermore, the negative values obtained for the Gibbs free energy indicate that the adsorption of Malathion by the synthesized adsorbent is a spontaneous process (31).

Table 4. Thermodynamic Parameters for the Adsorption of Malathion on SABM

Temperature (K): Ln KD; ΔG0 (kJ/mol)
298: 1.81; -2.80
308: 2.05; -4.05
318: 3.19; -5.78
ΔH0 = 60.26 kJ/mol and ΔS0 = 0.21 kJ/mol.K over the studied temperature range.

#### 4.10. The Mechanisms of the Adsorption

The mechanism of the adsorption of organic pollutants onto inorganic materials is usually a combination of electrostatic interaction, ion exchange, π-π electron donor-acceptor (EDA) interaction, hydrophobic surface interaction, and so on. The hydrophobic interaction is an important mechanism involved in the sorption of Malathion onto SABM. Malathion is only partially soluble in water. As the pH increases, the Malathion molecules become less water-soluble and more hydrophobic; this leads to a higher adsorption efficiency at pH 7. Therefore, hydrophobic surface interactions should be a dominant mechanism in the adsorption process. In addition, electrostatic interaction (between the oppositely charged groups of adsorbate and adsorbent) can be a major mechanism governing the adsorption of Malathion onto the SABM. Also, acid-base interactions may be another significant factor involved in controlling Malathion adsorption.

### References

• 1. Hela DG, Lambropoulou DA, Konstantinou IK, Albanis TA. Environmental monitoring and ecological risk assessment for pesticide contamination and effects in Lake Pamvotis, northwestern Greece. Environ Toxicol Chem. 2005;24(6):1548-56. doi: 10.1897/04-455r.1. [PubMed: 16117136].
• 2. Warren N, Allan IJ, Carter JE, House WA, Parker A. Pesticides and other micro-organic contaminants in freshwater sedimentary environments—a review. Appl Geochem. 2003;18(2):159-94. doi: 10.1016/s0883-2927(02)00159-2.
• 3. Dehghani R, Moosavi SG, Esalmi H, Mohammadi M, Jalali Z, Zamini N. Surveying of pesticides commonly on the markets of Iran in 2009. J Environ Protect. 2011;2(8):1113-7. doi: 10.4236/jep.2011.28129.
• 4. Ward MH, Nuckols JR, Weigel SJ, Maxwell SK, Cantor KP, Miller RS. Identifying populations potentially exposed to agricultural pesticides using remote sensing and a Geographic Information System. Environ Health Perspect. 2000;108(1):5-12. doi: 10.1289/ehp.001085. [PubMed: 10622770]. [PubMed Central: PMC1637858].
• 5. Al-Qurainy F, Abdel-Megeed A. Phytoremediation and detoxification of two organophosphorous pesticides residues in Riyadh area. World Appl Sci J. 2009;6(7):987-98.
• 6. World Health Organization. Guidelines for drinking-water quality: Recommendations. WHO; 2004.
• 7. Ohno K, Minami T, Matsui Y, Magara Y. Effects of chlorine on organophosphorus pesticides adsorbed on activated carbon: Desorption and oxon formation. Water Res. 2008;42(6-7):1753-9.
2020-02-19 11:54:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6142060160636902, "perplexity": 10563.635313621526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144111.17/warc/CC-MAIN-20200219092153-20200219122153-00351.warc.gz"}
https://leanprover-community.github.io/mathlib_docs/combinatorics/simple_graph/partition.html
mathlib documentation combinatorics.simple_graph.partition

Graph partitions #

This module provides an interface for dealing with partitions on simple graphs. A partition of a graph G, with vertices V, is a set P of disjoint nonempty subsets of V such that:
• The union of the subsets in P is V.
• Each element of P is an independent set. (Each subset contains no pair of adjacent vertices.)

Graph partitions are graph colorings that do not name their colors. They are adjoint in the following sense. Given a graph coloring, there is an associated partition from the set of color classes, and given a partition, there is an associated graph coloring from using the partition's subsets as colors. Going from graph colorings to partitions and back makes a coloring "canonical": all colors are given a canonical name and unused colors are removed. Going from partitions to graph colorings and back is the identity.

Main definitions #
• simple_graph.partition is a structure to represent a partition of a simple graph
• simple_graph.partition.parts_card_le is whether a given partition is an n-partition (a partition with at most n parts)
• simple_graph.partitionable n is whether a given graph is n-partite
• simple_graph.partition.to_coloring creates colorings from partitions
• simple_graph.coloring.to_partition creates partitions from colorings

Main statements #
• simple_graph.partitionable_iff_colorable is that n-partitionability and n-colorability are equivalent.

structure simple_graph.partition {V : Type u} (G : simple_graph V) : Type u
• parts : set (set V)
• is_partition : setoid.is_partition self.parts
• independent : ∀ (s : set V), s ∈ self.parts → is_antichain G.adj s

A partition of a simple graph G is a structure constituted by:
• parts: a set of subsets of the vertices V of G
• is_partition: a proof that parts is a proper partition of V
• independent: a proof that each element of parts doesn't have a pair of adjacent vertices

def simple_graph.partition.parts_card_le {V : Type u} {G : simple_graph V} (P : G.partition) (n : ℕ) : Prop
Whether a partition P has at most n parts. A graph with a partition satisfying this predicate is called n-partite. (See simple_graph.partitionable.)
Equations

def simple_graph.partitionable {V : Type u} (G : simple_graph V) (n : ℕ) : Prop
Whether a graph is n-partite, which is whether its vertex set can be partitioned in at most n independent sets.
Equations

def simple_graph.partition.part_of_vertex {V : Type u} {G : simple_graph V} (P : G.partition) (v : V) : set V
The part in the partition that v belongs to.
Equations

theorem simple_graph.partition.part_of_vertex_mem {V : Type u} {G : simple_graph V} (P : G.partition) (v : V) : P.part_of_vertex v ∈ P.parts

theorem simple_graph.partition.mem_part_of_vertex {V : Type u} {G : simple_graph V} (P : G.partition) (v : V) : v ∈ P.part_of_vertex v

theorem simple_graph.partition.part_of_vertex_ne_of_adj {V : Type u} {G : simple_graph V} (P : G.partition) {v w : V} (h : G.adj v w) : P.part_of_vertex v ≠ P.part_of_vertex w

def simple_graph.partition.to_coloring {V : Type u} {G : simple_graph V} (P : G.partition) : G.coloring P.parts
Create a coloring using the parts themselves as the colors. Each vertex is colored by the part it's contained in.
Equations

def simple_graph.partition.to_coloring' {V : Type u} {G : simple_graph V} (P : G.partition) : G.coloring (set V)
Like simple_graph.partition.to_coloring but uses set V as the coloring type.
Equations

def simple_graph.coloring.to_partition {V : Type u} {G : simple_graph V} {α : Type v} (C : G.coloring α) : G.partition
Creates a partition from a coloring.
Equations

@[simp] theorem simple_graph.coloring.to_partition_parts {V : Type u} {G : simple_graph V} {α : Type v} (C : G.coloring α) : C.to_partition.parts = C.color_classes

@[protected, instance] def simple_graph.partition.inhabited {V : Type u} {G : simple_graph V} : inhabited G.partition
Equations

theorem simple_graph.partitionable_iff_colorable {V : Type u} {G : simple_graph V} {n : ℕ} : G.partitionable n ↔ G.colorable n
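As a usage sketch of the adjunction described above (my own illustration in Lean 3 syntax, assuming the mathlib declarations listed on this page; it is not part of the generated documentation):

```lean
import combinatorics.simple_graph.partition

variables {V : Type*} (G : simple_graph V)

-- Moving between n-colorability and n-partitionability
-- via `simple_graph.partitionable_iff_colorable`.
example (n : ℕ) (h : G.colorable n) : G.partitionable n :=
simple_graph.partitionable_iff_colorable.mpr h

example (n : ℕ) (h : G.partitionable n) : G.colorable n :=
simple_graph.partitionable_iff_colorable.mp h
```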
2022-01-23 00:29:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5068095326423645, "perplexity": 1686.8137479015363}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303917.24/warc/CC-MAIN-20220122224904-20220123014904-00653.warc.gz"}
https://math.stackexchange.com/questions/1070074/what-syntax-exists-for-higher-order-logic
# What syntax exists for higher order logic?

I know this is sort of a broad question, but I'm having trouble getting a handle on the syntax for higher order logic when coming from first order logic. Basically I want to be able to do resolution proofs in higher order logic, since resolution works so well for first order logic.

As I understand it, the syntax of first order logic is this: A first order term is either a variable or a first order function of some arity applied to terms. Functions of arity zero can be represented as constants. A first order function takes only terms as its arguments. There are an infinite number of first order element variables. There are an infinite number of free (constant) first order functions. Nothing else is a term. A first order predicate takes only first order terms as arguments. Predicates that take no arguments can be represented as propositional constants. A first order logical formula consists of predicates and logical formulas connected by logical connectives, including negation, conjunction, disjunction, and quantifiers. Quantifiers quantify element variables, either universally or existentially. Logical connectives can connect predicates or other logical formulas. Nothing else is a logical formula.

Example first order statement: $\forall x \exists y\; P(x,y) \rightarrow Q(f(x), g(y))$

Now, we can extend this to second order logic by allowing quantification of functions and predicates/relations. First order function constants and predicate constants are then no longer required to be free constants, but may be bound variables. Other than that it's identical to first order logic. The semantics may be more complicated, because the completeness and compactness theorems no longer apply in full semantics, and a higher order proof calculus may be more complicated because higher order Skolemization is more complicated and may require the axiom of choice, but the syntax isn't much different. Quantified functions and relations in second order logic still take first order terms as arguments and return first order terms and truth values respectively.

Using our first order example as quantified second order logic: $\forall P \exists Q \forall f \exists g \forall x \exists y\; P(x,y) \rightarrow Q(f(x), g(y))$

This seems pretty straightforward if I want to do resolution proofs in second order logic. I can use most of the same proof calculus as first order logic, with new rules for unifying function variables and predicate variables.

But when we go to higher orders the syntax seems muddier. Do we have second order terms as different types from first order terms? For instance I can have a third order function variable that takes a second order function variable as an argument... and have its domain be first order terms, or second order functions, of all sorts of different arities and signatures. Or a fourth order predicate that takes a third order function of a specific arity, a second order function, and a first order term. It looks to me like there's an explosion of syntactic complexity at third order logic and higher, but I haven't seen many explicit examples of third or higher order syntax.

In first and second order logic, as far as I understand, the only complexity in the signature of functions and relations is the arity: just how many arguments each takes. But it appears that the signature of higher order functions and relations is much more complicated, since higher order functions can take as arguments, and have as domain, much more than just the first order terms.
I've heard higher order logic is similar to type theory, in that higher order terms are essentially different types, and that higher order logic can be represented as many sorted first order logic. Can someone explain how higher order syntax is supposed to work? Or point me to a reference for a formal syntax (or syntaxes?) for higher order logics?

• You should really make your question brief. I doubt you will find many people willing to read such a long post. Find a way to condense it. – Trismegistos Dec 16 '14 at 8:42
• Second order logic syntax is already defined in a number of texts... it's higher order syntax that is muddy, particularly how you handle third order terms (second order function and predicate variables) as different types, as a sort of multi-sorted logic that starts to encroach on type theory. – dezakin Dec 16 '14 at 18:06
• Have a look at the documentation for the HOL4 system, particularly the Logic manual that you can find via hol.sourceforge.net/documentation.html – Rob Arthan Dec 17 '14 at 23:14

The key is that it is based on the lambda calculus (as a syntax for anonymous functions). A set is identified with its characteristic function $i \rightarrow o$, a predicate over sets is then of type $(i \rightarrow o) \rightarrow o$, and a predicate over predicates over sets is $((i \rightarrow o) \rightarrow o) \rightarrow o$, and so forth. Here, then, is an example, an axiom of extensionality for such third-order predicates:

$$\forall p\,q : ((i \rightarrow o) \rightarrow o) \rightarrow o.\ \left(\forall x : (i \rightarrow o) \rightarrow o.\ p~x = q~x\right) \Rightarrow p = q$$

Function application is just written as juxtaposition here, e.g. $p~x$.
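Concretely, the whole syntax can be generated by a small grammar of simple type theory (a minimal sketch in the same notation as above, using $i$ for individuals and $o$ for truth values):

$$\tau \;::=\; i \;\mid\; o \;\mid\; \tau_1 \rightarrow \tau_2$$

$$t \;::=\; x^{\tau} \;\mid\; c^{\tau} \;\mid\; (t_{\tau_1 \rightarrow \tau_2}\; s_{\tau_1}) \;\mid\; (\lambda x^{\tau_1}.\, t_{\tau_2})$$

Formulas are simply terms of type $o$, and quantification at any type $\tau$ can be taken as a family of constants $\Pi_\tau : (\tau \rightarrow o) \rightarrow o$, with $\forall x^\tau.\,\varphi$ abbreviating $\Pi_\tau(\lambda x^\tau.\,\varphi)$. On this view there is no separate syntactic category per order: a "third order function variable" is just a variable whose type happens to be, say, $((i \rightarrow o) \rightarrow o) \rightarrow (i \rightarrow o)$, so the apparent explosion of signatures is absorbed entirely by the type grammar — exactly the many-sorted flavor mentioned in the question, with one sort per type.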
2021-04-15 03:36:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7604671716690063, "perplexity": 297.16801573133097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038082988.39/warc/CC-MAIN-20210415005811-20210415035811-00082.warc.gz"}
http://drevans.blog.enginehousebooks.com/2015/07/cleaned-qso-information-from-cq-ww.html
## 2015-07-05

### Cleaned QSO information from CQ WW contest files from 2008 to 2014

The CQ World Wide (CQ WW) contest website cqww.com hosts public logs from 2008 to 2014. These public logs are more or less as submitted, except that certain private information has been removed. However, since the logs contain all the Cabrillo lines of the original log, they are not particularly suitable for direct analysis. To aid analyses based on the public logs, I have made the QSO information from the logs available in a more convenient format.

I went through all the CQ WW public logs, cleaning them up and converting all the QSOs to the correct Cabrillo format (a surprising number of the raw logs provide examples of creative formatting and/or downright erroneous information in the QSO lines). The resulting QSOs for the CW and SSB contests from 2008 to 2014 are in a single compressed (xz) file downloadable from:

The MD5 checksum of this file is: 2e8540de740de98de305166c19c2f0bf

NB: See the updated post at: http://drevans.blog.enginehousebooks.com/2015/11/cleaned-and-augmented-qso-information.html
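For readers unfamiliar with the format, a cleaned Cabrillo QSO record is a single whitespace-separated line. An illustrative (entirely made-up) CQ WW CW line would look roughly like this — frequency in kHz, mode, date, time, then the sent callsign/RST/zone followed by the received callsign/RST/zone:

```
QSO: 14025 CW 2014-11-29 0002 K5XYZ 599 5 G4XYZ 599 14
```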
2021-11-28 04:41:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3113436996936798, "perplexity": 4925.194417539432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358469.34/warc/CC-MAIN-20211128043743-20211128073743-00448.warc.gz"}
http://www.gutenberg.us/articles/eng/Trigonometric_equation
### Trigonometric equation

In mathematics, trigonometric identities are equalities that involve trigonometric functions and are true for every single value of the occurring variables. Geometrically, these are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered in this article.

These identities are useful whenever expressions involving trigonometric functions need to be simplified. An important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity.

## Notation

### Angles

This article uses Greek letters such as alpha (α), beta (β), gamma (γ), and theta (θ) to represent angles. Several different units of angle measure are widely used, including degrees, radians, and grads:

1 full circle = 360 degrees = $2\pi$ radians = 400 grads.

The following table shows the conversions for some common angles:

| Degrees | Radians | Grads | Degrees | Radians | Grads |
|---|---|---|---|---|---|
| 30° | $\frac{\pi}{6}$ | 33⅓ grad | 45° | $\frac{\pi}{4}$ | 50 grad |
| 60° | $\frac{\pi}{3}$ | 66⅔ grad | 90° | $\frac{\pi}{2}$ | 100 grad |
| 120° | $\frac{2\pi}{3}$ | 133⅓ grad | 135° | $\frac{3\pi}{4}$ | 150 grad |
| 150° | $\frac{5\pi}{6}$ | 166⅔ grad | 180° | $\pi$ | 200 grad |
| 210° | $\frac{7\pi}{6}$ | 233⅓ grad | 225° | $\frac{5\pi}{4}$ | 250 grad |
| 240° | $\frac{4\pi}{3}$ | 266⅔ grad | 270° | $\frac{3\pi}{2}$ | 300 grad |
| 300° | $\frac{5\pi}{3}$ | 333⅓ grad | 315° | $\frac{7\pi}{4}$ | 350 grad |
| 330° | $\frac{11\pi}{6}$ | 366⅔ grad | 360° | $2\pi$ | 400 grad |

Unless otherwise specified, all angles in this article are assumed to be in radians, but angles ending in a degree symbol (°) are in degrees. Per Niven's theorem, multiples of 30° are the only rational angles with rational sin/cos, which may account for their popularity in examples.[1]

### Trigonometric functions

The primary trigonometric functions are the sine and cosine of an angle. These are sometimes abbreviated sin(θ) and cos(θ), respectively, where θ is the angle, but the parentheses around the angle are often omitted, e.g., sin θ and cos θ.

The tangent (tan) of an angle is the ratio of the sine to the cosine:

$$\tan\theta = \frac{\sin\theta}{\cos\theta}.$$

Finally, the reciprocal functions secant (sec), cosecant (csc), and cotangent (cot) are the reciprocals of the cosine, sine, and tangent:

$$\sec\theta = \frac{1}{\cos\theta},\qquad \csc\theta = \frac{1}{\sin\theta},\qquad \cot\theta = \frac{1}{\tan\theta} = \frac{\cos\theta}{\sin\theta}.$$

These definitions are sometimes referred to as ratio identities.

## Inverse functions

The inverse trigonometric functions are partial inverse functions for the trigonometric functions.
For example, the inverse function for the sine, known as the inverse sine (sin⁻¹) or arcsine (arcsin or asin), satisfies

$$\sin(\arcsin x) = x \quad\text{for}\quad |x| \leq 1$$

and

$$\arcsin(\sin x) = x \quad\text{for}\quad |x| \leq \pi/2.$$

| Function | sin | cos | tan | sec | csc | cot |
|---|---|---|---|---|---|---|
| Inverse | arcsin | arccos | arctan | arcsec | arccsc | arccot |

## Pythagorean identity

The basic relationship between the sine and the cosine is the Pythagorean trigonometric identity:

$$\cos^2\theta + \sin^2\theta = 1$$

where cos²θ means (cos(θ))² and sin²θ means (sin(θ))².

This can be viewed as a version of the Pythagorean theorem, and follows from the equation x² + y² = 1 for the unit circle. This equation can be solved for either the sine or the cosine:

$$\sin\theta = \pm\sqrt{1-\cos^2\theta} \quad\text{and}\quad \cos\theta = \pm\sqrt{1-\sin^2\theta}.$$

### Related identities

Dividing the Pythagorean identity through by either cos²θ or sin²θ yields two other identities:

$$1 + \tan^2\theta = \sec^2\theta \quad\text{and}\quad 1 + \cot^2\theta = \csc^2\theta.$$

Using these identities together with the ratio identities, it is possible to express any trigonometric function in terms of any other (up to a plus or minus sign). Each trigonometric function in terms of the other five:[2]

| in terms of | $\sin\theta$ | $\cos\theta$ | $\tan\theta$ | $\csc\theta$ | $\sec\theta$ | $\cot\theta$ |
|---|---|---|---|---|---|---|
| $\sin\theta =$ | $\sin\theta$ | $\pm\sqrt{1-\cos^2\theta}$ | $\pm\frac{\tan\theta}{\sqrt{1+\tan^2\theta}}$ | $\frac{1}{\csc\theta}$ | $\pm\frac{\sqrt{\sec^2\theta-1}}{\sec\theta}$ | $\pm\frac{1}{\sqrt{1+\cot^2\theta}}$ |
| $\cos\theta =$ | $\pm\sqrt{1-\sin^2\theta}$ | $\cos\theta$ | $\pm\frac{1}{\sqrt{1+\tan^2\theta}}$ | $\pm\frac{\sqrt{\csc^2\theta-1}}{\csc\theta}$ | $\frac{1}{\sec\theta}$ | $\pm\frac{\cot\theta}{\sqrt{1+\cot^2\theta}}$ |
| $\tan\theta =$ | $\pm\frac{\sin\theta}{\sqrt{1-\sin^2\theta}}$ | $\pm\frac{\sqrt{1-\cos^2\theta}}{\cos\theta}$ | $\tan\theta$ | $\pm\frac{1}{\sqrt{\csc^2\theta-1}}$ | $\pm\sqrt{\sec^2\theta-1}$ | $\frac{1}{\cot\theta}$ |
| $\csc\theta =$ | $\frac{1}{\sin\theta}$ | $\pm\frac{1}{\sqrt{1-\cos^2\theta}}$ | $\pm\frac{\sqrt{1+\tan^2\theta}}{\tan\theta}$ | $\csc\theta$ | $\pm\frac{\sec\theta}{\sqrt{\sec^2\theta-1}}$ | $\pm\sqrt{1+\cot^2\theta}$ |
| $\sec\theta =$ | $\pm\frac{1}{\sqrt{1-\sin^2\theta}}$ | $\frac{1}{\cos\theta}$ | $\pm\sqrt{1+\tan^2\theta}$ | $\pm\frac{\csc\theta}{\sqrt{\csc^2\theta-1}}$ | $\sec\theta$ | $\pm\frac{\sqrt{1+\cot^2\theta}}{\cot\theta}$ |
| $\cot\theta =$ | $\pm\frac{\sqrt{1-\sin^2\theta}}{\sin\theta}$ | $\pm\frac{\cos\theta}{\sqrt{1-\cos^2\theta}}$ | $\frac{1}{\tan\theta}$ | $\pm\sqrt{\csc^2\theta-1}$ | $\pm\frac{1}{\sqrt{\sec^2\theta-1}}$ | $\cot\theta$ |
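For example, if tan θ = 3/4 and θ lies in the first quadrant (so every sign above is +), the table gives

$$\sin\theta = \frac{\tan\theta}{\sqrt{1+\tan^2\theta}} = \frac{3/4}{5/4} = \frac{3}{5}, \qquad \cos\theta = \frac{1}{\sqrt{1+\tan^2\theta}} = \frac{4}{5},$$

and the reciprocals follow immediately: sec θ = 5/4, csc θ = 5/3, cot θ = 4/3.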
## Historic shorthands

The versine, coversine, haversine, and exsecant were used in navigation. For example the haversine formula was used to calculate the distance between two points on a sphere. They are rarely used today.

| Name(s) | Abbreviation(s) | Value[3] |
|---|---|---|
| versed sine, versine | versin(θ), vers(θ), ver(θ) | $1 - \cos\theta$ |
| versed cosine, vercosine | vercosin(θ) | $1 + \cos\theta$ |
| coversed sine, coversine | coversin(θ), cvs(θ) | $1 - \sin\theta$ |
| coversed cosine, covercosine | covercosin(θ) | $1 + \sin\theta$ |
| half versed sine, haversine | haversin(θ) | $\frac{1-\cos\theta}{2}$ |
| half versed cosine, havercosine | havercosin(θ) | $\frac{1+\cos\theta}{2}$ |
| half coversed sine, hacoversine, cohaversine | hacoversin(θ) | $\frac{1-\sin\theta}{2}$ |
| half coversed cosine, hacovercosine, cohavercosine | hacovercosin(θ) | $\frac{1+\sin\theta}{2}$ |
| exterior secant, exsecant | exsec(θ) | $\sec\theta - 1$ |
| exterior cosecant, excosecant | excsc(θ) | $\csc\theta - 1$ |
| chord | crd(θ) | $2\sin\left(\frac{\theta}{2}\right)$ |

Ancient Indian mathematicians used Sanskrit terms Jyā, koti-jyā and utkrama-jyā, based on the resemblance of the chord, arc, and radius to the shape of a bow and bowstring drawn back.

## Symmetry, shifts, and periodicity

By examining the unit circle, the following properties of the trigonometric functions can be established.

### Symmetry

When the trigonometric functions are reflected from certain angles, the result is often one of the other trigonometric functions.
This leads to the following identities:

Reflected in $\theta = 0$:[4]

\begin{align} \sin(-\theta) &= -\sin\theta \\ \cos(-\theta) &= +\cos\theta \\ \tan(-\theta) &= -\tan\theta \\ \csc(-\theta) &= -\csc\theta \\ \sec(-\theta) &= +\sec\theta \\ \cot(-\theta) &= -\cot\theta \end{align}

Reflected in $\theta = \pi/2$ (co-function identities):[5]

\begin{align} \sin(\tfrac{\pi}{2}-\theta) &= +\cos\theta \\ \cos(\tfrac{\pi}{2}-\theta) &= +\sin\theta \\ \tan(\tfrac{\pi}{2}-\theta) &= +\cot\theta \\ \csc(\tfrac{\pi}{2}-\theta) &= +\sec\theta \\ \sec(\tfrac{\pi}{2}-\theta) &= +\csc\theta \\ \cot(\tfrac{\pi}{2}-\theta) &= +\tan\theta \end{align}

Reflected in $\theta = \pi$:

\begin{align} \sin(\pi-\theta) &= +\sin\theta \\ \cos(\pi-\theta) &= -\cos\theta \\ \tan(\pi-\theta) &= -\tan\theta \\ \csc(\pi-\theta) &= +\csc\theta \\ \sec(\pi-\theta) &= -\sec\theta \\ \cot(\pi-\theta) &= -\cot\theta \end{align}

### Shifts and periodicity

By shifting the function round by certain angles, it is often possible to find different trigonometric functions that express particular results more simply. Some examples of this are shown by shifting functions round by π/2, π and 2π radians. Because the periods of these functions are either π or 2π, there are cases where the new function is exactly the same as the old function without the shift.

Shift by π/2:

\begin{align} \sin(\theta+\tfrac{\pi}{2}) &= +\cos\theta \\ \cos(\theta+\tfrac{\pi}{2}) &= -\sin\theta \\ \tan(\theta+\tfrac{\pi}{2}) &= -\cot\theta \\ \csc(\theta+\tfrac{\pi}{2}) &= +\sec\theta \\ \sec(\theta+\tfrac{\pi}{2}) &= -\csc\theta \\ \cot(\theta+\tfrac{\pi}{2}) &= -\tan\theta \end{align}

Shift by π (period for tan and cot):[6]

\begin{align} \sin(\theta+\pi) &= -\sin\theta \\ \cos(\theta+\pi) &= -\cos\theta \\ \tan(\theta+\pi) &= +\tan\theta \\ \csc(\theta+\pi) &= -\csc\theta \\ \sec(\theta+\pi) &= -\sec\theta \\ \cot(\theta+\pi) &= +\cot\theta \end{align}

Shift by 2π (period for sin, cos, csc and sec):[7]

\begin{align} \sin(\theta+2\pi) &= +\sin\theta \\ \cos(\theta+2\pi) &= +\cos\theta \\ \tan(\theta+2\pi) &= +\tan\theta \\ \csc(\theta+2\pi) &= +\csc\theta \\ \sec(\theta+2\pi) &= +\sec\theta \\ \cot(\theta+2\pi) &= +\cot\theta \end{align}
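For example, periodicity and the shift by π reduce a large argument to a known special angle:

$$\cos\frac{19\pi}{6} = \cos\left(\frac{7\pi}{6} + 2\pi\right) = \cos\left(\frac{\pi}{6} + \pi\right) = -\cos\frac{\pi}{6} = -\frac{\sqrt{3}}{2}.$$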
## Angle sum and difference identities

These are also known as the addition and subtraction theorems or formulae. They were originally established by the 10th century Persian mathematician Abū al-Wafā' Būzjānī. One method of proving these identities is to apply Euler's formula. The use of the symbols $\pm$ and $\mp$ is described in the article plus-minus sign.

Sine: $\sin(\alpha \pm \beta) = \sin\alpha\cos\beta \pm \cos\alpha\sin\beta$ [8][9]

Cosine: $\cos(\alpha \pm \beta) = \cos\alpha\cos\beta \mp \sin\alpha\sin\beta$ [9][10]

Tangent: $\tan(\alpha \pm \beta) = \frac{\tan\alpha \pm \tan\beta}{1 \mp \tan\alpha\tan\beta}$ [9][11]

Arcsine: $\arcsin\alpha \pm \arcsin\beta = \arcsin\left(\alpha\sqrt{1-\beta^2} \pm \beta\sqrt{1-\alpha^2}\right)$ [12]

Arccosine: $\arccos\alpha \pm \arccos\beta = \arccos\left(\alpha\beta \mp \sqrt{(1-\alpha^2)(1-\beta^2)}\right)$ [13]

Arctangent: $\arctan\alpha \pm \arctan\beta = \arctan\left(\frac{\alpha \pm \beta}{1 \mp \alpha\beta}\right)$ [14]

### Matrix form

The sum and difference formulae for sine and cosine can be written in matrix form as:

$$\begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}\begin{pmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{pmatrix} = \begin{pmatrix} \cos\alpha\cos\beta - \sin\alpha\sin\beta & -\cos\alpha\sin\beta - \sin\alpha\cos\beta \\ \sin\alpha\cos\beta + \cos\alpha\sin\beta & -\sin\alpha\sin\beta + \cos\alpha\cos\beta \end{pmatrix} = \begin{pmatrix} \cos(\alpha+\beta) & -\sin(\alpha+\beta) \\ \sin(\alpha+\beta) & \cos(\alpha+\beta) \end{pmatrix}$$

This shows that these matrices form a representation of the rotation group in the plane (technically, the special orthogonal group SO(2)), since the composition law is fulfilled: subsequent multiplications of a vector with these two matrices yields the same result as the rotation by the sum of the angles.

### Sines and cosines of sums of infinitely many terms

$$\sin\left(\sum_{i=1}^\infty \theta_i\right) = \sum_{\text{odd}\ k \ge 1} (-1)^{(k-1)/2} \sum_{\substack{A \subseteq \{1,2,3,\dots\} \\ |A| = k}} \left(\prod_{i \in A} \sin\theta_i \prod_{i \notin A} \cos\theta_i\right)$$

$$\cos\left(\sum_{i=1}^\infty \theta_i\right) = \sum_{\text{even}\ k \ge 0} (-1)^{k/2} \sum_{\substack{A \subseteq \{1,2,3,\dots\} \\ |A| = k}} \left(\prod_{i \in A} \sin\theta_i \prod_{i \notin A} \cos\theta_i\right)$$

In these two identities an asymmetry appears that is not seen in the case of sums of finitely many terms: in each product, there are only finitely many sine factors and cofinitely many cosine factors. If only finitely many of the terms θi are nonzero, then only finitely many of the terms on the right side will be nonzero because sine factors will vanish, and in each term, all but finitely many of the cosine factors will be unity.

### Tangents of sums

Let ek (for k = 0, 1, 2, 3, ...) be the kth-degree elementary symmetric polynomial in the variables $x_i = \tan\theta_i$ for i = 0, 1, 2, 3, ..., i.e.,

\begin{align} e_0 &= 1 \\ e_1 &= \sum_i x_i = \sum_i \tan\theta_i \\ e_2 &= \sum_{i<j} x_i x_j = \sum_{i<j} \tan\theta_i\tan\theta_j \\ e_3 &= \sum_{i<j<k} x_i x_j x_k = \sum_{i<j<k} \tan\theta_i\tan\theta_j\tan\theta_k \\ &\ \ \vdots \end{align}

Then

$$\tan\left(\sum_i \theta_i\right) = \frac{e_1 - e_3 + e_5 - \cdots}{e_0 - e_2 + e_4 - \cdots}.$$

The number of terms on the right side depends on the number of terms on the left side.
For example:

\begin{align} \tan(\theta_1+\theta_2) &= \frac{e_1}{e_0 - e_2} = \frac{x_1 + x_2}{1 - x_1 x_2} = \frac{\tan\theta_1 + \tan\theta_2}{1 - \tan\theta_1\tan\theta_2}, \\[8pt] \tan(\theta_1+\theta_2+\theta_3) &= \frac{e_1 - e_3}{e_0 - e_2} = \frac{(x_1+x_2+x_3) - (x_1 x_2 x_3)}{1 - (x_1 x_2 + x_1 x_3 + x_2 x_3)}, \\[8pt] \tan(\theta_1+\theta_2+\theta_3+\theta_4) &= \frac{e_1 - e_3}{e_0 - e_2 + e_4} = \frac{(x_1+x_2+x_3+x_4) - (x_1 x_2 x_3 + x_1 x_2 x_4 + x_1 x_3 x_4 + x_2 x_3 x_4)}{1 - (x_1 x_2 + x_1 x_3 + x_1 x_4 + x_2 x_3 + x_2 x_4 + x_3 x_4) + (x_1 x_2 x_3 x_4)}, \end{align}

and so on. The case of only finitely many terms can be proved by mathematical induction.[15]

### Secants and cosecants of sums

\begin{align} \sec\left(\sum_i \theta_i\right) &= \frac{\prod_i \sec\theta_i}{e_0 - e_2 + e_4 - \cdots} \\[8pt] \csc\left(\sum_i \theta_i\right) &= \frac{\prod_i \sec\theta_i}{e_1 - e_3 + e_5 - \cdots} \end{align}

where ek is the kth-degree elementary symmetric polynomial in the n variables xi = tan θi, i = 1, ..., n, and the number of terms in the denominator and the number of factors in the product in the numerator depend on the number of terms in the sum on the left. The case of only finitely many terms can be proved by mathematical induction on the number of such terms. The convergence of the series in the denominators can be shown by writing the secant identity in the form

$$e_0 - e_2 + e_4 - \cdots = \frac{\prod_i \sec\theta_i}{\sec\left(\sum_i \theta_i\right)}$$

and then observing that the left side converges if the right side converges, and similarly for the cosecant identity. For example,

\begin{align} \sec(\alpha+\beta+\gamma) &= \frac{\sec\alpha\sec\beta\sec\gamma}{1 - \tan\alpha\tan\beta - \tan\alpha\tan\gamma - \tan\beta\tan\gamma} \\[8pt] \csc(\alpha+\beta+\gamma) &= \frac{\sec\alpha\sec\beta\sec\gamma}{\tan\alpha + \tan\beta + \tan\gamma - \tan\alpha\tan\beta\tan\gamma} \end{align}

## Multiple-angle formulae

Tn is the nth Chebyshev polynomial: $\cos n\theta = T_n(\cos\theta)$ [16]

$$\sin^2 n\theta = S_n(\sin^2\theta)$$

De Moivre's formula: $\cos n\theta + i\sin n\theta = (\cos\theta + i\sin\theta)^n$ [17]

### Double-angle, triple-angle, and half-angle formulae

These can be shown by using either the sum and difference identities or the multiple-angle formulae.
Double-angle formulae:[18][19]

\begin{align} \sin 2\theta &= 2\sin\theta\cos\theta = \frac{2\tan\theta}{1+\tan^2\theta} \\[8pt] \cos 2\theta &= \cos^2\theta - \sin^2\theta = 2\cos^2\theta - 1 = 1 - 2\sin^2\theta = \frac{1-\tan^2\theta}{1+\tan^2\theta} \\[8pt] \tan 2\theta &= \frac{2\tan\theta}{1-\tan^2\theta} \\[8pt] \cot 2\theta &= \frac{\cot^2\theta - 1}{2\cot\theta} \end{align}

Triple-angle formulae:[16][20]

\begin{align} \sin 3\theta &= -\sin^3\theta + 3\cos^2\theta\sin\theta = -4\sin^3\theta + 3\sin\theta \\[8pt] \cos 3\theta &= \cos^3\theta - 3\sin^2\theta\cos\theta = 4\cos^3\theta - 3\cos\theta \\[8pt] \tan 3\theta &= \frac{3\tan\theta - \tan^3\theta}{1 - 3\tan^2\theta} \\[8pt] \cot 3\theta &= \frac{3\cot\theta - \cot^3\theta}{1 - 3\cot^2\theta} \end{align}

Half-angle formulae:[21][22]

\begin{align} \sin\frac{\theta}{2} &= \sgn\!\left(2\pi - \theta + 4\pi\left\lfloor\frac{\theta}{4\pi}\right\rfloor\right)\sqrt{\frac{1-\cos\theta}{2}} \qquad\left(\text{or}\ \sin^2\frac{\theta}{2} = \frac{1-\cos\theta}{2}\right) \\[8pt] \cos\frac{\theta}{2} &= \sgn\!\left(\pi + \theta + 4\pi\left\lfloor\frac{\pi-\theta}{4\pi}\right\rfloor\right)\sqrt{\frac{1+\cos\theta}{2}} \qquad\left(\text{or}\ \cos^2\frac{\theta}{2} = \frac{1+\cos\theta}{2}\right) \\[8pt] \tan\frac{\theta}{2} &= \csc\theta - \cot\theta = \pm\sqrt{\frac{1-\cos\theta}{1+\cos\theta}} = \frac{\sin\theta}{1+\cos\theta} = \frac{1-\cos\theta}{\sin\theta} \\[8pt] \cot\frac{\theta}{2} &= \csc\theta + \cot\theta = \pm\sqrt{\frac{1+\cos\theta}{1-\cos\theta}} = \frac{\sin\theta}{1-\cos\theta} = \frac{1+\cos\theta}{\sin\theta} \end{align}

Also:

\begin{align} \tan\frac{\eta+\theta}{2} &= \frac{\sin\eta + \sin\theta}{\cos\eta + \cos\theta} \\[8pt] \tan\left(\frac{\theta}{2} + \frac{\pi}{4}\right) &= \sec\theta + \tan\theta \\[8pt] \sqrt{\frac{1-\sin\theta}{1+\sin\theta}} &= \frac{1 - \tan(\theta/2)}{1 + \tan(\theta/2)} \\[8pt] \tan\tfrac{1}{2}\theta &= \frac{\tan\theta}{1 + \sqrt{1+\tan^2\theta}} \quad\text{for}\quad \theta \in \left(-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right) \end{align}
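For example, the half-angle formula yields an exact value of sin 15° from cos 30° = √3/2 (the sign is + since 15° lies in the first quadrant):

$$\sin 15^\circ = \sqrt{\frac{1-\cos 30^\circ}{2}} = \sqrt{\frac{2-\sqrt{3}}{4}} = \frac{\sqrt{2-\sqrt{3}}}{2} \approx 0.2588.$$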
The fact that the triple-angle formula for sine and cosine only involves powers of a single function allows one to relate the geometric problem of a compass and straightedge construction of angle trisection to the algebraic problem of solving a cubic equation, which allows one to prove that this is in general impossible using the given tools, by field theory.

A formula for computing the trigonometric identities for the third-angle exists, but it requires finding the zeroes of the cubic equation $4x^3 - 3x + d = 0$, where x is the value of the sine function at some angle and d is the known value of the sine function at the triple angle. However, the discriminant of this equation is positive, so this equation has three real roots (of which only one is the solution within the correct third-circle), but none of these solutions is reducible to a real algebraic expression, as they use intermediate complex numbers under the cube roots (which may be expressed in terms of real-only functions only if using hyperbolic functions).

### Sine, cosine, and tangent of multiple angles

For specific multiples, these follow from the angle addition formulas, while the general formula was given by 16th century French mathematician Vieta.

$$\sin n\theta = \sum_{k=0}^n \binom{n}{k}\cos^k\theta\,\sin^{n-k}\theta\,\sin\left(\tfrac{1}{2}(n-k)\pi\right)$$

$$\cos n\theta = \sum_{k=0}^n \binom{n}{k}\cos^k\theta\,\sin^{n-k}\theta\,\cos\left(\tfrac{1}{2}(n-k)\pi\right)$$

In each of these two equations, the first parenthesized term is a binomial coefficient, and the final trigonometric function equals one or minus one or zero so that half the entries in each of the sums are removed.

tan nθ can be written in terms of tan θ using the recurrence relation:

$$\tan(n{+}1)\theta = \frac{\tan n\theta + \tan\theta}{1 - \tan n\theta\,\tan\theta}.$$

cot nθ can be written in terms of cot θ using the recurrence relation:

$$\cot(n{+}1)\theta = \frac{\cot n\theta\,\cot\theta - 1}{\cot n\theta + \cot\theta}.$$

### Chebyshev method

The Chebyshev method is a recursive algorithm for finding the nth multiple angle formula knowing the (n − 1)th and (n − 2)th formulae.[23]

The cosine for nx can be computed from the cosine of (n − 1)x and (n − 2)x as follows:

$$\cos nx = 2\cos x\,\cos((n-1)x) - \cos((n-2)x)$$

Similarly sin(nx) can be computed from the sines of (n − 1)x and (n − 2)x:

$$\sin nx = 2\cos x\,\sin((n-1)x) - \sin((n-2)x)$$

For the tangent, we have:

$$\tan nx = \frac{H + K\tan x}{K - H\tan x}$$

where H/K = tan(n − 1)x.

### Tangent of an average

$$\tan\left(\frac{\alpha+\beta}{2}\right) = \frac{\sin\alpha + \sin\beta}{\cos\alpha + \cos\beta} = -\frac{\cos\alpha - \cos\beta}{\sin\alpha - \sin\beta}$$

Setting either α or β to 0 gives the usual tangent half-angle formulæ.

### Viète's infinite product

$$\cos\left(\frac{\theta}{2}\right)\cdot\cos\left(\frac{\theta}{4}\right)\cdot\cos\left(\frac{\theta}{8}\right)\cdots = \prod_{n=1}^\infty \cos\left(\frac{\theta}{2^n}\right) = \frac{\sin\theta}{\theta} = \operatorname{sinc}\,\theta.$$
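Setting θ = π/2 recovers Viète's classical formula for 2/π, since sinc(π/2) = 2/π:

$$\frac{2}{\pi} = \frac{\sqrt{2}}{2}\cdot\frac{\sqrt{2+\sqrt{2}}}{2}\cdot\frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdots$$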
## Power-reduction formula

Obtained by solving the second and third versions of the cosine double-angle formula.

| Sine | Cosine | Other |
|---|---|---|
| $\sin^2\theta = \frac{1-\cos 2\theta}{2}$ | $\cos^2\theta = \frac{1+\cos 2\theta}{2}$ | $\sin^2\theta\cos^2\theta = \frac{1-\cos 4\theta}{8}$ |
| $\sin^3\theta = \frac{3\sin\theta - \sin 3\theta}{4}$ | $\cos^3\theta = \frac{3\cos\theta + \cos 3\theta}{4}$ | $\sin^3\theta\cos^3\theta = \frac{3\sin 2\theta - \sin 6\theta}{32}$ |
| $\sin^4\theta = \frac{3 - 4\cos 2\theta + \cos 4\theta}{8}$ | $\cos^4\theta = \frac{3 + 4\cos 2\theta + \cos 4\theta}{8}$ | $\sin^4\theta\cos^4\theta = \frac{3 - 4\cos 4\theta + \cos 8\theta}{128}$ |
| $\sin^5\theta = \frac{10\sin\theta - 5\sin 3\theta + \sin 5\theta}{16}$ | $\cos^5\theta = \frac{10\cos\theta + 5\cos 3\theta + \cos 5\theta}{16}$ | $\sin^5\theta\cos^5\theta = \frac{10\sin 2\theta - 5\sin 6\theta + \sin 10\theta}{512}$ |

and in general terms of powers of sin θ or cos θ the following is true, and can be deduced using De Moivre's formula, Euler's formula and the binomial theorem.

If n is odd:

$$\cos^n\theta = \frac{2}{2^n}\sum_{k=0}^{\frac{n-1}{2}} \binom{n}{k}\cos\left((n-2k)\theta\right), \qquad \sin^n\theta = \frac{2}{2^n}\sum_{k=0}^{\frac{n-1}{2}} (-1)^{\left(\frac{n-1}{2}-k\right)}\binom{n}{k}\sin\left((n-2k)\theta\right)$$

If n is even:

$$\cos^n\theta = \frac{1}{2^n}\binom{n}{\frac{n}{2}} + \frac{2}{2^n}\sum_{k=0}^{\frac{n}{2}-1}\binom{n}{k}\cos\left((n-2k)\theta\right), \qquad \sin^n\theta = \frac{1}{2^n}\binom{n}{\frac{n}{2}} + \frac{2}{2^n}\sum_{k=0}^{\frac{n}{2}-1}(-1)^{\left(\frac{n}{2}-k\right)}\binom{n}{k}\cos\left((n-2k)\theta\right)$$

## Product-to-sum and sum-to-product identities

The product-to-sum identities or prosthaphaeresis formulas can be proven by expanding their right-hand sides using the angle addition theorems. See beat (acoustics) and phase detector for applications of the sum-to-product formulæ.

Product-to-sum:[24]

\begin{align} \cos\theta\cos\varphi &= \tfrac{1}{2}\left[\cos(\theta-\varphi) + \cos(\theta+\varphi)\right] \\[4pt] \sin\theta\sin\varphi &= \tfrac{1}{2}\left[\cos(\theta-\varphi) - \cos(\theta+\varphi)\right] \\[4pt] \sin\theta\cos\varphi &= \tfrac{1}{2}\left[\sin(\theta+\varphi) + \sin(\theta-\varphi)\right] \\[4pt] \cos\theta\sin\varphi &= \tfrac{1}{2}\left[\sin(\theta+\varphi) - \sin(\theta-\varphi)\right] \end{align}

Sum-to-product:

\begin{align} \sin\theta \pm \sin\varphi &= 2\sin\left(\frac{\theta\pm\varphi}{2}\right)\cos\left(\frac{\theta\mp\varphi}{2}\right) \\[4pt] \cos\theta + \cos\varphi &= 2\cos\left(\frac{\theta+\varphi}{2}\right)\cos\left(\frac{\theta-\varphi}{2}\right) \\[4pt] \cos\theta - \cos\varphi &= -2\sin\left(\frac{\theta+\varphi}{2}\right)\sin\left(\frac{\theta-\varphi}{2}\right) \end{align}

### Hermite's cotangent identity

Charles Hermite demonstrated the following identity. Suppose a1, ..., an are complex numbers, no two of which differ by an integer multiple of π. Let

$$A_{n,k} = \prod_{\substack{1 \le j \le n \\ j \ne k}} \cot(a_k - a_j)$$

(in particular, A1,1, being an empty product, is 1).
Then

$$\cot(z - a_1)\cdots\cot(z - a_n) = \cos\frac{n\pi}{2} + \sum_{k=1}^n A_{n,k}\cot(z - a_k).$$

The simplest non-trivial example is the case n = 2:

$$\cot(z - a_1)\cot(z - a_2) = -1 + \cot(a_1 - a_2)\cot(z - a_1) + \cot(a_2 - a_1)\cot(z - a_2).$$

### Ptolemy's theorem

If w + x + y + z = π (half circle), then

\begin{align} & \sin(w+x)\sin(x+y) \\ &{} = \sin(x+y)\sin(y+z) \\ &{} = \sin(y+z)\sin(z+w) \\ &{} = \sin(z+w)\sin(w+x) = \sin w\sin y + \sin x\sin z. \end{align}

(The first three equalities are trivial; the fourth is the substance of this identity.) Essentially this is Ptolemy's theorem adapted to the language of modern trigonometry.

## Linear combinations

For some purposes it is important to know that any linear combination of sine waves of the same period or frequency but different phase shifts is also a sine wave with the same period or frequency, but a different phase shift. In the case of a non-zero linear combination of a sine and cosine wave[27] (which is just a sine wave with a phase shift of π/2), we have

$$a\sin x + b\cos x = \sqrt{a^2+b^2}\cdot\sin(x+\varphi)$$

where

$$\varphi = \begin{cases} \arcsin\left(\frac{b}{\sqrt{a^2+b^2}}\right) & \text{if } a \ge 0, \\ \pi - \arcsin\left(\frac{b}{\sqrt{a^2+b^2}}\right) & \text{if } a < 0, \end{cases}$$

or equivalently

$$\varphi = \operatorname{sgn}(b)\arccos\left(\tfrac{a}{\sqrt{a^2+b^2}}\right),$$

or even

$$\varphi = \arctan\left(\frac{b}{a}\right) + \begin{cases} 0 & \text{if } a \ge 0, \\ \pi & \text{if } a < 0, \end{cases}$$

or using the atan2 function

$$\varphi = \operatorname{atan2}(b, a).$$

More generally, for an arbitrary phase shift, we have

$$a\sin x + b\sin(x+\alpha) = c\sin(x+\beta)$$

where

$$c = \sqrt{a^2 + b^2 + 2ab\cos\alpha},$$

and

$$\beta = \arctan\left(\frac{b\sin\alpha}{a + b\cos\alpha}\right) + \begin{cases} 0 & \text{if } a + b\cos\alpha \ge 0, \\ \pi & \text{if } a + b\cos\alpha < 0. \end{cases}$$

For the general case,

$$\sum_i a_i\sin(x+\delta_i) = a\sin(x+\delta),$$

where

$$a^2 = \sum_{i,j} a_i a_j\cos(\delta_i - \delta_j) \qquad\text{and}\qquad \tan\delta = \frac{\sum_i a_i\sin\delta_i}{\sum_i a_i\cos\delta_i}.$$

## Lagrange's trigonometric identities

These identities, named after Joseph Louis Lagrange, are:[28][29]

\begin{align} \sum_{n=1}^N \sin n\theta &= \frac{1}{2}\cot\frac{\theta}{2} - \frac{\cos(N+\frac{1}{2})\theta}{2\sin\frac{1}{2}\theta} \\[6pt] \sum_{n=1}^N \cos n\theta &= -\frac{1}{2} + \frac{\sin(N+\frac{1}{2})\theta}{2\sin\frac{1}{2}\theta} \end{align}

A related function is the following function of x, called the Dirichlet kernel:

$$1 + 2\cos x + 2\cos 2x + 2\cos 3x + \cdots + 2\cos nx = \frac{\sin\left(\left(n+\frac{1}{2}\right)x\right)}{\sin(x/2)}.$$
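For example, at n = 1 the Dirichlet kernel identity reduces to the easily verified statement

$$1 + 2\cos x = \frac{\sin\frac{3x}{2}}{\sin\frac{x}{2}},$$

which follows from $\sin\frac{3x}{2} = \sin\frac{x}{2}\cos x + \cos\frac{x}{2}\sin x = \sin\frac{x}{2}\left(1 + 2\cos x\right)$.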
## Other sums of trigonometric functions

Sum of sines and cosines with arguments in arithmetic progression:[30] if $\alpha \ne 0$, then

\begin{align} \sin\varphi + \sin(\varphi+\alpha) + \sin(\varphi+2\alpha) + \cdots + \sin(\varphi+n\alpha) &= \frac{\sin\left(\frac{(n+1)\alpha}{2}\right)\cdot\sin\left(\varphi + \frac{n\alpha}{2}\right)}{\sin\frac{\alpha}{2}} \quad\text{and} \\[10pt] \cos\varphi + \cos(\varphi+\alpha) + \cos(\varphi+2\alpha) + \cdots + \cos(\varphi+n\alpha) &= \frac{\sin\left(\frac{(n+1)\alpha}{2}\right)\cdot\cos\left(\varphi + \frac{n\alpha}{2}\right)}{\sin\frac{\alpha}{2}}. \end{align}

For any a and b:

$$a\cos x + b\sin x = \sqrt{a^2+b^2}\,\cos(x - \operatorname{atan2}(b,a))$$

where atan2(y, x) is the generalization of arctan(y/x) that covers the entire circular range.

$$\tan x + \sec x = \tan\left(\frac{x}{2} + \frac{\pi}{4}\right).$$

The above identity is sometimes convenient to know when thinking about the Gudermannian function, which relates the circular and hyperbolic trigonometric functions without resorting to complex numbers.

If x, y, and z are the three angles of any triangle, i.e. if x + y + z = π, then

$$\cot x\cot y + \cot y\cot z + \cot z\cot x = 1.$$

## Certain linear fractional transformations

If ƒ(x) is given by the linear fractional transformation

$$f(x) = \frac{(\cos\alpha)x - \sin\alpha}{(\sin\alpha)x + \cos\alpha},$$

and similarly

$$g(x) = \frac{(\cos\beta)x - \sin\beta}{(\sin\beta)x + \cos\beta},$$

then

$$f(g(x)) = g(f(x)) = \frac{(\cos(\alpha+\beta))x - \sin(\alpha+\beta)}{(\sin(\alpha+\beta))x + \cos(\alpha+\beta)}.$$

More tersely stated, if for all α we let ƒα be what we called ƒ above, then

$$f_\alpha \circ f_\beta = f_{\alpha+\beta}.$$

If x is the slope of a line, then ƒ(x) is the slope of its rotation through an angle of −α.
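For example, with α = π/4, a line of slope x = 1 (inclined at 45°) is rotated to the horizontal:

$$f(1) = \frac{\cos\frac{\pi}{4} - \sin\frac{\pi}{4}}{\sin\frac{\pi}{4} + \cos\frac{\pi}{4}} = 0.$$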
## Inverse trigonometric functions

$$\arcsin x + \arccos x = \pi/2$$

$$\arctan x + \operatorname{arccot} x = \pi/2.$$

$$\arctan x + \arctan(1/x) = \begin{cases} \pi/2 & \text{if } x > 0 \\ -\pi/2 & \text{if } x < 0 \end{cases}$$

### Compositions of trig and inverse trig functions

\begin{align} \sin[\arccos x] &= \sqrt{1-x^2} & \tan[\arcsin x] &= \frac{x}{\sqrt{1-x^2}} \\[6pt] \sin[\arctan x] &= \frac{x}{\sqrt{1+x^2}} & \tan[\arccos x] &= \frac{\sqrt{1-x^2}}{x} \\[6pt] \cos[\arctan x] &= \frac{1}{\sqrt{1+x^2}} & \cot[\arcsin x] &= \frac{\sqrt{1-x^2}}{x} \\[6pt] \cos[\arcsin x] &= \sqrt{1-x^2} & \cot[\arccos x] &= \frac{x}{\sqrt{1-x^2}} \end{align}

## Relation to the complex exponential function

$$e^{ix} = \cos x + i\sin x \qquad\text{(Euler's formula)}[31]$$

$$e^{-ix} = \cos(-x) + i\sin(-x) = \cos x - i\sin x$$

$$e^{i\pi} = -1 \qquad\text{(Euler's identity)}$$

$$\cos x = \frac{e^{ix} + e^{-ix}}{2}\,[32] \qquad\qquad \sin x = \frac{e^{ix} - e^{-ix}}{2i}\,[33]$$

and hence the corollary:

$$\tan x = \frac{\sin x}{\cos x} = \frac{e^{ix} - e^{-ix}}{i\left(e^{ix} + e^{-ix}\right)}$$

where $i^2 = -1$.
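For example, the Pythagorean identity falls out of the exponential forms, since $e^{ix}e^{-ix} = 1$:

$$\cos^2 x + \sin^2 x = \left(\frac{e^{ix}+e^{-ix}}{2}\right)^2 + \left(\frac{e^{ix}-e^{-ix}}{2i}\right)^2 = \frac{e^{2ix}+2+e^{-2ix}}{4} - \frac{e^{2ix}-2+e^{-2ix}}{4} = 1.$$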
## Infinite product formulae

For applications to special functions, the following infinite product formulae for trigonometric functions are useful:[34][35]

$$\sin x = x\prod_{n=1}^\infty \left(1 - \frac{x^2}{\pi^2 n^2}\right), \qquad \cos x = \prod_{n=1}^\infty \left(1 - \frac{x^2}{\pi^2\left(n - \frac{1}{2}\right)^2}\right).$$

## Identities without variables

$$\cos 20^\circ\cdot\cos 40^\circ\cdot\cos 80^\circ = \frac{1}{8}$$

is a special case of an identity that contains one variable:

$$\prod_{j=0}^{k-1}\cos(2^j x) = \frac{\sin(2^k x)}{2^k\sin x}.$$

Similarly:

$$\sin 20^\circ\cdot\sin 40^\circ\cdot\sin 80^\circ = \frac{\sqrt{3}}{8}.$$

The same cosine identity in radians is

$$\cos\frac{\pi}{9}\cos\frac{2\pi}{9}\cos\frac{4\pi}{9} = \frac{1}{8}.$$

Similarly:

$$\tan 50^\circ\cdot\tan 60^\circ\cdot\tan 70^\circ = \tan 80^\circ.$$

$$\tan 40^\circ\cdot\tan 30^\circ\cdot\tan 20^\circ = \tan 10^\circ.$$

The following is perhaps not as readily generalized to an identity containing variables (but see explanation below):

$$\cos 24^\circ + \cos 48^\circ + \cos 96^\circ + \cos 168^\circ = \frac{1}{2}.$$

Degree measure ceases to be more felicitous than radian measure when we consider this identity with 21 in the denominators:

\begin{align} & \cos\left(\frac{2\pi}{21}\right) + \cos\left(2\cdot\frac{2\pi}{21}\right) + \cos\left(4\cdot\frac{2\pi}{21}\right) \\[10pt] &{}\qquad{} + \cos\left(5\cdot\frac{2\pi}{21}\right) + \cos\left(8\cdot\frac{2\pi}{21}\right) + \cos\left(10\cdot\frac{2\pi}{21}\right) = \frac{1}{2}. \end{align}

The factors 1, 2, 4, 5, 8, 10 may start to make the pattern clear: they are those integers less than 21/2 that are relatively prime to (or have no prime factors in common with) 21. The last several examples are corollaries of a basic fact about the irreducible cyclotomic polynomials: the cosines are the real parts of the zeroes of those polynomials; the sum of the zeroes is the Möbius function evaluated at (in the very last case above) 21; only half of the zeroes are present above. The two identities preceding this last one arise in the same fashion with 21 replaced by 10 and 15, respectively.

Many of those curious identities stem from more general facts like the following:[36]

$$\prod_{k=1}^{n-1}\sin\left(\frac{k\pi}{n}\right) = \frac{n}{2^{n-1}}$$

and

$$\prod_{k=1}^{n-1}\cos\left(\frac{k\pi}{n}\right) = \frac{\sin(\pi n/2)}{2^{n-1}}.$$

Combining these gives us

$$\prod_{k=1}^{n-1}\tan\left(\frac{k\pi}{n}\right) = \frac{n}{\sin(\pi n/2)}.$$

If n is an odd number (n = 2m + 1) we can make use of the symmetries to get

$$\prod_{k=1}^{m}\tan\left(\frac{k\pi}{2m+1}\right) = \sqrt{2m+1}.$$
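For example, m = 1 gives tan(π/3) = √3 = √(2·1+1), and m = 2 gives

$$\tan 36^\circ\cdot\tan 72^\circ = \sqrt{5}.$$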
The transfer function of the Butterworth low pass filter can be expressed in terms of polynomial and poles. By setting the frequency as the cutoff frequency, the following identity can be proved:

$$\prod_{k=1}^{n}\sin\left(\frac{(2k-1)\pi}{4n}\right) = \prod_{k=1}^{n}\cos\left(\frac{(2k-1)\pi}{4n}\right) = \frac{\sqrt{2}}{2^n}.$$

### Computing π

An efficient way to compute π is based on the following identity without variables, due to Machin:

$$\frac{\pi}{4} = 4\arctan\frac{1}{5} - \arctan\frac{1}{239}$$

or, alternatively, by using an identity of Leonhard Euler:

$$\frac{\pi}{4} = 5\arctan\frac{1}{7} + 2\arctan\frac{3}{79}.$$

### A useful mnemonic for certain values of sines and cosines

For certain simple angles, the sines and cosines take the form $\sqrt{n}/2$ for 0 ≤ n ≤ 4, which makes them easy to remember:

\begin{align} \sin 0 &= \sin 0^\circ = \frac{\sqrt{0}}{2} = \cos 90^\circ = \cos\left(\frac{\pi}{2}\right) \\[6pt] \sin\left(\frac{\pi}{6}\right) &= \sin 30^\circ = \frac{\sqrt{1}}{2} = \cos 60^\circ = \cos\left(\frac{\pi}{3}\right) \\[6pt] \sin\left(\frac{\pi}{4}\right) &= \sin 45^\circ = \frac{\sqrt{2}}{2} = \cos 45^\circ = \cos\left(\frac{\pi}{4}\right) \\[6pt] \sin\left(\frac{\pi}{3}\right) &= \sin 60^\circ = \frac{\sqrt{3}}{2} = \cos 30^\circ = \cos\left(\frac{\pi}{6}\right) \\[6pt] \sin\left(\frac{\pi}{2}\right) &= \sin 90^\circ = \frac{\sqrt{4}}{2} = \cos 0^\circ = \cos 0 \end{align}

### Miscellany

With the golden ratio φ:

$$\cos\left(\frac{\pi}{5}\right) = \cos 36^\circ = \frac{\sqrt{5}+1}{4} = \frac{\varphi}{2}$$

$$\sin\left(\frac{\pi}{10}\right) = \sin 18^\circ = \frac{\sqrt{5}-1}{4} = \frac{\varphi-1}{2} = \frac{1}{2\varphi}$$

Also see exact trigonometric constants.

### An identity of Euclid

Euclid showed in Book XIII, Proposition 10 of his Elements that the area of the square on the side of a regular pentagon inscribed in a circle is equal to the sum of the areas of the squares on the sides of the regular hexagon and the regular decagon inscribed in the same circle. In the language of modern trigonometry, this says:

$$\sin^2(18^\circ) + \sin^2(30^\circ) = \sin^2(36^\circ).$$

Ptolemy used this proposition to compute some angles in his table of chords.

## Composition of trigonometric functions

This identity involves a trigonometric function of a trigonometric function:

$$\cos(t\sin x) = J_0(t) + 2\sum_{k=1}^\infty J_{2k}(t)\cos(2kx)$$

where J0 and J2k are Bessel functions.

## Calculus

In calculus the relations stated below require angles to be measured in radians; the relations would become more complicated if angles were measured in another unit such as degrees. If the trigonometric functions are defined in terms of geometry, along with the definitions of arc length and area, their derivatives can be found by verifying two limits.
### A useful mnemonic for certain values of sines and cosines

For certain simple angles, the sines and cosines take the form $\sqrt{n}/2$ for 0 ≤ n ≤ 4, which makes them easy to remember.

\begin{matrix} \sin 0 & = & \sin 0^\circ & = & \sqrt{0}/2 & = & \cos 90^\circ & = & \cos \left( \frac {\pi} {2} \right) \\ \\ \sin \left( \frac {\pi} {6} \right) & = & \sin 30^\circ & = & \sqrt{1}/2 & = & \cos 60^\circ & = & \cos \left( \frac {\pi} {3} \right) \\ \\ \sin \left( \frac {\pi} {4} \right) & = & \sin 45^\circ & = & \sqrt{2}/2 & = & \cos 45^\circ & = & \cos \left( \frac {\pi} {4} \right) \\ \\ \sin \left( \frac {\pi} {3} \right) & = & \sin 60^\circ & = & \sqrt{3}/2 & = & \cos 30^\circ & = & \cos \left( \frac {\pi} {6} \right)\\ \\ \sin \left( \frac {\pi} {2} \right) & = & \sin 90^\circ & = & \sqrt{4}/2 & = & \cos 0^\circ & = & \cos 0 \end{matrix}

### Miscellany

With the golden ratio φ:

$\cos\left(\frac{\pi}{5}\right) = \cos 36^\circ = \frac{\sqrt{5}+1}{4} = \frac{\varphi}{2}$

$\sin\left(\frac{\pi}{10}\right) = \sin 18^\circ = \frac{\sqrt{5}-1}{4} = \frac{\varphi-1}{2} = \frac{1}{2\varphi}$

Also see exact trigonometric constants.

### An identity of Euclid

Euclid showed in Book XIII, Proposition 10 of his Elements that the area of the square on the side of a regular pentagon inscribed in a circle is equal to the sum of the areas of the squares on the sides of the regular hexagon and the regular decagon inscribed in the same circle. In the language of modern trigonometry, this says:

$\sin^2(18^\circ)+\sin^2(30^\circ)=\sin^2(36^\circ).$

Ptolemy used this proposition to compute some angles in his table of chords.

## Composition of trigonometric functions

This identity involves a trigonometric function of a trigonometric function:

$\cos(t \sin(x)) = J_0(t) + 2 \sum_{k=1}^\infty J_{2k}(t) \cos(2kx)$

where J0 and J2k are Bessel functions.

## Calculus

In calculus the relations stated below require angles to be measured in radians; the relations would become more complicated if angles were measured in another unit such as degrees. If the trigonometric functions are defined in terms of geometry, along with the definitions of arc length and area, their derivatives can be found by verifying two limits. The first is:

$\lim_{x\rightarrow 0}\frac{\sin x}{x}=1,$

verified using the unit circle and the squeeze theorem. The second limit is:

$\lim_{x\rightarrow 0}\frac{1-\cos x}{x}=0,$

verified using the identity tan(x/2) = (1 − cos x)/sin x. Having established these two limits, one can use the limit definition of the derivative and the addition theorems to show that (sin x)′ = cos x and (cos x)′ = −sin x.

$\frac{d}{dx}\sin x = \cos x$

If the sine and cosine functions are defined by their Taylor series, then the derivatives can be found by differentiating the power series term-by-term. The rest of the trigonometric functions can be differentiated using the above identities and the rules of differentiation:[37][38][39]

\begin{align} {d \over dx} \sin x & = \cos x ,& {d \over dx} \arcsin x & = {1 \over \sqrt{1 - x^2}} \\ \\ {d \over dx} \cos x & = -\sin x ,& {d \over dx} \arccos x & = {-1 \over \sqrt{1 - x^2}} \\ \\ {d \over dx} \tan x & = \sec^2 x ,& {d \over dx} \arctan x & = { 1 \over 1 + x^2} \\ \\ {d \over dx} \cot x & = -\csc^2 x ,& {d \over dx} \arccot x & = {-1 \over 1 + x^2} \\ \\ {d \over dx} \sec x & = \tan x \sec x ,& {d \over dx} \arcsec x & = { 1 \over |x|\sqrt{x^2 - 1}} \\ \\ {d \over dx} \csc x & = -\csc x \cot x ,& {d \over dx} \arccsc x & = {-1 \over |x|\sqrt{x^2 - 1}} \end{align}

The integral identities can be found in "list of integrals of trigonometric functions". Some generic forms are listed below.

$\int\frac{du}{\sqrt{a^{2}-u^{2}}}=\sin^{-1}\left(\frac{u}{a}\right)+C$

$\int\frac{du}{a^{2}+u^{2}}=\frac{1}{a}\tan^{-1}\left(\frac{u}{a}\right)+C$

$\int\frac{du}{u\sqrt{u^{2}-a^{2}}}=\frac{1}{a}\sec^{-1}\left|\frac{u}{a}\right|+C$
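The derivative table can be checked numerically. A small Python sketch (the helper is our own illustration, not from the article) compares a central finite difference against the stated closed forms:

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Central finite-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

checks = [
    (math.sin,  math.cos,                           0.7),
    (math.cos,  lambda x: -math.sin(x),             0.7),
    (math.tan,  lambda x: 1 / math.cos(x) ** 2,     0.7),  # sec^2 x
    (math.asin, lambda x: 1 / math.sqrt(1 - x * x), 0.3),
    (math.atan, lambda x: 1 / (1 + x * x),          0.3),
]

for f, fprime, x in checks:
    assert abs(numeric_derivative(f, x) - fprime(x)) < 1e-5
```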
### Implications

The fact that the differentiation of trigonometric functions (sine and cosine) results in linear combinations of the same two functions is of fundamental importance to many fields of mathematics, including differential equations and Fourier transforms.

## Exponential definitions

Function and inverse function:[40]

$\sin \theta = \frac{e^{i\theta} - e^{-i\theta}}{2i} \qquad \arcsin x = -i \ln\left(ix + \sqrt{1 - x^2}\right)$

$\cos \theta = \frac{e^{i\theta} + e^{-i\theta}}{2} \qquad \arccos x = i \ln\left(x - i\sqrt{1 - x^2}\right)$

$\tan \theta = \frac{e^{i\theta} - e^{-i\theta}}{i(e^{i\theta} + e^{-i\theta})} \qquad \arctan x = \frac{i}{2} \ln\left(\frac{i + x}{i - x}\right)$

$\csc \theta = \frac{2i}{e^{i\theta} - e^{-i\theta}} \qquad \arccsc x = -i \ln\left(\tfrac{i}{x} + \sqrt{1 - \tfrac{1}{x^2}}\right)$

$\sec \theta = \frac{2}{e^{i\theta} + e^{-i\theta}} \qquad \arcsec x = -i \ln\left(\tfrac{1}{x} + \sqrt{\tfrac{1}{x^2} - 1}\right)$

$\cot \theta = \frac{i(e^{i\theta} + e^{-i\theta})}{e^{i\theta} - e^{-i\theta}} \qquad \arccot x = \frac{i}{2} \ln\left(\frac{x - i}{x + i}\right)$

$\operatorname{cis}\,\theta = e^{i\theta} \qquad \operatorname{arccis}\,x = \frac{\ln x}{i} = -i \ln x = \operatorname{arg}\,x$

## Miscellaneous

### Dirichlet kernel

The Dirichlet kernel Dn(x) is the function occurring on both sides of the next identity:

$1+2\cos(x)+2\cos(2x)+2\cos(3x)+\cdots+2\cos(nx) = \frac{\sin\left[\left(n+\frac{1}{2}\right)x\right]}{\sin\left(\frac{x}{2}\right)}.$

The convolution of any integrable function of period 2π with the Dirichlet kernel coincides with the function's nth-degree Fourier approximation. The same holds for any measure or generalized function.

### Tangent half-angle substitution

Main article: Tangent half-angle substitution

If we set

$t = \tan\left(\frac{x}{2}\right),$

then[41]

$\sin(x) = \frac{2t}{1 + t^2}, \quad \cos(x) = \frac{1 - t^2}{1 + t^2}, \quad e^{ix} = \frac{1 + it}{1 - it}$

where eix = cos(x) + i sin(x), sometimes abbreviated to cis(x).

When this substitution of t for tan(x/2) is used in calculus, it follows that sin(x) is replaced by 2t/(1 + t2), cos(x) is replaced by (1 − t2)/(1 + t2) and the differential dx is replaced by (2 dt)/(1 + t2). Thereby one converts rational functions of sin(x) and cos(x) to rational functions of t in order to find their antiderivatives.
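A quick numerical confirmation of the three substitution formulas (a Python sketch using only the standard math and cmath modules):

```python
import cmath
import math

x = 1.234
t = math.tan(x / 2)

# sin and cos as rational functions of t = tan(x/2)
assert abs(math.sin(x) - 2 * t / (1 + t * t)) < 1e-12
assert abs(math.cos(x) - (1 - t * t) / (1 + t * t)) < 1e-12

# e^{ix} = (1 + i t) / (1 - i t)
assert abs(cmath.exp(1j * x) - (1 + 1j * t) / (1 - 1j * t)) < 1e-12
```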
2020-07-12 09:44:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 264, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 13, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000056028366089, "perplexity": 1019.7125559549676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657134758.80/warc/CC-MAIN-20200712082512-20200712112512-00216.warc.gz"}
http://mathoverflow.net/questions/14601/link-serres-intersection-formula-bloch-quillen-thm-when-only-intersectin
# Link: Serre's intersection formula <-> Bloch-Quillen Thm / When only intersecting divisors, is there a 'shorter' approach of proof known?

In very short: When proving the equivalence of intersection theory constructed through (Milnor) K-sheaves and their product vs. defining the product via Serre's local multiplicity formula + moving, I don't see any way of doing this proof without an ENORMOUS amount of general intersection-theory machinery. If I only want to intersect codim 1 cycles, and intersection theory is really much easier in this case, does anybody have an idea of how to bypass as many pages of Fulton, Gersten, Grayson as possible? (As a specific example, does anybody know how to avoid using, somewhere in the middle, the deformation-to-the-normal-cone construction of the Chow product? Or at least have a suggestion for how I could try to go directly to Serre's formula?)

At a little more length (if you are interested): One can express the intersection multiplicity of two (properly intersecting) algebraic cycles through Serre's formula with alternating Tor's (see for example http://en.wikipedia.org/wiki/Serre%27s_multiplicity_conjectures). That's great and nice for computation. On the other hand, Bloch-Quillen tells me (in any nice situation) that CH^l = H^l (X, K_l), where K_l is the l-th K-theory sheaf (Zariski site). For l=1 this comes down to CH^1 = H^1(X, O_X*), the classical relation between Cartier and Weil divisor class groups.

Using Bloch-Quillen (in a version for Milnor K-theory; that's known) and an old result of Grayson (I believe) which says that the sheaf-cohomology product construction and the Chow ring product agree up to sign, I get the same result by considering either H^1 (O*) $\otimes$ ... $\otimes$ H^1(O*) ---> H^n($K^M_n$) (using the sheaf cohomology product + the naive 'symbol concatenation' product on the Milnor K ring) as well as CH^1 (X) $\otimes$ ... $\otimes$ CH^1 (X) ---> Z.

Fine. However, to construct the former map, no big theory is needed: the product in sheaf cohomology is easy; all else needed would be to make explicit the map H^n($K^M_n$) ---> Z. This is a bit cumbersome, but writing out some explicit sheaf-cohomology resolution (say Godement or Cech or whatever you like) and using the flasque Gersten resolution (in its Milnor K version) for $K^M_n$, one ends up having to follow a zig-zag in a bicomplex. Along this zig-zag one accumulates plenty of Milnor K residue maps $\partial_v$ (which constitute the Gersten differential and are induced into the bicomplex). Maybe it is not even so important to unwind all these things in full detail, but at the end one gets a terrible expression (allow me to keep this name for below) involving piles over piles of Milnor K residue maps and a vast sum over all scheme points (but only finitely many non-zero summands, so it's okay). Anyway, I may still just keep this more-or-less explicit formula. This way, I could theoretically construct the map H^1 (O*) $\otimes$ ... $\otimes$ H^1(O*) ---> Z without using any theory, basically: just use the explicit formula of the Godement resolution to get the product of 1-cycles and apply the [terrible formula] coming from the unwinding of the Bloch-Quillen iso. [To be totally precise, to get to Z, we first unwind Bloch-Quillen H^n(K^M_n) to CH^n(X) and then make explicit the pushforward to the base field scheme; this gets us to CH_0 of Spec k, which is canonically iso to Z. This is the map we unwind here.]
(Say we ignore here the question of showing that the resulting thing is indeed a well-defined map on cohomology classes.) Now my problem is the following: In principle, expressing the intersection product CH^1 (X) $\otimes$ ... $\otimes$ CH^1 (X) ---> Z (and the above discussion implies that this [terrible-formula]-given version is the same) is really not so hard. On the one hand, intersecting just with divisors is much easier than the general case, and one can also construct the product quite quickly using Euler characteristics, for example (see the first pages of Debarre's book on higher-dimensional algebraic geometry). Hence, my stupid self would have expected (and this triggered this whole activity somehow) that when working out the explicit [terrible formula] mentioned above from the Bloch-Quillen perspective, it would maybe actually not be so hard to identify it as such an Euler characteristic / Serre's multiplicity formula. Why not?

However, quite the contrary is the case. The formula is super-terrible and involves plenty of Milnor K residue maps (so in the surface case these would essentially be tame symbols), and it is not at all clear how it could ever be directly linked with Euler characteristics, Hilbert polynomials, Serre's formula, or whatever other approach to intersections one may have in mind (or at least that I have in mind). So I dare to ask: is truth really as depressing as having to admit that even in this simple intersect-only-divisors situation, there is no simpler way of linking the [terrible formula] with (say) Serre's formula than going through the whole story of Bloch-Quillen and a reasonable amount of Fulton's book?

Please excuse my stupidity and give me some idea/approach for how to shortcut Bloch-Quillen/Fulton. Is there maybe some approach using directly derived tensor products of O* sheaves? Something like that seems to happen in Deligne's "symbole modéré" paper (and the tame symbol appears in the 2-dimensional case of the [terrible formula]; it comes from the Milnor K residue), but it seemed (maybe I was blind) not to indicate how to connect this to geometry. Ahhhh, it's a hard life.

- I'll just note: writing in full sentences in the title is strongly encouraged, even if this makes it seem horrifyingly long. – Ben Webster Feb 8 '10 at 4:33 I should say, though, otherwise it's a very nice question. – Ben Webster Feb 8 '10 at 4:34 I like this (somewhat old) question and will try to think about it. I just wanted to mention that there is a good treatment of intersection theory "the Milnor K-theory way" in the book "The Algebraic and Geometric Theory of Quadratic Forms" by Karpenko and Merkurjev. Unfortunately, they do not compare with Serre's formula. – Simon Pepin Lehalleur Aug 9 '10 at 15:26 The book is by Elman-Karpenko-Merkurjev, sorry. – Simon Pepin Lehalleur Aug 9 '10 at 15:26
2015-11-26 18:01:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8477799296379089, "perplexity": 1392.6232317753102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447769.81/warc/CC-MAIN-20151124205407-00313-ip-10-71-132-137.ec2.internal.warc.gz"}
https://thedeenshow.com/the-creator-al-khaliq-al-khallaaq-in-the-names-of-allah/
# The Creator (Al-Khaliq, Al-Khallaaq) – In the Names of Allah

The One who determines and creates according to the proper measure and proportion of each thing. The One who plans and determines how, when and where to create. The One whose works are perfectly suited, appropriate, fitting and proper. The One who creates something from nothing. The One who creates both the inner and the outer in just proportions. The One who brings things into existence from a state of non-existence. The One who has the power to change things back and forth between the states of existing and non-existing.

From the root kh-l-q, which has the following classical Arabic connotations:

- to measure accurately
- to determine the proper measure or proportion for something
- to proportion one thing according to another
- to create something based on a pattern or model which one has devised
- to bring a thing into existence from non-existence

This name is used in the Qur'ān. For example, see 59:24.

Related names:

- Bāri' denotes the way the One works with substances, often creating from existing matter, making and evolving that which is free and clear of any other thing, free and clear of imperfections.
- Badī' denotes the One who creates in wonderful, amazingly original ways that have no precedent whatsoever, ways that are awesome innovation.
- Khāliq denotes the One who continues to plan, measure out and create, and who has the power to change things from non-existing to existing.
- Musawwir denotes the One who arranges forms and colors, and who is the shaper of beauty.
- Mubdi' denotes the One who starts or begins all things, or that which has precedence given to it.

Also expressed as al-Khallāq (great creator).
2023-03-25 04:41:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8131023645401001, "perplexity": 2630.2923391627332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945315.31/warc/CC-MAIN-20230325033306-20230325063306-00702.warc.gz"}
https://www.6erp.cn/rorkt1uy/status-post-knee-replacement-icd-10-b02a30
If $\Delta G < 0$, the process is spontaneous. The first law of thermodynamics was developed empirically over about half a century; a main aspect of the struggle was to deal with the previously proposed caloric theory of heat. The first law asserts that energy must be conserved in any process involving the exchange of heat and work between a system and its surroundings: in any process, the total energy of the universe remains the same. It governs changes in the state function we have called internal energy ($U$), and it defines the relationship between the various forms of energy present in a system (kinetic and potential), the work which the system performs and the transfer of heat. Changes in the internal energy ($\Delta U$) are closely related to changes in the enthalpy ($\Delta H$), which is a measure of the heat flow between a system and its surroundings at constant pressure. A machine that violated the first law would be called a perpetual motion machine of the first kind, because it would manufacture its own energy out of nothing and thereby run forever.

The Zeroth Law of Thermodynamics states that when a body A is in thermal equilibrium with body B and also separately with body C, then B and C will be in thermal equilibrium with each other. The Second Law of Thermodynamics states that the state of entropy of the entire universe, as an isolated system, will always increase over time; the change in the entropy of the universe can never be negative. When a hot and a cold body are brought into contact with each other, heat energy will flow from the hot body to the cold body until they reach thermal equilibrium, i.e., the same temperature. The third law of thermodynamics states that the entropy of a system becomes constant as its temperature approaches absolute zero. These laws are not derived from anything more basic; they have no proof and must be accepted as they are. The first law tells us that energy is conserved; likewise, the second law tells us which processes in nature may or may not occur.

The transformation of heat into work constitutes a central argument in the conceptual and historical development of entropy ($S$) and of the second law of thermodynamics (SLT). Given that the entropy change of the universe is equivalent to the sum of the changes in entropy of the system and surroundings:

$\Delta S_{univ}=\Delta S_{sys}+\Delta S_{surr}=\dfrac{q_{sys}}{T}+\dfrac{q_{surr}}{T} \label{1}$

In an isothermal reversible expansion, the heat q absorbed by the system from the surroundings is

$q_{rev}=nRT\ln\frac{V_{2}}{V_{1}} \label{2}$

Since the heat absorbed by the system is the amount lost by the surroundings, $q_{sys}=-q_{surr}$. Therefore, for a truly reversible process, the entropy change is

$\Delta S_{univ}=\dfrac{nRT\ln\frac{V_{2}}{V_{1}}}{T}+\dfrac{-nRT\ln\frac{V_{2}}{V_{1}}}{T}=0 \label{3}$

If the process is irreversible, however, the entropy change is

$\Delta S_{univ}=\frac{nRT\ln \frac{V_{2}}{V_{1}}}{T}>0 \label{4}$

Thus the concept of complete thermodynamic reversibility was proven unattainable in practice: irreversibility is the result of every real system involving work.
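As a small illustration of equations (2) and (4), here is a Python sketch (the function name is ours; R is the molar gas constant):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def delta_s_isothermal(n_mol, v1, v2):
    """Entropy change nR ln(V2/V1) for an isothermal ideal-gas expansion."""
    return n_mol * R * math.log(v2 / v1)

# Doubling the volume of one mole of ideal gas:
print(delta_s_isothermal(1.0, 1.0, 2.0))  # about +5.76 J/K, so > 0 as in eq. (4)
```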
Thermodynamics is the study of heat and energy. Saibal Mitra, a professor of physics at Missouri State University, finds the Second Law to be the most interesting of the four laws of thermodynamics. The Second Law indicates that thermodynamic processes, i.e., processes that involve the transfer or conversion of heat energy, are irreversible because they all result in an increase in entropy; Mitra explained that all processes result in an increase in entropy. For instance, with two objects in thermal contact, heat will spontaneously flow from a warmer to a cooler object, but this effect does not spontaneously occur in the opposite direction. Temperature and density tend to even out horizontally after a while; due to the force of gravity, however, density and pressure do not even out vertically. In a quite general form of the second law, we consider a system which is inhomogeneous, we allow mass transfer across the boundaries (an open system), and we allow the boundaries to move.

One thing the Second Law explains is that it is impossible to convert heat energy to mechanical energy with 100 percent efficiency; this precludes a perfect heat engine. The waste heat must be discarded by transferring it to a heat sink. In a car engine or a bike engine there is a higher-temperature reservoir where heat is produced and a lower-temperature reservoir where the heat is released; such engines are examples of the second law at work. In practice, all exchanges of energy are subject to inefficiencies, such as friction and radiative heat loss, which increase the entropy of the system being observed.

Lord Kelvin first hypothesized that the earth's surface was extremely hot, similar to the surface of the sun, and he believed that the earth was cooling at a slow pace; on this basis he used the second law to estimate the age of the earth. Twenty million years was not even close to the actual age of the Earth, but this is because scientists during Kelvin's time were not aware of radioactivity; they were also incapable of understanding how the earth transformed. Even though Kelvin was incorrect about the age of the planet, his use of the second law allowed him to predict a more accurate value than the other scientists at the time.

Before we discuss the second law further, do you know what entropy ($S$) is? The thermodynamic arrow of time (entropy) is the measurement of disorder within a system. If a given state can be accomplished in more ways, then it is more probable than a state that can only be accomplished in fewer ways. Assume the pieces of a jigsaw puzzle were jumbled in their box; the probability that a jigsaw piece will land randomly, away from where it fits perfectly, is very high. The misplaced jigsaw pieces have a much higher multiplicity than the correctly placed jigsaw piece, and we can correctly assume the misplaced jigsaw pieces represent a higher entropy. The second law of thermodynamics means hot things always cool unless you do something to stop them; the Second Law is about the quality of energy. As usable energy is consumed to do work and converted into unusable energy, this unusable energy gradually increases over time.

$\Delta H$ refers to the heat change for a reaction. A positive $\Delta H$ means that heat is taken from the environment (endothermic); if the enthalpy is negative, then the reaction is exothermic. To understand why entropy increases and decreases, it is important to recognize that two changes in entropy have to be considered at all times: the entropy change of the surroundings and the entropy change of the system itself. Substituting $\Delta S_{surr} = -\Delta H/T$ into equation (1) and multiplying through by $-T$ gives the free energy change $\Delta G = \Delta H - T\Delta S$. $\Delta G$ is a measure of the change of a system's free energy in a reaction taking place at constant pressure and temperature; with it, it is much simpler to conclude whether a system is spontaneous, non-spontaneous, or at equilibrium. Typical exercise reactions include

$H_{2(g)} + I_{2(g)} \rightleftharpoons 2 HI_{(g)}$

$CO_{(g)} + H_2O_{(g)} \rightleftharpoons CO_{2(g)} + H_{2(g)}$

$2 ZnO_{(s)}+2 C_{(g)} \rightarrow 2 Zn_{(s)}+2 CO_{(g)}$

Only after calculating the enthalpy and entropy of a reaction is it possible to answer whether it is spontaneous (Case 3, Case 6, Case 7, Case 8 in the table above). In one of these exercises the enthalpy is positive, because covalent bonds are broken; in another, multiplying the entropy by 1000 to convert the answer to Joules gives 75.38 J/K; in another one must work backwards somewhat, using the same equation from Example 1, for the free energy is given. True/False: a nonspontaneous process cannot occur with external intervention. True/False: if $\Delta G > 0$, the process is spontaneous.

As a worked example, for the formation of water vapor the enthalpy is $\Delta H = -241.82$ kJ and the entropy is $\Delta S = -233.7$ J/K. At 298.15 K,

$\Delta G = \Delta H - T\Delta S = -241.8\ \mathrm{kJ} - (298.15\ \mathrm{K})(-233.7\ \mathrm{J/K}) = -241.8\ \mathrm{kJ} + 69.68\ \mathrm{kJ} = -172.1\ \mathrm{kJ}$

(don't forget to convert Joules to kilojoules). Looking at the formula for spontaneous change, one can easily come to the same conclusion, for there is no possible way for the free energy change to be positive here. Because both enthalpy and entropy are negative, however, the spontaneous nature varies with the temperature of the reaction: such reactions are spontaneous when enthalpy and entropy are negative at low temperatures, and not spontaneous when enthalpy and entropy are negative at high temperatures. (For an endothermic reaction with a positive entropy change, if the temperature is low it is probable that $\Delta H$ is more than $T\Delta S$, which means the reaction is not spontaneous; a reaction with negative enthalpy and positive entropy is instead spontaneous at all temperatures.)
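The spontaneity check above is just arithmetic; here is a minimal Python sketch of it (the function name is ours; note the J-to-kJ conversion):

```python
def delta_g(delta_h_kj, delta_s_j_per_k, temp_k):
    """Gibbs free energy change dG = dH - T dS, returned in kJ."""
    return delta_h_kj - temp_k * (delta_s_j_per_k / 1000.0)

dG = delta_g(-241.8, -233.7, 298.15)
print(round(dG, 1), "kJ ->", "spontaneous" if dG < 0 else "non-spontaneous")
# -172.1 kJ -> spontaneous
```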
Subsequent works by Daniel Bernoulli, James Clerk Maxwell, and Ludwig Boltzmann led to the development of the kinetic theory of gases, in which a gas is recognized as a cloud of molecules in motion that can be treated statistically. This statistical approach allows for precise calculation of temperature, pressure and volume according to the ideal gas law. The second law states that entropy never decreases; entropy can only increase. Gas-burning engines, for instance, "burn natural gas or other gaseous fuels at very high temperature, over 2,000 degrees C [3,600 F], and the exhaust coming out is just a stiff, warm breeze." In the atmosphere, the effect of the disparity between where radiation is absorbed and where it is emitted is that thermal radiation escaping to space comes mostly from the cold upper atmosphere, while the surface is maintained at a substantially warmer temperature.

In the far distant future, stars will have used up all of their nuclear fuel, ending up as stellar remnants such as white dwarfs, neutron stars or black holes, according to Margaret Murray Hanson, a physics professor at the University of Cincinnati. These will eventually evaporate into protons, electrons, photons and neutrinos, ultimately reaching thermal equilibrium with the rest of the universe.

As for the first and second laws of thermodynamics as they apply to biological systems: living things seem to be an exception, since organization and complexity increase in them, which appears to go completely against what the second law says. Consider, for example, the growth of a leaf. In photosynthesis, not all of the light energy is absorbed by the plant; some energy is reflected and some is lost as heat. Similarly, crystals can form from a salt solution as the water is evaporated, and crystals are more orderly than salt molecules in solution (however, vaporized water is much more disorderly than liquid water). Entropy decreases only locally in such cases: order may be becoming more organized in one place, but the universe as a whole becomes more disorganized, for the sun releases energy and becomes disordered. Some nevertheless ask: if this is true, then how did the universe get into an actual state of order without the intervention of a higher being?

Well, I hope you have got a clear idea of the first and second laws of thermodynamics.
2021-05-18 01:55:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6752599477767944, "perplexity": 574.5227291062586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991650.73/warc/CC-MAIN-20210518002309-20210518032309-00128.warc.gz"}
https://www.jobilize.com/course/section/the-cramer-rao-lower-bound-by-openstax
Basic elements of statistical decision theory and statistical learning theory

In decision making problems, we know the value of the observation, but do not know the value $y$. Therefore, it is appealing to consider the conditional density or pmf as a function of the unknown values $y$, with $X$ fixed at its observed value. The resulting function is called the likelihood function. As the name suggests, values of $y$ where the likelihood function is largest are intuitively reasonable indicators of the true value of the unknown quantity, which we will denote by $y^*$. The rationale for this is that these values would produce conditional densities or pmfs that place high probability on the observation $X=x$.

The Maximum Likelihood Estimator (MLE) is defined to be the value of $y$ that maximizes the likelihood function; i.e., in the continuous case

$\hat{y}(X) = \arg\max_{y} p_{X|Y}(X|y)$

with an analogous definition for the discrete case by replacing the conditional density with the conditional pmf. The decision rule $\hat{y}(X)$ is called an "estimator," which is common in decision problems involving a continuous parameter. Note that maximizing the likelihood function is equivalent to minimizing the negative log-likelihood function (since the logarithm is a monotonic transformation). Now let $y^*$ denote the true value of $Y$. Then we can view the negative log-likelihood as a loss function

$\ell_L(y, y^*) = -\log p_{X|Y}(X|y)$

where the dependence on $y^*$ on the right hand side is embodied in the observation $X$ on the left. An interesting special case of the MLE results when the conditional density $p_{X|Y}(X|y)$ is a Gaussian, in which case the negative log-likelihood corresponds to a squared error loss function.

Now let us consider the expectation of this loss, with respect to the conditional distribution $P_{X|Y}(X|y^*)$:

$-E[\log p_{X|Y}(X|y)] = \int \log\left(\frac{1}{p_{X|Y}(x|y)}\right) p_{X|Y}(x|y^*)\,dx$

The true value $y^*$ minimizes the expected negative log-likelihood (or, equivalently, maximizes the expected log-likelihood). To see this, compare the expected log-likelihood of $y^*$ with that of any other value $y$:

$E\left[\log p_{X|Y}(X|y^*) - \log p_{X|Y}(X|y)\right] = E\left[\log\frac{p_{X|Y}(X|y^*)}{p_{X|Y}(X|y)}\right] = \int \log\left(\frac{p_{X|Y}(x|y^*)}{p_{X|Y}(x|y)}\right) p_{X|Y}(x|y^*)\,dx = \text{KL}\left(p_{X|Y}(x|y^*),\, p_{X|Y}(x|y)\right).$

The quantity $\text{KL}(p_{X|Y}(x|y^*), p_{X|Y}(x|y))$ is called the Kullback-Leibler (KL) divergence between the conditional density function $p_{X|Y}(x|y^*)$ and $p_{X|Y}(x|y)$. The KL divergence is non-negative, and zero if and only if the two densities are equal [link]. So, we see that the KL divergence acts as a sort of risk function in the context of Maximum Likelihood Estimation.
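As an illustration (a minimal sketch, not from the course notes; the Gaussian model and names are our own assumptions), the MLE for the mean of a Gaussian with known variance can be found by minimizing the negative log-likelihood numerically, and it coincides with the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
y_true = 2.5                                   # the unknown y*
x = rng.normal(y_true, sigma, size=1000)       # observations X

def neg_log_likelihood(y, x, sigma):
    """Negative log-likelihood of the N(y, sigma^2) model for data x."""
    return 0.5 * np.sum((x - y) ** 2) / sigma ** 2 + \
        len(x) * np.log(sigma * np.sqrt(2 * np.pi))

# Grid search for the minimizer (the MLE); for this model it is the sample mean.
grid = np.linspace(0.0, 5.0, 5001)
nll = [neg_log_likelihood(y, x, sigma) for y in grid]
y_hat = grid[int(np.argmin(nll))]
assert abs(y_hat - x.mean()) < 1e-3
```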
The Cramer-Rao Lower Bound

The MLE is based on finding the value for $Y$ that maximizes the likelihood function. Intuitively, if the maximum point is very distinct, say a well isolated peak in the likelihood function, then it will be easier to distinguish the MLE from alternative decisions. Consider the case in which $Y$ is a scalar quantity. The "peakiness" of the log-likelihood function can be gauged by examining its curvature,

$-\frac{\partial^2 \log p_{X|Y}(x|y)}{\partial y^2},$

at the point of maximum likelihood. The higher the curvature, the more peaky is the behavior of the likelihood function at the maximum point. Of course, we hope that the MLE will be a good predictor (decision) for the unknown true value $y^*$. So, rather than looking at the curvature of the log-likelihood function at the maximum likelihood point, a more appropriate measure of how easy it will be to distinguish $y^*$ from the alternatives is the expected curvature of the log-likelihood function evaluated at the value $y^*$, with the expectation taken over all possible observations with respect to the conditional density $p_{X|Y}(x|y^*)$. This quantity,

$I(y^*) = E\left[-\frac{\partial^2 \log p_{X|Y}(x|y)}{\partial y^2}\right]\bigg|_{y=y^*},$

is called the Fisher Information (FI). In fact, the FI provides us with an important performance bound known as the Cramer-Rao Lower Bound (CRLB).
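To make this concrete (again a sketch under our own assumptions, not part of the original notes): for $n$ i.i.d. Gaussian samples with known variance $\sigma^2$, the Fisher information is $I(y^*) = n/\sigma^2$, and the CRLB says any unbiased estimator has variance at least $1/I(y^*)$. The sample mean attains this bound, as a quick simulation shows:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, y_true = 50, 1.0, 2.5

# Fisher information for n i.i.d. N(y, sigma^2) samples: I(y*) = n / sigma^2,
# so the CRLB on the variance of unbiased estimators is sigma^2 / n.
crlb = sigma ** 2 / n

# Empirical variance of the MLE (the sample mean) over many repetitions:
estimates = [rng.normal(y_true, sigma, n).mean() for _ in range(20000)]
print(np.var(estimates), "vs CRLB", crlb)   # both close to 0.02
```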
Source: OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009. Download for free at http://cnx.org/content/col10532/1.3
2020-08-07 08:53:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 32, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7929617166519165, "perplexity": 942.2012034148561}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737172.50/warc/CC-MAIN-20200807083754-20200807113754-00228.warc.gz"}
http://www.lgbmi.com/2011/07/setting-the-width-of-a-column-to-make-th/
Setting the width of a column to make the text wrapped in LaTeX

\begin{center}
\begin{tabular}{ | l | l | l | p{5cm} |}
\hline
Day & Min Temp & Max Temp & Summary \\ \hline
Monday & 11C & 22C & A clear day with lots of sunshine. However, the strong breeze will bring down the temperatures. \\ \hline
Tuesday & 9C & 19C & Cloudy with rain, across many northern regions. Clear spells across most of Scotland and Northern Ireland, but rain reaching the far northwest. \\ \hline
Wednesday & 10C & 21C & Rain will still linger for the morning. Conditions will improve by early afternoon and continue throughout the evening. \\ \hline
\end{tabular}
\end{center}

from http://en.wikibooks.org/wiki/LaTeX/Tables#Text_wrapping_in_tables
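For reference, the snippet compiles inside any standard document. A minimal sketch (shortened table, our own wrapper; the p{5cm} column is what makes the Summary text wrap, since l columns never wrap on their own):

\documentclass{article}
\begin{document}
\begin{center}
% p{5cm} fixes the column width, so long summaries wrap instead of overflowing
\begin{tabular}{|l|l|l|p{5cm}|}
\hline
Day & Min Temp & Max Temp & Summary \\ \hline
Monday & 11C & 22C & A clear day with lots of sunshine.
However, the strong breeze will bring down the temperatures. \\ \hline
\end{tabular}
\end{center}
\end{document}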
2020-02-28 05:37:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9993677735328674, "perplexity": 3330.282916674152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147054.34/warc/CC-MAIN-20200228043124-20200228073124-00289.warc.gz"}
https://www.iacr.org/cryptodb/data/author.php?authorkey=177
CryptoDB

Guang Gong

Publications

2020 PKC: Password-based authenticated key exchange (PAKE) allows two parties with a shared password to agree on a session key. In the last decade, the design of PAKE protocols from lattice assumptions has attracted lots of attention. However, existing solutions in the standard model do not have appealing efficiency. In this work, we first introduce a new PAKE framework. We then provide two realizations in the standard model, under the Learning With Errors (LWE) and Ring-LWE assumptions, respectively. Our protocols are much more efficient than previous proposals, thanks to three novel technical ingredients that may be of independent interest. The first ingredient consists of two approximate smooth projective hash (ASPH) functions from LWE, as well as two ASPHs from Ring-LWE. The latter are the first ring-based constructions in the literature, one of which only has a quasi-linear runtime while its function value contains $\varTheta(n)$ field elements (where n is the degree of the polynomial defining the ring). The second ingredient is a new key conciliation scheme that is approximately rate-optimal and that leads to a very efficient key derivation for PAKE protocols. The third one is a new authentication code that allows one to verify a MAC with a noisy key.

2020 TOSC: This paper presents WAGE, a new lightweight sponge-based authenticated cipher whose underlying permutation is based on a 37-stage Galois NLFSR over $\mathbb{F}_{2^7}$. At its core, the round function of the permutation consists of the well-analyzed Welch-Gong permutation (WGP), a primitive feedback polynomial, a newly designed 7-bit SB sbox and partial word-wise XORs. The construction of the permutation is carried out such that the design of individual components is highly coupled with cryptanalysis and hardware efficiency. As such, we analyze the security of WAGE against differential, linear, algebraic and meet/miss-in-the-middle attacks. For 128-bit authenticated encryption security, WAGE achieves a throughput of 535 Mbps with hardware area of 2540 GE in the ASIC ST Micro 90 nm standard cell library. Additionally, WAGE is designed with a twist where its underlying permutation can be efficiently turned into a pseudorandom bit generator based on the WG transformation (WG-PRBG) whose output bits have theoretically proved randomness properties.

2015 EPRINT

2015 EPRINT

2015 CHES

2014 EPRINT

2007 EPRINT: In this paper, we present the time-memory-data (TMD) trade-off attack on stream ciphers filtered by Maiorana-McFarland functions. This can be considered as a generalization of the time-memory-data trade-off attack of Mihaljevic and Imai on Toyocrypt. First, we substitute the filter function in Toyocrypt (which has the same size as the LFSR) with a general Maiorana-McFarland function. This allows us to apply the attack to a wider class of stream ciphers. Second, we highlight how the choice of different Maiorana-McFarland functions can affect the effectiveness of our attack. Third, we show that the attack can be modified to apply on filter functions which are smaller than the LFSR and on filter-combiner stream ciphers. This allows us to cryptanalyze other configurations commonly found in practice. Finally, filter functions with vector output are sometimes used in stream ciphers to improve the throughput. Therefore the case when the Maiorana-McFarland functions have vector output is investigated. We found that the extra speed comes at the price of additional weaknesses which make the attacks easier.
2006 FSE

2006 EPRINT The algebraic immunity of an S-box depends on the number and type of linearly independent multivariate equations it satisfies. In this paper, techniques are developed to find the number of linearly independent, multivariate, bi-affine and quadratic equations for S-boxes based on power mappings. These techniques can be used to prove the exact number of equations for any class of power mappings. Two algorithms to calculate the number of bi-affine and quadratic equations for any $(n,n)$ S-box based on a power mapping are also presented. The time complexity of both algorithms is only $O(n^2)$. To design algebraically immune S-boxes, four new classes of S-boxes that guarantee zero bi-affine equations and one class of S-boxes that guarantees zero quadratic equations are presented. The algebraic immunity of power mappings based on Kasami, Niho, Dobbertin, Gold, Welch and Inverse exponents is discussed along with other cryptographic properties, and several cryptographically strong S-boxes are identified. It is conjectured that a known Kasami-like APN power mapping is maximally nonlinear and a known Kasami-like maximally nonlinear power mapping is differentially 4-uniform. Finally, an open problem to find an $(n,n)$ bijective nonlinear S-box with more than $5n$ quadratic equations is solved, and it is conjectured that the upper bound on this number is $\frac{n(n-1)}{2}$.

2005 EPRINT In this paper we propose a new 32-bit RC4-like keystream generator. The proposed generator produces 32 bits in each iteration and can be implemented in software with reasonable memory requirements. Our experiments show that this generator is 3.2 times faster than the original 8-bit RC4. It has a huge internal state and offers higher resistance against state recovery attacks than the original 8-bit RC4. We analyze the randomness properties of the generator using a probabilistic approach. The generator is suitable for high-speed software encryption.

2004 EPRINT A reasonably efficient password-based key exchange (KE) protocol with provable security without random oracle was recently proposed by Katz et al. [KOY01] and later by Gennaro and Lindell [GL03]. However, these protocols do not support mutual authentication (MA). The authors explained that this could be achieved by adding an additional flow, but then the protocol turns out to be 4-round. As it is known that a high-entropy secret based key exchange protocol with MA is optimally 3-round (otherwise, at least one entity is not authenticated, since a replay attack is applicable), it is quite interesting to ask whether such a protocol in the password setting (without random oracle) is achievable or not. (Here we do not consider a protocol with a time stamp or a stateful protocol, e.g., a counter-based protocol; in other words, we only consider protocols in which a session execution within an entity is independent of its history, and in which the network is asynchronous.) In this paper, we provide an affirmative answer with an efficient construction in the common reference string (CRS) model. Our protocol is even simpler than that of Katz et al. Furthermore, we show that our protocol is secure under the DDH assumption (without random oracle).

2003 EPRINT A broadcast encryption scheme for stateless receivers is a data distribution method which never updates users' secret information; on the other hand, in order to maintain security, the system efficiency decreases with the number of revoked users.
Another method, a rekeying scheme, is a data distribution approach that revokes illegal users in an explicit and immediate way, although it may cause inconvenience for users. A hybrid approach that appropriately combines these two types of mechanisms seems likely to result in a good scheme. In this paper, we suggest such a hybrid framework by proposing a rekeying algorithm for the subset cover broadcast encryption framework (for stateless receivers) due to Naor et al. Our rekeying algorithm can simultaneously revoke a number of users. As an important contribution, we formally prove that this hybrid framework has a pre-CCA-like security, where in addition to pre-CCA power, the adversary is allowed to adaptively corrupt and revoke users. Finally, we realize the hybrid framework by two secure concrete schemes that are based on the complete subtree method and the Asano method, respectively.

2001 EUROCRYPT 2000 FSE 1994 ASIACRYPT 1990 AUSCRYPT

Program committees: Asiacrypt 2013, FSE 2012, Asiacrypt 2006
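The 2006 ePrint entry above counts linearly independent bi-affine equations of an S-box. Here is a brute-force linear-algebra sketch of that count over $\mathbb{F}_2$ — not the paper's $O(n^2)$ algorithms — using a 4-bit field-inversion S-box as a toy test subject (field modulus and S-box are illustrative assumptions):

```python
def bits(v, n):
    return [(v >> i) & 1 for i in range(n)]

def biaffine_equation_count(sbox, n):
    """Count linearly independent bi-affine equations in the monomials
    1, x_i, y_j, x_i*y_j satisfied by y = S(x): it is the nullity of the
    2^n x (1 + 2n + n^2) evaluation matrix over GF(2)."""
    monomials = 1 + 2 * n + n * n
    rows = []
    for x in range(2 ** n):
        xb, yb = bits(x, n), bits(sbox[x], n)
        row = [1] + xb + yb + [xi & yj for xi in xb for yj in yb]
        rows.append(int("".join(map(str, row)), 2))  # pack GF(2) row as int
    basis = []                     # XOR-basis Gaussian elimination over GF(2)
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)
        if r:
            basis.append(r)
    return monomials - len(basis)  # nullity = number of independent equations

def gf16_mul(a, b):
    """Multiply in GF(2^4) with modulus x^4 + x + 1 (a toy choice)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13
    return r

def gf16_inv_sbox():
    inv = [0] * 16                 # 0 maps to 0 by convention
    for x in range(1, 16):
        for y in range(1, 16):
            if gf16_mul(x, y) == 1:
                inv[x] = y
    return inv

print(biaffine_equation_count(gf16_inv_sbox(), 4))
```

Power mappings like inversion satisfy many such relations (e.g. $x^2 y = x$ holds for all $x$), which is exactly why the paper studies classes of exponents that provably avoid them.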
2020-08-13 18:36:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6463501453399658, "perplexity": 1169.2421343124356}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739048.46/warc/CC-MAIN-20200813161908-20200813191908-00142.warc.gz"}
https://proofwiki.org/wiki/Sine_of_Angle_in_Cartesian_Plane
# Sine of Angle in Cartesian Plane

## Theorem

Let $P = \tuple {x, y}$ be a point in the cartesian plane whose origin is at $O$. Let $\theta$ be the angle between the $x$-axis and the line $OP$. Let $r$ be the length of $OP$. Then: $\sin \theta = \dfrac y r$ where $\sin$ denotes the sine of $\theta$.

## Proof

Let a unit circle $C$ be drawn with its center at the origin $O$. Let $Q$ be the point at which $C$ intersects $OP$. Let $R$ and $S$ be the feet of the perpendiculars dropped to the $x$-axis from $Q$ and $P$ respectively, so that $SP = y$ and $OP = r$. $\angle OSP = \angle ORQ$, as both are right angles. Both $\triangle OSP$ and $\triangle ORQ$ share the angle $\theta$. By Triangles with Two Equal Angles are Similar it follows that $\triangle OSP$ and $\triangle ORQ$ are similar. By the definition of similarity, corresponding sides are in proportion. Then:

$\dfrac y r = \dfrac {SP} {OP}$ (by construction)

$\quad = \dfrac {RQ} {OQ}$ (Definition of Similar Triangles)

$\quad = RQ$ ($OQ$ is a radius of the unit circle, so $OQ = 1$)

$\quad = \sin \theta$ (Definition of Sine)

When $\theta$ is obtuse, the same argument holds. When $\theta = \dfrac \pi 2$ we have that $x = 0$. Thus $y = r$ and $\sin \theta = 1 = \dfrac y r$. Thus the relation holds for $\theta = \dfrac \pi 2$. When $\pi < \theta < 2 \pi$ the diagram can be reflected in the $x$-axis. In this case, both $\sin \theta$ and $y$ are negative. Thus the relation continues to hold. When $\theta = 0$ and $\theta = \pi$ we have that $y = 0$ and $\sin \theta = 0 = \dfrac y r$. Hence the result. $\blacksquare$
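A quick numerical check of the identity $\sin \theta = \dfrac y r$ across all four quadrants (the sample points are arbitrary):

```python
import math

# Verify sin(theta) == y / r, where theta is the angle from the positive
# x-axis to OP and r = |OP|, for points in every quadrant and on an axis.
for (x, y) in [(3.0, 4.0), (-2.0, 5.0), (-1.0, -1.0), (4.0, -3.0), (0.0, 2.0)]:
    r = math.hypot(x, y)
    theta = math.atan2(y, x)   # angle of OP, in (-pi, pi]
    assert math.isclose(math.sin(theta), y / r), (x, y)
print("sin(theta) = y/r holds for all test points")
```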
2023-03-22 06:58:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9802994132041931, "perplexity": 120.6654069502385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00503.warc.gz"}
https://e-eduanswers.com/mathematics/question1567604
Mathematics, 18.10.2019 11:00, afifakiran5226

# Rationalize the denominators: 4/√x+2
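Reading the denominator as $\sqrt{x}+2$ (an assumption — the scraped expression "4/√x+2" is ambiguous about grouping), the standard move is to multiply by the conjugate:

$$\frac{4}{\sqrt{x}+2}=\frac{4}{\sqrt{x}+2}\cdot\frac{\sqrt{x}-2}{\sqrt{x}-2}=\frac{4\left(\sqrt{x}-2\right)}{\left(\sqrt{x}\right)^2-2^2}=\frac{4\sqrt{x}-8}{x-4},\qquad x\ge 0,\ x\ne 4.$$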
2020-10-27 17:18:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9444199800491333, "perplexity": 5206.721955911804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894426.63/warc/CC-MAIN-20201027170516-20201027200516-00465.warc.gz"}
https://www.controlbooth.com/threads/school-light-bar-colored-lens-replacement.44717/
# School Light Bar Colored Lens Replacement?

#### DannyDepac ##### Member Hi Everyone, I work at a HS and our stage lighting is not exactly "new". We have three LARGE light bars above our stage that have red, white (clear) and blue glass lenses. Over the years a decent amount of those lenses have disappeared. I don't even know the correct name of the light bars, let alone those lenses... Can you educate me on the names I am missing and perhaps where (if) I can get replacements? I have attached a picture of one of the lenses I removed. They say Major Chicago Illinois. Also they seem to have 150 watt Sylvania bulbs in them - I need to replace around 90 of them. Any particular recommendations of type and maybe cheap places to get them? Thanks in advance and Happy Thanksgiving!

#### DannyDepac ##### Member Wow, you nailed that quick. Thank you! Would LEDs work? My experience is LEDs delay when dimming, but maybe there have been improvements?

#### josh88 ##### Remarkably Tired. Fight Leukemia Make sure they are dimmable (or buy a couple and see if you like how they dim beforehand) but I do know people using LEDs in strip lights like that.

#### Les Your best/cheapest option for roundels would probably be eBay. Might take a while to collect the amount you'll need, and expect to pay $4-8 each. Watch the size - they are/were made in various diameters: 4.5", 5 5/8", 8" and in different finishes (smooth, stippled, striped/lenticular). I put "colored glass stage lens" into eBay's search box and got several results just now. Using the term "Roundel" gets mixed results since most sellers have no idea what they're called. LED retrofit lamps can give various results... It depends mostly on how picky you are about dimming and intensity. What type of borderlights are these? Do they have an aluminum reflector and "A-lamp", or do they use Par or R shaped lamps with the roundels in metal frames?

#### derekleffew ##### Resident Curmudgeon Senior Team Premium Member (quoting Les: "Your best/cheapest option for roundels would probably be eBay. …") Second all of this. If your hand is similar to mine, looks like you have the 5-5/8" variety--the most common size. Also, you PROBABLY can substitute HUB Electric Company for Major; Kopp Glass made them for both. I'd be more leery of unmarked ones, as the colors may not match exactly. As for the finish: generally, if you use clear lamps, you use stippled rondels (like those you pictured); if frosted lamps, use smooth rondels. This "rule" may have been broken over the years by unknowing/uncaring custodians. (quoting DannyDepac: "Also they seem to have 150 watt Sylvania bulbs in them - I need to replace around 90 of them.") You probably need 150W A21 lamps [edit: see below*], in either clear or frosted, readily available on Amazon for $5-7 each. They don't have to be Sylvania brand. What most people do is keep moving the burnouts to the ends until there's nothing left, or until they can afford to replace all of them, or at least all of the same color. *If it's these, whether from 1931 or later, you need 150W PS25 lamps. PS = Pear Shaped, 25 = 25/8 = 3-1/8" diameter at widest point. https://normanlamps.com/150ps25-cl.html $2.45 each. Such a deal!
I'd try to find the purchasing agent for the school district and punt the task to them. Their regular bulb supplier can probably get a better deal. And you might get lucky and they'll come out of a general maintenance budget rather than "theater money." So they look like this, except longer? This? If there is even the remotest possibility of a lighting upgrade in the future, I'd spend the least amount possible on keeping these fixtures active. Don't get me wrong, I love them, they're a great down and dirty way of getting colored light on the stage, but $30K of LED striplights are orders of magnitude better. BTW, the proper term is compartmentalized striplight, borderlight, or (used only by olde-phartes) X-rays. (The CB wiki has issues with Xray, X-ray, Xrays, X-Rays.) One more thing: aside from dimming issues, you're probably not going to find LED replacements that will fit in your reflectors (designed for PS or "long-neck" bulbs), and without the reflector, there's no rondel holder. Also, I don't think there's an LED that's as bright as a 150W incandescent.

#### JD ##### Well-Known Member (quoting DannyDepac: "Would LEDs work? My experience is LEDs delay when dimming, but maybe there have been improvements?") LEDs are not a good option on this style of fixture as a direct replacement. One quirk of LEDs is that they don't last too long in an enclosed environment. Basically, the electronics and the emitters cook because there is no airflow, despite the fact that they don't throw much heat.

#### JohnD ##### Well-Known Member Fight Leukemia I wonder if the OP's borderlights are the new-in-1931 Major patented Slip Rings? It was an interesting design, no retaining rings to lose, but they didn't age well. They would eventually start binding up, so brute force was required to slip them. Here is an older post about rondels: https://www.controlbooth.com/threads/rondel-colors.28848/

#### DannyDepac ##### Member (quoting Les's post above) Thank you - I have been checking and found a few options. They have what looks like the A Lamp. We have some with gel films taped over them and they seem to work decently, so if all else fails I'll try that. Thanks again.

#### DannyDepac ##### Member (quoting JohnD's post above) My school was built in '52 so I'm guessing they are a "slightly" updated version; however, they look the same as the catalog. They are DEFINITELY tough to open. I had never done it, so it took me a good ten minutes to figure out how to remove the lens lol.
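Side note on the lamp codes above (A21, PS25): in ANSI designations the letters encode the bulb shape and the number is the maximum diameter in eighths of an inch, so the 25/8 = 3-1/8" arithmetic in derekleffew's post generalizes to a one-liner. A small, hypothetical helper:

```python
import re

# ANSI lamp codes: letters = bulb shape, digits = max diameter in 1/8 inch.
# e.g. PS25 -> pear-shaped, 25/8 = 3.125 in; A21 -> 21/8 = 2.625 in.
def lamp_diameter_inches(code: str) -> float:
    m = re.fullmatch(r"([A-Z]+)(\d+)", code.upper())
    if not m:
        raise ValueError(f"unrecognized lamp code: {code}")
    return int(m.group(2)) / 8.0

for code in ("A21", "PS25", "R40"):
    print(code, lamp_diameter_inches(code), "in")
```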
#### DannyDepac ##### Member (quoting derekleffew's post above: "Second all of this. … Such a deal!")

#### PaulP514 ##### Member Danny, let me know if you need more information on the Chroma-Q Color Force II. It would definitely do a great job for that purpose, with nice bright saturated colors and a perfect blend.

#### JohnD ##### Well-Known Member Fight Leukemia As @BillConnerFASTC points out, you don't have to replace with strips. Strip lights are batten hogs, not leaving space for other fixtures. Since the OP may have an upgrade in the near future, there are several possibilities to consider. There are fixed-beam-spread LED pars like the ColorSource Par. There are also zoomable LED pars. Then there is the option of moving wash lights. There are several reasons to consider moving lights: you can use them for flash and trash for a band concert, and they can be easily refocused remotely without ladders, scaffolding or bringing in a batten (assuming a fly loft). You can even call it in, if you have the right app.

#### DannyDepac ##### Member (quoting PaulP514's post above) Hi, I'm just curious - would that saturate my stage? The current lights (when all working) really color the scene - albeit only red, white or blue.
2019-12-07 16:07:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2872791290283203, "perplexity": 3174.2582716379134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540500637.40/warc/CC-MAIN-20191207160050-20191207184050-00058.warc.gz"}
https://www.quantamagazine.org/tag/p-adic-numbers/
## Secret Link Uncovered Between Pure Math and Physics An eminent mathematician reveals that his advances in the study of millennia-old mathematical questions owe to concepts derived from physics. ## The Oracle of Arithmetic At 28, Peter Scholze is uncovering deep connections between number theory and geometry.
2018-06-22 09:11:02
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8384255170822144, "perplexity": 4105.983995747573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864387.54/warc/CC-MAIN-20180622084714-20180622104714-00355.warc.gz"}
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1247.47009
Zbl 1247.47009 Mursaleen, M.; Karakaya, V.; Polat, H.; Simşek, N. Measure of noncompactness of matrix operators on some difference sequence spaces of weighted means. (English) [J] Comput. Math. Appl. 62, No. 2, 814-820 (2011). ISSN 0898-1221 Summary: For a sequence $x=(x_{k})$, we denote the difference sequence by $\Delta x=(x_{k} - x_{k - 1})$. Let $u=(u_k)_{k=0}^\infty$ and $v=(v_k)_{k=0}^\infty$ be sequences of real numbers such that $u_{k}\ne 0$, $v_{k}\ne 0$ for all $k\in \bbfN$. The difference sequence spaces of weighted means $\lambda (u,v,\Delta)$ are defined as $\lambda (u,v,\Delta )=\{x=(x_{k}):W(x)\in \lambda\}$, where $\lambda$ is either of $c, c_{0}, \ell _{\infty }$ and the matrix $W=(w_{nk})$ is defined by $$w_{nk}=\cases u_n(v_k-v_{k+1})&\text{if }k<n,\\u_n v_n&\text{if }k=n,\\0&\text{if }k>n.\endcases$$ In this paper, we establish some identities or estimates for the operator norms and the Hausdorff measures of noncompactness of certain matrix operators on $\lambda (u,v,\Delta)$. Further, we characterize some classes of compact operators on these spaces by using the Hausdorff measure of noncompactness. MSC 2000: *47B37 Operators on sequence spaces, etc. 47B07 Operators defined by compactness properties 47H08 Measures of noncompactness and condensing mappings 40C05 Matrix methods in summability 46B45 Banach sequence spaces Keywords: sequence space; weighted mean; matrix transformation; compact operator; Hausdorff measure of noncompactness
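To make the band matrix $W$ concrete, here is a minimal sketch that builds it directly from the definition in the review; the sample sequences $u$, $v$ are arbitrary nonzero values, as the hypotheses require:

```python
import numpy as np

def W_matrix(u, v):
    """Build the weighted-mean difference matrix from the review:
    w[n,k] = u[n]*(v[k]-v[k+1]) for k < n, u[n]*v[n] for k = n, 0 for k > n."""
    N = len(u)
    W = np.zeros((N, N))
    for n in range(N):
        for k in range(N):
            if k < n:
                W[n, k] = u[n] * (v[k] - v[k + 1])
            elif k == n:
                W[n, k] = u[n] * v[n]
    return W

# arbitrary nonzero weight sequences (u_k != 0, v_k != 0 for all k)
u = np.array([1.0, 0.5, 2.0, 1.5, 1.0])
v = np.array([1.0, 2.0, 1.0, 3.0, 2.0])
print(W_matrix(u, v))        # lower-triangular, as the case split implies
```

A sequence $x$ then belongs to $\lambda(u,v,\Delta)$ exactly when the transformed sequence $W(x) = Wx$ lies in $\lambda$.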
2013-05-21 11:16:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9353992342948914, "perplexity": 1005.2427795279832}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699899882/warc/CC-MAIN-20130516102459-00021-ip-10-60-113-184.ec2.internal.warc.gz"}
http://csnserver.com/tmbi7m/missing-orders-in-diffraction-26b666
[Figure: experimental setup of the angular diffraction experiment. For aperture angle $\alpha$ and opening angle $\beta$, the zeroes of the $\mathrm{sinc}^2$ envelope coincide with OAM sidebands that are allowed by the mask symmetry.]

By the end of this section, you will be able to: describe the combined effect of interference and diffraction with two slits, each with finite width; determine the relative intensities of interference fringes within a diffraction pattern; identify missing orders, if any.

Fraunhofer diffraction by two slits. In the double-slit diffraction experiment, the two slits are illuminated by a single light beam. If the width of the slits is small enough (less than the wavelength of the light), the slits diffract the light into cylindrical waves, which interfere on the screen. The zeroth-order ($m = 0$) maximum corresponds to the central bright fringe at $\theta = 0$, and the first-order maxima ($m = \pm 1$) are the bright fringes on either side of the central fringe, where $m$ is called the order number. The Fraunhofer pattern formed by diffraction at each slit acts as an "envelope" which limits the amplitude of the intensity fringes formed by double-slit interference: we see the broad diffraction envelope and, underneath it, the equally spaced interference fringes. This gives rise to a complicated pattern on the screen, in which some of the maxima of interference from the two slits are missing if the maximum of the interference is in the same direction as the minimum of the diffraction.

What is a missing order? Missing orders occur for a diffraction grating when a diffraction minimum coincides with an interference maximum, i.e. when one of the interference maxima has an angle equal to the angle of a minimum in the single-slit diffraction pattern. The result is a pattern with missing interference fringes; these fall at places where dark fringes occur in the diffraction pattern. The combined effect results in the absence of certain orders of interference maxima; we refer to such a missing peak as a missing order.

MISSING ORDERS: The condition of interference maxima is $(e + b)\sin\theta = n\lambda$, where $(e + b)$ is the grating element, $e$ the slit width, $b$ the separation between the two slits, $n$ the order of the maxima and $\lambda$ the wavelength of the incident light. (1) For a given wavelength, the angle of diffraction is different for principal maxima of different orders. (2) For white light and for a particular order $n$, the light of different wavelengths will be diffracted in different directions. We know that the direction of principal maxima in … The condition of diffraction minima is $e\sin\theta = m\lambda$. When both conditions hold at the same angle $\theta$, all such maxima will be absent, and they are called missing orders; dividing the two conditions gives $n/m = (e + b)/e$. Now we see certain values of $e$ and $b$ for which interference maxima are missing. (i) Let $e = b$. Then $2e\sin\theta = n\lambda$ and $e\sin\theta = m\lambda$; if $m = 1, 2, 3, \dots$ then $n = 2, 4, 6, \dots$, i.e. the interference orders $2, 4, 6, \dots$ are missed in the diffraction pattern. Equivalently, writing $D$ for the width of each slit and $d$ for their separation: if $d = 2D$, all even orders ($m = 2, 4, 6, \cdots$) are missing. For a single slit, the position of the first minimum is given by $\sin\theta = \lambda/a$, where $a$ is the slit width. Thus when $\sin\theta = \lambda/a = n\lambda/d$, no order will be observed; the order is said to be missing, and the number of the missing order is given by $n = d/a$. E.g., let $d/a = p = 3$, so that the 3rd, 6th, 9th, etc. orders are missing.

DIFFRACTION ORDERS (XRD): 1st order: $\lambda = 2d\sin\theta_1$; 2nd order: $2\lambda = 2d\sin\theta_2$. By convention, we set the diffraction order = 1 for XRD, since $2\lambda = 2d\sin\theta_2$ is equivalent to $\lambda = 2(d/2)\sin\theta_2$; e.g., the 2nd order reflection of $d_{100}$ occurs at the same $\theta$ as the 1st order reflection of $d_{200}$. The Bragg condition for the reflection of X-rays is similar to the condition for optical reflection from a diffraction grating. Other articles where order of diffraction is discussed — spectroscopy: X-ray optics: … is an integer called the order of diffraction; many weak reflections can add constructively to produce nearly 100 percent reflection.

Gratings are characterized by diffraction efficiency — the fraction of power that is directed into the desired direction versus the total input power. For optical processing and switching, the intensity ratio of the diffracted and main beams of the grating needs to be controlled to within a certain range; this ratio can be affected by the variation of duty cycle and phase depth. The fraction of power that is not in the desired pattern goes to higher orders at larger angles and to the zero order. A grating has a 'zero-order mode' ($m = 0$), in which there is no diffraction and a ray of light behaves according to the laws of reflection and refraction, the same as with a mirror or lens, respectively. A diffraction grating produces an interference pattern with a single-slit diffraction pattern superimposed upon it. So this is a diffraction grating, and it's more useful than a double slit in many ways because it gives you clearly delineated dots and it lets you see them more clearly.

When doing experiments with gratings that have a slit width which is an integer fraction of the grating spacing, this can lead to 'missing' orders: in optical diffraction from a transmission grating, the $n$-th order is missing when the grating open fraction equals $1/n$ (for an open fraction of $1/2$, the even diffraction orders are missing [15]). In comparison, atomic diffraction from a material grating never exhibits missing orders, regardless of the open fraction. This is due to van der Waals interactions that make material gratings function as phase gratings in addition to transmission gratings.

In fluorescence spectroscopy, monochromators are used to select the excitation and emission wavelengths. A typical fluorescence spectrometer will consist of two monochromators: an excitation monochromator to select the desired excitation wavelength and an emission monochromator to select which wavelength reaches the detector. For more information on how a fluorescence spectrometer works, read our "Introduction to Fluorescence Measurements & Instrumentation" article.

Diffraction refers to various phenomena that occur when a wave encounters an obstacle or opening. It is defined as the bending of waves around the corners of an obstacle or through an aperture into the region of geometrical shadow of the obstacle/aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. The central region of constructive interference is known as the central maximum, or $A_0$. On either side of the central maximum are the first-order nodes, $N_1$; these are regions of destructive interference. On either side of $N_1$ are the next antinodes, $A_1$. This alternating pattern of nodes and antinodes continues throughout the construction.

Lecture aims to explain: 1. Calculation of the diffraction pattern for light diffracted by two slits. 2. Properties of the diffraction pattern for light diffracted by two slits. 3. "Missing orders" in the diffraction pattern produced by two slits. This video discusses diffraction gratings, iridescence, and diffraction from a single slit.

Worked questions: A particular grating has slits of width 600 nm and a slit separation of 1800 nm. Will there be any missing orders if this is used to observe a line spectrum consisting of 450 nm, 600 nm and 650 nm? — Que: In double-slit diffraction, what should be the ratio of $a$ and $b$ such that the central diffraction maximum contains exactly seven interference fringes? ('$a$' is the slit width and '$b$' is the separation between the two slits.) What must the ratio be if the central maximum contains exactly five fringes? — What is the highest-order maximum for 400-nm light falling on double slits separated by 25.0 μm? — Let $D$ be the width of each slit and $d$ the separation of slits. (a) Show that if $d = 2D$, all even orders ($m = 2, 4, 6, \cdots$) are missing. (b) … What are the conditions of missing orders? — Find the largest wavelength of light falling on double slits separated by 1.20 μm for which there is a first-order maximum. Is this in the visible part of the spectrum? — Light of 5000 Å is incident on a circular hole of radius (i) 1 cm and (ii) 1 mm. How many half-period … — How many holes are there in a diffraction grating?

Experimental notes: The experiment is divided into two parts. In Part 1 (Single Slit Diffraction), we shine laser light through a single slit and … He placed a screen at a distance of 1.490 m from the slit to observe the diffraction pattern of the laser light. The accompanying table shows the distances of the dark fringes from the center of the central bright fringe for different orders. Determine the angle of diffraction, $\theta$, and $\sin \theta$ for each order. Note that some of the double-slit maxima have nearly zero intensity as they coincide with single-slit minima, as shown in Figure 4; there, the $m = \pm 3$ interference order is missing because the minimum of the diffraction occurs in the same direction. The result is shown in Figure 5. [Figure: double-slit intensity pattern — a solid line with multiple peaks of various heights under the single-slit envelope; horizontal axis ticks at $-8\pi, -4\pi, 0, 4\pi, 8\pi$.] Example 4.3 Intensity of the Fringes: Figure 4.11 shows that the intensity of the fringe for … When an interference fringe sits … On the other hand, when $\delta$ is equal to an odd integer multiple of $\lambda/2$, the waves will be …

Q&A fragments: There seems to be not a lot of information on working out this type of question. I have absolutely no idea what to do. — I think you may mean the order of the spectrum produced by a diffraction grating. The order of the spectrum is simply a reference to how far the spectrum is from the centre line. I think the formula is $n = d\sin\theta/\lambda$. However, I'm not sure. — Apply the same reasoning to the diffraction grating and you would realize that the info on "a slit width of 0.83 μm" is relevant and important. — Lit. shows that the diffraction peaks for RC appear at 21.6 (200), 19.9 (110) and 12.0 (1-10); however, my sample is only showing two peaks, one at 12.1 and another one at 21.2.

Glossary — missing order: an interference maximum that is not seen because it coincides with a diffraction minimum. Rayleigh criterion: two images are just-resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. resolution: the ability, or limit thereof, to distinguish small details in images.
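Several of the worked questions above reduce to the same two formulas, so here is a small sketch that automates them. The conditions used — order $n$ missing when $n\,a/d$ is an integer, and highest order $n \le d/\lambda$ — follow from the equations in the text; the function names are illustrative:

```python
# Missing orders for a double slit / simple grating.
#   Interference maxima: d*sin(theta) = n*lambda
#   Diffraction minima:  a*sin(theta) = m*lambda
# Order n is missing when both hold at the same angle, i.e. n*a/d = m
# for some positive integer m.
def missing_orders(a_nm: int, d_nm: int, n_max: int = 10) -> list[int]:
    return [n for n in range(1, n_max + 1) if (n * a_nm) % d_nm == 0]

def highest_order(d_nm: float, wavelength_nm: float) -> int:
    # sin(theta) = n*lambda/d <= 1  =>  n <= d/lambda
    return int(d_nm // wavelength_nm)

# Grating with a = 600 nm, d = 1800 nm: d/a = 3, so orders 3, 6, 9 vanish
print(missing_orders(600, 1800))      # -> [3, 6, 9]
# Highest-order maximum for 400 nm light with d = 25.0 um
print(highest_order(25_000, 400))     # -> 62
```

Note the missing-order condition depends only on the ratio $d/a$, not on the wavelength, which is why the 450/600/650 nm lines in the line-spectrum question all lose the same orders.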
2021-04-19 13:26:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47955283522605896, "perplexity": 948.1415573856768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879374.66/warc/CC-MAIN-20210419111510-20210419141510-00182.warc.gz"}
https://labs.tib.eu/arxiv/?author=Rui%20Wang
• ### Prediction of a topological $p+ip$ excitonic insulator with parity anomaly(1705.06421) Jan. 14, 2019 cond-mat.str-el Excitonic insulators are insulating states formed by the coherent condensation of electron and hole pairs into BCS-like states. Isotropic spatial wave functions are commonly considered for excitonic condensates since the attractive interaction among the electrons and the holes in semiconductors usually leads to $s$-wave excitons. Here, we propose a new type of excitonic insulator that exhibits an order parameter with $p+ip$ symmetry and is characterized by a chiral Chern number $C_\textrm{c}=1/2$. This state displays the parity anomaly, which results in two novel topological properties: fractionalized excitations with $\textrm{e}/2$ charge at defects and a spontaneous in-plane magnetization. The topological insulator surface state is a promising platform to realize the topological excitonic insulator. With the spin-momentum locking, the interband optical pumping can renormalize the surface electrons and drive the system towards the proposed $p+ip$ instability. • ### Challenges in Monocular Visual Odometry: Photometric Calibration, Motion Bias and Rolling Shutter Effect(1705.04300) June 7, 2018 cs.CV Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects: photometric calibration, motion bias and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and for developing new algorithms of VO and SLAM. Conclusions (some of which are counter-intuitive) are drawn with both technical and empirical analyses of all of our experiments. Possible improvements on existing methods are suggested or proposed, such as a sub-pixel accuracy refinement of ORB-SLAM which boosts its performance. • ### Terahertz streaking of few-femtosecond relativistic electron beams(1805.03923) May 10, 2018 physics.acc-ph Streaking of photoelectrons with optical lasers has been widely used for temporal characterization of attosecond extreme ultraviolet pulses. Recently, this technique has been adapted to characterize femtosecond x-ray pulses in free-electron lasers with the streaking imprinted by far-infrared and terahertz (THz) pulses. Here, we report successful implementation of THz streaking for time-stamping of an ultrashort relativistic electron beam whose energy is several orders of magnitude higher than that of photoelectrons. Such ability is especially important for MeV ultrafast electron diffraction (UED) applications, where electron beams with a few-femtosecond pulse width may be obtained with longitudinal compression while the arrival time may fluctuate at a much larger time scale. Using this laser-driven THz streaking technique, the arrival time of an ultrashort electron beam with 6 fs (rms) pulse width has been determined with 1.5 fs (rms) accuracy.
Furthermore, we have proposed and demonstrated a non-invasive method for correction of the timing jitter with femtosecond accuracy through measurement of the compressed beam energy, which may allow one to advance UED towards the sub-10 fs frontier, far beyond the ~100 fs (rms) jitter. • ### A Stealth CME Bracketed between Slow and Fast Wind Producing Unexpected Geo-effectiveness(1805.03128) We investigate how a weak coronal mass ejection (CME) launched on 2016 October 8 without obvious signatures in the low corona produced a relatively intense geomagnetic storm. Remote sensing observations from SDO, STEREO and SOHO and in situ measurements from WIND are employed to track the CME from the Sun to the Earth. Using a graduated cylindrical shell (GCS) model, we estimate the propagation direction and the morphology of the CME near the Sun. CME kinematics are determined from the wide-angle imaging observations of STEREO A and are used to predict the CME arrival time and speed at the Earth. We compare ENLIL MHD simulation results with in situ measurements to illustrate the background solar wind through which the CME was propagating. We also apply a Grad-Shafranov technique to reconstruct the flux rope structure from in situ measurements in order to understand the geo-effectiveness associated with the CME magnetic field structure. Key results are obtained concerning how a weak CME can generate a relatively intense geomagnetic storm: (1) there were coronal holes at low latitudes, which could produce high speed streams (HSSs) to interact with the CME in interplanetary space; (2) the CME was bracketed between a slow wind ahead and a HSS behind, which enhanced the southward magnetic field inside the CME and gave rise to the unexpected geomagnetic storm. • ### VINE: An Open Source Interactive Data Visualization Tool for Neuroevolution(1805.01141) May 3, 2018 cs.AI, cs.NE Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) problems. However, it remains a challenge to analyze and interpret the underlying process of neuroevolution in such high dimensions. To begin to address this challenge, this paper presents an interactive data visualization tool called VINE (Visual Inspector for NeuroEvolution) aimed at helping neuroevolution researchers and end-users better understand and explore this family of algorithms. VINE works seamlessly with a breadth of neuroevolution algorithms, including ES and GA, and addresses the difficulty of observing the underlying dynamics of the learning process through an interactive visualization of the evolving agent's behavior characterizations over generations. As neuroevolution scales to neural networks with millions or more connections, visualization tools like VINE that offer fresh insight into the underlying dynamics of evolution become increasingly valuable and important for inspiring new innovations and applications. • ### Dynamic Sentence Sampling for Efficient Training of Neural Machine Translation(1805.00178) May 1, 2018 cs.CL Traditional neural machine translation (NMT) involves a fixed training procedure where each sentence is sampled once during each epoch.
In reality, some sentences are well-learned during the initial few epochs; however, using this approach, the well-learned sentences would continue to be trained along with those sentences that were not well learned for 10-30 epochs, which results in a waste of time. Here, we propose an efficient method to dynamically sample the sentences in order to accelerate the NMT training. In this approach, a weight is assigned to each sentence based on the measured difference between the training costs of two iterations. Further, in each epoch, a certain percentage of sentences are dynamically sampled according to their weights. Empirical results based on the NIST Chinese-to-English and the WMT English-to-German tasks show that the proposed method can significantly accelerate the NMT training and improve the NMT performance. • ### Uplink Achievable Rate in One-bit Quantized Massive MIMO with Superimposed Pilots(1803.08686) April 25, 2018 cs.IT, math.IT In this work, we consider a 1-bit quantized massive MIMO channel with a superimposed pilot (SP) scheme, dubbed QSP. With a linear minimum mean square error (LMMSE) channel estimator and a maximum ratio combining (MRC) receiver at the BS, we derive an approximate lower bound on the achievable rate. When optimizing pilot and data powers, the optimal power allocation maximizing the data rate is obtained in a closed-form solution. Although there is a performance gap between the quantized and unquantized systems, it is shown that this gap diminishes as the number of BS antennas becomes asymptotically large. Moreover, we show that pilot removal from the received signal by using the channel estimate doesn't result in a significant increase in information, especially in the cases of low signal-to-noise ratio (SNR) and a large number of users. We present some numerical results to corroborate our analytical findings, and insights are provided for further exploration of the quantized systems with SP. • ### Room Temperature Continuous-wave Excited Biexciton Emission in CsPbBr3 Nanocrystals(1804.09782) Biexcitons are a manifestation of many-body excitonic interactions crucial for quantum information and quantum computation in the construction of coherent combinations of quantum states. However, due to their small binding energy and low transition efficiency, most biexcitons in conventional semiconductors exist either at cryogenic temperature or under femtosecond pulse laser excitation. Here we demonstrate room temperature, continuous-wave driven biexciton states in CsPbBr3 perovskite nanocrystals through coupling with a plasmonic nanogap. The room temperature CsPbBr3 biexciton excitation fluence (~100 mW/cm²) is reduced by ~10^13 times in the Ag nanowire-film nanogaps. The giant enhancement of biexciton emission is driven by coherent biexciton-plasmon Fano interference. These results provide new pathways to develop high-efficiency non-blinking single photon sources, entangled light sources and lasers based on biexciton states. • ### A stochastic second-order generalized estimating equations approach for estimating intraclass correlation coefficient in the presence of informative missing data(1804.05923) April 16, 2018 stat.ME Design and analysis of cluster randomized trials must take into account correlation among outcomes from the same clusters. When applying standard generalized estimating equations (GEE), the first-order (e.g. treatment) effects can be estimated consistently even with a misspecified correlation structure.
In settings for which the correlation is of interest, one could estimate this quantity via second-order generalized estimating equations (GEE2). We build upon GEE2 in the setting of missing data, for which we incorporate a "second-order" inverse-probability weighting (IPW) scheme and "second-order" doubly robust (DR) estimating equations that guard against partial model misspecification. We highlight the need to model correlation among missing indicators in such settings. In addition, the computational difficulties in solving these second-order equations have motivated our development of more computationally efficient algorithms for solving GEE2, which alleviate reliance on parameter starting values and provide substantially faster convergence and higher convergence rates than the more widely used deterministic root-solving methods. • ### Insight into the origin of Lithium/Nickel ions exchange in layered Li(NixMnyCoz)O2 cathode materials(1804.04598) April 12, 2018 cond-mat.mtrl-sci In layered LiNixMnyCozO2 cathode materials for lithium-ion batteries, the spins of transition metal (TM) ions form a two-dimensional triangular network, which can be considered a simple case of geometrical frustration. By performing neutron powder diffraction experiments and magnetization measurements, we find that long-range magnetic order cannot be established in LiNixMnyCozO2 even at a low temperature of 3 K. Remarkably, the frustration parameters of these compounds are estimated to be larger than 30, indicating the existence of strongly frustrated magnetic interactions between the spins of TM ions. As frustration will inevitably give rise to lattice instability, the formation of Li/Ni exchange in LiNixMnyCozO2 will help to partially relieve the degeneracy of the frustrated magnetic lattice by forming a stable antiferromagnetic state in a hexagonal sublattice with nonmagnetic ions located in the centers of the hexagons. Moreover, Li/Ni exchange will introduce a 180° superexchange interaction, which further relieves the magnetic frustration by bringing in new exchange paths. Thus, the variation of the Li/Ni exchange ratio vs. TM mole fraction in LiNixMnyCozO2 with different compositions can be well understood and predicted in terms of magnetic frustration and superexchange interactions. This provides a unique viewpoint for studying the Li/Ni ion exchange in layered Li(NixMnyCoz)O2 cathode materials. • ### High-frequency Oscillations in the Atmosphere above a Sunspot Umbra(1803.09046) March 24, 2018 astro-ph.SR We use high spatial and temporal resolution observations, simultaneously obtained with the New Vacuum Solar Telescope and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory, to investigate the high-frequency oscillations above a sunspot umbra. A novel time-frequency analysis method, namely the synchrosqueezing transform (SST), is employed to represent their power spectra and to reconstruct the high-frequency signals at different solar atmospheric layers. A validation study with synthetic signals demonstrates that SST is capable of resolving weak signals even when their strength is comparable with the high-frequency noise. The power spectra, obtained from both SST and the Fourier transform, of the entire umbral region indicate that there are significant enhancements between 10 and 14 mHz (labeled as 12 mHz) at different atmospheric layers. Analyzing the spectrum of a photospheric region far away from the umbra demonstrates that this 12 mHz component exists only inside the umbra.
The animation based on the reconstructed 12 mHz component in AIA 171 \AA\ illustrates that an intermittently propagating wave first emerges near the footpoints of coronal fan structures, and then propagates outward along the structures. A time-distance diagram, coupled with a subsonic wave speed ($\sim$ 49 km s$^{-1}$), highlights the fact that these coronal perturbations are best described as upwardly propagating magnetoacoustic slow waves. Thus, we reveal, for the first time, high-frequency oscillations with a period of around one minute in imaging observations at different heights above an umbra, and these oscillations seem to be related to the umbral perturbations in the photosphere. • ### Antiferromagnetic magnons from fractionalized excitations(1709.00060) We develop an approach to describe antiferromagnetic magnons on a bipartite lattice supporting the N\'{e}el state using fractionalized degrees of freedom typically inherent to quantum spin liquids. In particular, we consider a long-range magnetically ordered state of interacting two-dimensional quantum spin-$1/2$ models using the Chern-Simons (CS) fermion representation of interacting spins. The interaction leads to a Cooper instability and pairing of CS fermions, and to CS superconductivity which spontaneously violates the continuous $\mathrm{U}(1)$ symmetry, generating a linearly dispersing gapless Nambu-Goldstone mode due to phase fluctuations. We evaluate this mode and show that it is in high-precision agreement with the magnons of the corresponding N\'{e}el antiferromagnet irrespective of the lattice symmetry. Using the fermion formulation of a system with competing interactions, we show that the frustration gives rise to nontrivial long-range four-, six-, and higher-leg interaction vertices mediated by the CS gauge field, which are responsible for restoring the continuous symmetry at sufficiently strong frustration. We identify these new interaction vertices and discuss their implications for unconventional phase transitions. We also apply the proposed theory to a model of anyons that can be tuned continuously from fermions to bosons. • ### Imaging nanoscale spatial modulation of a relativistic electron beam with a MeV ultrafast electron microscope(1803.04670) March 15, 2018 physics.acc-ph The accelerator-based MeV ultrafast electron microscope (MUEM) has been proposed as a promising tool to study structural dynamics at the nanometer spatial scale and picosecond temporal scale. Here we report experimental tests of a prototype MUEM where high-quality images with nanoscale fine structures were recorded with a pulsed 3 MeV picosecond electron beam. The temporal and spatial resolution of the MUEM operating in single-shot mode is about 4 ps (FWHM) and 100 nm (FWHM), corresponding to a temporal-spatial resolution of 4e-19 s*m, about 2 orders of magnitude higher than that achieved with state-of-the-art single-shot keV UEMs. Using this instrument we demonstrate visualization of the nanoscale periodic spatial modulation of an electron beam, which may be converted into longitudinal density modulation through emittance exchange to enable production of high-power coherent radiation at short wavelengths. Our results mark a great step towards single-shot nanometer-resolution MUEMs and compact intense x-ray sources that may have wide applications in many areas of science.
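The quoted temporal-spatial figure is simply the product of the two single-shot resolutions; a one-line numerical check (a minimal Python sketch, with the values taken from the abstract above):

```python
# Product of the quoted single-shot resolutions of the prototype MUEM:
dt = 4e-12    # temporal resolution, seconds (4 ps FWHM)
dx = 100e-9   # spatial resolution, meters (100 nm FWHM)
print(dt * dx)  # -> 4e-19 s*m, the temporal-spatial figure quoted above
```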
• ### Coulomb-driven relativistic electron beam compression(1803.04736) March 15, 2018 physics.acc-ph The Coulomb interaction between charged particles is a well-known phenomenon in many areas of research. In general, the Coulomb repulsion force broadens the pulse width of an electron bunch and limits the temporal resolution of many scientific facilities such as ultrafast electron diffraction and x-ray free-electron lasers. Here we demonstrate a scheme that actually makes use of the Coulomb force to compress a relativistic electron beam. Furthermore, we show that the Coulomb-driven bunch compression process does not introduce additional timing jitter, which is in sharp contrast to the conventional radio-frequency buncher technique. Our work not only leads to enhanced temporal resolution in electron-beam-based ultrafast instruments that may provide new opportunities in probing material systems far from equilibrium, but also opens a promising direction for advanced beam manipulation through self-field interactions. • ### Carbon stars identified from LAMOST DR4 using Machine Learning(1712.07784) Jan. 31, 2018 astro-ph.SR In this work, we present a catalog of 2651 carbon stars from the fourth Data Release (DR4) of the Large Sky Area Multi-Object Fiber Spectroscopy Telescope (LAMOST). Using an efficient machine-learning algorithm, we identify these stars among more than seven million spectra. As a by-product, 17 carbon-enhanced metal-poor (CEMP) turnoff star candidates are also reported in this paper, preliminarily identified by their atmospheric parameters. Except for 176 stars that could not be given spectral types, we classify the other 2475 carbon stars into five subtypes including 864 C-H, 226 C-R, 400 C-J, 266 C-N, and 719 barium stars based on a series of spectral features. Furthermore, we divide the C-J stars into three subtypes, C-J(H), C-J(R), and C-J(N), and about 90% of them are cool N-type stars, as expected from the previous literature. Besides spectroscopic classification, we also match these carbon stars to multiple broadband photometric catalogs. Using ultraviolet photometry data, we find that 25 carbon stars have FUV detections and are likely to be in binary systems with compact white dwarf companions. • ### Multi-Spacecraft Observations of the Rotation and Non-Radial Motion of a CME Flux Rope causing an intense geomagnetic storm(1801.07457) Jan. 23, 2018 astro-ph.SR We present an investigation of the rotation and non-radial motion of a coronal mass ejection (CME) from AR 12468 on 2015 December 16 using observations from SDO, SOHO, STEREO A and Wind. The EUV and HMI observations of the source region show that the associated magnetic flux rope (MFR) axis pointed to the east before the eruption. We use a nonlinear force-free field (NLFFF) extrapolation to determine the configuration of the coronal magnetic field and calculate the magnetic energy density distributions at different heights. The distribution of the magnetic energy density shows a strong gradient toward the northeast.
The propagation direction of the CME from a Graduated Cylindrical Shell (GCS) modeling deviates from the radial direction of the source region by about 45 degrees in longitude and about 30 degrees in latitude, which is consistent with the gradient of the magnetic energy distribution around the AR. The MFR axis determined by the GCS modeling points southward, having rotated counterclockwise by about 95 degrees compared with the orientation of the MFR in the low corona. The MFR reconstructed by a Grad-Shafranov (GS) method at 1 AU has almost the same orientation as the MFR from the GCS modeling, which indicates that the MFR rotation occurred in the low corona. It is the rotation of the MFR that caused the intense geomagnetic storm with a minimum Dst of -155 nT. These results suggest that the coronal magnetic field surrounding the MFR plays a crucial role in the MFR rotation and propagation direction. • ### Fake Colorized Image Detection(1801.02768) Jan. 14, 2018 cs.MM Image forensics aims to detect the manipulation of digital images. Currently, splicing detection, copy-move detection and image retouching detection are drawing much attention from researchers. However, image editing techniques continue to develop over time. One emerging image editing technique is colorization, which can colorize grayscale images with realistic colors. Unfortunately, this technique may also be intentionally applied to certain images to confound object recognition algorithms. To the best of our knowledge, no forensic technique has yet been invented to identify whether an image is colorized. We observed that, compared to natural images, colorized images, which are generated by three state-of-the-art methods, possess statistical differences in the hue and saturation channels. We also observe statistical inconsistencies in the dark and bright channels, because the colorization process will inevitably affect the dark and bright channel values. Based on our observations, i.e., potential traces in the hue, saturation, dark and bright channels, we propose two simple yet effective detection methods for fake colorized images: Histogram based Fake Colorized Image Detection (FCID-HIST) and Feature Encoding based Fake Colorized Image Detection (FCID-FE). Experimental results demonstrate that both proposed methods exhibit decent performance against multiple state-of-the-art colorization approaches. • ### Application of a semantic segmentation convolutional neural network for accurate automatic detection and mapping of solar photovoltaic arrays in aerial imagery(1801.04018) Jan. 11, 2018 cs.CV We consider the problem of automatically detecting small-scale solar photovoltaic arrays for behind-the-meter energy resource assessment in high-resolution aerial imagery. Such algorithms offer a faster and more cost-effective solution to collecting information on distributed solar photovoltaic (PV) arrays, such as their location, capacity, and generated energy. The surface area of PV arrays, a characteristic which can be estimated from aerial imagery, provides an important proxy for array capacity and energy generation. In this work, we employ a state-of-the-art convolutional neural network architecture, called SegNet (Badrinarayanan et al., 2015), to semantically segment (or map) PV arrays in aerial imagery. This builds on previous work focused on identifying the locations of PV arrays, as opposed to their specific shapes and sizes.
We measure the ability of our SegNet implementation to estimate the surface area of PV arrays on a large, publicly available dataset that has been employed in several previous studies. The results indicate that the SegNet model yields substantial performance improvements with respect to estimating shape and size as compared to a recently proposed convolutional neural network PV detection algorithm. • ### Joint Content Delivery and Caching Placement via Dynamic Programming(1801.00924) Jan. 3, 2018 cs.IT, math.IT In this paper, downlink delivery of popular content is optimized with the assistance of wireless cache nodes. Specifically, the requests for one file are modeled as a Poisson point process with finite lifetime, and two downlink transmission modes are considered: (1) the base station multicasts file segments to the requesting users and selected cache nodes; (2) the base station proactively multicasts file segments to the selected cache nodes without requests from users. Hence, the cache nodes with decoded files can help to offload the traffic upon the next file request via other air interfaces, e.g. WiFi. Without proactive caching placement, we formulate the downlink traffic offloading as a Markov decision process with a random number of stages, and propose a revised Bellman's equation to obtain the optimal control policy. In order to address the prohibitively huge state space, we also introduce a low-complexity sub-optimal solution based on linear approximation of the value functions, where the gap between the approximated value functions and the real ones is bounded analytically. The approximated value functions can be calculated from analytical expressions given the spatial distribution of requesting users. Moreover, we propose a learning-based algorithm to evaluate the approximated value functions for an unknown distribution of requesting users. Finally, a proactive caching placement algorithm is introduced to exploit the temporal diversity of the shadowing effect. It is shown by simulation that the proposed low-complexity algorithm based on approximated value functions can significantly reduce the resource consumption at the base station, and that proactive caching placement can further improve the performance. • ### Structural Stability of Lexical Semantic Spaces: Nouns in Chinese and French(1710.04173) Oct. 11, 2017 q-bio.NC Many studies in the neurosciences have dealt with the semantic processing of words or categories, but few have looked into the semantic organization of the lexicon thought of as a system. The present study was designed to try to move towards this goal, using both electrophysiological and corpus-based data, and to compare two languages from different families: French and Mandarin Chinese. We conducted an EEG-based semantic-decision experiment using 240 words from eight categories (clothing, parts of a house, tools, vehicles, fruits/vegetables, animals, body parts, and people) as the material. A data-analysis method (correspondence analysis) commonly used in computational linguistics was applied to the electrophysiological signals.
The present cross-language comparison indicated stability for the following aspects of the languages' lexical semantic organizations: (1) the living/nonliving distinction, which showed up as a main factor for both languages; (2) greater dispersion of the living categories as compared to the nonliving ones; (3) prototypicality of the \emph{animals} category within the living categories, and with respect to the living/nonliving distinction; and (4) the existence of a person-centered reference gradient. Our electrophysiological analysis indicated stability of the networks at play in each of these processes. Stability was also observed in the data taken from word usage in the languages (synonyms and associated words obtained from textual corpora). • ### Online Photometric Calibration for Auto Exposure Video for Realtime Visual Odometry and SLAM(1710.02081) Oct. 5, 2017 cs.CV Recent direct visual odometry and SLAM algorithms have demonstrated impressive levels of precision. However, they require a photometric camera calibration in order to achieve competitive results. Hence, the respective algorithm cannot be directly applied to an off-the-shelf camera or to a video sequence acquired with an unknown camera. In this work we propose a method for online photometric calibration which makes it possible to process auto exposure videos with visual odometry precisions that are on par with those of photometrically calibrated videos. Our algorithm recovers the exposure times of consecutive frames, the camera response function, and the attenuation factors of the sensor irradiance due to vignetting. Gain-robust KLT feature tracks are used to obtain scene point correspondences as input to a nonlinear optimization framework. We show that our approach can reliably calibrate arbitrary video sequences by evaluating it on datasets for which full photometric ground truth is available. We further show that our calibration can improve the performance of a state-of-the-art direct visual odometry method that works solely on pixel intensities, calibrating for photometric parameters in an online fashion in realtime. • ### Plasmonic Optical Modulator based on Adiabatic Coupled Waveguides(1710.01689) Oct. 4, 2017 physics.optics In atomic multi-level systems, adiabatic elimination is a method used to minimize the complexity of the system by eliminating irrelevant and strongly coupled levels by detuning them from one another. Such a three-level system can, for instance, be mapped onto a physical analogue in the form of a three-waveguide system. Actively detuning the coupling strength between the respective waveguide modes allows modulating light propagating through the device, as proposed here. The outer waveguides act as an effective two-photonic-mode system similar to the ground and excited states of a three-level atomic system, whilst the center waveguide is partially plasmonic. In the adiabatic elimination regime, the amplitude of the middle waveguide oscillates much faster in comparison to the outer waveguides, leading to a vanishing field build-up. As a result, the middle waveguide becomes a dark state, and hence a low insertion loss of 8 dB is expected to be maintained even when achieving a modulation depth as high as 70 dB, despite the involvement of a plasmonic waveguide in the design presented here.
The modulation mechanism relies on switching this waveguide system from a critical coupling regime to the adiabatic elimination condition via electrostatically tuning the free-carrier concentration, and hence the optical index, of a thin ITO layer residing in the plasmonic center waveguide. This alters the effective coupling length and the phase mismatching condition, and thus the modulation in each of the outer waveguides. Our results show a modulator energy efficiency as low as 40 attojoules per bit and an extinction ratio of 50 dB. Given the minuscule footprint of the modulator, the resulting lumped-element-limited RC bandwidth is expected to exceed 200 gigahertz. This type of modulator paves the way for next-generation optical short-reach interconnects that are both energy- and speed-conscious. • ### 3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks(1707.06375) Sept. 29, 2017 cs.CV, cs.GR We propose a method for reconstructing 3D shapes from 2D sketches in the form of line drawings. Our method takes as input a single sketch, or multiple sketches, and outputs a dense point cloud representing a 3D reconstruction of the input sketch(es). The point cloud is then converted into a polygon mesh. At the heart of our method lies a deep encoder-decoder network. The encoder converts the sketch into a compact representation encoding shape information. The decoder converts this representation into depth and normal maps capturing the underlying surface from several output viewpoints. The multi-view maps are then consolidated into a 3D point cloud by solving an optimization problem that fuses depth and normals across all viewpoints. Based on our experiments, compared to other methods, such as volumetric networks, our architecture offers several advantages, including more faithful reconstruction, higher output surface resolution, and better preservation of topology and shape structure. • ### Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras(1708.07878) Aug. 25, 2017 cs.CV We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for highly accurate real-time visual odometry estimation of large-scale environments from stereo cameras. It jointly optimizes for all the model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, we propose a novel approach to integrate constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivities to large optical flow and to the rolling shutter effect, which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods. • ### Empirical information on nuclear matter fourth-order symmetry energy from an extended nuclear mass formula(1705.05122)
Aug. 5, 2017 nucl-ex, nucl-th, astro-ph.SR We establish a relation between the equation of state (EOS) of nuclear matter and the fourth-order symmetry energy $a_{\rm{sym,4}}(A)$ of finite nuclei in a semi-empirical nuclear mass formula by self-consistently considering the bulk, surface and Coulomb contributions to the nuclear mass. Such a relation allows us to extract information on the nuclear matter fourth-order symmetry energy $E_{\rm{sym,4}}(\rho_0)$ at normal nuclear density $\rho_0$ from analyzing nuclear mass data. Based on the recent precise extraction of $a_{\rm{sym,4}}(A)$ via the double difference of the "experimental" symmetry energy extracted from nuclear masses, for the first time, we estimate a value of $E_{\rm{sym,4}}(\rho_0) = 20.0\pm4.6$ MeV. Such a value of $E_{\rm{sym,4}}(\rho_0)$ is significantly larger than the predictions from mean-field models and thus suggests the importance of considering effects beyond the mean-field approximation in nuclear matter calculations.
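For context, the fourth-order symmetry energy referred to above is conventionally defined through the expansion of the energy per nucleon in the isospin asymmetry; the abstract does not spell this definition out, so the following standard form is included here as an assumption:

```latex
% Standard expansion of the nucleonic EOS in isospin asymmetry \delta:
E(\rho,\delta) = E_0(\rho) + E_{\mathrm{sym}}(\rho)\,\delta^{2}
               + E_{\mathrm{sym,4}}(\rho)\,\delta^{4} + \mathcal{O}(\delta^{6}),
\qquad \delta = \frac{\rho_n - \rho_p}{\rho},
\qquad E_{\mathrm{sym,4}}(\rho) = \frac{1}{24}
       \left.\frac{\partial^{4} E(\rho,\delta)}{\partial \delta^{4}}\right|_{\delta=0}.
```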
2020-04-02 10:33:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4559221863746643, "perplexity": 1961.5990362453695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506870.41/warc/CC-MAIN-20200402080824-20200402110824-00516.warc.gz"}
https://electronics.stackexchange.com/questions/303449/running-cob-led-without-driver-undervolted
# Running COB LED without driver (undervolted) !Hobby project! I have a power supply from a printer (actually I have 3 :) ). The PSU is 24 V, 1.25 A. I want to connect some COB LEDs to this PSU without additional electronics. The idea is that the LEDs require no driver if they are under-volted (am I right?). I have found some cheap 10 W ones rated 32-34 V. I use 3 of them in parallel. 1. Will they light up at 24 V? (I know, it is more of a "guess" question, but no other documentation is provided with the LED.) 2. Are there COB LEDs out there that "natively" run at 26 V, or is 32 V the minimum voltage for a high-power LED? If questions 1 and 2 fail, how can I drive the LEDs with minimal electronics (using the PSU I have)? • Note: this is for a hobby project, not high-tech stuff :) – Ultralisk May 4 '17 at 21:16 • LEDs are current controlled. If the voltage is too low, no light. If the voltage is too high without a current limiter, then it will glow brightly for a very short time and then burn out. You cannot "under volt" an LED. You must reduce the current. – JRE May 4 '17 at 21:48 Oh for Pete's sake: 1. You can't "under volt" an LED. Too little voltage and it won't turn on at all. 2. If you try to put LEDs in parallel, then you must provide each parallel line with its own current limiting. 3. You must use current limiting to drive your LEDs. An LED operating at just over its forward voltage will draw just a little current. At a slightly higher voltage it will draw much more current - enough to burn out. 4. LEDs of that type are typically operated with a constant-current power supply. The voltage range you listed is the range in which the power supply can maintain the desired current. LEDs are not like light bulbs, despite the fact that both produce light. Read up on them before you destroy a bunch by treating them like incandescent light bulbs. You can't operate a 32 V LED on 24 V. You must get the voltage up to the minimum forward voltage, and then limit the current. The simplest way would be to get a proper power supply. To use your current power supply, you must use a boost regulator to generate a higher voltage. You then either use a series resistor to limit the current, or you use an active current limiting circuit - which a proper power supply will already have. • 3. That's what I meant. – Ultralisk May 4 '17 at 22:00 • OK. I am going to order a LED+driver kit. – Ultralisk May 4 '17 at 22:01 • "An LED operating at just over its forward voltage will draw just a little current" - That's what I meant! An 'undervolted' LED will draw a minimal amount of current, WAY WAY WAY under the critical value. Simply put a "3V" LED at exactly 3 V without any other resistor or limiting circuit and it will work forever. All those cheap Chinese LED key chain lamps work this way. So, in a way your no. 3 contradicts no. 1 :) – Ultralisk May 5 '17 at 18:24 • The cheap Chinese lights have a voltage higher than the forward voltage of the LEDs. They depend on the internal resistance to keep from burning out the LEDs. – JRE May 11 '17 at 18:42 • My last comment was about the key chain lamps. Don't try to apply that to a 30 W light - that will get you in a world of hurt. – JRE May 12 '17 at 13:00 You can use a boost (step-up) constant-current driver. Something like a $5 Mean Well LDD module would do the trick. You would need one for each CoB. Or you could use 18 V CoBs. There are not many CoBs available between 18 and 24 volts. ## UPDATE You can use a supply with a voltage higher than 24 V.
You have to look at the datasheet for the worst-case maximum forward voltage, which is probably around 27 V. You then may need a few volts of headroom for the LED driver. If you don't care about efficiency or inconsistent brightness from one to another, you can use a current-limiting resistor instead of a constant-current driver, and that will add a little to the required voltage. To do it correctly, you use a constant-current driver for each CoB, or get a driver with a high enough voltage (e.g. 80 V or more) to drive them all in series with one driver. A constant-current driver's output voltage can be greater than the CoB Vf. The output voltage is a function of the CoB's Vf. Your supply is a constant-voltage supply. They do not work well with LEDs. They can be used to power inexpensive ($2-$5) constant-current DC/DC converters. • Why 18 V? Why 18 and not 24? – Ultralisk May 9 '17 at 11:11 • The supply needs to be higher than the LED's forward voltage. 24 V is the typical voltage at the test current and temperature. In real life, LEDs are not typical. As current increases, so does forward voltage. I have one CoB that is spec'd at 34 V but the real-life Vf is 39 V. – Misunderstood May 9 '17 at 11:21 • You mean to use an LED rated 18 V with my 24 V power supply? – Ultralisk May 9 '17 at 11:23 • Yes, see my update – Misunderstood May 9 '17 at 12:24
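To make the resistor option above concrete, here is a minimal sizing sketch in Python; the 18 V forward voltage and 300 mA drive current are illustrative assumptions, not datasheet values, and Vf drifts with current and temperature, which is why the constant-current driver remains the better choice:

```python
# Series-resistor sizing for one hypothetical 18 V CoB on the 24 V printer PSU.
v_supply = 24.0   # V, printer power supply
v_f = 18.0        # V, assumed CoB forward voltage at the chosen current
i_led = 0.3       # A, assumed (derated) drive current

r = (v_supply - v_f) / i_led    # Ohm's law on the voltage dropped by the resistor
p_r = (v_supply - v_f) * i_led  # power the resistor must dissipate

print(f"R = {r:.0f} ohm, P = {p_r:.1f} W")  # -> R = 20 ohm, P = 1.8 W
```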
2020-09-30 02:46:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4634244441986084, "perplexity": 1809.6358657474757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402101163.62/warc/CC-MAIN-20200930013009-20200930043009-00369.warc.gz"}
http://mathhelpforum.com/trigonometry/175099-find-period-cosine-quadratic-function.html
# Math Help - Find the period of Cosine of Quadratic function 1. ## Find the period of Cosine of Quadratic function Hi all, I hope someone here can help me with this math problem. Given y1 = ax^2 + b and y2 = cos(y1), where a and b are constants, is y2 periodic with respect to x? Visually, using an example, the graph seems to be periodic. How do you find the exact period of such a function? Regards, cybershakith 2. cos(ax^2+b+T)=cos(ax^2+b) Find T. Edit: Rough mistake! Thank you @topsquark! 3. Originally Posted by Also sprach Zarathustra cos(ax^2+b+T)=cos(ax^2+b) Find T. Actually I would say to solve $\cos(a(x - T)^2 + b) = \cos(ax^2 + b)$. Then show that the period T depends on x. -Dan 4. I tried something along those lines. cos(a*x^2 + b) = cos(a*(x+T)^2 + b) Hence, a*x^2 + b + 2*PI*k = a*(x+T)^2 + b, where k is an integer. This reduces to 2*PI*k = a*2*x*T + a*T^2, so T is a function of x. Is it really periodic then? 5. Okay. So the function is not periodic. But let's take an example: y = Cos(2*PI*a*x^2 + 2*PI*b), where a = 0.01277777778 and b = 255.5555556. From plotting this graph, it seems like the y values are periodic with period 900 in x. So how does that happen? For x = 0: y = Cos(2*PI*a*x^2 + 2*PI*b) = Cos 2*PI*255.5555556. For x = 900: y = Cos(2*PI*a*x^2 + 2*PI*b) = Cos 2*PI*(255.5555556 + 0.01277777778*900^2) = Cos 2*PI*(10605.5555556). For x = 11: y = Cos(2*PI*a*x^2 + 2*PI*b) = Cos 2*PI*(255.5555556 + 0.01277777778*11^2) = Cos 2*PI*(257.10166671138). For x = 911: y = Cos(2*PI*a*x^2 + 2*PI*b) = Cos 2*PI*(255.5555556 + 0.01277777778*911^2) = Cos 2*PI*(10860.10166855538). // slight difference due to lack of precision. This is true for all x, it seems. This is because the fractional parts of the a*x^2 and a*(x+T)^2 terms are the same. So is it periodic? 6. Originally Posted by cybershakith Hi all, I hope someone here can help me with this math problem. Given y1 = ax^2 + b and y2 = cos(y1), where a and b are constants, is y2 periodic with respect to x? Visually, using an example, the graph seems to be periodic. How do you find the exact period of such a function? Regards, cybershakith It is not periodic. To convince yourself, look at the zeros of y2. CB 7. Originally Posted by cybershakith Okay. So the function is not periodic. But let's take an example: y = Cos(2*PI*a*x^2 + 2*PI*b), where a = 0.01277777778 and b = 255.5555556. From plotting this graph, it seems like the y values are periodic with period 900 in x. So how does that happen? If you are plotting this on a computer with a fixed plotting interval, you are probably seeing aliasing between the function and the sampling function. CB
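A quick numerical check of the apparent period 900 is possible (a minimal Python sketch, using the constants quoted in the thread). Note that 2*a*900 is almost exactly 23 and a*900^2 is almost exactly 10350, both near-integers, so at integer sample points x the extra phase is almost a whole number of cycles: the sampled values repeat even though y is not periodic as a function of real x, consistent with CB's sampling explanation.

```python
import math

a = 0.01277777778
b = 255.5555556
T = 900

def y(x):
    # y = cos(2*pi*(a*x^2 + b)), as in post 5
    return math.cos(2 * math.pi * (a * x**2 + b))

# The phase added by the shift x -> x+T is 2*pi*(2*a*T*x + a*T**2).
print(2 * a * T, a * T**2)    # ~23.000000004 and ~10350.0000018: near-integers

for x in (0, 11, 250):
    print(x, y(x), y(x + T))  # each pair agrees to roughly 5-6 decimal places
```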
2014-10-22 07:25:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8717969059944153, "perplexity": 3616.9205483054484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507446231.28/warc/CC-MAIN-20141017005726-00271-ip-10-16-133-185.ec2.internal.warc.gz"}
http://mathhelpforum.com/statistics/183518-normal-distribution-print.html
# Normal Distribution • June 23rd 2011, 11:07 AM Kanwar245 Normal Distribution Suppose X ~ N(0,1). Why can we write P(a ≤ X ≤ b) = P(X ≤ b) – P(X ≤ a) • June 23rd 2011, 11:59 AM Plato Re: Normal Distribution Quote: Originally Posted by Kanwar245 Suppose X ~ N(0,1). Why can we write P(a ≤ X ≤ b) = P(X ≤ b) – P(X ≤ a) For example, do you understand that for any a, $\mathcal{P}(X=a)=0~?$ From that it follows at once that $\mathcal{P}(X\le a)=\mathcal{P}(X< a)$. So $\mathcal{P}(X\ge a)=1-\mathcal{P}(X< a)$. Thus $\mathcal{P}(a\le X \le b)=\mathcal{P}(X\le b)-\mathcal{P}(X\le a)$.
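The identity is also easy to verify numerically (a minimal sketch using SciPy's standard normal CDF; the endpoints are arbitrary example values):

```python
from scipy.stats import norm

# P(a <= X <= b) for X ~ N(0,1), computed two equivalent ways.
a, b = -1.0, 2.0                 # arbitrary example endpoints
lhs = norm.cdf(b) - norm.cdf(a)  # P(X <= b) - P(X <= a)
rhs = norm.sf(a) - norm.sf(b)    # same identity via survival functions
print(lhs, rhs)                  # both ~0.8185946141203637
```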
2015-09-03 10:49:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5936841368675232, "perplexity": 6965.1725226614135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645311026.75/warc/CC-MAIN-20150827031511-00321-ip-10-171-96-226.ec2.internal.warc.gz"}
http://openstudy.com/updates/50025063e4b0848ddd66888a
## sksugar 2 years ago DO THE SUM 4/6 + 3/8 1. eyust707 ${4 \over 6} + {3 \over 8}$ 2. eyust707 What's the smallest number that 6 and 8 both go into? 3. HillDP Get the LCM first, then proceed with the solution. 4. sksugar I THOUGHT WHEN THE 2 DENOMINATORS ARE NOT THE SAME U HAVE TO MULTIPLY 5. DHASHNI [drawing] 6. DHASHNI [drawing] 7. TheViper $\Huge{{4\over6}+{3\over8}}$ $\LARGE{\color{blue}{={{16+9}\over24}}}$ $\LARGE{\color{green}={{25\over24}}}$
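The same sum can be checked with exact rational arithmetic (a minimal Python sketch of the common-denominator working shown above):

```python
from fractions import Fraction
import math

# lcm(6, 8) = 24, so 4/6 + 3/8 = 16/24 + 9/24 = 25/24.
print(math.lcm(6, 8))                   # -> 24
print(Fraction(4, 6) + Fraction(3, 8))  # -> 25/24
```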
2015-04-25 23:34:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6605141758918762, "perplexity": 11385.117345613235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246651873.94/warc/CC-MAIN-20150417045731-00297-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/clocks-within-each-ship-in-bell-spaceship-paradox.804582/
# Clocks Within Each Ship in Bell Spaceship Paradox Tags: 1. Mar 22, 2015 ### 1977ub I believe length contraction always makes more sense when integrated with reminders of the relativity of simultaneity. Let's say the engines are at the back end of each rocket. For the viewer "A" in the initial frame, they begin moving and continue accelerating simultaneously, and clocks next to the engines are seen in synch. However, right away "A" will measure that for each rocket, a clock by the engine is running faster than the clock by the head of that same rocket. I have never seen this pointed out in a discussion of this paradox, and I think that this may be one reason people so often are mistaken about this situation. The idea that "both spaceships accelerate and keep their clocks synchronized" to "A" distracts the reader and puts one in the mind of a situation without SR. The reminder that within each rocket the clocks appear out of synch to "A" might snap the view back to SR. We might attempt to apply this same logic to the rope, but right away we run into a problem, since the back end of the rope is connected to the front of the back rocket, and the front of the rope is connected to the back of the front rocket. If the back end of the rope was accelerating in synch with the backs of the rockets, the rope's back clock could not be connected to the back rocket's front clock, since they are out of synch. And the same applies to the front of the rope being attached to the front ship's back clock. We are asking of a suggested nonstressed rope that it fulfill contradictory requirements. 2. Mar 22, 2015 ### Staff: Mentor And everybody knows that fulfilling contradictory requirements is highly stressful. 3. Mar 24, 2015 ### harrylin Rearranging: Relativity of simultaneity must certainly be dealt with when using less convenient inertial reference systems, and I agree that it is useful. When this is properly done it emerges that although length contraction is "relative", it plays a role in any inertial frame. I'm not sure if I simply misunderstand what you mean, but it sounds like a wrong argument. "A" would measure, if it were technically feasible, that for each rocket a clock by the engine is running extremely slightly slower than the clock by the head of that same rocket, due to the rocket's length contraction. However, that's not spectacular at all. Or perhaps you mean that an observer "B" inside one rocket, when verifying clock synchronization, will discover that the rocket has accelerated from the fact that the synchronization of the instantaneously co-moving inertial frame at approximately that time does not correspond anymore with that of "A"'s reference frame (a clock by the engine will now appear to be behind compared to the one near the head). Adding such useless detail is more than likely to distract from Bell's striking example of a breaking string. Certainly not! It's key to Bell's clear and straightforward argument about the physical consequences of SR's length contraction. His case is perfectly SR, and it's a simple scenario with identical rockets. Clocks that are out of sync can certainly be connected. I'm afraid that I can't follow your logic at all. Note that "Bell's spaceship paradox" was only paradoxical for his colleagues who misunderstood SR; it was his example to drive home their misconception. How do you think that your discussion clarifies Bell's example better than he did himself? Last edited: Mar 24, 2015
4. Mar 24, 2015 ### 1977ub I'm comparing this scenario to the more familiar one of the single train/ship which travels by, and which was accelerated at some unknown time in the past, all of its clocks now in synch. There are no stresses there, and there is no "real" length contraction which can be divorced from RoS. In a frame where the clocks at both ends are in synch, there is no length contraction. In a different frame, the clocks at both ends are now out of synch, and a length contraction is measured. Would the rope break in Bell's example if the ships did not contract (in the initial frame)? No. Would the ships contract (in the initial frame) without the clocks at their front and back ends being out of synch (in the initial frame)? No. The alternative is to add a detail to Bell's example whereby each ship has an engine at the front and at the back, so that all 4 engines fire on the same program, in synch from the initial frame. In that example, the ships themselves would stretch and break apart. But that is not the situation in Bell's standard paradox. I didn't think about this until I was looking at the wiki page, and the contracted ships. 5. Mar 24, 2015 ### harrylin The ships could be made (artificially) to not contract in the initial frame; the string would still break. I think that I already clarified that the clocks inside the rocket are only very slightly out of synch according to "A", and that this is caused by the rocket's contraction (due to the contraction, the front clock's linear speed is all the time very slightly less than that of the rear clock). The inverse is incorrect: the rate at which clocks tick or the way they are synchronized has no effect on the length of the rocket - that is unphysical! And then, I'm afraid that someone else will propose to put clocks in the 4 engines in order to clarify that the engines contract. It's easy to make the string much longer than the rockets, so that their contraction can be ignored; no need to complicate it with 4 engines or breaking rockets. And see next! That drawing nicely illustrates that the contraction of the rockets can be made totally irrelevant. If the rockets as drawn there are made, with some technical means, to have a "proper stretching" such that they do not length contract in the original rest frame, it will make no difference at all for the breaking of the string. Last edited: Mar 24, 2015 6. Mar 24, 2015 ### 1977ub Agreed. The string would break and the ships would break. All of these effects are "only slight" depending upon the actual velocities in play. Also, by out of synch I should clarify that I don't mean simply not ticking at the same *rate* but more importantly in the ladder/barn way, i.e. the back clock of each ship is presumably set earlier due to being at the back. Surely this applies? I'm not up to doing the math for the accelerating case, but in a non-accelerating case, a moving ship is seen to be shortened, and with the back clock set later than the front - by viewers in the "platform" frame. This is the "out of synch" I refer to. The RoS out-of-synch which accompanies length contraction. Of course. I was grasping for an inference, a comparison with the non-stressed familiar 2nd-frame-train, not implying direct cause and effect. Not at all. I would think pointing out that the clocks within the rockets are not in synch would shatter the sleight-of-hand of the way the "two simultaneously accelerating" rockets scenario is set up.
It doesn't matter how long the rockets are compared to the rope/space between them. If they contract, it will put stress on the rope. Why does this not happen with the 2nd-frame-train? Everything contracts together. The scenario here has been designed so that the whole getup can't contract *together*. If we set the string up for "proper stretching" as well, it won't break. My real point is that for platform observers, an unaccelerated moving flotilla of ships is expected to be shortened, and with the rear clock set ahead of the front clock. I'm trying to find the differences here. I realize that on an unaccelerated moving ship, clocks can be synchronized such that they appear unsynchronized to a platform viewer. I'm not quite sure what can be said analogously for the case of a single accelerating ship, its clocks, and comparison with a platform viewer. 7. Mar 24, 2015 ### harrylin Certainly not! Once more, your statement in your first post, that "right away, "A" will measure that for each rocket, a clock by the engine is running faster than the clock by the head of that same rocket", is totally wrong. The back clock of each ship is, just like the clocks of the two ships, set in synch to the original rest frame before take-off, and if their travel histories are identical, that cannot change. Let's take the example of your 4 clocks that are made to accelerate identically in "A"'s rest frame thanks to a slight artificial stretching of the rockets. For that case they will remain in synch with each other according to the original frame, the rest frame of "A". That is - happily - not required to understand Bell's spaceship example. It accompanies length contraction if the operator performed a so-called Einstein synchronization at that velocity. Not at all: "simultaneously" refers to the launch pad reference system. It's as much a "sleight-of-hand" as synchronous clocks in the GPS system that you may use in your car. Please look again carefully at the Wikipedia sketch. Once more: as pictured there, Bell's example is insensitive to length contraction by the rockets. It sounds as if you are bugged by the bug that bugged Bell's colleagues. Material objects length contract with their increase of speed, but their speed increase cannot affect the space between them. The point of Bell's spaceship example is that the string undergoes "proper stretching" so that it breaks... Here you are getting to the essential point: shortened as compared to a measurement with instruments of the flotilla, if those have first been synchronized according to convention. The main difference here is that Bell discusses physical effects due to a change in velocity, while the other examples merely examine differences in measurements between two independent inertial reference systems. A similar issue arises with time dilation: the discussion of effects of a change in velocity ("twin paradox") is quite different from the discussion about how two systems in inertial motion measure each other (mutual time dilation). Observational symmetry is broken with a change of velocity. Does that help? 8. Mar 24, 2015 ### 1977ub I have to start again. :) One rocket, with acceleration = 0, moving with positive velocity V wrt ref frame S, is measured by observers in S to be shortened (compared to an identical rocket sitting next to the platform) in the direction of motion, and also to have a clock at the back which is set later than the clock in the front (assuming that the denizens have Einstein sync'd them).
Now, give that rocket a slight acceleration, and all of those effects still hold, and their extent is slightly changing with time. For observers in S, the clock at the back is getting increasingly ahead compared with a clock at the nose, and the rocket is getting progressively shorter. Any problem there? (The denizens won't be able to do exact Einstein sync if accelerating though, right?) Last edited: Mar 24, 2015 9. Mar 25, 2015 ### harrylin I immediately noticed two problems with that: 1. Once more: according to S the clock at the back is not getting increasingly ahead compared with a clock at the nose (I explained why it's even getting slightly less ahead). By what physical means, do you think, would S propose that the clock at the back should tick faster than the clock at the nose?? 2. You seem to try to show something completely different from what Bell's spaceship example shows! How does your example demonstrate that one should not confound the length contraction of material objects with the space between them? It was apparently the mistaken notion of space contraction that made Bell's example a "paradox" for many of Bell's colleagues (this is also referred to in the intro in Wikipedia). 10. Mar 25, 2015 ### stevendaryl Staff Emeritus What you're saying isn't correct. Let's assume that the rocket is "Born-rigid". What that means is that it has the same length $L$ in any frame in which it is (momentarily) at rest. Then it follows, by the mathematics of relativity, that as the rocket accelerates, it keeps getting shorter and shorter, as viewed in the original rest frame S. Now, think about what "getting shorter" means. It means that the rear end of the rocket is getting closer to the front end of the rocket. Which means that the rear end is traveling (slightly) faster than the front end. Which means (by time dilation) that the rear clock is running slightly slower than the front clock. So what you're saying is exactly backwards. The rear clock gets farther and farther behind the front clock. 11. Mar 25, 2015 ### stevendaryl Staff Emeritus I think a point of confusion is clock synchronization in different frames. According to the Lorentz transformations, if (1) a rocket is moving at constant velocity v relative to frame S, and (2) the clocks at the front and the rear are synchronized, according to the rocket's reference frame (call it S'), then according to frame S the front clock will be behind the rear clock by an amount $\delta t' = \frac{v L'}{c^2}$ where $L'$ is the length of the rocket in its own rest frame (by length contraction, $L = \frac{L'}{\gamma}$ is the length in frame S). Note the phrase: if the clocks are synchronized in frame S'. That's not going to happen naturally; the people on board the rocket have to adjust the clocks to make that happen. They have to SET the front clock so that it's synchronized with the back clock. So you can imagine a constantly accelerating rocket to be approximated by the following discrete process: 1. The rocket is initially at rest in some frame $S_0$. The clocks at the front and rear are synchronized in that frame. 2. At time $t=0$, the rocket accelerates instantaneously to speed $\delta v$ relative to $S_0$. So it's at rest in a new frame, $S_1$. 3. The front clock must be set back by an amount $\delta t' = \frac{\delta v L}{c^2}$ in order for the two clocks to be in synch in frame $S_1$. 4. At time $t = t_1$, the rocket again accelerates to speed $\delta v$ relative to $S_1$. 5. Again, the front clock must be set back. 6. etc. 
Every time the rocket accelerates, the front clock must be set back. If you didn't continually adjust the front clock, then the front clock would get farther and farther ahead of the rear clock. 12. Mar 25, 2015 ### 1977ub Instead of acceleration, let us consider a sequence of rockets, each moving with constant velocity, only progressively faster. One way to visualize the clocks being out of synch to the platform viewer is that there is a light beacon in the center of the rocket. For observers in that vehicle's frame, the ping guarantees that the end clocks are in synch. For a viewer in initial frame S, the light signal rushes backward to the rear of the vehicle, triggering the clock to tick forward, before the corresponding signal from the beacon reaches the (receding) front clock. Therefore the clock at the rear is set to a later time than the one in the front. Now let's move on to a faster moving vehicle. The effect is more pronounced. Observers in S now measure that the rear of the 2nd vehicle is set even later with regard to the front clock than in the initial vehicle's case. etc. No? 13. Mar 25, 2015 ### stevendaryl Staff Emeritus When people say that the front clock on a rocket runs faster than the rear clock, they are NOT talking about synchronization. Forget about synchronization of front and rear clocks, and instead consider the following thought experiment: Take two clocks in the rear of the rocket. Set them both to $t=0$. Leave one clock in the rear, and bring the other clock to the front. Let both clocks continue to run for one year. Then bring the front clock back to the rear. Then the clock that had been in the front of the rocket will show more elapsed time than the clock that had been in the rear of the rocket the whole time. 14. Mar 25, 2015 ### stevendaryl Staff Emeritus You cannot keep the clocks in the front and rear synchronized in an accelerating rocket, if they are identical clocks. 15. Mar 25, 2015 ### 1977ub Yes, this is precisely the effect I wish to ignore. I am attempting to find a way to bring the insights of the ladder/barn paradox - in which there is no "real" shortening apart from the one related to relativity of simultaneity - into the situation of Bell's spaceship paradox. I'm not quite there yet, and am hoping to get there, a step at a time. 16. Mar 25, 2015 ### 1977ub Perhaps someone can confirm this for me - another situation, very similar to the other cases being discussed here. Just like Bell's, except that the programmed identical acceleration goes on for a finite amount of time, then the engines shut off (the times at which they do so only being the same in the platform frame) and both ships coast after that. The platform observers now find that even though the backs of the ships are the same distance from one another as when they began, and the fronts of the ships are the same distance from one another as when they began, the two now-coasting ships are both shortened. Is this correct? 17. Mar 25, 2015 ### stevendaryl Staff Emeritus Hmm. What I described is a real effect. It's not a coordinate effect. 18. Mar 25, 2015 ### stevendaryl Staff Emeritus That is correct, as I understand it. 19. Mar 25, 2015 ### 1977ub I'm trying to figure out if this is similar enough to Bell's spaceship paradox to be illustrative.
The ships are shortened, and while the distance between the backs of the ships remains the same to the original observer, the ships are shortened to him, and so the distance from the back of the front ship to the front of the back ship is shortened (although not by as great a percentage as either ship). Observers on the ships agree that their ships are now farther apart than they were before their acceleration phase. They will not be surprised to see that the rope has broken. Is there something extra, something mysterious, about the "always accelerating" detail of Bell's spaceship paradox which negates or calls into question the relevance of this particular narrative? 20. Mar 25, 2015 ### A.T. The breaking of the rope has nothing to do with the contraction of the ships in the original rest frame. The rope is attached to points on the ships which have a constant distance in the original rest frame, but it still will break.
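For readers who want numbers to go with posts 8 and 11, here is a minimal Python sketch; the 100 m proper length and 0.6c speed are illustrative assumptions, not values from the thread:

```python
import math

c = 299_792_458.0   # m/s
L = 100.0           # m, assumed proper length of one rocket
v = 0.6 * c         # assumed coasting speed relative to frame S

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
length_in_S = L / gamma   # contracted length measured in S
offset = v * L / c**2     # Einstein-sync clock offset seen from S (post 11)

print(f"gamma = {gamma:.3f}")                 # -> 1.250
print(f"length in S = {length_in_S:.1f} m")   # -> 80.0 m
print(f"clock offset = {offset*1e9:.1f} ns")  # -> ~200.1 ns, rear clock ahead
```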
https://intelligencemission.com/free-electricity-machine-free-evening-electricity.html
For example, it influences a lot the metabolism of plants and animals, things that cannot be explained by the attraction-repulsion paradigm. Forget the laws of physics for a minute – ask yourself this – how can a device spin a rotor that has a balanced number of attracting and repelling forces on it? Have you ever made one? I have tried several. Gravity motors – show me a working one. I'll bet if anyone gets a "vacuum energy device" to work, it will draw in energy to replace the energy leaving via the wires or output shaft, and it is therefore no different from solar power in principle and is not a perpetual motion machine. Perpetual motion obviously IS possible – the earth has revolved around the sun for billions of years, and will do so for billions more. Stars revolve around galaxies, galaxies move at incredible speed through deep space, etc. Electrons spin perpetually around their nuclei, even at absolute zero temperature. The universe and everything in it consists of perpetual motion, and thus limitless energy. The trick is to harness this energy usefully, for human purposes. A lot of valuable progress is lost because some sad people choose to define a free-energy device as "a perpetual motion machine existing in a completely closed system", and they then shelter behind "the laws of physics", incomplete as these are known to be. However, if you open your mind to accept a free-energy definition as being "a device which delivers useful energy without consuming fuel which is not itself free", then solar energy, tidal energy, etc. classify as "free energy". Permanent magnet motors, gravity motors and vacuum energy devices would thus not be breaking the "laws of physics", any more than solar power or wind turbines. There is no need for unicorns of any gender – just common sense, and a bit of open-mindedness. What is the name he gave it for research reasons? Thanks for the discussion. I appreciate the input. I assume you have investigated the claims and found none worthy of further research? What element of the idea is failing? If one is lucky enough to keep something rotating on its own, the drag of a crankshaft or the drag of an "alternator" to produce electricity at the same time seems like it would be too much to keep the motor running. Forget about discussing which type of battery it may charge or which vehicle it may power – the question is, does it work? No one anywhere in the world has ever gotten a magnetic motor to run, let alone power anything. If you invest in one and it seems to be taking a very long time to develop, it means one thing – you have been stung. Don't say you haven't been warned. As an optimist myself, I want to see it work and think it can. It would have to be more than self-sustaining, enough to recharge offline Li-Fe-nano-phosphate batteries. If it worked, you would be able to buy a guaranteed working model. This has been going on for many years – still not one has worked. Ignorance of the laws of physics does not allow you to break those laws. I'm not supposed to write here, but what you people here believe is possible is true. The only problem is, if one wants to create what we call "magnetic rotation", one cannot use the fields. There is a small area in any magnet called the "magnetic centers", which is many times stronger than the fields.
The sequence is before pole center and after face center, and therefore, unlike other motors, one must mesh the stationary centers and work the rotation from the inner of the center to the outer. The fields are the reason a PM drive is very slow: the fields don't allow kinetic creation, because they limit the magnetic center distance. This is why it is possible to create magnetic rotation as you all believe and know, BUT one can never do it with a rotor. In my opinion, if somebody built a power-generating device, and manufactured and sold it in stores, then everybody would be buying it and installing it in their houses and cars. But what would happen then to the millions of people around the world who make their living from the now-existing energy industry? I think if something like that happened, the world would be in chaos. I have one more question. We are all building motors that run with the repel end of the magnets only. I have read a lot on magnets and their fields, and one thing I read a lot about is that if used this way all the time, the magnets lose their power quickly; if they both attract and repel, then they stay in balance and last much longer. My question is, in repel mode, how long will they last? If it's not very long, then the cost of the magnets makes the motor not worth building, unless we can come up with a way to use both poles – which, as far as I can see, might be impossible. For those who have been following the stories of impropriety, illegality, and even sexual perversion surrounding Hillary Clinton (at times in connection with husband Bill), from Filegate to Benghazi to Pizzagate to Uranium One to the private email server, and more recently with Clinton Foundation malfeasance in the spotlight surrounded by many suspicious deaths, there is a sense that Clinton must be too high up, have too much protection, or be too well-connected to ever have to face criminal charges. Certainly, if one listens to former FBI investigator James Comey's testimony on his kid-gloves handling of the Clinton private email server investigation, one gets the impression that he is one of many government officials in Clinton's back pocket. Best to leave possible sources of motive force out of it. My 0.02. Hey, I forgot about the wind generator that you said you were going to stick with right now. I am building a vertical wind generator right now, but the thing you have to look at is whether you have enough wind all the time to do what you want; even if all you want to do is run a few things in your home, it will be more expensive to run them off of it than to stay on the grid. I do not know how much batteries cost there, but here they are very expensive now. Just buying the batteries alone kills any savings you would have had on your power bill. All I am building mine for is to power a few things in my greenhouse and to have some emergency power along with my gas generator. I live in Utah, in the Salt Lake valley, and the wind blows a lot, but there are days when there is nothing or just a small breeze, and every night there is nothing unless there is a storm coming. I called a battery company here and asked about batteries, and the guy said he wouldn't even sell me a battery until I knew what my generator put out.
I was looking into forklift batteries, and he said people get the batteries and hook up their generator, and the generator will not keep up with keeping the batteries charged and supplying the load being used at the same time; thus the batteries drain too far, never charge all the way, and go bad too soon. So there are things to look at as you build, especially the cost. Hey, I went on the net yesterday and found the same site on the shielding, and it has what I think will help me a lot. Sounds like you're going to become a quitter on the mag motor – going to cheat and feed power into it. I'm just kidding; have fun. I have decided that I will not get my motor to run any better than it does, and so I am going to design a totally new and different motor, using both the magnets and the shielding differently; if it works, it works, and if not, oh well – just try something different. You might want to look at what was told to Gilgamesh about the electromagnets before you go too far, unless you have some fantastic idea that will give you good over-unity. No "boing, boing"… What I am finding is that the abrupt stopping and restarting requires more energy than the magnets can provide. They cannot overcome this. So what I have been trying to do is to use a circular, non-stop motion to accomplish the attraction/repulsion… whadda ya think? If anyone wants to know how to make one, contact me. It's not free energy to make a permanent magnet motor without a power source. The magnets only have to be arranged in an imbalanced state. They will always try to seek equilibrium, but won't be able to. The magnets don't produce the energy; they only direct it. Think: repeating decimal… It's a (mythical) motor that runs on permanent magnets only with no external power applied. How can you miss that? It's so obvious. Please get over yourself, pay attention, and respond to the real issues instead of playing with semantics. @Foulsham I'm assuming when you say magnetic motor you mean MAGNET MOTOR. That's like saying democratic when you mean democrat. They are both wrong, because democrats don't do anything democratic but force laws to create other laws to destroy the USA for the UN and New World Order. There are thousands of magnetic motors. In fact, all motors are magnetic, whether from coils only, coils with magnets, or magnets only. It is not looking positive for the magnet-only motors at this time, as those are being bought up by the power companies as soon as they show up. We use 60 Hz in the USA, but the 50 Hz used in Europe is more efficient. How can you quibble endlessly on and on about whether a "Magical Magnetic Motor" that does not exist produces AC or DC (just an opportunity to show off your limited knowledge)? FYI – the "Magical Magnetic Motor" produces neither AC nor DC, at no particular cycles or volts! It produces current with a Genesis waveform, a voltage that adapts to any device, an amperage that adapts magically, and is perfectly harmless to the touch. LOL. I doubt very seriously that we'll see any major application of free energy models in our lifetime; but rest assured, a couple hundred years from now, when the petroleum supply is exhausted, the "Powers That Be" will "miraculously" deliver free energy to the masses, just in time to save us from some societal breakdown.
But by then, they'll have figured out a way to charge you for that, too. If two individuals are needed to do the same task, one trained in "school" and one self-taught, and the self-taught individual succeeds where the "formally educated" person fails, would you deny the results of the autodidact, simply because he wasn't traditionally schooled? I'd hope not. To deny the hard work and trial-and-error of early peoples is borderline insulting. You have a lot to learn about energy forums and the debates that go on. It is not about research – well, not about proper research. The vast majority of "believers" seem to get their knowledge from bar-room discussions, or from free energy websites and videos. For a start, I'm not bitter. I am, however, annoyed at that sector of the community who for some strange reason have chosen to take as a starting point "there is such a thing as free energy from nowhere" and proceed to tell everyone to get on board without any scientific evidence or working versions. How anyone cannot see that is beyond me; it's appalling. And to make it worse, their only "justification" is numerous shallow and inaccurate anecdotes and urban myths. As for my experiments etc., they were based on electronics, and not having a formal education in that area, I found it a very frustrating journey. Books on electronics (the do-it-yourself types) are generally poorly written and were not much help. I also made a few magnetic motors, which required nothing but clear thinking and patience. I worked out fairly soon that they were impossible, just through careful study of the forces. I am an experimenter and hobbyist inventor. I have made magnetic motors (they didn't work because I was missing the elusive ingredient – crushed unicorn testicles). The journey is always the important part and not the end, but I think it is stupid to head out on a journey where the destination is unachievable. Just as the Holy Grail is a myth, so is a free energy device. Never ignore the laws of physics, and use common sense, when looking at a device (e.g. a magnetic motor) that promises unending power. Thanks – you told me some things I needed to know, and it just confirmed my thinking on the way we are building these motors. My motor runs, but not the way it needs to in order to be of any real use. I am going to abandon my motor and go with a whole different design.
The magnets are going to be a different shape, set in the rotor differently, so that shielding can be used in a much more efficient way. Sorry for getting a little snippy with you; I just do not like being told what I can and cannot do. Maybe it was the fact that when I was a kid I always got told no. It's something I still have a problem with, even at my age. After I get more info on the shielding, I will probably be gone for a while, while I design and build my new motor. I am a mechanic for a concrete pumping company, and we are going into spring now here in Utah, which means we start to get busy. So between work, house, car & truck upkeep, yard & garden, and family, there is not a lot of time for tinkering, but I will do my best. Please get back to us on the shielding. As I stated, magnets lose strength for specific reasons, and mechanical knocks etc. are what cause the cheap ones to do exactly what you describe. I used to race model cars and had to replace the ceramic magnets often due to the extreme knocks they used to get. My previous post about magnets losing their power was specifically about neodymium types – these have a very low rate of "aging", and as my research revealed, they are stated as losing only a little strength over their first years. But extreme mishandling will shorten their life – normal use won't. Fridge magnets and the like have very weak abilities to hold their magnetic properties – I certainly agree. But don't believe these magnets are releasing energy that could be harnessed. I had also used a universal contractor's glue inside the hole for extra safety. You don't need to worry about this on the outside sections. Build a simple square (box) frame to give enough room for the outside sections to move in and out. The "depth" or length of it will depend on how many wheels you have in it. On the ends you will need to have a shaft mount with a greasable bearing. The outside diameter of this doesn't really matter, but the inside diameter needs to be the same size as the shaft in the wheel. On the bottom you will need to have two pivot points for the outside sections. You will have to determine where they are to be placed depending on the way you choose to mount the bottom of the sections. The first way is to drill holes and press brass or copper bushings into them, then mount one on each pivot shaft. (That is what I did, and it worked well.) The other option is to use a clamp-type mount with a hole in it to go on the pivot shaft. Even the use of replaceable magnesium plates in a battery every so many miles gives the necessary range that families need for long trips. Magnet-only motors are easy to build. There are plans around. They are cheap to build. Trouble is, no one knows how to get them to spin unaided. I have lost count of the people I have corresponded with who seriously believe that magnetising a magnet somehow gives it energy that is then used to drive the motor. Once rumours start about how magnetic motors "work", they spread through the free energy websites and forums as "truth". The blindly ignorant population believe what is proclaimed because they don't have the education or experience to be able to question the bogus claims.
I suppose with people wholeheartedly believing an all-powerful supernatural being created the entire universe, it isn't hard for them to believe a magnet can power a motor. Both thoughts demonstrate ignorance. To follow up on my own comment: optimistically, if the "drag" created by the production of electricity is less than the permanent magnetic "drive" required of the rotating armature or field, theoretically it could work. Someone noted in a previous posting that Tesla already developed this motor.
https://www.arcfonts.com/2022/09/26/you-are-able-to-level-range-along-with-your-flash/
You can measure distance with your thumb or finger. Now, the fist takes up about $10$ degrees of view when held straight out. So, pacing off backwards until the fist totally occludes the tree will give the length of the adjacent side of a right triangle. If this distance is $30$ paces, what is the height of the tree? Well, we need some facts. Assume your pace is $3$ feet. Then the adjacent length is $90$ feet. The multiplier is the tangent of $10$ degrees, or: Which, for the sake of memory, we will say is $1/6$ (a $5$ percent error). So the answer is roughly $15$ feet: As well, you can use your thumb instead of your fist. To use the fist, you would multiply the adjacent side by $1/6$; to use your thumb, by about $1/30$, as this approximates the tangent of $2$ degrees: This can be reversed. If you know the height of something a distance away that is covered by your thumb or fist, then you would multiply that height by the appropriate amount to find your distance. ## Basic properties The sine function is defined for all real $\theta$ and has a range of $[-1,1]$. Of course, as $\theta$ winds around the $x$-axis, the position of the $y$ coordinate begins to repeat itself. We say the sine function is periodic with period $2\pi$. A graph will show: The graph shows two periods. The wavy aspect of the graph is why this function is used to model periodic motions, such as the amount of sunlight in a day, or the alternating current powering a computer. From this graph – or by considering when the $y$ coordinate is $0$ – we see that the sine function has zeros at any integer multiple of $\pi$, or $k\pi$ for $k$ in $\dots,-2,-1, 0, 1, 2, \dots$. The cosine function is similar, in that it has the same domain and range, but is "out of phase" with the sine curve. A graph of both shows the two are related: The cosine function is a shift of the sine function (or vice versa). We see that the zeros of the cosine function occur at points of the form $\pi/2 + k\pi$, $k$ in $\dots,-2,-1, 0, 1, 2, \dots$. The tangent function does not have all $\theta$ in its domain; instead, those points where division by $0$ occurs are excluded. These occur when the cosine is $0$, or again at $\pi/2 + k\pi$, $k$ in $\dots,-2,-1, 0, 1, 2, \dots$. The range of the tangent function is all real $y$. The tangent function is also periodic, though not with period $2\pi$, but rather just $\pi$. A graph will show this. Here we avoid the vertical asymptotes by keeping them out of the plot domain and layering several plots. $r\theta = l$, where $r$ is the radius of a circle and $l$ the length of the arc formed by the angle $\theta$. The two are related, as a circle of $2\pi$ radians is also $360$ degrees. So to convert from degrees to radians requires multiplying by $2\pi/360$, and to convert from radians to degrees requires multiplying by $360/(2\pi)$.
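For instance, with a concrete angle of my own choosing (not from the original page), the conversion runs both ways:

$$45^\circ \times \frac{2\pi}{360} = \frac{\pi}{4}\ \text{radians}, \qquad \frac{\pi}{6}\ \text{radians} \times \frac{360}{2\pi} = 30^\circ.$$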
The deg2rad and rad2deg functions are available for this task. In Julia, the functions sind, cosd, tand, cscd, secd, and cotd are available to simplify the task of writing the two operations (that is, sin(deg2rad(x)) is equivalent to sind(x)). ## The sum-and-difference formulas Consider the point on the unit circle $(x,y) = (\cos(\theta), \sin(\theta))$. In terms of $(x,y)$ (or $\theta$), is there a way to represent the angle found by rotating an additional $\theta$, that is, what is $(\cos(2\theta), \sin(2\theta))$?
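For completeness, the standard double-angle identities answer this question; these are well-known results stated here as a pointer, not text recovered from the original page:

$$\cos(2\theta) = \cos^2\theta - \sin^2\theta = x^2 - y^2, \qquad \sin(2\theta) = 2\sin\theta\cos\theta = 2xy.$$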
https://kanavgupta.xyz/blog/function-end-halting.html
Kanav Aspiring Cryptology Researcher @kanav99 # Finding function end in Assembly code I have one more interesting problem in NP-completeness. ## Problem # Suppose you have an infinitely long text section of assembly code, be it of any modern ISA. Forget everything about virtual addressing; every address is either relatively addressed or based at the start of this assembly code. At the beginning of the assembly code, you have a function. You have to find the end of this function - that is, the maximum address of an instruction that execution could reach without following function calls. We can also convert it into a decision problem - given $$n$$ instructions, can the execution ever reach the $$n+1$$-th instruction without following function calls? ## Intuition # I have a strong intuition that this problem is very much related to the Halting Problem. ## Where do you see this problem? # When C code is compiled with GCC, the function ends are not stored inside the generated binary. Function starts can be found using the exported symbol table. ## Interested? # Hit me up :) Insights from people who develop reverse engineering tools would be much appreciated :)
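Not from the post: a toy sketch of the kind of reachability pass the problem asks for, over an invented instruction format rather than a real ISA. Direct control flow is easy to follow; an indirect jump (whose target depends on arbitrary computation) is where the halting-style undecidability the post intuits would enter, so this sketch simply refuses to handle it.

```python
# Toy reachability over a simplified instruction stream (hypothetical format,
# not a real ISA): find the largest address execution can reach from address 0
# without following calls. Addresses are list indices.

def function_end(instructions):
    """instructions: list of (op, arg) tuples."""
    reachable, work = set(), [0]
    while work:
        pc = work.pop()
        if pc in reachable or pc >= len(instructions):
            continue
        reachable.add(pc)
        op, arg = instructions[pc]
        if op == "ret":
            continue                      # no fallthrough
        elif op == "jmp":
            work.append(arg)              # unconditional: no fallthrough
        elif op == "jcc":
            work.extend([arg, pc + 1])    # conditional: both edges possible
        elif op == "jmp_reg":
            raise ValueError("indirect jump: exact answer is undecidable")
        else:                             # 'call' falls through, not followed
            work.append(pc + 1)
    return max(reachable)

prog = [("mov", 0), ("jcc", 4), ("call", 99), ("ret", None), ("ret", None)]
print(function_end(prog))  # -> 4
```

Assuming every conditional branch can go both ways, this gives a sound over-approximation of the function end; deciding whether a given `jcc` is actually ever taken is exactly the part that reduces to halting.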
https://www.physicsforums.com/threads/debroglie-wavelength-considering-relativistic-effects.115124/
# DeBroglie wavelength considering relativistic effects 1. Mar 22, 2006 ### *Alice* "Electrons are accelerated by a potential of 350kV in an electron microscope. Calculate the de Broglie wavelength of those electrons taking relativistic effects into account" I attempted the following: W = W(kin) = 350keV now $$W_{kin}= (1-\gamma)mc^2$$ so, now one could solve for gamma and find the velocity of the particle. Afterwards $$p=mv=h/\lambda$$ HOWEVER: I get a negative result in a root when I try to solve for v. Therefore I think that my energy formula must be wrong (I already excluded calculation errors). Can anyone see it? 2. Mar 22, 2006 ### nrqed It's $(\gamma -1) m c^2$ (the gamma factor is always larger than or equal to 1) Patrick
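A numeric check of the exercise (my addition, not part of the thread), using the corrected formula from the reply and the fully relativistic momentum rather than $p = mv$:

```python
# De Broglie wavelength of electrons accelerated through 350 kV,
# using the corrected W_kin = (gamma - 1) m c^2.
import math

mc2 = 510.998950e3        # electron rest energy, eV
W   = 350e3               # kinetic energy, eV
hc  = 1239.841984         # h*c in eV*nm

gamma = 1 + W / mc2                              # from W = (gamma - 1) m c^2
pc = math.sqrt((gamma * mc2) ** 2 - mc2 ** 2)    # relativistic p*c, in eV
lam = hc / pc                                    # de Broglie wavelength, nm

print(f"gamma = {gamma:.4f}, lambda = {lam * 1e3:.3f} pm")
```

This prints gamma ≈ 1.685 and a wavelength of about 1.79 pm, which is why the relativistic treatment matters: the non-relativistic formula would overestimate the wavelength noticeably at 350 keV.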
https://www.physicsforums.com/threads/rotational-motion-question.384700/
# Rotational motion question 1. Mar 8, 2010 ### sheepcountme 1. The problem statement, all variables and given/known data Two cars race around a circular track. Car A accelerates at 0.340 rad/s2 around the track, and car B at 0.270 rad/s2. They start at the same place on the track and car A lets the slower-to-accelerate car B start first. Car B starts at time t = 0. When car A starts, car B has an angular velocity of 1.40 rad/s. At what time does car A catch up to car B? 2. Relevant equations rotational motion equations 3. The attempt at a solution I attempted to use rotational motion equations and set up theta car A = theta car B, since they will be in the same place (theta) at the same time. As in: omega initial A x time + 1/2 alpha A time^2 = omega initial B x time + 1/2 alpha B time^2. So: 0(t) + 1/2(.340)t^2=1.4(t)+1/2(.270)t^2, and then I solved for time using the quadratic formula and got 40 seconds, but this was incorrect. Could you tell me where I went wrong?? Thanks! 2. Mar 8, 2010 ### aim1732 Is the angle really equal? The second car has already traversed some angle when the other starts. So angle by faster car > angle by slower car. 3. Mar 8, 2010 ### dpeagler You first have to find delta theta to see how far car B has gone. Then find how long it took to traverse that distance. Then plug your delta theta into the equations for car A with its respective angular velocity, then add the two separate times (first time while car A was waiting + time it took car A to catch up). Should be way less than 40 seconds if I'm not mistaken. 4. Mar 8, 2010 ### sheepcountme Okay, so I used omegafinal^2=omegainitial^2+2(alpha*deltatheta) to find delta theta for car B, coming to delta theta=3.63 rad. Then omegafinal=omegainitial+alpha*t to find the time for car B to reach this point, getting t=5.19s. Then I set the previous delta theta equal to an equation for Car A: (3.63)=omegainitial*t+1/2(alpha)t^2, so 3.63=0+1/2(.34)t^2, and solved for t, getting 4.62 seconds, and so I added this 4.62 to 5.19, getting 9.81; however this was incorrect... 5. Mar 8, 2010 ### dpeagler The first two parts you did are correct (when you found the radians and the time), as far as I can tell. You didn't put your theta initial B as 3.63 radians. This should fix your problem. Sorry I haven't had time to sit down and work it out to make sure this is the mistake, but I'm pretty sure. 6. Mar 10, 2010 ### sheepcountme So, using the equation above, I get 1/2(.340)t^2=3.63t+1/2(.270)t^2, so 0=-.035t^2+3.63t, and time ends up being 103.72s. This seems much too big... I hate needing someone to hold my hand through this, but there's really something I'm missing. 7. Mar 10, 2010 ### dpeagler You're still not using that equation correctly. $$\theta = \theta_{init} + \omega_{init}t + \frac{1}{2}\alpha t^{2}$$ (sorry about the formatting, still trying to get used to LaTeX.) You then set them equal to each other. You put your radians in the wrong place in the equation you listed. You put them where the initial angular velocity goes. This should bring your answer down by around a factor of two.
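A sketch of the setup the thread converges on (my code and numbers, not a posted answer, so worth double-checking): car B runs alone until its angular speed is 1.40 rad/s; car A then starts from rest, and we solve theta_A(tau) = theta_B(tau) for tau > 0, with B's head-start angle as the initial theta.

```python
# Catch-up time for the two cars, following the last hint in the thread.
import math

aA, aB, w0 = 0.340, 0.270, 1.40   # rad/s^2, rad/s^2, rad/s

t_head = w0 / aB                  # B's head start, ~5.19 s
th0 = w0 ** 2 / (2 * aB)          # angle B has covered by then, ~3.63 rad

# 0.5*aA*tau^2 = th0 + w0*tau + 0.5*aB*tau^2  ->  quadratic in tau
a, b, c = 0.5 * (aA - aB), -w0, -th0
tau = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(f"head start: {t_head:.2f} s, angle {th0:.2f} rad")
print(f"catch-up  : {tau:.1f} s after A starts "
      f"({t_head + tau:.1f} s after B starts)")
```

This gives tau ≈ 42.4 s after car A starts (≈ 47.6 s after car B starts), consistent with the final hint that the 103.72 s answer should drop by roughly a factor of two.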
http://openstudy.com/updates/4d506f96a805b7646a72ca0b
## anonymous 5 years ago what is 20% of 65? 1. anonymous What is 20% of 65? 2. anonymous How do I work the problem? 3. anonymous Getting a percentage of a value involves multiplying the value by the percentage over 100. Since this percentage is 20, $$\frac{20}{100} = 0.2$$, and the value you're looking for is $$0.2*65=13$$. 4. anonymous So any percentage has to be written over 100, simplified, then multiplied 5. anonymous What is 5% of 80?
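Applying the same method to the follow-up question (my worked line, not a reply from the original thread):

$$5\%\ \text{of}\ 80 = \frac{5}{100} \times 80 = 0.05 \times 80 = 4.$$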
https://www.jobilize.com/trigonometry/test/using-reference-angles-to-find-coordinates-by-openstax
# 7.3 Unit circle (Page 6/11) ## Using reference angles to find sine and cosine 1. Using a reference angle, find the exact value of $\cos(150°)$ and $\sin(150°).$ 2. Using the reference angle, find $\cos\frac{5\pi}{4}$ and $\sin\frac{5\pi}{4}.$ 1. $150°$ is located in the second quadrant. The angle it makes with the x-axis is $180°-150°=30°,$ so the reference angle is $30°.$ This tells us that $150°$ has the same sine and cosine values as $30°,$ except for the sign. $$\cos(30°)=\frac{\sqrt{3}}{2} \quad\text{and}\quad \sin(30°)=\frac{1}{2}$$ Since $150°$ is in the second quadrant, the x-coordinate of the point on the circle is negative, so the cosine value is negative. The y-coordinate is positive, so the sine value is positive. $$\cos(150°)=-\frac{\sqrt{3}}{2} \quad\text{and}\quad \sin(150°)=\frac{1}{2}$$ 2. $\frac{5\pi}{4}$ is in the third quadrant. Its reference angle is $\frac{5\pi}{4}-\pi=\frac{\pi}{4}.$ The cosine and sine of $\frac{\pi}{4}$ are both $\frac{\sqrt{2}}{2}.$ In the third quadrant, both $x$ and $y$ are negative, so: $$\cos\frac{5\pi}{4}=-\frac{\sqrt{2}}{2} \quad\text{and}\quad \sin\frac{5\pi}{4}=-\frac{\sqrt{2}}{2}$$ 1. Use the reference angle of $315°$ to find $\cos(315°)$ and $\sin(315°).$ 2. Use the reference angle of $-\frac{\pi}{6}$ to find $\cos\left(-\frac{\pi}{6}\right)$ and $\sin\left(-\frac{\pi}{6}\right).$ 1. $\cos(315°)=\frac{\sqrt{2}}{2},\ \sin(315°)=-\frac{\sqrt{2}}{2}$ 2. $\cos\left(-\frac{\pi}{6}\right)=\frac{\sqrt{3}}{2},\ \sin\left(-\frac{\pi}{6}\right)=-\frac{1}{2}$ ## Using reference angles to find coordinates Now that we have learned how to find the cosine and sine values for special angles in the first quadrant, we can use symmetry and reference angles to fill in cosine and sine values for the rest of the special angles on the unit circle. They are shown in [link]. Take time to learn the $(x,y)$ coordinates of all of the major angles in the first quadrant.
In addition to learning the values for special angles, we can use reference angles to find $(x,y)$ coordinates of any point on the unit circle, using what we know of reference angles along with the identities $x=\cos\theta$ and $y=\sin\theta.$ First we find the reference angle corresponding to the given angle. Then we take the sine and cosine values of the reference angle, and give them the signs corresponding to the y- and x-values of the quadrant. Given the angle of a point on a circle and the radius of the circle, find the $(x,y)$ coordinates of the point. 1. Find the reference angle by measuring the smallest angle to the x-axis. 2. Find the cosine and sine of the reference angle. 3. Determine the appropriate signs for $x$ and $y$ in the given quadrant. ## Using the unit circle to find coordinates Find the coordinates of the point on the unit circle at an angle of $\frac{7\pi}{6}.$ We know that the angle $\frac{7\pi}{6}$ is in the third quadrant. First, let's find the reference angle by measuring the angle to the x-axis. To find the reference angle of an angle whose terminal side is in quadrant III, we find the difference of the angle and $\pi:$ $$\frac{7\pi}{6}-\pi=\frac{\pi}{6}$$ Next, we will find the cosine and sine of the reference angle. $$\cos\left(\frac{\pi}{6}\right)=\frac{\sqrt{3}}{2} \qquad \sin\left(\frac{\pi}{6}\right)=\frac{1}{2}$$ We must determine the appropriate signs for x and y in the given quadrant. Because our original angle is in the third quadrant, where both $x$ and $y$ are negative, both cosine and sine are negative. $$\cos\left(\frac{7\pi}{6}\right)=-\frac{\sqrt{3}}{2} \qquad \sin\left(\frac{7\pi}{6}\right)=-\frac{1}{2}$$ Now we can calculate the $(x,y)$ coordinates using the identities $x=\cos\theta$ and $y=\sin\theta.$ The coordinates of the point are $\left(-\frac{\sqrt{3}}{2},-\frac{1}{2}\right)$ on the unit circle. Find the coordinates of the point on the unit circle at an angle of $\frac{5\pi}{3}.$ $\left(\frac{1}{2},-\frac{\sqrt{3}}{2}\right)$
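A quick numeric cross-check of the two results above (my addition, not part of the text):

```python
# Verify the reference-angle answers for 7*pi/6 and 5*pi/3 on the unit circle.
import math

for theta in (7 * math.pi / 6, 5 * math.pi / 3):
    x, y = math.cos(theta), math.sin(theta)
    print(f"theta = {theta:.4f}: ({x:+.4f}, {y:+.4f})")

# 7*pi/6 -> (-0.8660, -0.5000) = (-sqrt(3)/2, -1/2)
# 5*pi/3 -> (+0.5000, -0.8660) = (+1/2, -sqrt(3)/2)
```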
http://lists.slackbuilds.org/pipermail/slackbuilds-users/2014-July/012672.html
# [Slackbuilds-users] TeX Live 2014 #3 (third time's the charm?) Ivan Zaigralin melikamp at melikamp.com Thu Jul 3 06:12:40 UTC 2014 Everything builds, and the core package seems to work. I have some issue with mathdesign, though, even with everything installed. pdflatex-ing this file \documentclass{amsart} \usepackage[charter]{mathdesign} \begin{document} Hello world! \end{document} yields This is pdfTeX, Version 3.14159265-2.6-1.40.15 (TeX Live 2014 on Slackware restricted \write18 enabled. entering extended mode (./test.tex LaTeX2e <2014/05/01> Babel <3.9k> and hyphenation patterns for 78 languages loaded. (/usr/share/texmf-dist/tex/latex/amscls/amsart.cls Document Class: amsart 2009/07/02 v2.20.1 (/usr/share/texmf-dist/tex/latex/amsmath/amsmath.sty For additional information on amsmath, use the ?' option. (/usr/share/texmf-dist/tex/latex/amsmath/amstext.sty (/usr/share/texmf-dist/tex/latex/amsmath/amsgen.sty)) (/usr/share/texmf-dist/tex/latex/amsmath/amsbsy.sty) (/usr/share/texmf-dist/tex/latex/amsmath/amsopn.sty)) (/usr/share/texmf-dist/tex/latex/amsfonts/umsa.fd) (/usr/share/texmf-dist/tex/latex/amsfonts/amsfonts.sty)) (/usr/share/texmf-dist/tex/latex/mathdesign/mathdesign.sty (/usr/share/texmf-dist/tex/latex/graphics/keyval.sty) (/usr/share/texmf-dist/tex/latex/base/ifthen.sty) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbch/mdbch.cfg) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbch/mdbch.sty (/usr/share/texmf-dist/tex/latex/mathdesign/mdfont.def) (/usr/share/texmf-dist/tex/latex/mathdesign/mdsffont.def) (/usr/share/texmf-dist/tex/latex/mathdesign/mdttfont.def) (/usr/share/texmf-dist/tex/latex/xkeyval/xkeyval.sty (/usr/share/texmf-dist/tex/generic/xkeyval/xkeyval.tex)) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbch/ot1mdbch.fd)) (/usr/share/texmf-dist/tex/latex/base/fontenc.sty (/usr/share/texmf-dist/tex/latex/base/t1enc.def) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbch/t1mdbch.fd))) (./test.aux) (/usr/share/texmf-dist/tex/latex/mathdesign/mdacmr.fd) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbcmr.fd) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbch/omlmdbch.fd) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbch/omsmdbch.fd) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbch/omxmdbch.fd) (/usr/share/texmf-dist/tex/latex/amsfonts/umsa.fd) (/usr/share/texmf-dist/tex/latex/amsfonts/umsb.fd) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbch/mdamdbch.fd) (/usr/share/texmf-dist/tex/latex/mathdesign/mdbch/mdbmdbch.fd) Package mathdesign/mdbch Warning: Package 'amsfonts' shouldn't be used in conjo nction with package mdbch, on input line 3. [1{/usr/share/texmf-var/fonts/map/pdftex/updmap/pdftex.map}] (./test.aux) kpathsea: Running mktexpk --mfmode / --bdpi 600 --mag 0+403/600 --dpi 403 md-chr8y mktexpk: don't know how to create bitmap font for md-chr8y. mktexpk: perhaps md-chr8y is missing from the map file. kpathsea: Appending font creation commands to missfont.log. ) ==> Fatal error occurred, no output PDF file produced! On 07/02/2014 07:45 PM, Robby Workman wrote: > Okay, I think I've got something that we can all accept, even if we're > not entirely happy with it. Please go forth and test. > > http://rlworkman.net/pkgs/sources/14.1/texlive-2014/ > rsync://rlworkman.net/rworkman/sources/14.1/texlive-2014/ > > As before, these links are not permanent, so if you find this in > a search engine, don't be surprised to see 404. 
:-) > > What I'd prefer is a test to see if your stuff works using *only* the > plain "texlive" package - that's the one that I'm hoping Pat will like > and be able to shoehorn into Slackware. > > If something you need is missing from it, then make sure it's present > in the texlive-texmf-extra package. If not, then we have a problem. > If it is, then all is well, unless you have a really strong case for > why it should be included in the main texlive package. > > On the other hand, if you see something that has little reason to be > included in the main texlive package and should be moved into the > texlive-texmf-extra package instead, I'd love to hear that too. > > Assuming good feedback on testing and no major problems, these > will be making their way to the main SBo repo soonish. > > -RW > > > > _______________________________________________ > SlackBuilds-users mailing list > SlackBuilds-users at slackbuilds.org > http://lists.slackbuilds.org/mailman/listinfo/slackbuilds-users > Archives - http://lists.slackbuilds.org/pipermail/slackbuilds-users/ > FAQ - http://slackbuilds.org/faq/ >
https://www.rdocumentation.org/packages/lidR/versions/2.0.0/topics/homogenize
# homogenize ##### Point Cloud Decimation Algorithm This function is made to be used in lasfilterdecimate. It implements an algorithm that creates a grid with a given resolution and filters the point cloud by randomly selecting some points in each cell. It is designed to produce point clouds that have uniform densities throughout the coverage area. For each cell, the proportion of points or pulses that will be retained is computed using the actual local density and the desired density. If the desired density is greater than the actual density, it returns an unchanged set of points (it cannot increase the density). The cell size must be large enough to compute a coherent local density. For example, in a 2 points/m^2 point cloud, 25 square meter cells would be feasible; however, 1 square meter cells would not be, because density has no meaning at that scale. ##### Usage homogenize(density, res = 5, use_pulse = FALSE) ##### Arguments density numeric. The desired output density. res numeric. The resolution of the grid used to filter the point cloud. use_pulse logical. Decimate by removing random pulses instead of random points (requires running laspulse first). Other point cloud decimation algorithms: highest, random ##### Examples # NOT RUN { LASfile <- system.file("extdata", "Megaplot.laz", package="lidR") las = readLAS(LASfile, select = "xyz") # Select points randomly to reach a homogeneous density of 1 thinned = lasfilterdecimate(las, homogenize(1,5)) plot(grid_density(thinned)) # } Documentation reproduced from package lidR, version 2.0.0, License: GPL-3
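A minimal sketch of the grid-based idea described above; this is my illustration of the algorithm as documented, not the lidR implementation, and it assumes a simple per-cell random retention probability of desired density over actual density:

```python
# Grid-based point cloud decimation toward a uniform target density.
import random
from collections import defaultdict

def homogenize(points, density, res=5.0):
    """points: iterable of (x, y, z); density: desired points per m^2."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // res), int(p[1] // res))].append(p)

    target = density * res * res          # desired points per cell
    kept = []
    for pts in cells.values():
        keep_prob = min(1.0, target / len(pts))   # cannot increase density
        kept.extend(p for p in pts if random.random() < keep_prob)
    return kept

pts = [(random.uniform(0, 50), random.uniform(0, 50), 0.0) for _ in range(5000)]
thinned = homogenize(pts, density=1.0, res=5.0)
print(len(pts), "->", len(thinned))   # ~2500 for a 50 m x 50 m extent
```

As in the documentation, the cell size has to be large enough that `len(pts)` per cell is a meaningful local density estimate; with very small cells the retention proportion becomes noise.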
https://couryes.com/combinatorics-dai-xie-ma1510/
Math assignment help | Combinatorics | MA1510

## Closing the Bayesian Recursion

The Bayes posterior distribution (3.47) is not in the same form as the list given in (3.31) that defines the prior. It is approximated in the same "mean field" spirit as done in JPDA by using the single-object marginal distributions. The marginal distribution of object $j$ is defined as the sum over the set of existence events $\epsilon$ for which $\epsilon_j=1$ of the term-by-term integral of the posterior (3.47) over all $x^n \in \mathcal{X}^n, n \neq j$. Indeed, the posterior probability that object $j$ exists and is in state $x^j$ is given by

$$\begin{aligned} p_k\left(x^j, \epsilon_j=1 \mid \mathbf{y}_k\right) &= \sum_{\epsilon^{\prime}:\, \epsilon_j^{\prime}=1} p_k\left(x^j, \mathbf{N}_k=\epsilon^{\prime} \mid \mathbf{y}_k\right) \\ &= \sum_{\epsilon^{\prime}:\, \epsilon_j^{\prime}=1}\left[\int_{\mathcal{X}^{\epsilon^{\prime}} \backslash \mathcal{X}^{j}} p_k\left(x^{\epsilon^{\prime}}, \mathbf{N}_k=\epsilon^{\prime} \mid \mathbf{y}_k\right) \mathrm{d} x^{\epsilon^{\prime}} \backslash \mathrm{d} x^j\right]. \end{aligned}$$

A different way to do the same thing is to use the GFL of the marginal process. It is derived from (3.35) by substituting $h_n\left(x^n\right)=1, x^n \in \mathcal{X}^n, n \neq j$. Thus,

$$\begin{aligned} \Psi_k^{\text{JIPDA}}\left(h^j, g\right) ={}& \exp\left(-\lambda_k^c+\lambda_k^c \int_y g(y)\, p_k^c(y)\, \mathrm{d} y\right) \\ &\times\left[1-\chi_k^{j-}+\chi_k^{j-} \int_{\mathcal{X}^j} h^j\left(x^j\right) \mu_k^{j-}\left(x^j\right)\left(1-Pd_k^j\left(x^j\right)+Pd_k^j\left(x^j\right) \int_y g(y)\, p_k^j\left(y \mid x^j\right) \mathrm{d} y\right) \mathrm{d} x^j\right] \\ &\times \prod_{\substack{n=1 \\ n \neq j}}^{N}\left[1-\chi_k^{n-}+\chi_k^{n-} \int_{\mathcal{X}^n} \mu_k^{n-}\left(x^n\right)\left(1-Pd_k^n\left(x^n\right)+Pd_k^n\left(x^n\right) \int_y g(y)\, p_k^n\left(y \mid x^n\right) \mathrm{d} y\right) \mathrm{d} x^n\right] \end{aligned}$$

Substituting the Dirac delta train of (2.22) into (3.50) gives the secularized marginal GFL, $\Psi_k^{\text{JIPDA}}\left(h^j, \beta\right)$. It is a product of $N$ linear functions and an exponential of a linear function, so evaluating the cross-derivative using (C.37) and normalizing it gives the GFL of the Bayes posterior, $\Psi_k^{\text{JIPDA}}\left(h^j \mid \mathbf{y}_k\right)$. Substituting $h^j(\cdot)=\alpha_j \delta_x(\cdot)$ gives the secular form, $\Psi_k^{\text{JIPDA}}\left(\alpha_j \mid \mathbf{y}_k\right)$. By inspection it is linear in $\alpha_j$.

## Resolution/Merged Measurement Problem

The problem of unresolved or merged measurements has received relatively little attention in the literature. In practice, the problem is usually ignored; all measurements are assumed resolved. In many scenarios, however, the issue is crucial and can become more serious than incorrect object-measurement assignments [15]. In [16], a hard threshold resolution model is developed for a fixed grid of resolution cells for the JPDA filter for $N=2$ objects. The idea is extended to MHT in [4].
In [17], the resolution function is switched from a hard $\{0,1\}$ threshold function to a probabilistic Gaussian function. The general unresolved tracking problem for $N \geq 2$ objects is addressed for both JPDA and MHT in [18-20]. An unresolved object tracking filter is developed here using AC techniques for JPDA with $N=2$ objects of interest. It is referred to as the JPDA/Res filter. It is closely related to, but different from, the first application of AC techniques to model resolution problems in [21]. The JPDA filter assumes that a given object can generate at most one measurement per scan. It also assumes that a measurement originates from at most one object. That is, a measurement is either clutter-originated or induced by a single object of interest. In reality, sensors have limited resolution capability. The term resolution refers to the ability of the sensor to determine that two closely spaced objects are indeed distinct. Resolution depends on physical characteristics of the sensor, the signal processing algorithms, and the physical characteristics of the signal, such as relative object signal strength. In terms of "peaks" in the sensor response surface (see Sect. $2.3$ of Chap. 2), two objects are unresolved if there is only one peak in the surface and resolved if there are two peaks. Loosely speaking, closely spaced objects may have resolution issues. (Distance is measured in the sensor space, not the state space, since sensors may have observability limitations, e.g., they may measure angles only and not range.) Resolution and detection are different phenomena - two objects can be resolvable, while one or both are undetectable. For example, they may be far apart in the measurement space (i.e., theoretically resolvable) but have weak signal strength (i.e., undetectable). Similarly, they can be unresolvable and undetectable.
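As a toy illustration of the existence-event marginalization defined in the previous section (my construction, with a uniform stand-in for the joint posterior rather than anything derived from the filter equations):

```python
# Marginalize a joint posterior over existence events eps in {0,1}^N
# to get the probability that object j exists, P(eps_j = 1 | y).
from itertools import product

def exists_marginal(joint, j):
    """joint: dict mapping existence-event tuples eps to probabilities."""
    return sum(p for eps, p in joint.items() if eps[j] == 1)

N = 3
joint = {eps: 1.0 / 2 ** N for eps in product((0, 1), repeat=N)}  # uniform toy
print(exists_marginal(joint, j=0))  # -> 0.5
```

In the actual filter the summand would additionally carry the state integral over all objects other than j, but the combinatorial structure of the sum over events with eps_j = 1 is the same.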
2023-03-22 15:40:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8692653775215149, "perplexity": 4876.646421087885}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00576.warc.gz"}
https://gitter.im/cake-build/cake?at=5ad8e04526a769820b2bdec7
Rodney Littles II @RLittlesII: Are you in a position to post a repo that reproduces your issue? If not, I think I am going to try so I can have a more intelligent conversation about what the problem is and how to fix it. Do you have targets with your unit test?

Mike Gottlieb @mikegottlieb: I can’t share my current project, sorry.

Rodney Littles II @RLittlesII: No worries

Mike Gottlieb @mikegottlieb: not sure what you mean by targets?

Rodney Littles II @RLittlesII: iOS xunit project and such. So you can test on device? I’m walking into an afternoon of meetings. I’ll check back in in a moment

Mike Gottlieb @mikegottlieb: oh, no. I’m writing tests against my shared code library. I do have a UI test project for testing some stuff on device. I’m having different troubles with my cake scripts for that. I have a fairly limited amount of device specific code and a lot of it is using location services and background services so a bit tricky to truly test.

Gary Ewan Park @gep13: @mikegottlieb @RLittlesII so I think I asked this the other day, but let me ask again, based on the discussion that you guys have just had.... sounds like you want to be using DotNetCoreTest, rather than the XUnit Aliases. Have you tried using that yet?

Mike Gottlieb @mikegottlieb: That doesn’t work for other reasons. Or maybe I just didn’t provide additional settings to it that were required.

Gary Ewan Park @gep13: Hmm, this sounds like something that needs to be investigated.

Mike Gottlieb @mikegottlieb: It seems like when I invoke any of the DotNetCoreXYZ aliases in Cake my environment config is off.

Gary Ewan Park @gep13: If you could create a small reproducible sample of the type of thing that you are trying to do, that would help immensely! It would be good to know if this is something specific to configuration, or something that is off with the aliases that we have. I have now been using the DotNetCore Aliases for a number of addins that I am the maintainer of, which have XUnit tests in them, and I haven't had any issues.

Mike Gottlieb @mikegottlieb: Well I know DotNetCoreBuild doesn’t work for a full Xamarin app because the device specific projects are not netstandard libraries.
Gary Ewan Park @gep13: it could be the mixture of Xamarin that is causing problems, but without a sample to test against, not sure what else we can do

Mike Gottlieb @mikegottlieb: And DotNetCoreMSBuild doesn’t work because the environment is missing settings that my Xamarin projects expect. But regular MSBuild works, so I’m using that. Invoking the DotNetCoreTest method I believe was failing to find xunit

Gary Ewan Park @gep13: @Redth @RLittlesII are these sorts of issues that you guys have seen before?

Jonathan Dick @Redth: i don’t think generally we can build xamarin projects with dotnet unless you’re using the new project structure, which is a bit tricky with xamarin currently (though it’s possible)

Mike Gottlieb @mikegottlieb: yeah, I thought maybe the whole dotnet msbuild hybrid could.

Martin Björkström @mholo65: @/all I'll soon start preparing for 0.27.0 release. So this is your daily reminder to pin your Cake version.

Mike Gottlieb @mikegottlieb: I still think it might be able to, it was just choking on some environment variable used in my ios and android project files

Jonathan Dick @Redth: what’s in 0.27? @mikegottlieb yeah you’re in uncharted waters i think at that point :D

Mike Gottlieb @mikegottlieb: lol not surprising. Turns out half the time I am over in Xamarin land too. I’m on their slack all day bringing up things that they don’t consider, like that dev teams can’t afford to be forced to take version upgrades with tons of breaking changes every 3 months. Especially dev teams of 1.

Martin Björkström @mholo65:

Jonathan Dick @Redth: magic :laughing:

Mattias Karlsson:

Jonathan Dick @Redth: nice! i need to read up again and see where everything stands with bootstrappers and nuget client addin installation etc

cakebot @cake-build-bot: @/all Version 0.27.0 of the Cake has just been released, https://www.nuget.org/packages/Cake.

Rodney Littles II @RLittlesII: @gep13 No. I haven't seen them before, but I haven't ported to net standard. I haven't ever attempted to run unit tests for my Xamarin projects through Cake. That's why I am so interested in what @mikegottlieb is doing. I am trying to learn from his problems so we can solve them. So I am going to try and reproduce the issues, see if there is a good solution for them, because I will face this at some point. Just not sure when.

Mike Gottlieb @mikegottlieb: Yeah I was trying to automate the process of releasing a new build and part of that is requiring that tests are run and pass. At one point i was even setting up VSTS to run tests using Cake after pushing changes to my repo

Ben de Bruijn @Duracell1989: Maybe someone can help me; I'm using #addin nuget:?package=Cake.Docker&version=0.9.3 Locally everything works; but when I run this on our build server, it fails: Running build script...

Meik Tranel @MeikTranel: Hey is this normal behavior that msbuild goes into Diagnostic verbosity when /bl is passed

Morning everyone. Wondering if I can sanity check something. I've got NUnit3 running tests, I've got 0.2.0 of the build modules included, and my TeamCity configuration isn't producing the Tests tab. I've had this working in the past, the only difference here is now I've got a TC configuration with two build steps (cake script Tests, cake script Pack), whereas before I've only had one build step shelling out to Cake. I've got settings.TeamCity = true; settings.Full = true; no dice :-( Never mind, got it working.
Upgraded the runner to 3.8.0: #tool nuget:?package=NUnit.ConsoleRunner&version=3.8.0 and then added the TeamCityEventListener to packages.config: <package id="NUnit.Extension.TeamCityEventListener" version="1.0.3" />
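Pulling the resolution together, a minimal Cake sketch of the working setup (the tool version, the settings.TeamCity/settings.Full flags, and the packages.config entry are the ones quoted above; the task name and test-assembly glob are placeholders, not from the chat):

    #tool nuget:?package=NUnit.ConsoleRunner&version=3.8.0

    Task("Tests")
        .Does(() =>
    {
        var settings = new NUnit3Settings();
        settings.TeamCity = true; // emit TeamCity service messages so the Tests tab is populated
        settings.Full = true;
        // Placeholder glob; point this at your real test assemblies.
        NUnit3("./tests/**/bin/Release/*.Tests.dll", settings);
    });

    RunTarget("Tests");

The NUnit.Extension.TeamCityEventListener package still has to be listed in the runner's packages.config (as above), since it is the listener loaded by the 3.x console runner that actually emits those service messages.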
2022-12-06 04:57:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3812564015388489, "perplexity": 6054.153856263668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711069.79/warc/CC-MAIN-20221206024911-20221206054911-00007.warc.gz"}
https://www.physicsforums.com/threads/electric-potential-problemo.43895/
# Electric Potential Problemo

1. Sep 20, 2004

### jimithing

A light unstressed spring has length d. Two identical particles, each with charge q, are connected to the opposite ends of the spring. The particles are held stationary a distance d apart and then released at the same time. The system then oscillates on a horizontal frictionless table. The spring has a bit of internal kinetic friction, so the oscillation is damped. The particles eventually stop vibrating when the distance between them is 3d. Find the increase in internal energy that appears in the spring during the oscillations. Assume that the system of the spring and two charges is isolated.

If the system is isolated, we can assume $$\Delta E_{mec} + \Delta E_{th} + \Delta E_{int} = 0$$ Since external friction is negligible, the change in thermal energy can be neglected, so $$\Delta E_{mec} + \Delta E_{int} = 0$$ Now I realize that $$W = \Delta E_{mec}$$, but where exactly can I start?

2. Sep 20, 2004

### Tide

First you need to find the spring constant, which you can determine from the forces acting on the spring in its final configuration. Then apply energy conservation, noting that the potential energy of the spring is $\frac {1}{2} k x^2$, and you can figure out the electrical potential before and after.

3. Sep 20, 2004

### jimithing

I got $$F = \frac{q^2}{4\pi \epsilon_{0}(3d)^2} , F = -k(3d)$$ So $$k = -\frac{q^2}{4\pi \epsilon_{0}27d^3}$$ Do I now just sub into $$U = \frac{1}{2}k(3d)^2$$ ?

4. Sep 20, 2004

### Tide

Jimi, I didn't check the details but it looks like you did the right steps!
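For reference, one consistent way to close the energy bookkeeping, sketched under the assumption that the spring's natural length is d, so the final stretch is 3d − d = 2d (note the working above used 3d for the stretch). Balancing the Coulomb and spring forces at the final separation,

$$k_s(2d) = \frac{q^2}{4\pi \epsilon_{0}(3d)^2} \quad\Rightarrow\quad k_s = \frac{q^2}{72\pi \epsilon_{0}d^3}$$

and then the internal energy gained is the initial electric potential energy minus the final electric and spring potential energies:

$$\Delta E_{int} = \frac{q^2}{4\pi \epsilon_{0}d} - \frac{q^2}{4\pi \epsilon_{0}(3d)} - \frac{1}{2}k_s(2d)^2 = \frac{q^2}{4\pi \epsilon_{0}d}\left(1 - \frac{1}{3} - \frac{1}{9}\right) = \frac{5}{9}\,\frac{q^2}{4\pi \epsilon_{0}d}$$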
2017-05-23 09:09:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7322611212730408, "perplexity": 532.197104632224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607593.1/warc/CC-MAIN-20170523083809-20170523103809-00109.warc.gz"}
https://physics.stackexchange.com/questions/183344/is-there-a-delay-between-force-and-acceleration/183351
# Is there a delay between force and acceleration?

Suppose we have a mass $$m$$. We can talk about two of its parameters: the net force applied on it, $$f(t)$$, and its net acceleration, $$a(t)$$. I want to know whether there is any delay between $$f(t)$$ and $$a(t)$$ in the real world. Newton's equation doesn't include a delay, by asserting that $$f(t) = ma(t)$$, but in a real world scenario is there any delay?

Illustrating more: suppose before time $$t=t_0$$ the net force was zero but at time $$t=t_0$$ the force is non-zero. At what instant is the acceleration gonna be non-zero? Is it gonna be at $$t=t_0$$ too? In other words, is there any delay between the information embedded in the "net force" parameter and the information embedded in the "acceleration" parameter?

Perhaps I'm messing with a deeper problem, namely, whether time is continuous or not, but I'm not sure. The motivation for this came from thinking that in a resistor, there is probably a delay between $$V(t)$$ and $$i(t)$$ in the real world, even though Ohm's law doesn't include it.

• No, simply because the force is defined by the equation $F=d(mv)/dt$, so if you have no acceleration then you have no force and vice versa. – Quantum spaghettification May 11 '15 at 8:01
• Is there a delay? Of course. The center of mass will only move with the average of the mass distribution of the body, which, of course, is compressible, even in classical mechanics. See "speed of sound". – CuriousOne May 11 '15 at 10:35
• Ideal resistors have no delay between $V(t)$ and $i(t)$. But real resistors can have parasitic capacitances/inductances, which account for that delay. And indeed, you can include those in Ohm's law by replacing resistance with impedance. – Bosoneando May 17 '15 at 11:33

In the real world, you push the atoms in the back end of the object, which push the atoms in the next layer, etc. This means that the front end of the object won't start accelerating until $t = t_0 + \Delta x/c_s$, where $\Delta x$ is the length of the object and $c_s$ is the speed of sound in the material of the object.

If you consider the object to be one atom only, you end up having to define when your push starts, since the force you apply ultimately is an electromagnetic force, which has an infinite extent, and thus applies even from far away. When you define your push to start at $t_0$, your acceleration by definition starts here as well.

Because I like drawing, here's a drawing of you pushing an object from far away. For most practical purposes, you can take the acceleration of the object to be zero until you're very close, though. But it is there.

• The relevant speed here is the speed of sound, not the speed of light. – CuriousOne May 11 '15 at 10:36
• True, the speed with which the push propagates turns out to be the speed of sound. My intention was to show that the speed of the signal can never be infinite, since it must always be <c, but of course it's much slower than that. – pela May 11 '15 at 11:31
• I edited my answer considering that the signal propagates with the speed of sound, rather than the "less than the speed of light". Thanks, @CuriousOne. – pela May 11 '15 at 12:00
• Can you just solve one little remaining question? Suppose my hand appears from nowhere at some point in space and starts heading towards the object. Then as soon as it appears, no matter how far from the object, it will emit photons that will eventually hit the object and hence imply some minimal acceleration after some delay based on the speed of light. Is that correct?
Thanks – nerdy May 11 '15 at 13:38

• @nerdy: Yes, that's correct. Although since the photons are quantized, if your hand appeared very far from the object, the average flux at the location of the object could be so small that it would take longer before a single photon happened to hit the object. – pela May 11 '15 at 18:16

> Suppose before time t=t0 the net force was zero but at time t=t0 the force is non-zero. At what instant is the acceleration gonna be non-zero? Is it gonna be at t=t0 too?

Remember that, if we're talking instantaneous measurements, your target can have a non-zero acceleration and still have zero velocity. As pointed out, in the real world your target is not going to be a perfectly rigid body, but somewhere within that system a force is acting on a particle (at t=0), which will be accelerating (at t=0) even if it's not moving yet (at t=0).

One angle to look at is the simplified case of a force acting on a single fundamental particle (an electron, say) - now we're into the realms of quantum mechanics and asking if wavefunction collapse or equivalent is instantaneous, and/or whether we can even sensibly talk about making a truly instantaneous measurement.

Newton actually expressed his 2nd law in terms of force and momentum, as Joseph wrote in his comment to your question. But considering causality, the second law is better expressed as $$p(t)=\int F(t)dt$$

In other words, force comes first, leading to motion. By considering the case where mass is constant, $$m \frac{d^2}{dt^2}x(t)=F(t)$$ where x is the displacement. Solving for $x$, $$x(t)=\frac{1}{m}\iint F(t)dt$$

By looking at the 2nd law this way, one can only achieve instantaneous displacement if the force is an ideal impulse $$F(t)=A\delta(t)$$ where $\delta(t)$ is the idealized Dirac delta function, which in the real, physical world can never be realized since it requires infinite amplitude.

Consider instead the force as a step function, $F(t)=A u(t)$, where A is the magnitude of the force. In this case the force leads to a displacement that is parabolic in time: $$x(t)=\frac{A}{2m}t^2$$

Note that the mass $m$ moderates the rate of motion. The larger $m$ is for a given force, the slower the parabolic rate of displacement.
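For a sense of scale on the $\Delta x/c_s$ delay in the first answer, assuming a 1 m steel rod ($c_s \approx 5960$ m/s):

$$\Delta t = \frac{\Delta x}{c_s} \approx \frac{1\ \text{m}}{5960\ \text{m/s}} \approx 1.7 \times 10^{-4}\ \text{s}$$

so even for a stiff, meter-long object, the far end lags the push by roughly a sixth of a millisecond.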
2020-02-19 14:58:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7777342796325684, "perplexity": 238.36561935922808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144150.61/warc/CC-MAIN-20200219122958-20200219152958-00065.warc.gz"}
https://brilliant.org/discussions/thread/shoutout-to-david-bass/
# Shoutout to David Bass!

I've got a friend who just got started with Brilliant, David Bass, who's got a couple of great problems, with many more on the way. Follow him! Also, would any staff help him with his LaTeX? He kind of sucks at it... :D

Note by Finn Hulse 4 years ago

Sort by:

Hey, while I'm on the subject, Steven Shamaiengar is another one of my friends who's awesome. - 4 years ago

Followed. Do you guys think that we need updates to the expanded Math Formatting Guide? If there's specific stuff missing (for instance something in Daniel Liu's guide) that you think is important, we should add it! - 4 years ago

I think it would be helpful to just link to his guide at the beginning of the Math Formatting Guide already in place. - 4 years ago

Great idea! Like a "For more symbols" link. - 4 years ago

Just followed him. I recommend Daniel Liu's LaTeX Guide. It really helped me out. - 4 years ago

Cool, I'll check out his account. - 4 years ago
2018-03-24 06:09:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9922615885734558, "perplexity": 13149.686622064253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649931.17/warc/CC-MAIN-20180324054204-20180324074204-00334.warc.gz"}
http://lambda-the-ultimate.org/node/3129
Fundamental Flaws in Current Programming Language Type Systems

I'll start with the following statement (which from different perspectives may or may not be considered valid by various parties): Static Type systems and Dynamic Type systems are the two sides of a single coin.

In the past I have raised a number of questions in this forum regarding types and programming languages that have in the main only been answered by those who have a strongly held viewpoint. I have been spending much time over the last few years considering the viewpoints thus exhibited. The viewpoints expressed have been one of the following:

Static Types are the best - various reasoning
Dynamic Types are the best - various reasoning
Soft Typing systems - amalgamation of the best of both worlds - various reasoning.

What has been unsatisfactory about the answers and the discussions that have ensued is that the subject of real world systems and the real world situations they are deployed in has not been closely examined. By this I mean that hardware related errors, system software related errors and user caused errors (including deliberate attacks) have not been brought into the discussion in relation to how the various kinds of type system can help in these conditions.

Putting aside anyone's particular bent in belief as to which is the best kind of type system for programming languages, my question is: In what areas do each of the above type systems fail to provide the programmer with the tools (language constructs, abstraction facilities, etc.) to actually deal with real world troubles, problems and conditions?

PLT should be about improving what we can actually do with the computing machinery that we use. With the wide range of people in this forum, what areas of research are being looked at, or not looked at, in this regard?

An example problem is an application that works correctly on a single processor system but exhibits somewhat random behaviour on a dual (or more) processor system due to the seemingly random interactions/responses of a third party library or component.

Please, not interested in ideological wars over this.

Type Dynamic

> Static Types are the best - various reasoning
> Dynamic Types are the best - various reasoning
> Soft Typing systems - amalgamation of the best of both worlds - various reasoning

Ignoring various complaints one might have about this classification (in particular, the class of "static type systems" is much too diverse to throw it all in one pile, esp. considering dependent types and the likes, which seem relevant to your question), it is missing out on one important alternative:

Static Typing plus type Dynamic - amalgamation of the best of both worlds - various reasoning

This is very different from "soft typing" and makes entirely different trade-offs. In a nutshell, it opens up static typing to dynamic without sacrificing type soundness, while soft-typing opens up dynamic typing to static without sacrificing untyped programming.

Excepting typeful

Excepting typeful programming (including type-based metaprogramming), types don't provide any 'tools' to the programmer - they only provide safeties on other language constructs, abstraction facilities, etc. provided by the programming language. Most type-systems focus on operation safety (i.e.
guaranteeing that no 'undefined' behavior occurs in the system), but type-systems can be aimed at almost any invariant (guaranteeing that only one reference to an object exists (linear types), guaranteeing some degree of security in communication, guaranteeing hard realtime constraints in DSP, etc.). I think that, if you look, you'll find plenty of discussion on how "system software related errors and user caused errors (including deliberate attacks) have," in fact, "been brought into the discussion in relation to how the various kinds of type system can help in these conditions." Admittedly, hardware errors don't really come into it. There isn't much a type system can do to prevent or reconcile hardware errors at the myopic scope of a single program. When distributed programming becomes more common, we may see variants on 'linear' and sub-structural typing aimed to guarantee certain degrees of redundancy and survivability, but most programming languages themselves don't yet provide the 'tools' (abstraction facilities, language constructs, etc.) to even start considering this. At minimum, we'd need the ability to indicate (in abstract) that a critical behavior is prepared to occur 'on other hardware in case of hardware failure', and that requirement is independent of the whole dynamic/static debate. Anyhow, this topic has a trollish smell to it. It might just be paranoia from long experience with people starting up typing discussions without any substantive claims. Is there a reason you are asking the question? There isn't much a type There isn't much a type system can do to prevent or reconcile hardware errors at the myopic scope of a single program. You might take a look at Project Zap. Project Zap Interesting. I'll definitely take a look at the papers there. Typing for Concurrency An example problem is an application that works correctly on a single processor system but exhibits somewhat random behaviour on a dual (or more) processor system due to the seemingly random interactions/responses of a third party library or component. I'm not sure that the rest of your question is well-framed, but this part struck me as potentially interesting. In my experience, concurrency heisenbugs arise from memory violations (which we clearly know how to get rid of), compilation errors (many compilers mis-handle memory barriers, but types aren't the answer) and concurrency guard failures (something is done without the appropriate lock/semaphore/whatever being taken). The last problem is something that might conceivably be helped by dependent types and/or region types, and I think it's a potentially useful thing to pursue. That said, I think it has almost nothing to do with the static/dynamic continuum. Your example question presupposes that failure at runtime is bad, and that says that we're looking for a compile-time (i.e. static) check here. The problems I see are (a) working out all the technical details, which some people are already trying to do, and (b) reducing the result to something that ordinary humans can use in practice, which is a very hard problem that is under-acknowledged in research funding systems around the world. Pluggable and Optional systems missing. I've been tracking some of Mr. Rennie's earlier threads and have found plenty of threads with comments on exactly this subject that cover many things that static typechecking may handle that dynamic typing cannot. One such comment was dedicated to TyPiCal and its ability to locate race conditions and identify deadlock hazards. 
I believe (but cannot prove) that dynamic type checking is pretty much limited in scope to checking 'myopic' conditions on code, such as 'is this next operation well defined?'. The only undesirable property that can be prevented in dynamic typing is the execution of undefined behavior. Anything that requires a check broader than a single operation (such as checking a whole expression, a whole function, or even whole program flows) really is the domain of static typing.

So, if the question becomes "In what areas do each of the above type systems fail to provide the programmer with [assurances]?", the answer for dynamic typing is 'most areas'. You'll not find dynamic type systems that can assure properties such as real-time constraints, memory footprint guarantees, absence of deadlocks and race conditions, containment of information, capability and containment of authority, etc.

Of course, many of these conditions might be provable if one is able to annotate the code for external theorem provers (e.g. 'pluggable' static type systems). One might even be able to annotate assumptions for the optimizer, or allow the 'pluggable' type-systems to also operate as code-transforms that can perform or suggest optimizations statically (which would be helped by the annotations being part of the AST and the pluggable type systems being part of the standard program pipeline). So it seems that, in Bruce's organization of type system classifications, 'pluggable post-processing pipelines with language-supported annotations' is missing from among them.

There are some advantages to including some form of 'type system' directly in the language. One is to check for certain invariants across 'module' boundaries (where a 'module' may be introduced in a pre-compiled form by a third party). Another is in support of typeful programming, where the type system can be reflected back into the runtime behavior or syntax of the system in support of DSLs and new language features (as sometimes happens with the Ruby meta-object protocol and C++ templates, at opposite ends of the dynamic-static scale).

An interesting question, I think, is what advantages embedding a particular static type system directly into the language can offer, assuming the language has another set of language constructs for plugging in post-processors (e.g. in a convenient manner similar to importing modules) that can readily report errors and associate them with pre-macro-expanded text areas in the source, as well as perform or suggest optimizations or properties to post-processors further along in the static pipeline.

One thing that some type systems provide

With type classes you can define a generic + with identity element 0. There is one plus for numbers with the 0 identity. There is another + for vectors with the zero vector (0,0,...). This means that you can define a generic sum for summing a list. If you sum an empty list of numbers you get 0, but if you sum an empty list of vectors you get (0,0,...). In a dynamically typed language you don't know if an empty list is a number list or a vector list, so you can't define a generic sum.

A bigger example is QuickCheck. It is able to generate random values in the domain of a function by using the type of the function. Are there ways to define a generic sum in a dynamically typed language?

Contextually inferred

Contextually inferred parameters have advantages regardless of whether or not those parameters are typed...
though I agree that if you're already performing the analysis necessary to infer parameters it doesn't make much sense to skip out on at least some static typing and safety verification.

Generic sum can be done in a dynamically typed language. Either the sum function has a parameter explicitly for the seed used in the fold, or a common 'identity' value is included with the default dynamic type dispatch rules such that 'object o OP identity i = o' for all operations (which could easily be done by overriding 'DOES NOT UNDERSTAND' and such).

The solutions you mention,

The solutions you mention, adding a parameter to sum or extending the + operation for a common identity value, aren't good. Adding a parameter to sum doesn't really solve the problem and breaks abstraction barriers. Other functions that are calling sum (for example a function that averages a list) will have to take an extra parameter too. Extending the + operation with a new common identity value for vectors and numbers isn't any good either in my opinion. First, you shouldn't have to change low level operations to be able to define sum (breaks abstraction barriers) and second you now have two zero values per type, zero the number and zero the common identity.

The best solution I can think of is annotating lists (at run time) with the type of their elements. List<Integer> is the compile time list type in a statically typed language. In a dynamically typed language you could have a constructor EmptyList(Integer) that returns an empty list of integers. This solves the problem because you can now access the identity value with Integer.AdditiveIdentity. This "solution" throws away a lot of the benefits of dynamic typing. So is there a real solution for this problem?

Value Judgements

> Other functions that are calling sum (for example a function that averages a list) will have to take an extra parameter too.

That isn't really the case. Two points: (1) There are ways for languages to support implicitly threaded parameters if that is what one needs. Metadata can be attached either to constructs or to computations. (2) In a manifestly statically typed language, there is no advantage whatsoever provided by the static typing's implicit 'sum', since other functions calling 'sum' will already be forced to 'take an extra parameter' for the data types.

> Extending the + operation with a new common identity value for vectors and numbers isn't any good either in my opinion.

You should really stick with technical judgements rather than that sort of value judgement.

> First, you shouldn't have to change low level operations to be able to define sum (breaks abstraction barriers)

I agree, but I don't see how the example I proposed requires "changing low level operations", so I feel the argument is moot.

> and second you now have two zero values per type, zero the number and zero the common identity.

Zero value is only a summation identity. The 'identity' value would not be a 'zero', it's simply 'identity value' - a unique object or value with its own properties.

> The best solution I can think of is annotating lists (at run time) with the type of their elements.

That is an option, of course. Similarly, you could have each list be able to overload its own 'sum' and 'product' operations. Those two solutions are about equivalent.

(1) There are ways for

> (1) There are ways for languages to support implicitly threaded parameters if that is what one needs. Metadata can be attached either to constructs or to computations.

Can you give an example?
> (2) In a manifestly statically typed language, there is no advantage whatsoever provided by the static typing's implicit 'sum', since other functions calling 'sum' will already be forced to 'take an extra parameter' for the data types.

Type classes hide this extra parameter.

> You should really stick with technical judgements rather than that sort of value judgement.

I disagree. Programming language design is based on taste, not just facts. And the explanation of my opinion is in the next sentence.

> I agree, but I don't see how the example I proposed requires "changing low level operations", so I feel the argument is moot.

If you add an identity value you have to add a rule to + so that x + Identity = x. So you have to change + to be able to write sum. This also means that you have two equivalent values for addition: 0 and Identity. You probably want x * Identity = 0, otherwise you're breaking the algebraic properties of + and *.

Example ways to attach

Example ways to attach meta-data to computations: environment variables, thread local storage, language-supported continuation contexts, hidden implicit parameters as utilized in Haskell monads. Example ways to attach meta-data to constructs: extra properties attached to an object, adding extra maps or tables from construct reference to associated meta-data, or adding a 'metadata' object to each object. There are more, of course, but that's what I can come up with in a few minutes.

> Type classes hide this extra parameter.

It is type inference that is hiding the extra parameter. Type classes have nothing to do with it. Type classes are essentially open functions from type->function. In a manifestly statically typed language, you could expect to explicitly include the type in the type class. Consider that C++ templates can effectively serve as type-classes due to support for template specialization (which allows them to be 'open' functions):

template<typename T> T sum(List<T> lT) { T r = sum_class<T>::zero(); for(typename List<T>::iterator it = lT.begin(); lT.end() != it; ++it) r = r + *it; return r; }

Type classes could easily be designed for manifestly typed systems, so long as you can produce open functions from type->function. Since I was referring explicitly to 'manifestly' statically typed systems, I must have been excluding type inference. Inference of arguments is useful, as I mentioned above, but it should be considered as a separate feature from static typing.

> I disagree. Programming language design is based on taste, not just facts.

Regardless, arguments on LtU shouldn't be based on taste.

> If you add an identity value you have to add a rule to + so that x + Identity = x.

You have to add a rule to the language, yes. In a dynamic language, this isn't such a big deal... no different from adding the rule to add vectors together to produce vectors.

> This also means that you have two equivalent values for addition: 0 and Identity.

Precisely where does this cause a problem?

> You probably want x * Identity = 0, otherwise you're breaking the algebraic properties of + and *.

I'll agree that if you're aiming for your language to remain utterly true to algebraic properties you should probably favor static programming languages... and either functional languages or logic languages. But keep in mind that programming languages, while oft inspired by math, are not beholden to it. What is x*null, or Integer x * Duck d? How does a 'null pointer exception' or 'runtime type error' fit into the algebraic properties of '*'? Why does 32767+1 equal -32768 for some implementations of C and not for others?
Why should I care that (x*Identity) = x at the same time (x+Identity) = x? In Smalltalk, the model isn't math, it is 'x receives message "* Identity" and replies with x'.

Does 0==(0,0,...)? Does 0 ==

Does 0==(0,0,...)? Does 0 == Identity? Does (0,0,...) == Identity? Does 0==(0,0,...)? If no then equality isn't transitive, and if yes, then:

a = new Hashtable
a.insert(key: 0, value: "foo")
a.insert(key: (0,0,...), value: "bar")
a.get(key: 0) => "bar"

If you want this behavior then you have to change the hash function for 0 and (0,0,...) to produce the same hash. Assuming we want 0 != (0,0,...) and we want equality to be transitive, so 0 != Identity. What does Identity + 0 produce? If Identity + 0 == Identity then Identity + a == a no longer holds. If Identity + 0 == 0 then 0 + a == a no longer holds. As the authors of SICP would say: "This is a serious complaint.".

Comparing x*null with x+0 is comparing apples to oranges. x*null doesn't have meaning in math, x+0 does. Math in programming should behave like math to be able to reason about programs.

> Regardless, arguments on LtU shouldn't be based on taste.

(that's an opinion btw) I think they should. What people find easy/simple/beautiful/logical is critical. Programming languages are designed for humans to use.

To your first four questions: no, no, no, no.

> If no then equality isn't transitive

Since (0 != Identity) and (Identity != (0,0,...)), transitivity is not violated when (0 != (0,0,...)).

> What does Identity + 0 produce?

0.

> If Identity + 0 == 0 then 0 + a == a no longer holds.

I'm not seeing the logic for that claim. In any case, in mathematics '0 + a == a' only needs to hold when 'a' is a number.

> Comparing x*null with x+0 is comparing apples to oranges. x*null doesn't have meaning in math, x+0 does.

Depending on the domain (e.g. x could be a tuple or graph), 'x+0' might not have any meaning in math. And, in programming, 'x+0' can also throw a 'null pointer exception'.

> Math in programming should behave like math to be able to reason about programs.

We reason about programs using the tools (types, semantics, etc.) to reason about programs. You seem to want the ability to reason about math language in programs using the tools to reason about math while ignoring that the context is programming and not math. I don't consider it an undesirable goal, and it might be well applied to a DSL, but there will forever be borderline cases where math is not programming, where 'a*0' vs. '0*a' can be the difference between a program that never terminates and a program that terminates immediately, etc.

> Regardless, arguments on LtU shouldn't be based on taste. (that's an opinion btw) I think they should. What people find easy/simple/beautiful/logical is critical. Programming languages are designed for humans to use.

Besides being practical, it is policy. An argument or claim based on taste is not substantive. See Item 4: Provide context and substantiate claims, and 4b: avoid ungrounded discussion. When you've developed statistics about people's tastes and what they find 'beautiful' or 'easy' (as opposed to merely 'familiar') then feel free to link them into an argument.

I'm not seeing the logic for

> I'm not seeing the logic for that claim.

Here it is:

Identity + 0 = 0
a + 0 = a

so take a = Identity: Identity + 0 = Identity, so Identity = 0.

re: statistics. I doubt that all your claims are backed by solid research. It's simply not a practical way to design languages. Maybe I should have used "common sense" instead of "opinion".
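For concreteness, the "type classes hide this extra parameter" point can be rendered in a statically typed OO language. A minimal sketch (all type and member names hypothetical) using C# 11 static abstract interface members, where the element type itself supplies the identity, so an empty list still sums to the right zero:

    using System.Collections.Generic;

    // The "type class": each instance type supplies its own additive identity.
    public interface IAdditiveMonoid<TSelf> where TSelf : IAdditiveMonoid<TSelf>
    {
        static abstract TSelf Zero { get; }
        TSelf Add(TSelf other);
    }

    public readonly record struct Vec2(double X, double Y) : IAdditiveMonoid<Vec2>
    {
        public static Vec2 Zero => new(0, 0);             // the zero vector
        public Vec2 Add(Vec2 o) => new(X + o.X, Y + o.Y);
    }

    public static class Folds
    {
        // Dispatch on T is resolved statically from the list's element type,
        // so Sum(new List<Vec2>()) returns (0,0) with no extra seed parameter.
        public static T Sum<T>(IEnumerable<T> xs) where T : IAdditiveMonoid<T>
        {
            var r = T.Zero;
            foreach (var x in xs) r = r.Add(x);
            return r;
        }
    }

The "hidden parameter" here is the static binding of T: callers such as an averaging function simply stay generic over T and never thread an explicit seed.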
Commuting Common Sense

For a person who loves math so much, you do play fast and loose with the logic. I notice that the original '0 + a = a' has become 'a + 0 = a'. In any case, I'll mention that, like its commutative cousin, 'a + 0 = a' is only valid in math when 'a' belongs to certain classes of values. It isn't a universal mathematical truth.

> re: statistics. I doubt that all your claims are backed by solid research.

A claim doesn't need to be backed by solid research to be substantive, but it needs a lot more grounding than 'cuz that's the way I like it' or 'cuz I said so'. Extraordinary claims require extraordinary evidence. Less extraordinary claims still require evidence, or at least the ability to point to some evidence when asked. When you start making claims about how a particular language design is 'beautiful' or 'easy' or 'simple' or about how 'that's the way it should be', you'll invariably run into idealist bastards like myself who demand evidence that the way you think things should be has any reason to influence the way they think things should be. If you believe something 'beautiful' but can't pony up the evidence, you're just asking for a juvenile 'is so!' 'is not!' 'is so!' 'is not!' type of argument with no substance whatsoever. Since we're all professionals here on LtU, we'd be much better off avoiding such arguments by avoiding (as much as we can) claims that aren't defensible.

> Maybe I should have used "common sense" instead of "opinion".

I'm of the opinion that "common sense" is a legendary base of knowledge that probably doesn't exist and that, if it exists, almost certainly says very little on the subject of programming language design.

My thoughts and comments for the responders

My thanks to Andreas Rossberg, David Barbour, Noam, shap and Jules Jacobs for your responses. I have been spending some time thinking about the various responses given.

Firstly, David, this is not a troll - the reason I am asking the question is to find out the limitations of current type models (thank you Andreas and David for your additional suggestions of groupings) in relation to writing applications in the "real world". All of the discussions/papers that I have seen/read have focused on what each model allows you to do and achieve, but nothing on the areas where they fail to provide a benefit to the programmer - other than comments "on exactly this subject that cover many things that static typechecking may handle that dynamic typing cannot". What I am interested in is the limitations of all or any system in current use, and whether there is any active research that is trying to extend these boundaries for "real world" application development.

For example, you provide your view that type systems are about assurances and only about assurances, hence an obvious limitation. You also provide your view of the limitations of dynamic type checking. However, every static type checking system is a dynamic system analysing and generating new types as the program text is read and generates the appropriate code for execution. So in this example, it is about more than assurances - that is one of the outcomes available.

Thank you for pointing out TyPiCal. An obvious limitation of TyPiCal is the inability to determine deadlock conditions in systems that have dynamic numbers of processes over an extended period of time, as the analysis is still based on static analysis of the program text, which is not necessarily the condition in human-computer or computer-computer interacting systems.
David, you make the following statement:

> types don't provide any 'tools' to the programmer - they only provide safeties on other language constructs, abstraction facilities, etc. provided by the programming language

and in doing so, from my perspective, seem to be advocating that it is not the business of programmers using the language in question to be able to use this as a part of their application or application development processes. Is this intended?

One of the interesting facets of static type systems is that types are meta-information related to the language and not values that can be manipulated in the language. I have read a number of papers that discuss why this should be so - all of them have the same underlying premise that a type containing all types must also contain itself, which leads to a logical inconsistency (fair enough). One of the underlying assumptions of the above is that the map is the territory. I have observed this phenomenon in very many different areas of study (from PLT to quantum mechanics to astrophysics to etc.). An opposing view would say that the map is not the territory. An example of this would be that the values within the domain type represent the various other types of values and are not those types. Would this lead to a different model for type theory? I don't know, and that is part of the question relating to the limitations of current type theory and type checking systems. I do not have the expertise to look at this, it is not my area, and there are many on this forum alone who are certainly more qualified to study this.

Shap, thank you for your comments about the difficulty and the underfunding of research. I am not presupposing that failure at runtime is bad; what would be bad is not being able to handle the failure. From my perspective, what limitations in current type systems (static, dynamic, etc.) prevent their use in dealing with heisenbugs (love the term Shap, thank you)?

Noam, thank you for pointing out Zap, more reading to do.

Over the years, my questions in this forum have been motivated by the view that PLs and PLT are tools in the toolbox with which I can solve problems presented to me in a manner that is appropriate for the circumstances at the time. Static typing is one tool, and in languages that have it, I can solve particular problems. Dynamic typing is another tool, and in languages that use it, I can solve other problems. But there are limitations to these tools as a developer. I find that those who advocate one species of typing over another just don't seem to see that each of these tools has its appropriate place in the toolbox. I can use a hammer to drive in a screw but it is a lot of hard work; I can use a screwdriver to drive in a nail and this is also a lot of hard work.

The entire field of PLT is still very young and there is still much to look at in how the tools provided by PLT can be put to general use in program development. So I still go back to my original question: In what areas do each of the above type systems fail to provide the programmer with the tools (language constructs, abstraction facilities, etc.) to actually deal with real world troubles, problems and conditions?

Again, my thanks to those who have responded, I greatly appreciate your time.

"Real world"

In this comment you use the phrase "real world" twice. The real world is a very, very big place.
Even if we just focus on the real programming world we have everything from microcontrollers to ultra-high performance computing clusters; we have code for one-off shell chores to life critical medical equipment control; we have user written game scripts and we have financial decision support and trading systems; game simulations and airplane navigation systems. The real world is staggeringly vast and the economic cost/benefit analysis of any particular formal proof strategy is going to be different for every corner of it. Thus there is not, nor can there be, a simple answer to any question based on the "real world." Now, which corner of the real world do you want to talk about?

Real World

James, I agree with your comments in relation to the scope. However, after 30 years (from undergraduate days till now), I find that, at its core, programming is about solving a problem at hand (irrespective of how large or small), and my question is related to the limits of what can be done and what can't be done.

An example unrelated to programming: my brother and sister-in-law drive 50 tonne mining trucks and my son-in-law drives a 1 tonne ute. Both are vehicles that carry materials from one place to another using the same fundamental techniques/technology, but a limitation of both is that I can't use either to carry goods from here to Antarctica. Putting aside any cost benefit analysis, what are the limits currently?

Our knowledge can be, and has been, increased by asking what the limits of any particular methodology or technology are, or why we do things a particular way. My question may not be phrased in the most appropriate way. To go back to my toolbox analogy from my last response: for a craftsman, the tools in the toolbox are continually being added to because the existing tools are not always the right tool for the job, even if he has to make them himself. For this, look at any specialist woodworking magazine and the tools that are being shared about. So, what are the current limitations, and what areas and ideas can be suggested that are not being looked at, so as to be able to expand the boundaries?

I thought I was pretty clear in describing that type systems provide two facilities: assurances and typeful programming. Indeed, my first sentence, the latter half of which you quote in your reply, started with an important condition: "Excepting typeful programming (including type-based metaprogramming) [...]"

Assurances provide protections (static or dynamic) against various forms of program faults. Typeful programming reflects types back into the syntax (as more than annotations... e.g. for automatic conversions and statically determined polymorphic dispatch) or into the runtime behavior of the system (including virtual methods or multi-methods, runtime polymorphism, meta-object protocols, etc.). Thus these features may also be at different ends of the static/dynamic scale. With static typeful programming this meta-data might not become part of the runtime, but it still can be manipulated (often to powerful effect) by a skilled programmer.

However, I will note that the majority of 'typeful programming' benefits can be achieved (often more flexibly) by other mechanisms than use of types (e.g. an extensible syntax on one side, whole-program-transforms on the other). And assurances can also be achieved by mechanisms other than forcing the type-system directly into the language (i.e. one could use pluggable type-systems to perform arbitrary analysis under varying assumptions).
The main area I see a problem with this is in partial or separate compilation, in which the 'assumptions' one makes can become easily violated.

> However, every static type checking system is a dynamic system analysing and generating new types as the program text is read and generates the appropriate code for execution.

Poking a slight hole in that characterization: "as the program text is read" is generally false. Static type analysis, or at least the majority of it, can only occur "after" the program text has been read. And that difference can be critical. But I'll agree that you can run a static type analysis on a runtime program. Unfortunately, doing so will usually require freezing the program.

Thank you David, for your correction regarding your statement. My reading of the emphasis was that assurances were the goal.

In regard to your second major paragraph, what you say is true, but!!, rarely are these facilities given to the programmer to use as part of his toolbox. You make the statement that a skilled programmer can manipulate this meta-data, which can be true (as seen within many of the papers referenced on LtU). However, I will maintain that there are very few languages that allow the same levels of freedom for the programmer that the compiler/language has for manipulation of this data.

The following paragraph, I think, hits one of the nails on the head, and my response is: Why? Is it an inherent limitation or is it a limitation due to the current practices and theory? David, in this regard, this point you make is answering the original question I raised. Thank you.

I agree that various mechanisms can be used to generate assurances. In this area, the question that can be raised is: if type information is available at one point, why is it not maintained at all points thereafter, particularly for partial/separate compilation situations, or made available at load time and/or runtime? I do recognise that this would increase the complexity of the resulting program. Is this not useful information?

Please don't get me wrong, I am not advocating any position here, I am simply asking a question so that appropriate clarity can be found. My characterisation "as the program text is read" includes any transforms that may occur to the textual data into other internal structures (as the analysis still needs to "read" this information). Sorry for any confusion on that point.

Yeah, well

> if type information is available at one point, why is it not maintained at all points thereafter, particularly for partial/separate compilation situations, or made available at load time and/or runtime?

Types are erased for performance; there is no other answer. Object modules may maintain types for different top-level, and sometimes local, entries (like most ML and Haskell languages do). It's a trade-off. If the parser and semantic analyser for a language are fast enough, and one is not hindered by IP, there is no real reason not to skip the intermediate translation to object files and just distribute the sources. I guess VB, and most Basics, actually did that. This is an implementation issue, not a language design issue. A language will not grow in expressive power, or be more useful, if types are not erased during intermediate steps in the compilation process. I don't think this addresses your original question.

[Stated differently: If your language is such that you 'cannot' erase types when translating to object modules, well, you're stuck with building a compiler where that information is preserved.]
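As a concrete data point on this trade-off: not every mainstream runtime erases types. A small, runnable C# probe showing that .NET generics are reified, so the element type of a collection survives compilation and is queryable at run time:

    using System;
    using System.Collections.Generic;

    class Probe
    {
        static void Main()
        {
            var xs = new List<int>();
            // Prints "System.Collections.Generic.List`1[System.Int32]":
            // the generic instantiation is kept, not erased.
            Console.WriteLine(xs.GetType());
            // Prints "System.Int32": the element type is recoverable.
            Console.WriteLine(xs.GetType().GetGenericArguments()[0]);
        }
    }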
Yeah, well Marco, yes they are erased for performance in static systems but not in dynamic ones (different approaches). But why? I started at a time when the big grunting mainframes were far, far less powerful (in many ways) than the current generation of desktop machines today. With the power under the hood we have today, this should not be an issue. The trade-off is there: we distribute sources or a binary format that retains type information, and yes, this is an implementation issue. The expressiveness of the language could go up because the programmer could now be provided with additional facilities to manipulate this information. For partial/separate compilation, one additional facility is to allow verification of type information between modules, if you like. Another is that it can become possible (with the appropriate values) for the programmer to manipulate these modules. Food for thought. At any rate, 'tis time to hit the pillow as I only got up to get a drink of water, not respond to the various posts. 'Tis 2:30 am AESDT.

Power is not absolute

It still takes a long time for most languages to compile a million lines of code. I guess there is your answer.

One other reason for type erasure might be obfuscation, particularly for software which is sold and deployed to third parties. One of the initial (and still somewhat relevant) objections to Java, at least early on, was the ease with which .class files could be decompiled back into source code; many shops consider the source to be Valuable Trade Secrets and Intellectual Property. This was a bit of a shock, at first, to houses used to C/C++; translation of either to assembly does a far better job of obscuring the meaning of the program than Java compilers do; hence the popularity of Java obfuscators and such. Now, whether or not obfuscation is a good thing is a question of ethics as opposed to technology--but some vendors prefer to keep their secret sauce secret. :)

Limitations of Typeful Programming

the majority of 'typeful programming' benefits can be achieved (often more flexibly) by other mechanisms than use of types (e.g. an extensible syntax on one side, whole-program transforms on the other). The [above] paragraph, I think, hits one of the nails on the head, and my response is Why? Is it an inherent limitation or is it a limitation due to current practices and theory?

The basic reason that other mechanisms can be more flexible is that any given type system - including the one integrated into your favorite language - will, by nature, be limited in its algebra and the properties it both describes and can effect upon the program. Support for whole-program transforms and syntax manipulation aren't so limited, but making these facilities generic creates quite the exercise for the language designer.

Types vs. Type Descriptors

all of them have the same underlying premise that a type containing all types must also contain itself which leads to a logical inconsistency (fair enough)

Russell's paradox is the set of all sets that do not contain themselves. I'm pretty certain the set of all sets can include itself without inconsistency. Same, I suspect, for the type of all types. Anyhow, as programmers we are limited to working with type-descriptors (values that describe types) as opposed to directly working with types (which are logical entities that only exist after a logic is enforced between type-descriptors and program constructs).
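To make the descriptor side of that distinction concrete, here is a small illustration (Python used purely as an illustrative vehicle; the point is language-independent). Runtime 'type' objects are type-descriptors: ordinary values that can be stored, compared, and passed around, and the descriptor of all descriptors can describe itself in this setting precisely because it is a value rather than the logical type:

# Sketch: type descriptors are first-class values (Python as illustration).

td = type(42)          # a value describing the type of 42
print(td)              # <class 'int'>
print(type(td))        # <class 'type'> -- a descriptor of descriptors

# The descriptor of all type descriptors describes itself, with no
# logical inconsistency, because it is merely a value:
assert type(type) is type

# Like any other value, descriptors can be stored, compared, passed:
descriptors = [type(x) for x in (1, "a", [1, 2])]
assert int in descriptors and str in descriptors and list in descriptors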
Keeping this dichotomy in mind often helps clear up issues when it comes to 'types containing all types'. It is, after all, often straightforward to construct a type-descriptor of all type-descriptors in a manner that can be used to describe itself.

Type vs Type Descriptors

David, my question to you on your statement is: why are we limited to working with type descriptors and not with types? You describe type descriptors as values; why are types not considered values as well, along with characters, code, environments, booleans, numbers, sets, lists, etc? I think this is the part that just doesn't make any sense. For any type that we can imagine, as programmers, we use some form (whether it be a glyph or a series of glyphs or a binary code or electrical signals) to represent an abstract value that we consider to be part of an abstract grouping called a type or class or whatever else someone might want to call it. My question on your comments comes back to this: what is the difference between doing the same for the abstract values in an abstract grouping called Type?

Types as values

There are languages, and logics, where types can be treated as values. However, in such a setting, you cannot check all terms against all types statically. I personally like the separation between static analysis and dynamic execution. (If I want runtime checks, I will write them down as assertions, and keep the boundary between static analysis and dynamic evaluation clear.) There are also logics for programming languages which have a type of types. It just becomes a bit more esoteric, and people might wonder about the assurances given by the type system. There isn't one good answer, because, well, there isn't. Fundamentally. Either you like static checks, and you're stuck with a type system where not all you want is expressible, or you don't care too much about it, and you can consider other languages. Maybe you want to look at Epigram and Sage?

RE: Bruce's question to Me

David, my question to you on your statement is, why are we limited to working with type descriptors and not with types?

Types are not values and have no physical representation in a programming system. This is true in much the same sense that 'correctness' of a program has no physical representation in a programming system. Types are logical entities, not values. However, we can describe points in a type algebra by use of values. The class of values used to describe types may, appropriately, be called 'type descriptors'. Like all other values, type-descriptors possess both logical and physical representation, and thus are subject to computation or manipulation.

You describe type descriptors as values, why are not types considered as values as well along with characters, code, environments, booleans, numbers, sets, lists, etc?

A value is something that can be packaged up into a message and communicated or stored. This makes values immutable, and essentially requires that they be logical views of a representation. Objects are not values, but may be represented by a pair of values (one value to identify the object, one value to represent the state of that object). Environments are not values, either, at least not in the general sense.

For any type that we can imagine, as programmers, we use some form (whether it be a glyph or a series of glyphs or a binary code or electrical signals) to represent an abstract value that we consider to be part of an abstract grouping called a type or class or whatever else someone might want to call it.
Not always. As programmers we often use 'types' that possess no algebraic representation or direct support from the language. You may have heard of this called 'Duck Typing' - if it talks like a duck and reacts like a duck, it's a duck. In Smalltalk, you might have a 'Coordinate' object be any object that responds to 'x' and 'y' messages (a concrete sketch of this appears below).

My question on your comments comes back to this, what is the difference between doing the same for the abstract values in an abstract grouping called Type?

You are free to represent a point in a type-algebra with a value called a type-descriptor. You are even free to create type objects by dividing the type-identifier (e.g. the name) from the point it describes in that space (nominative typing vs. structural typing). But 'types' in PLT are more than descriptions of types; type-descriptors have no meaning except as they are applied in an environment, and environments are not values.

RE: Bruce's question to Me

David, you make the statement

Types are not values and have no physical representation in a programming system. This is true in much the same sense that 'correctness' of a program has no physical representation in a programming system. Types are logical entities, not values.

which can be applied to integers, characters, trees, lists, etc. Your following paragraph then goes on to say

However, we can describe points in a type algebra by use of values. ...

We do exactly the same for any other domain. There is no fundamental difference. You then make a distinction between objects and values and environments and values, and all of this is based (from my perspective) on the proposition that a value has a reality that cannot change, and so anything that can change is not a value. This is one viewpoint certainly, but is it an accurate viewpoint or even a complete viewpoint? Is this paradigm making a distinction where it is not warranted? We are and should be free to incorporate any domain into the language of discourse as first-class values - but there seems to be a resistance to this. This suggests to me that the paradigm that types or objects (or whatever else takes your fancy) are not values is a limitation for type theory, and in the long run, for the boundaries of type checking to be expanded, this paradigm may need to be dispensed with. This itself may be an area worth investigating in relation to type theory if it hasn't already been done. Sorry David, maybe I'm just stupid and just don't have a clue, but the last two paragraphs seem to be trying to make a distinction that isn't there (at least to my mind). Again, what is so special about an environment that it is not a value? Correct me if I am wrong please, but it's like you make a distinction between a list of the integers from 1 to 10 which you cannot change and a list of the integers between 1 and 10 that can be changed. One is a value and one is not. I can do everything to the first that I can do to the second except update it. Please help here, what are you saying precisely? I'll continue looking at the various responses when I get back from mowing my yard.

Types and Type Descriptors

[...] which can be applied to integers, characters, trees, lists, etc.

This is not true. Values are logical views of representations, so every value in any programming system will necessarily have a representation associated with it. Values can be fully understood in terms of a view and a representation. Types, however, can only be understood in terms of their environment.
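An aside to make the duck-typing point concrete, as promised above. A small sketch (Python rather than Smalltalk, purely for illustration): no Coordinate class or interface is declared anywhere; any object answering 'x' and 'y' is accepted, so the 'type' has no representation in the language at all.

# Duck typing sketch: a 'Coordinate' is anything that answers x and y.
# No Coordinate class or interface is declared anywhere.

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class MouseEvent:
    def __init__(self):
        self.x, self.y = 120, 45
        self.button = "left"

def manhattan(coord):
    # Works for any object responding to .x and .y -- the 'type'
    # here exists only in the logic of the program.
    return abs(coord.x) + abs(coord.y)

print(manhattan(Point(3, -4)))   # 7
print(manhattan(MouseEvent()))   # 165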
However, we can describe points in a type algebra by use of values. ...

We do exactly the same for any other domain. There is no fundamental difference.

Yes, like every other domain: we describe 'types' in terms of 'type descriptors', we can describe 'cars' with 'descriptions of cars', etc. But 'descriptions of cars' are not 'cars', and 'type descriptors' are not types. There is no fundamental difference.

You then make a distinction between objects and values and environments and values and all of this is based (from my perspective) on the proposition that a value has a reality that cannot change and so anything that can change is not a value. This is one viewpoint certainly, but is it an accurate viewpoint or even a complete viewpoint. Is this paradigm making a distinction where it is not warranted?

Correct: A value cannot change. This is not the root proposition, of course. The root proposition is that a value can be communicated in physical space (which requires that values be immutable and of finite representation). The 'viewpoint' you describe - that 'anything that can change is not a value' - is accurate, but is obviously incomplete (seeing as it does not indicate what is a value). You denigrate the proposed 'view' of values as a mere paradigm; unfortunately, it is a paradigm enforced by what we know to be physically possible regarding communication and storage of information.

We are and should be free to incorporate any domain into the language of discourse as first class values - but there seems to be a resistance to this.

You can incorporate domain descriptions into a language, but unless the domain is language itself (math, programming constructs, etc.) one usually cannot incorporate domains in a first-class manner.

This suggests to me that the paradigm of types or objects (or whatever else takes your fancy) are not values is a limitation for type theory and in the long run for the boundaries of type checking to be expanded this paradigm may need to be dispensed with.

You will be free to 'dispense' with it when you change the physical reality of what can be communicated and stored within a computation system. While you're at it, I'd like you to investigate making 'correctness' and 'robustness' into first-class language values ;-)

Again what is so special about an environment that it is not a value.

In the general case, the environment for a computation is very mutable. You can package up a 'state' for an environment into a value (such that the environment can be restored to that state), but doing so is similar to packaging up the 'state' of an object. The resulting value is not the environment or object, and (without outside guarantees) might not even reflect the state of the object or environment even moments after being produced.

Correct me if I am wrong please, but it's like you make a distinction between a list of the integers from 1 to 10 which you cannot change and a list of the integers between 1 and 10 that can be changed. One is a value and one is not. I can do everything to the first that I can do to the second except update it.

I'm not certain what you're imagining here. If you are thinking 'list-object' when you say 'list', then a 'list that you cannot change' does not imply 'an unchanging list', because someone other than you might be able to change the list-object. If you aren't thinking 'list-object' when you say 'list', then the latter (a list that can be changed) does not exist.
When you say you can do everything to the first that you can to the second except update it, I suspect that includes observing the first for updates (e.g. via polling or publish-subscribe), so I suspect you refer to two objects, the former of which you simply lack the capability to update. Values possess intrinsic identity, so the value [1,2,3,4,5,6,7,8,9,10] is identical to the current state of both these objects. Not just equivalent to. Identical with. Objects are not values. You really couldn't say the two list objects you mention are 'identical' unless updating the latter list resulted simultaneously in an update to the former list, and observing the former list for changes also resulted in observing the latter list.

Type Is Not A Type

See e.g.: http://portal.acm.org/citation.cfm?id=512671

Or freely accessible: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.7073

One thing you can do to evade the problems discussed in these articles is to introduce an infinite hierarchy of types. This is what Coq does, for example.

Type is a Type

I'll take a look at those pages, but (from the abstract) it only insists the type system is undecidable in the general case.

From the abstract

Mh, from the abstract of the first paper ("Type Is Not a Type"):

We apply Girard's techniques to establish that the type-of-all-types assumption creates serious pathologies from a programming perspective: a system using this assumption is inherently not normalizing, term equality is undecidable, and the resulting theory fails to be a conservative extension of the theory of the underlying base types. The failure of conservative extension means that classical reasoning about programs in such a system is not sound.

Requirements & Design

In what areas do each of the above type systems fail to provide the programmer with the tools (language constructs, abstraction facilities, etc.) to actually deal with real world troubles, problems and conditions?

Since you ask a very general question, I'll give a general answer. One thing I think current systems really fail to support are requirements. Software is mostly built for a purpose. The purpose is translated to requirements (hard and soft, performance and interfaces, concurrency, etc., etc.). You can't normally guarantee, or even discuss, that specific pieces of software originate from design decisions, specific requirements, or some global purpose. If there is one stumbling block to the development of good software then I think this is it.

Requirements & Design

Marco, it is a general question and thank you for your response. Looking at requirements and design, this is an area where much needs to be done to even get decent documentation, let alone anything else. What thoughts have you had about where type systems may be able to be of benefit here, or are they unable to be of any benefit in their current manifestations? I ask since my original question is about the limitations of type systems.

No real thoughts

If you want to look at limitations of type systems then you can see that a lot of research goes into building type systems where specific types of requirements can be enforced statically. (Robustness, resource usage, concurrency, etc.)
Like "A Type System for Safe Region-Based Memory Management in Real-Time Java", or "A Path Sensitive Type System for Resource Usage Verification of C Like Languages", can be seen as attempts to bridge the gap between time requirements, resource requirements, and a language, such that these requirements can be statically guaranteed. I would find it interesting to see type systems that model, for instance, failure rates/risk analysis of components. This would be nice on top of, say, an Erlang-like language. At the moment, to me, your question somehow boils down to: For which types of requirements don't we have type systems? [I am being kicked out of the library now. Uh, *click*] [Actually, I didn't find really good examples. I thought it should be rather trivial to encode, say, real-time constraints into types. I am sure I read some stuff... At some time...?]

Garbage collection

Every Tuesday night I manually take the garbage that's accumulated over the week to the street for collection. I consider this a real world problem for which state of the art type systems (AFAIK) offer little or no help.

Garbage Collection

Thank you Matt, I needed that chuckle. For me it is every Thursday night after I have sorted the recyclable from the non-recyclable and placed them into the correct bins. Though in principle, maybe something could be done.

First Class Garbage

Maybe you should make your garbage into a first-class domain value in a language with explicit memory management, that way you could 'delete' it.

First Class Garbage

David, do you have children? If you do, I think that they would find their daddy very funny. I'm still laughing as I write this. It would be a bit way out there for my kids, but I'll keep this in mind all day and when I need a chuckle, I'll reread it again. Hahahahahahahahahahaha - brilliant. Thank you.

[LtU] is the domain of programming-language related humor. I'm afraid that children, if I had any, would probably find such a joke to be mystifying. I'm surprised you found some amusement in that comment, but I do appreciate it.

The liveness of an object might be considered a type attribute

On the other hand, the whole point of liveness analysis is that the type of garbage is irrelevant. :) No, the fact that you have to sort the garbage and haul it to the curb--or even place it within wastebaskets in your house--is manual memory management. The big smelly truck that drives by once a week is analogous to the OS reclaiming a page that the application has marked as free. If true municipal garbage collection were implemented, several times a second garbagemen (or women) would enter your house, place all occupants (and objects) there in suspended animation, read your minds to determine which articles are important and which are not, and remove some fraction of the ones which are no longer desired. (A few undesired objects may be left behind permanently; as long as they don't smell bad this should be of no concern to the occupants, as the amount of such garbage is in most cases bounded.) To you, unwanted objects would merely disappear from perception and memory--as you are suspended during these collection cycles--you wouldn't notice. In some cases, objects in the house might move, as all the garbage is shifted to one room for easier removal. But you wouldn't notice that either, as your memory would be modified so that moved objects would be remembered in their new locations. (If the article was desired but missing, like my wife's car keys :), it would still be present but in another unknown location.)
According to industrial engineering studies of the above procedure, it is speculated that the above procedure would be expedited by having a larger house. Unfortunately, this would not increase the actual living space, as most of the rooms would be cluttered with garbage; and some would not be perceptible to the occupants in any case. Unfortunately, municipal waste collection isn't as sophisticated as computer science, and the above results are all hypothetical. The moral of the story is, you still have to manually sort your trash and haul it to the curb.

Lack of Expressiveness

While interesting as a means of structuring access to structures defined in terms of a set of primitive structures (lists, arrays, dictionaries, etc.), current type systems can't express the sorts of things that you need to express when writing things like databases and operating systems. In those domains the exact layout of data in memory is extremely important, and the lack of a sufficiently expressive type system condemns modern high-level languages to a marginal (ie. "extension language") sort of existence. Examples:

1. Packed, varying data structures.
2. Graphs built with self-relative or base-relative offsets.
3. Compound objects - a graph allocated in a contiguous subheap, possibly with external references.

Until we have type systems that can tackle these sorts of problems, high-level languages will continue to be C programs.

BitC

We spend plenty of time talking about BitC.

Beyond C?

BitC is indeed interesting, but I don't believe its type system goes beyond that of C.

Not clear what you mean

The BitC type system incorporates type classes, parametric polymorphism, and effect types, which is a big step up from C, so presumably you have something else in mind. Can you say what that might be?

Some Examples

Most of the type structures you mention seem to be built on top of what C provides rather than extending the basic vocabulary. Here are some examples of the sorts of memory structures I'd like to describe:

1. contiguous varying structures

Basically a sequence of (length, variable-value) pairs packed together. You locate the Nth field by parsing and skipping the preceding fields. This is a common implementation for a relational row. Example: field 1 contains "foo", field 2 is null, field 3 is "bar"; the bytes look like this:

4 f o o 0 0 4 b a r 0

assuming field lengths fit in a single byte. For extra credit, a bit ladder. Define utf8 as a type.

2. same with null bitmap

The preceding structure represents null as a (length, value) pair with a length of 0. If compression is an issue, you might prefix the structure from #1 with a bitmap with one bit for each field. Only those fields for which a bit is set are present and non-null.

3. offsets, indirect lengths

A bucket page for a hash table might be represented as a fixed-size page which starts with a variable array of entries. Each entry consists of two (length, offset) pairs, one describing the key, one describing the value. The offset is relative to the start of the page and the length is the number of bytes found at that offset. Key capabilities are the use of (small) offsets instead of pointers and storing length separately from the data it describes. Commonly, we have a single length which describes multiple data, arrays of descriptors, what have you. The motivation is minimizing disk writes: generally one update to the hash table touches 1 bucket.

4. self-relative offsets

For any sort of shared graph (ie.
in a file or a network buffer, mapped/read into an arbitrary location), a self-relative offset is a natural representation for a pointer. The desire is to read or map a contiguous expanse of data and have immediate (ie. O(1)) first-class access to it.

5. sub-heaps

Imagine a hash table which stores values which are themselves trees. The trees need to be contiguous (and compact), so they are allocated in their own sub-heaps. Obviously, the implementation of "heap" is beyond the scope of the type system, but things like transitive closure are within it. This is a generalization of cdr-coding. For example, to send a chunk of graph over the network, you might define a transitive closure, apply this to an unconstrained graph in the heap, producing a contiguous graph in a buffer which could then be sent to another VM and used immediately.

Ah. I see.

Some of this can be expressed with low-level dependent types, which we are considering. Some, of course, can't. In any case, I now think I understand the kind of thing you mean -- or at least one of the kinds. Thanks.

While the ability to handle

While the ability to handle memory layout is critical for integration with hardware (memory mapped IO, BIOS bootstrap, etc.), I don't believe explicit support for memory layout is as critical to operating systems as you imply. There is a lot of evidence to suggest that the vast majority of any operating system can be written in a higher-level language. I'm very much of the opinion that expressing data layout in memory should not be a task associated directly with types. Instead: specification of data layout should (a) be optional (since we don't care most of the time, even in an OS), (b) be expressed independently of type information, and (c) support automatic translation between representations of the same information. I suspect that declarative metaprogramming for specification of data layout for compilers (possibly specified as part of driver descriptions) will serve much of what we need. Some sort of codec-driven I/O (codecs being specified and applied separately from types) will likely serve the remainder of the need.

Can you cite even one credible piece of evidence supporting your assertion about memory layout in operating systems? By "credible", I mean that (a) an actual experiment using a real implementation of a serious (as opposed to research toy) operating system has been constructed in the manner you suggest, and (b) credible performance metrics using real-world workloads (again as opposed to toy benchmarks) yield a measured performance overhead that is 5% or less. It could be that there is evidence out there that I have failed to notice, but this is an area that I try to track closely. I am aware of many indicative efforts: works that seem to suggest that more credible evaluation is warranted. I am not aware of any such credible evaluation actually having been performed. I think that there are places where compiler-driven memory layout looks very promising, and there is even some work suggesting that lazy functional languages can do much better than C/C++ in some domains. But from experience and long study, I'm very skeptical that operating systems or databases are among those domains.

Can you cite even one credible piece of evidence supporting your assertion about memory layout in operating systems?

Certainly.
I can point at any number of other large compiled software projects, written in Haskell or OCaml for example, where performance is quite reasonable despite not using programmer-specified memory layouts. I can further point out that the majority of code in, say, the Linux operating system does not rely on having a particular memory layout.

By "credible", I mean that (a) an actual experiment using a real implementation of a serious (as opposed to research toy) operating system has been constructed in the manner you suggest, and (b) credible performance metrics using real-world workloads (again as opposed to toy benchmarks) yield a measured performance overhead that is 5% or less.

I'm not particularly concerned about your personal definition of "credible", especially if it's going to lead to an argument over a definition of "research toy". I can only repeat something Tim Sweeney has said regarding video games: I'd gladly trade 10% overhead for 10% productivity. I'd trade more than that for the ability to break an OS down into services such that I don't need to distinguish between 'kernel' and 'application'. I think we too often trade a local improvement in the OS for a loss of reliability, security, and eventually speed or simplicity in user-space. Since you're wanting your baby, BitC, to compete as a systems language (for Coyotos), I can understand you feeling a bit defensive about claims that imply some of your fundamental, language-driving decisions might not be the best ones long-term. In any case, my assertion above isn't that programmers shouldn't have control over memory layout. It is that such control should be optional, and should be applied separately from the type system. As a programmer, control over representation doesn't bother me when I want it... but it bothers me a lot when it means a lot of my code becomes hand-translations between physical representations of the same information.

It could be that there is evidence out there that I have failed to notice, but this is an area that I try to track closely. I am aware of many indicative efforts: works that seem to suggest that more credible evaluation is warranted. I am not aware of any such credible evaluation actually having been performed.

The codec approach isn't a logic-driven approach; it simply says: "this input/output will be represented with that structure, and here are the encode/decode functions to translate between the two". An optimizer can decide whether to perform a translation, specialize the code receiving that representation, or pass around a virtual (decoder/structure) pair like polymorphic data. It hasn't seen much use in operating systems as of yet. The declarative approach to memory layout is still early in its application, but looks very promising.

I think that there are places where compiler-driven memory layout looks very promising, and there is even some work suggesting that lazy functional languages can do much better than C/C++ in some domains. But from experience and long study, I'm very skeptical that operating systems or databases are among those domains.

I didn't suggest lazy functional languages for OS use. I think whatever language we use for operating systems 'of the future' will need to support a powerful and composable concurrency model where reliability and productivity are achieved... possibly at what you'd consider a magnificent expense in terms of efficiency (e.g. doubling of temporal costs).
Any sort of transaction support pretty much nullifies whatever benefits you'd hope to achieve by favoring mutable memory, which in turn diminishes the benefits of controlling memory layout. Further, I am convinced that support for transactions (as per software transactional memory or a transactional actor model) is the only viable (composable, reliable, safely allowing for priority-driven preemption, etc.) approach to programming large massively concurrent systems - including operating systems. We'll earn the speed back by using a bunch of CPU cores. We'll gain productivity benefits by avoiding lock management (locks are too challenging to get right, especially with a modular system) and through a reduced-size component vocabulary (because transactions allow complex negotiations with simple vocabularies, as opposed to having ever-larger vocabularies so that whole negotiations can be crammed into one message).

What is striking here is that you once again allege to have citations, but you don't actually give any. Until you do, you are engaged in hand waving and strong assertion. Which is fine, but let me go put on my hip waders... Just to be clear, I'm very eager to see a credible evaluation! Concerning research toys, I'm not interested in discrediting valid comparisons. But we all know that the devil is in the last 10%, and I get really annoyed when people implement the low-hanging 90% of something in a safe language, measure it with a home-brewed and absurdly non-representative benchmark, and then declare victory for their pet position. You should too! I understand the limits of research budgets, but the best that can be achieved with this type of measurement is an indication that more careful measurement is in order. I consider that a very valuable research result, but it is completely improper to evaluate a favored case and then claim a general result.

In any case, my assertion above isn't that programmers shouldn't have control over memory layout. It is that such control should be optional, and should be applied separately from the type system. As a programmer, control over representation doesn't bother me when I want it... but it bothers me a lot when it means a lot of my code becomes hand-translations between physical representations of the same information.

I think that having a compiler do layout optimization is a marvelous thing, and I certainly don't intend to reject that possibility in general. What I insist on is the ability to specify. But I do believe that advocates of compiler-driven layout tend to neglect the issue that compilers are just as bug-ridden as anything else, and that assurance is not generally improved by adding magic to the compiler. Trading 10% performance for 10% productivity is a straw dog. So far as I know, that trade has never been observed in the wild. That's why I want a citation! And in all seriousness, what is your view on the tradeoff between assurance and complexity in compilers?

you once again allege to have citations, but you don't actually give any

In my original statement I say "there is a lot of evidence that the vast majority of an OS can be written in a HLL". By "the vast majority" I mean "stuff other than memory-mapped IO and HW drivers, passwords and private data, and a few other things that probably need special consideration in the language". By "a lot of evidence" I mean "pretty much every large compiled software project out there". Those achieve what I consider very reasonable performance...
often no more than 30% slower than C, and rarely but occasionally faster than C or much smaller than C. And this with only a fraction of the effort spent optimizing the code. Contrary to your assertions that C+5% is the mark to achieve, I don't believe this speed loss is a big deal... especially considering that, should we ever support transactions or versioning, most of the speed benefits you're currently seeing from fine-grained 'mutable' memory will evaporate. But I'll admit that I don't have any citations that match your definition of 'credible'. What I consider reasonable performance vs. what you consider reasonable performance will probably have us bashing heads with one another. In practice I use the codec approach on a regular basis, albeit implemented using value-objects in C++, to very good effect. It provides a logical view of a data representation with a simple translation at the periphery. I've applied it in system software often enough, albeit not to a full operating system. The declarative metaprogramming approach is something I've only seen done in a toy example by other people. What I saw was very promising, but far too early to call anything but a toy, so you're free to ignore my handwaving on that subject.

We all know that the devil is in the last 10%, and I get really annoyed when people implement the low-hanging 90% of something in a safe language, measure it with a home-brewed and absurdly non-representative benchmark, and then declare victory for their pet position. You should too!

Agreed. OTOH, I get annoyed when people attribute the success of a language like C to a particular property of it without demonstrating that said property is the critical one. If someone says: "memory layout is critical to success" then claims as evidence that "no language that doesn't support memory layout has achieved mainstream success", I feel a great deal like replying: "Then, by that reasoning, is the use of semicolons critical to success as well?" I understand you're aiming to support explicit memory layout because you associate memory layout with approaches that have succeeded before. But, as I said, I'm unconvinced it is as critical to success as has been asserted on this page and elsewhere, and I've written enough RAID drivers to believe I can claim that attempting to thread particular representations through code (which must be specialized) and so on becomes a major productivity hit (due to code specialization and end-user translation efforts)... unless you apply something like the codec approach. And, as I noted, mutable memory vs. transactions: I've made a choice, and I am fully convinced it will work for operating systems. Perhaps I'll prove it at some point.

I think that having a compiler do layout optimization is a marvelous thing, and I certainly don't intend to reject that possibility in general. What I insist on is the ability to specify.

I've no objection to 'ability to specify' at critical points (I/O especially) so long as it doesn't come along with 'requirement to specify', such as it does if representation is part of the type system.

But I do believe that advocates of compiler-driven layout tend to neglect the issue that compilers are just as bug-ridden as anything else, and that assurance is not generally improved by adding magic to the compiler.

I'm all for the ability to inject extra post-parse pre-compile stages into a program (i.e. for plug-in type systems, optimizers, etc.) to help one achieve a little extra 'magic' at need.
It's just that sort of thing that can make declarative approaches to memory layout quite practical - i.e. the language needn't start with such a capability; it can be tacked on and updated in its own library.

And in all seriousness, what is your view on the tradeoff between assurance and complexity in compilers?

One can often achieve greater assurance by reducing the complexity of compilers. Simpler systems are easier to verify. Am I right to suspect you speak of the tradeoff between guaranteeing the translation from source is correct and achieving optimizations at the cost of greater complexity? After having worked with extensible languages, I'm of the opinion that assurance is a better default choice, especially when aiming to achieve safety or security properties as a critical part of the language. I'm tempted to assert that the compiler should be able to provide a proof that the code was translated correctly, and based on this article that demand doesn't seem to run too contrary to optimization.

types not semicolons

I get annoyed when people attribute the success of a language like C to a particular property of it without demonstrating that said property is the critical one. If someone says: "memory layout is critical to success" then claims as evidence that "no language that doesn't support memory layout has achieved mainstream success", I feel a great deal like replying: "Then, by that reasoning, is the use of semicolons critical to success as well?"

Good grief! I'm not drawing a statistical correlation between semi-colons and system programming, I'm pointing out that one of the things that I commonly do when system programming is far more difficult in a high-level language with a strict type system than in C with a permissive type system. Neither language provides support for what I need, but C allows it while higher-level languages actively oppose it. This bears very simply and directly on the failure of HLLs in this domain and seems to be exactly the sort of thing the original poster was asking about.

types not representation, either

I understand why data representation is often critical for system programming (HW integration/mmio, FFIs, certain security concerns, and various hand-optimizations). I also do system programming. But I still feel you misattribute it. It isn't the "permissive" nature of C's type-system that is helping you out. It's the "prescriptive" nature. Ignoring the nasty issues regarding integers of implementation-defined width, types in C determine memory layout. And it isn't the "strict" nature of HLL type-systems that hinders you. It is, rather, the inability to specify memory layout. You can't even specify a bitfield to be a contiguous block of memory, which you need to be able to do (at prescribed addresses) to support memory-mapped IO. Data layout doesn't need to be a type-system issue. Why not allow representation to be specified declaratively? It is often the case that a single data layout will support a broader spectrum of values than the current type of a variable, so why not allow types and representations to be specified independently? It is also often the case that a data layout will support a narrower range of values than the type of a value (e.g. an int32 representation doesn't carry all integers), so I suspect it would be useful to formally recognize such distinctions and create an 'Int32' type for modulo 2^32 arithmetic, and have the 'Int' type result in a runtime exception when 2^31 is shoved into a 32-bit signed integer box. Why not?
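A minimal sketch of that Int vs. Int32 distinction (Python used purely as illustration; Int32Box and add_mod32 are invented names): one type gives honest modulo 2^32 arithmetic by definition, while the other treats 32 bits as a mere representation of unbounded integers and faults when the representation cannot carry the value.

# Sketch: logical type vs. 32-bit representation (illustrative only).

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def add_mod32(a, b):
    # 'Int32' behavior: arithmetic is modulo 2^32 by definition.
    return ((a + b - INT32_MIN) % 2**32) + INT32_MIN

class Int32Box:
    # 'Int' behavior: the box is only a representation; a value the
    # representation cannot carry is an error, not a silent wrap.
    def __init__(self, value):
        if not (INT32_MIN <= value <= INT32_MAX):
            raise OverflowError("value does not fit in 32 bits")
        self.value = value

print(add_mod32(INT32_MAX, 1))     # -2147483648: wraps, as specified
try:
    Int32Box(2**31)                # an Int the representation can't hold
except OverflowError as e:
    print("runtime exception:", e)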
It is to this question - why not specify types and representations independently? - that your answer is a bit like correlating semicolons and system programming. You say: but it hasn't been done that way before. Well, that's fine, if it's true. You know the C approach to memory layout works because it's been done and succeeded before. Even elephants tend to tread in the footprints of the elephants before them - it's solid ground. No... wait... you said: but that approach hasn't achieved "mainstream success" before. Oh. Well, clearly, mainstream success is the best measure of technical merit. I should have known. But if one is going to suffer a 10 year "research toy" phase in something like Coyotos, which is ultimately going to become yet another 'lessons learned' project that can't compete with worse-is-better systems like the future Windows 9 and Linux 2.8, it isn't unreasonable to try a few things differently, if only to achieve a few more 'lessons learned'; taking that sort of risk is the only way to have any hope of achieving "mainstream success". New operating systems won't ever catch up on drivers except by either porting them or finding better ways to write them. If you're trying to find better ways to do everything else, why not drivers?

I'm pointing out that one of the things that I commonly do when system programming is far more difficult in a high-level language with a strict type system than in C with a permissive type system.

Out of curiosity, and recognizing that it may not yet be possible for you to answer, where would you put BitC on the spectrum between "high level language with a strict type system" and "able to do yucky OS stuff"? The proof is in the doing, and there is no proof yet, but I'm curious if anybody else thinks we're anywhere near a sweet spot.

Semicolons rule!

Still no citations... You write:

By "a lot of evidence" I mean "pretty much every large compiled software project out there".

Then you should surely be able to cite at least one without any difficulty.

I can point at any number of other large compiled software projects, written in Haskell or OCaml for example, where performance is quite reasonable despite not using programmer-specified memory layouts.

Respectfully, I don't think you can. Tell you what. Set aside my 5% goal. Can you cite even one example of a quantitative comparative benchmark in which a serious attempt was made to perform an apples-to-apples comparison? I accept that the decision point for speed/productivity/reliability trade-offs may be different for different users. In light of which, I'll settle for defining "credible" as "evaluated in such a way as to compare two functionally comparable software artifacts." The issue that has my knickers in a knot here is things like people comparing Apache to a trivial, single-threaded HTTP server. The fundamental challenge in something like Apache is to deal with internal concurrency, HTTP connection reuse, and CGI and/or other transforms on the delivery path. These motivate most of the complexity of Apache. Getting strong performance on mere file delivery is absolutely trivial. Claiming performance that is comparable to Apache for this task is simply stupid. If your task doesn't require CGI, Apache isn't the right tool to compare against. So when people compare their favorite language by implementing a trivial web server, the honest comparison would be to compare it to an equally simple web server written in C. I have yet to see this done in real publications.
When people like you (or I) make statements about comparative performance, our words carry the weight of our reputations. The damage when you (or I) make sloppy statements is consequently quite high. Given our respective reputations, the damage in your case is rather higher than in mine. :-) In the discussion at hand, I suspect that the only metric you actually have for "good enough" is "in your personal opinion." Which is a fine metric for deciding what you should use, and perhaps even what I should use, but a weak metric by which to evaluate language alternatives. I'm only at odds with compiler-driven memory layout to the extent that I am very concerned about assuring the compiler. That is probably a solvable technical problem, but I do wish that people might include assurance in their trade-off arguments more often. Admittedly, that's a hot button of mine.

Can you cite even one example of a quantitative comparative benchmark in which a serious attempt was made to perform an apples to apples comparison?

Ensemble seems to fit the bill. They had a system originally written in C (Horus), which they then re-implemented in OCaml, and it improved the performance IIRC.

Then you should surely be able to cite at least one without any difficulty.

OR I could just tell you to google 'ocaml performance'. Oh, look, a hit - comparing a purely functional OCaml ray-tracer vs. a mutable-state C++ tracer. I notice you like to create lots of white-papers that you can 'cite', even if all you're doing is speculating on a bootstrap environment. That's your dog, not mine.

If your task doesn't require CGI, Apache isn't the right tool to compare against. So when people compare their favorite language by implementing a trivial web server, the honest comparison would be to compare it to an equally simple web server written in C. I have yet to see this done in real publications.

Agreed, though I've seen it done in publications that you might not consider "real". Similarly, if people implement a web server supporting many connections, that also needs to be compared against an implementation in C on the basis of such things as reliability, resistance to DOS attacks, and performance under massive loads. And so we end up with comparisons like this one, because Apache isn't designed for that level of concurrency, and C doesn't make it easy. Unless your programs give equal priority to concurrency, security, performance, robustness, graceful degradation, etc., the benchmarks will be unfair. The C language and its ilk are well known for giving good 'localized' performance, but I've never been impressed with the cost of that 'local' performance when considering the entire system (runtime libs, separating memory spaces, heavyweight threads, complex security, etc.).

our words carry the weight of our reputations. The damage when you (or I) make sloppy statements is consequently quite high. Given our respective reputations, the damage in your case is rather higher than in mine. :-)

You feel that your reputation is in the dumps? Because mine, as I understand it, is hovering roundabout zero.

I'm only at odds with compiler-driven memory layout to the extent that I am very concerned about assuring the compiler.

Well, you make a trade-off between assurance in one place and assurance everywhere else. If you only care about assuring the compiler, and not everything else, the best way to go about it is to tell everyone that they're going to be programming in, say, the Ook language from now on.
I'm afraid your reference (the ray-tracer) is very misleading. This is a toy ray-tracer with usage application only suited for shoot-out style benchmarking. The real-time ray-tracing community is fairly unanimous in its usage of C++, one reason being the ability to optimize using SIMD instructions and the other being tight control of memory layout (cache-lines, etc.).

Apples to apples was the demand.

The C++ ray-tracer was also a simple toy, just as requested. Based on your reply, a quick look discovered CamlG4 (an OCaml SIMD library) and a doc that I can't link directly entitled 'SIMD Vectorization of Straight Line Code' (Stefan Kral) where people, unhappy with C performance, grabbed themselves a Prolog program and had it compile the assembly for them - exactly the sort of declarative approach I hope we eventually achieve on a regular basis (except I'd like to see it happen without escaping the language environment and without sacrificing security/safety).

Of course

It's not like anyone actually *likes* to mess around with C/C++, is there? We're pretty much all screaming for a better language, but I'm afraid there's not that many alternatives out there.

"Better" is too difficult to measure.

Bling WPF sort of works like this: rather than write HLSL code to express your pixel shader, you write C# code that generates one. The C# code mostly looks like the HLSL code you would have written, with a slight distinction between staged (runs on the GPU) and unstaged (runs during code gen) code.

And if I recall correctly, there are other examples that got really big advantages by exposing concurrency opportunities when running on cell processors, but I don't recall the citation right at the moment.

Optimization Tools

CamlG4 is just a wrapper for calling C routines that run loops over a single SIMD instruction. The overhead for doing this means that your loops must be substantial - i.e. the arrays must be big. Obviously, it would be much better if OCaml supported SIMD (short) vector types and could make use of the SIMD instructions. Stefan Kral's work is very worthwhile, trying to extract SIMD (short) vector parallelism from straight-line code. This is quite useful - but one would wish that compilers could do this themselves, so that one didn't have to rely on code-generation tools.

Various responses

I notice you like to create lots of white-papers that you can 'cite', even if all you're doing is speculating on a bootstrap environment.

It is true that my group produces a lot of white papers. This is because we feel that documenting rationale in depth is an important part of sustainable research, and because specifications can't really be published in conferences. It doesn't seem likely that more comprehensive disclosure is a bad thing in research. But you do make an important point about citations, because people out there do sometimes use citations of white papers inappropriately. When we cite our own white papers, we cite them for what they are: explanations of what we had in mind or how a design works. Unless those papers incorporate appropriate evaluation (which may be qualitative or quantitative, according to the claim), we do not cite them as any sort of evidence that our ideas are correct.

And so we end up with comparisons like [YAWS vs. Apache] because apache isn't designed for that level of concurrency, and C doesn't make it easy.
Perhaps unintentionally, you are using selective data to support your presuppositions. The method of the experiment described there definitely does not represent any typical usage scenario or purport to be comparable to any standard reference benchmark. Selective benchmarks can prove anything, so when taken in isolation this measurement does not support any credibly generalizable conclusion. Because the experimental method is described in insufficient detail to permit reproduction, it isn't even a publicly useful point experiment. The author acknowledges in the "comments" section that their attributed rationale is speculation. That speculation may or may not be informed. From the description, which lacks any supporting rationale, I can't tell and neither can you. From what is reported, we have absolutely no idea whether the result is a language result, an OS result, or simply a bug in one implementation or the other. A proper evaluation would have taken this conjecture and tested it to determine whether it accounts for what is happening, and either confirmed or refined the conjecture as appropriate. From what I do know about other measurements of the underlying network implementation in Linux, I'm currently leaning toward "bug", but I'd be quite happy to be wrong. Given just how remarkable the result seems to be, and the fact that the group is clearly a research group that engages in research publication, it is surprising that no followup has occurred. Perhaps it is simply that the paper isn't published yet. So on the whole, your example is a perfect illustration of my point about non-credible benchmarks. No systematic evaluation has been performed here, but the existence of a single, unqualified point experiment leads insufficiently critical readers to infer grand support for their pet views. It does not follow that your views are wrong; merely that they are as yet unsubstantiated. And as I have made clear elsewhere, I would be delighted if your supposition were true. But I want to see a credible demonstration before I jump off of technologies that I know will solve my problem.

...you make a trade off between assurance in one place and assurance everywhere else...

That is an interesting conjecture. In this case, we have two cases that involve qualitatively different changes. I agree that reducing the specification burden on general programs should simplify them, and should consequently improve our confidence in those programs — and simultaneously our ability to formally analyze them. That would be very valuable. If we can get that, and at the same time establish formally that the automated layout mechanisms of our magic compiler are correct, then that would be very powerful. For a great many applications, that would be a very substantial improvement, and I have never intended to say otherwise. But simplicity (and its associated confidence) that comes at the cost of violating functional requirements is not an improvement, and there really do exist applications that have tight, constant-bound space requirements. And not just arbitrary bounds. Most critical applications must be shown to execute correctly within a specified constant amount of memory. For such applications, layout optimization can be fatal (to humans, and I mean that literally). Unless the effect of the optimization on the memory requirements of the program can be strictly and carefully quantified, the optimization must be disallowed. In these applications, predictability is more important than performance.
The "catch" is that these applications also tend to have challenging performance requirements, and one of the few tools that we currently have to give us predictable performance is layout control. Your supposition seems to be that conflating layout control with type increases program complexity, and therefore lowers the confidence that we should have in those programs. That supposition is both plausible and empirically testable, but I'm not aware of published empirical results. I would expect such evaluations, if any, to have been done in the software engineering community, most likely in the context of aerospace or aeronautical engineering. My [unvalidated] expectation is that the effect is less pronounced than you believe. I certainly don't believe that mere boxing or unboxing of otherwise identical data structures has much, if any, impact. In each case the confidence we have (if any) derives from type information, and our ability to do type analysis for both kinds of structures is comparably complete and comparably credible. The more interesting conjecture here would seem to be tied to mutability, which allows us to build (for example) qualitatively different kinds of data structures. I do not, for example, know of a fixed-space functional AVL tree implementation. In critical systems, the mitigating factor is that use of complex data structures is viewed with overriding suspicion, and subjected to either extensive test or formal verification or both. Because such strong techniques are applied, it is difficult to know how the comparative assurance results in the two cases actually play out. For general purpose applications, my basic belief is that mutation and mutation-dependent data structures are often a bad idea, but that they are not always avoidable. This is the heart of why BitC takes a straddling position on this issue. But you'll also see, if you examine the language, that the design bias is in favor of immutability, because that is the default in the absence of contrary declaration.

Re: Yaws vs. Apache

Perhaps unintentionally, you are using selective data to support your presuppositions. ...

You do mean my presupposition that "Unless your programs give equal priority to concurrency, security, performance, robustness, graceful degradation, etc. the benchmarks will be unfair", right? Right?

Not quite. I meant the seeming implication that YAWS outperforms Apache in some interesting way. If you intended that link as an example of how not to do comparison, then I completely misread what you were saying. Even on the basis of a straight concurrency comparison, that data point is so inadequately described and so completely non-representative of either normal-case or standard benchmark-case use that it isn't meaningful. Or if it is meaningful, we can't figure out how, because the measurement isn't described well enough to determine what is (allegedly) being measured.

Your supposition seems to be that conflating layout control with type increases program complexity, and therefore lowers the confidence that we should have in those programs.

No, this doesn't have to do with confidence in programs. It has to do with separation of concerns. Conflating layout control with type reduces program flexibility - the ability to change layout issues independently of changing types.
Said conflation may also easily hinder programmer productivity by requiring programmers to specialize code to a representation, though you've bought productivity back in BitC via use of type classes. Anyhow, you make a lot of assertions about hard memory limits and realtime constraints (I'm familiar with those, having done a summer internship helping certify code for airplane touch-displays). Those issues are important to me, too... which is why I've not been saying we should leave such issues entirely up to the compiler. But, whereas you've been pursuing approaches where programmers can guarantee constant memory by giving programmers more control over the implementation details of the program, I've been pursuing (... okay... I'll correct that to 'pondering and researching') approaches where one declares such constraints at higher-level computation spaces, and supports a declarative reduction of code to meet these goals via logic programming, with search strategies supplied by programmer annotations and suggestions from expert systems. What I want in the end is for the compiler to hand me both a product and a formal proof that the specified properties are correct (assuming that the specified axioms in its hardware database are also correct). And I'm rather convinced that the correct approach to this is to shift certain language priorities around: give the programmers less 'absolute control' and more 'soft control', allow programmers to make 'suggestions' as to memory layout such that these suggestions can be used to reduce search costs (search being the time spent bumbling about heuristically looking for possible answers), and make program specifications ... more declarative. Which is what I've been saying all along. Not that I can provide you citations regarding this that will make it available for you today or in mainstream projects. I can only tell you that researching what is being done with those "research toy" systems in 'Declarative Metaprogramming' is quite a ride. BitC will work, I agree. But it feels like a kludge. I just want something more than that. In any case, you continue your approach and I'll continue mine. Perhaps, ten years from now, you'll appreciate my efforts and I'll appreciate yours.

BitC will work, I agree. But it feels like a kludge. I just want something more than that.

From your tone, this may surprise you, but so do I. The problem from my perspective is that I think "something more than that" is still 10 years of research and another 5 years of integration away, and I'm not content to wait. BitC is an interim solution; nothing more. Of course, so was C. As to appreciating efforts, I think that you have taken my comments here wrongly. I do appreciate the ideas you are putting forward, and it would be marvelous if they pan out. I have articulated some valid concerns about assurance, but those should be taken under the heading of "issues not to forget" as opposed to "fundamental objections". Where I have been very hard on you is in demanding that you substantiate claims. That's important whatever your approach is, and whatever the context in which you do your work. Unsubstantiated claims undermine your credibility when you try to pitch your work, but I actually think that is a secondary concern. The main concern is that people who are in the habit of hand-waving very often find themselves believing their own claims (at least operationally). If the claims aren't right, that tends to destroy the effectiveness of their work.
So I'm hard on you in this regard not because I dismiss your work, but because I think it has promise. In order to realize that promise, I think you must abandon habits that lend themselves to self-delusion.

But they are conflated!

Conflating layout control with type reduces program flexibility - the ability to change layout issues independently of changing types

As a practical matter, type and layout are conflated by every HL language implementation. A cons cell has one physical structure; ditto for an array, dictionary, or object. Furthermore, programmers know this, and choose the appropriate representation (list vs. array) for the task at hand. Limiting the choice of data structure simply moves complexity elsewhere. Every language implementation that I've ever looked at has a hard-coded static object system rooted at "struct object" and a much more limited dynamic system built on top of that. Using the high-level language constrains the programmer to use only the notion of indirection made palatable to the type system by the VM. The result is that we get lots of really stupid classes like "list_item" (come on, nothing is a list item) and 20 years of unfulfilled promises that some sort of generic template mechanism will condense all that glop into the single piece of code that would be trivial to write if only we could tell the type system what a list descriptor (offset of next and previous fields) is. This isn't like the transition between assembler and compiled languages. In that case, compiled languages (C mostly) were expressive enough to remove the need for assembler, and very soon thereafter they became good enough that hand-crafting assembly simply wasn't economical. In the realm of data structures, we haven't made any progress in 30 years, and the difference in expressiveness is still so huge that arguing about performance is silly. This is computer programming, so everything can be done in terms of everything else. If you want to limit yourself to arrays, lists and dictionaries as primitive data structures, you certainly can. But, when you get done, are you any closer to the truth? If it "doesn't matter", why work on it?

As a practical matter, type and layout are conflated by every HL language implementation.

Well, for simplicity of implementation it is often the case that an implementation picks a few simple representations, but types are only 'conflated' with their representation if all implementations are required or expected to do so in a consistent manner.

programmers know this, and choose the appropriate representation (list vs. array) for the task at hand

That would be all fine and dandy if such local "task at hand" decision-making were able to consistently accomplish more globally desired computation properties.

Limiting the choice of data structure simply moves complexity elsewhere.

That is true when you're discussing essential complexity. But I suspect there's a great deal of accidental complexity associated with unnecessary choices and choices presented to us in non-uniform manners. We certainly don't need 'int8', 'int16', and their brethren... what we need is the ability to specify a contiguous block of bits of some size and desirable alignment, then 'view' it as an integer. A compiler can easily enough recognize some representation of [bit*32,contiguous,aligned(16bit)] and choose the appropriate hardware words. In any case, I'm not demanding we limit the choice of representations, only that we separate representation from type.
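To make the 'view a block of bits as an integer' idea concrete, here is a minimal Haskell sketch of my own; the Block type and viewU32LE are hypothetical illustrations, not constructs from BitC or any system discussed in this thread:

    import Data.Word (Word8, Word32)
    import Data.Bits (shiftL, (.|.))

    -- Hypothetical: a contiguous, aligned block of bytes, tagged with the
    -- logical type it is to be viewed as (phantom type 'a'). Length and
    -- alignment are assumed to be enforced elsewhere.
    newtype Block a = Block [Word8]

    -- Viewing [bit*32, contiguous] as an unsigned integer, little-endian.
    viewU32LE :: Block Word32 -> Word32
    viewU32LE (Block [b0, b1, b2, b3]) =
          fromIntegral b0
      .|. (fromIntegral b1 `shiftL` 8)
      .|. (fromIntegral b2 `shiftL` 16)
      .|. (fromIntegral b3 `shiftL` 24)
    viewU32LE _ = error "viewU32LE: expected exactly 4 bytes"

The phantom parameter records the logical type while the physical content stays a plain byte block, so the same block could in principle be 'viewed' under other types as well.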
E.g. a collection of codepoints could be represented in UTF-8 and another in UTF-16LE. If the representation is conflated with the type, you might not be able to pass these two collections to the same set of functions, and there'd potentially be a great deal of extra work appending one to the other if the language doesn't support a third representation for collections that operates as a logical append.

Every language implementation that I've ever looked at has a hard coded static object system rooted at "struct object" and a much more limited dynamic system built on top of that.

I've a feeling you'd see some more interesting things if more general-purpose programming languages had a bootstrapped compiler written in the same language. Languages that lack direct support for memory layout would not fall back on the memory layout of other languages; instead, they'd create systems to intelligently choose a layout for a type.

In the realm of data structures, we haven't made any progress in 30 years and the difference in expressiveness is still so huge that arguing about performance is silly.

When it comes to representation for hardware IO, I fully agree that improved representation over that of C is fully warranted. I do a lot of network IO stuff. I'd like to be able to express packet headers, including recognizing extended packet headers, variable-width fields, etc. But I digress. My statement was that we shouldn't conflate representation with types, not that we shouldn't support expression of representation issues.

This is computer programming so everything can be done in terms of everything else. If you want to limit yourself to arrays, lists and dictionaries as primitive data structures, you certainly can. But, when you get done, are you any closer to the truth? If it "doesn't matter" why work on it?

Indeed. So long as you're willing to live in a Turing tarpit and sacrifice non-functional properties such as performance, error detection, reliability, security, etc., then everything (all 'functional' things) can be done in terms of everything else. The goal of a language designer is to provide just enough features to achieve useful non-functional properties while constraining oneself enough to simplify implementation and reduce accidental complexity. This doesn't get one "closer to the truth", but it still matters. Language design isn't about truth; it's about utility.

For me, this is all rather vague. A concrete example of what you have in mind might be helpful. Concerning compiler bootstrap and memory layout, the sheer number of self-hosting compilers indicates that the empirical evidence is against you. What is it, exactly, that you have in mind here?

This is all rather vague. A concrete example of what you have in mind might be helpful. What is it, exactly, that you have in mind here?

I'm responding to vague accusations. Pardon me if I must provide broad answers.

Concerning compiler bootstrap and memory layout, the sheer number of self-hosting compilers indicates that the empirical evidence is against you.

Scott was describing language implementations as typically having a fixed 'struct object' representation with slight variations. As I understand it, he is saying: 'Everything is a __fill_in_the_blank__' is how most HL languages implement structures, where the 'blank' might be filled with such things as CONS cells, Lua tables, etc. And due to this common phenomenon he stated that types and representation are 'conflated'.
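For reference, the codepoint-collection example at the top of this exchange might look like the following minimal Haskell sketch (my own illustration; the decoders are toys handling only the ASCII and surrogate-free subsets, to keep the sketch short):

    import Data.Word (Word8, Word16)

    -- Two physical representations of "a collection of codepoints".
    newtype Utf8Text    = Utf8Text    [Word8]   -- raw UTF-8 bytes
    newtype Utf16LEText = Utf16LEText [Word16]  -- raw UTF-16LE code units

    -- The logical type: anything viewable as a codepoint sequence.
    class CodepointSeq t where
      codepoints :: t -> [Char]

    instance CodepointSeq Utf8Text where
      codepoints (Utf8Text bs) = map (toEnum . fromIntegral) bs    -- toy: ASCII only
    instance CodepointSeq Utf16LEText where
      codepoints (Utf16LEText ws) = map (toEnum . fromIntegral) ws -- toy: no surrogate pairs

    -- Functions written against the logical type accept either representation.
    countCodepoints :: CodepointSeq t => t -> Int
    countCodepoints = length . codepoints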
A simple concrete counter-example to Scott's claim would be the 'Integer' implementations in most languages that support arbitrary-sized integrals and automatically change representations. But I agree this doesn't represent the sort of systematic differentiation that I was describing as possible. A simple example of manipulating representation for the purpose of optimization is the sort of automatic currying and uncurrying done to optimize application of functions to tuples - a single type with two different representations based on local purpose. I don't imagine many of those 'sheer numbers' you mention actually meet the criterion I specified: a compiler written in and for an HLL where memory layout can't be explicitly specified. Mostly I said this to implicitly exclude interpreted languages and to avoid the condition where programmers lazily fall back on some sort of 'struct' provided by an implementation LLL (e.g. if you compile to C or from C, you're more likely to fall back on the limitations of the design decisions of C). I probably should have additionally specified a language where some sort of type or dataflow analysis is already performed for some reason. Without knowledge of dataflow, one cannot safely perform optimizations by operating on local representations; one-repr-per-type is probably too high a granularity to benefit much from differentiation (though I've heard of avoiding 'every tagged union value is a cons cell' by folding certain tags into their pointers). Anyhow, I'll try to work up a set of concrete example problems... say, having some initial data (network packets, file input, strings) of known representations on the system IO periphery, then taking these known external representations into the computations.

Type != Representation

I have to say that I think this idea deserves merit. A pair or a record in my mind is a logical grouping of data; it is an interface over some sort of physical representation. In most cases, the code shouldn't have to care if the data is adjacent and/or aligned or not. I say most - because at some level, the compiler needs to know about locations in memory, and several optimizations would be impossible without this knowledge. High-level languages seem to operate on the logical level, which is good enough for most code - but when you need to concern yourself with physical representation, there is usually not much you can do. And that's when the only option is to use C, C++ or assembly. Is an object representation specification also a type? Can we not just include representations as a kind of type? Can representation specifications be dynamic? How should we "type" systems with representation specifications?

Separating type and representation is an interesting idea. From a high level, this requires specifying injection and projection functions to/from some standard layout object that the compiler understands. Using said functions, the compiler can then automatically transform between type-equivalent but representation-differing data.

Codecs

That [edit: minus the 'official layout the compiler understands'] is essentially the essence of codecs: an injection function (encodec) and projection function (decodec) combined with other relevant information.
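A minimal Haskell rendering of that injection/projection pairing, assuming nothing beyond what was just said (the names Codec, inject, project, and stack are hypothetical):

    -- An injection ("encodec") and projection ("decodec") bundled together:
    -- values of logical type 't' carried in physical representation 'r'.
    data Codec t r = Codec
      { inject  :: t -> r   -- encode: logical value to representation
      , project :: r -> t   -- decode: representation to logical value
      }

    -- Codecs compose: type A represented in terms of type B, represented
    -- in turn as some lower-level carrier.
    stack :: Codec a b -> Codec b c -> Codec a c
    stack ab bc = Codec
      { inject  = inject bc . inject ab
      , project = project ab . project bc
      }

The stack combinator anticipates the composition point made below: representations of representations arise by stacking codecs.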
Unfortunately, it takes a rather intelligent system to automatically break down a monolithic encodec or decodec in order to figure out how to:

• perform a persistent update
• perform a mutation update (as an optimization, or for languages that support mutation) - inserts and appends, especially - without converting the whole structure
• automatically identify live memory regions for garbage collection
• read just a small property of a large value without converting the whole structure
• efficiently iterate across structures with variable-sized cells (e.g. a Huffman encoding or UTF-8) without copying the structure
• etc.

Since compilers lack this intelligence, codec approaches to separating type from representation must be specified with several support functions to help out with specific tasks. This set of functions often ends up looking a bit like an abstract base class or a rather complex Haskell type class. Anyhow, codec-driven approaches are one of the two options I mentioned for dividing expression of type-concerns from expression of representation-concerns, useful for when representation (memory-mapped or serialized) needs to be specified on an IO periphery. A compiler using codecs alone is free to convert codecs, specialize functions to particular codecs (effectively the same as specializing for a typeclass), etc. in order to trade off between space and time costs, so another mechanism is needed if programmers also need the compiler to promise realtime or hard memory constraints. (I'm okay with this... expressions of computation concerns should also be separate.) I think it interesting that codecs may generally be composed (type A represented in terms of type B, represented in terms of a block of bits) so long as the derivation of the 'utility' functions automatically follows such composition.

Codec

I just checked Wikipedia: "A codec is a device or computer program capable of encoding and/or decoding a digital data stream or signal. The word codec is a portmanteau of 'compressor-decompressor' or, most commonly, 'coder-decoder'." Since we're not dealing with compression in this context, I'd prefer words such as "layout", emphasizing that we're dealing with memory layout. PKE.

Coder-Decoder

'Coder-decoder' (as 'codec' is "most commonly" used) doesn't imply compression, but does imply a transform in view from a representation to a semantic understanding of it. Also, as I noted, codecs are best applied on the system IO periphery... where the input might be some sort of memory-mapped IO... but could also be serialized or executed by other mechanisms. I think the term entirely appropriate. But I'd rather not battle over definitions in a forum such as this one.

The vast majority of uses of 'codec' are, it seems, in the context of data compression. Speaking of which, here's an interesting challenge for a type system (or a layout system, given the position that types and representations are independent things): develop a type/layout system which can describe an MPEG-2 transport stream. It is permissible to assume that the pointer or whatever to the start of the bitstream is valid (the bitstream can be "unboxed"). To pull this off, the layout system will need to support:

* Structured data packetized into various fixed-sized chunks.
* A way of specifying sentinel values/magic cookies; MPEG is full of these.
* Variable-length entities, with length encoded in various ways (either an end marker, an explicit indication of length in the header, or however-many-instances-can-fit-in-what-is-left-of-a-packet).
* Named algebraic sums (suitable for pattern matching).

Have fun! :)

MPEG-2

This just requires dependent types or a proof environment rich enough to prove the required properties of the representation's semantic function. It's not like you need (or want) a separate "layout language" for this. (And if we're voting, I personally prefer "representation" or "rep" to describe these things. "Layout" brings to mind a limited format descriptor for which this kind of thing would be a problem.)

That [edit: minus the 'official layout the compiler understands'] is essentially the essence of codecs

Well, for codecs the "official layout" is basically a byte array. For the compiler, I was thinking of an algebraic type for describing bit-precise layouts. In any case, upon further thought, I'm not sure I agree that separating types and representations is all that useful. As long as programs are written against abstract types instead of concrete types, we gain the same benefits. The above injection/projection functions are basically injections/projections to/from an abstract type upon which client code depends. All that's needed is to provide bit-level layout controlled by types, and for the language to encourage depending on abstractions instead of concrete types by default (we basically need type constructor polymorphism and easier functor definition and composition). This is currently far too difficult in most functional languages, though Haskell now has views, which might encourage this practice more.

Taking an abstract object...

...and fitting it into a predetermined physical schema is a lot like object serialization/deserialization (or pickling/unpickling, if you prefer). Or, in many cases, like parsing text.

How would it work?

In any case, upon further thought, I'm not sure I agree that separating types and representations is all that useful. As long as programs are written against abstract types instead of concrete types, we gain the same benefits. The above injection/projection functions are basically injections/projections to/from an abstract type upon which client code depends.

Suppose you have a function foo :: Integer -> Integer, and it gives rise to representations foo :: Int8 -> Int16 and also to foo :: MyBigInt -> MyBigInt. Your plan is, if I understand it, that instead of having separate representations of foo which get called appropriately, we will have one polymorphic foo that can instantiate either of those. So it will have some big dependent type that says it works in either of those cases. To make that modular, I think you'll want intersection types and a convenient mechanism to establish the pieces being intersected separately (as in separately compiled). But then how do you select that you want a different (but equally behaving) term used for a computation on a particular representation? For example, restricted to {0,1}, I can replace the function n -> floor((n+1)/2) with the identity. With such a trivial example, you could hope a code generator would make the optimization, but what if it's an optimization that simplifies a term when a certain matrix is symmetric? I think you need some mechanism.
I suppose you could extend your intersection typing mechanism with a mechanism to select terms based on which part of the intersection you're in, and come up with rules for what to do when the cases overlap or you can't figure out which cases apply. But what do you end up with? Does a high-level function that wants to remain completely flexible take a hundred type constructors as parameters, so that it can pass them down through the functions it calls? Right now, I don't see it.

Suppose you have a function foo :: Integer -> Integer, and it gives rise to representations foo :: Int8 -> Int16 and also to foo :: MyBigInt -> MyBigInt. Your plan is, if I understand it, that instead of having separate representations of foo which get called appropriately, we will have one polymorphic foo that can instantiate either of those. [...]

I was thinking along simpler lines. My proposal is essentially "code depends on interfaces, not classes", or for functional abstractions, "most code is defined in a functor (or depends on a first-class module)". This way concrete representations and/or implementations can be swapped out at will. This would seem to achieve exactly the separation of type and representation for most code that David was arguing for, without fundamentally separating type and representation for the small bit of code needed for instantiation. It's really just an abstraction problem.

"code depends on interfaces, not classes"

I agree with this sentiment and have ideas in this direction. But this serves a different role than my formulation of representations does. Representations don't need to implement whole modules. To basically repeat my previous example: you can't represent arbitrary integers with 32-bit words, but there are lots of places where you want to represent a function on integers by a function on 32-bit words. I guess you're just addressing David's proposal where the "codec" is really an isomorphism on the type, but I think that misses many interesting cases. Functorizing code is about maximizing abstraction, whereas representations are about efficient compilation.

Functorizing code is about maximizing abstraction whereas representations are about efficient compilation.

ML modules have a fully static interpretation, so I don't think it gets more efficient than that (assuming you specialize based on that information).

Fair

That comment was more of a personal viewpoint than substantive - you could certainly optimize some things with modules. But you still haven't addressed my point that modules don't do everything you want. Also, the point I made two messages back seems to still apply. How are you going to expose the internal representations and conversions that a function uses? Example:

    foo2 :: Nat -> Nat
    foo2 x = f x + g x + h x

You might potentially want to change representations before or after calling f, g, and h. Are you going to make foo2 parameterized by all of these conversion functions? What about additional conversion functions that might be wanted by f, g, and h?

Basically, I'm suggesting that functors need more pervasive (and easier!) support in a language, so that all code by default would be parameterized by a signature. Depending on a concrete type/representation would be the exception, not the rule. For example, consider the tagless staged interpreters paper, which implements 3 completely separate backends for a given signature, and the client program is simply parameterized by the signature.
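A small Haskell approximation of that 'parameterized by a signature' style, in the tagless-final manner (my sketch; the signature and both backends are invented for illustration):

    -- A signature for natural-number arithmetic; client code is written
    -- against it, not against any concrete representation.
    class NatSig t where
      lit :: Integer -> t
      add :: t -> t -> t
      mul :: t -> t -> t

    -- Client code depends only on the signature.
    poly :: NatSig t => t -> t
    poly x = add (mul x x) (lit 1)

    -- Two swappable "backends".
    newtype Exact = Exact Integer deriving Show
    instance NatSig Exact where
      lit = Exact
      add (Exact a) (Exact b) = Exact (a + b)
      mul (Exact a) (Exact b) = Exact (a * b)

    newtype Approx = Approx Double deriving Show
    instance NatSig Approx where
      lit = Approx . fromIntegral
      add (Approx a) (Approx b) = Approx (a + b)
      mul (Approx a) (Approx b) = Approx (a * b)

    -- e.g. poly (Exact 3) ==> Exact 10, poly (Approx 3) ==> Approx 10.0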
Depending on the intended semantics of Nat, foo2, etc. could begin executing with a native Word unsigned integer type, but when overflow is detected, switch to a BigInt. The Word module would need to be written with this in mind; I'm not suggesting it be automatic -- perhaps this is what you're objecting to? Binary methods might also pose a bit of a problem, since one Nat may be promoted while another need not be. I wonder if there's some overloading technique that can handle these mixed-representation cases.

Basically, I'm suggesting that functors need more pervasive (and easier!) support in a language, so that all code by default would be parameterized by a signature. Depending on a concrete type/representation would be the exception, not the rule.

And I think that's a good idea, but for other reasons. I don't think that really solves the problem I want to solve.

Depending on the intended semantics of Nat, foo2, etc. could begin executing with a native Word unsigned integer type, but when overflow is detected, switch to a BigInt. The Word module would need to be written with this in mind; I'm not suggesting it be automatic -- perhaps this is what you're objecting to?

I'm objecting to the fact that the overflow-checking code is dynamic (with runtime cost), but is needed (assuming you favor type-checking all of this stuff) because some integers are not representable by words and you need to implement the entire integer structure to have a proper functor. [In case it hasn't been clear, a motivating example here is compiling a function Integer -> Integer down to a function, say, Int8 -> Int16, assuming you can demonstrate that the result will indeed fit into an Int16 when the parameter fits into an Int8.]

I'm objecting to the fact that the overflow checking code is dynamic (with runtime cost), but is needed (assuming you favor type checking all of this stuff) because some integers are not representable by words and you need to implement the entire integer structure to have a proper functor.

Did I miss a constraint somewhere that specified dynamic checks weren't allowed? The dynamic check is only necessary because I assumed the most liberal definition of Nat (arbitrary precision). If we were to use a more constrained type, like a number-parameterized type, we could eliminate runtime checks.

Still missing the example

Did I miss a constraint somewhere that specified dynamic checks weren't allowed?

Well, when the stated goal is efficiency you might have inferred it to at least be undesirable. Look at this example:

    square :: Nat -> Nat   -- Arbitrary precision (in general)!
    square n = n*n

    foo :: [Nat]
    foo = map square [1..100]

    -- For *this* usage of square, optimize using the representative
    square :: Int8 -> Int16

You can tell me that you're going to dependently type square, but I'm of the opinion that it's not practical to have precise dependent types everywhere, and so I'm going to counter that such a plan won't scale up.

I must be missing something. A module that performs the above widening arithmetic based on fixed integer types provided by the language is all that's needed here (I assume that fixed-width integer types have two multiplication operations: Int8 -> Int8 -> Int8 for wrapping ops, and Int8 -> Int8 -> Int16 for widening ops). The only difficulty I can see is if you're expecting the compiler to automatically generate this.
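Concretely, the two assumed multiplication operations might look like this in Haskell (a sketch under exactly the assumption just stated; every Int8 product fits in an Int16, so the widening version needs no overflow check):

    import Data.Int (Int8, Int16)

    -- Wrapping multiplication: stays in the representation, may overflow.
    mulWrap :: Int8 -> Int8 -> Int8
    mulWrap = (*)

    -- Widening multiplication: the result type is guaranteed large enough
    -- (8-bit times 8-bit always fits in 16 bits).
    mulWide :: Int8 -> Int8 -> Int16
    mulWide a b = fromIntegral a * fromIntegral b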
This module is only correct according to Nat's semantics for a certain class of programs, but as I said, the user will be specifying the module anyway, so the onus is on him.

This module is only correct according to Nat's semantics for a certain class of programs, but as I said, the user will be specifying the module anyway, so the onus is on him.

Ah, this is where we diverge. I was assuming the module would be dependently typed such that only a type isomorphic to the naturals would work. Otherwise I fear this functorization idea will spread around too much onus.

The stronger typing can be used in those contexts in which assurance is required, hence why I brought up number-parameterized types. But types get quite unwieldy the more of the behaviour you specify, which I suppose is what you were getting at. But consider that Int8 has a widening operation, as does Int16, as does Int32, and eventually we widen into arbitrary-precision integers. We can get far by a judicious choice of base types. Suppose the Nat signature was parameterized with the "larger" type for those widening operations (pseudo-ML):

    type Nat(S : Nat) = struct
      type t
      val (*): t -> t -> t     (* wrapping operation *)
      val (*): t -> t -> S.t   (* widening *)
    end;;

Int8 is parameterized by Int16, which is itself parameterized by Int32, which is parameterized by BigInt. Narrowing operations, like division, can cascade in the reverse direction. We obviously need something a little more flexible than ML modules for this mutual recursion.

The stronger typing can be used in those contexts in which assurance is required

Ah, but continuing my example, to use the optimized representation in 'foo', you would need to refine the type of 'square', which might not be defined in your code. I'm OK with a pay-as-you-go approach that requires the programmer to demonstrate their representation works, but I think requiring all code to be precisely typed just in case someone in the future wants to pass in an optimized representation is unreasonable. Regarding the narrowing and widening idea, I don't see it. Who decides when to use the widening operation? Figuring out what types are sufficient for a representation requires a fine-grained analysis (needing dependent-type-ish capabilities to establish formally). Consider that incrementing a number could require widening the representation, and so if you used a widening version of increment in a loop, you'd be at BigInt in a handful of operations.

Ah, but continuing my example, to use the optimized representation in 'foo', you would need to refine the type of 'square', which might not be defined in your code.

If square depends solely on the Nat signature, then the concrete representation used is not yet selected, so where's the problem?

I think requiring all code be precisely typed just in case someone in the future wants to pass in an optimized representation is unreasonable.

I defer judgment until "precise" is defined. ;-)

Regarding the narrowing and widening idea, I don't see it. Who decides when to use the widening operation?

I confused myself momentarily there, thinking I had a better solution. The widening operation would have to be manually selected.
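For comparison, the cascade sketched in pseudo-ML above can be approximated in Haskell with a type family (my rendering; it captures the widening chain but not the full module-level parameterization):

    {-# LANGUAGE TypeFamilies #-}
    import Data.Int (Int8, Int16, Int32)

    -- Each fixed-width type names the "larger" type its widening ops land
    -- in; the chain terminates in arbitrary precision.
    type family Wider a
    type instance Wider Int8  = Int16
    type instance Wider Int16 = Int32
    type instance Wider Int32 = Integer

    class Widening a where
      wideMul :: a -> a -> Wider a

    instance Widening Int8  where wideMul a b = fromIntegral a * fromIntegral b
    instance Widening Int16 where wideMul a b = fromIntegral a * fromIntegral b
    instance Widening Int32 where wideMul a b = fromIntegral a * fromIntegral b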
In any case, this thread started because I stated that abstracting over representation without separating representation from types is possible already, if we slightly massage existing abstraction techniques with some manual specification (which you would have to do anyway if you're providing a custom representation). Do we at least agree on that?

Ah, but continuing my example, to use the optimized representation in 'foo', you would need to refine the type of 'square', which might not be defined in your code.

If square depends solely on the Nat signature, then the concrete representation used is not yet selected, so where's the problem?

Right, the problem is when Nat has a precise (dependently typed) signature, as would likely be required to establish the correctness of the code using it. Obviously, programming in a style where all of your code is open to arbitrary semantics-changing transformations affords you a bit of freedom in choosing optimizations. But I see this as trading correctness for efficiency. What started me arguing was this:

In any case, upon further thought, I'm not sure I agree that separating types and representations is all that useful. As long as programs are written against abstract types instead of concrete types, we gain the same benefits.

I have yet to see a proposal using functors that I think has the same benefits as what I have in mind.

I have yet to see a proposal using functors that I think has the same benefits as what I have in mind.

Did I miss your proposal in this thread? Or was it the intersection types and/or dependent types?

No, I haven't given too many details - I started out supporting the general idea of "separating type and representation" that David was advocating. The dependent type / intersection type stuff was going down the road of trying to make the functors idea work as a substitute for representations.

Tangent: Haskell does not have views

View patterns are not views.

I see; view patterns are a restricted form of views, due to problems implementing the more general approach. Thanks for the correction!

Control over representation

The way I see it, you primarily need a semantic function from representations to values. Specification of conversions should be possible, but there doesn't need to be a canonical representation for types - failure to unify representations can be an error. I personally think this is a pretty important feature missing from current languages, though I disagree with David on the issue of 'soft control' vs. 'absolute control'. I think the core language tools should provide absolute control over representation. Tools for soft control can be built upon absolute control - not vice versa.

Soft Control, Absolutely

On the issue of 'soft control' vs. 'absolute control': I think the core language tools should provide absolute control over representation. Tools for soft control can be built upon absolute control - not vice versa.

I agree that tools for soft control can be built upon absolute control, and generally not vice versa. But there is a corollary: language tools built for absolute control over implementation details cannot usefully be extended to provide soft or independent control over separate concerns represented in those implementation details. I guess I favor, at least in principle, the route of extensible language tools, even for systems programming. I understand the necessary features are not yet available...
but people are slowly working on that.

Semantics of representation annotations

What I am advancing is that annotations for selecting representations should have well-defined semantics that act to construct a value representing another value in a well-defined way. As these constructed values should be usable in other constructions, they are not mere hints and cannot be ignored. This does not preclude extensibility, because of a fact you mentioned in another post: representations compose. Or, stated another way, representation values can themselves have representations. Thus there does not need to be a "bottom" of primitive types to end the regression. Code can be annotated to choose one representation, and an optimizer can later be directed to choose an optimized representation of that representation. I offer that for many systems programming tasks, one wants the ability to "turn off" the optimizer and use a simple mapping of types to representations. (I'd prefer access to more than a monolithic "compiler", but that's another topic.)

Optimizable implies Soft Control

Code can be annotated to choose one representation, and an optimizer can later be directed to choose an optimized representation of that representation.

If the language is free to optimize a detail that you've written as a programmer, then what you've got as a programmer is what I would call 'soft control'.

As these constructed values should be usable in other constructions, they are not mere hints and cannot be ignored. This does not preclude extensibility because of a fact you mentioned in another post: representations compose. Or, stated another way, representation values can themselves have representations.

It's really the other concerns for which the language can't so readily be extended. You might be able to turn your 'string-on-a-blob' into a 'string-on-a-blob-on-a-rope' (thus representing a representation), but that doesn't mean you'll be able to ignore the 'commandment' to pass a value as a 'string-on-a-blob' if representation concerns are, in fact, expressed as 'absolute'. Regarding extensibility: it's the extension of a language to support other dimensions of concern that will suffer due to 'absolute' control of implementation details in any particular dimension of concerns. I'll accept that effective expression of representation concerns does not preclude the ability to further express even more representation concerns in terms of prior representations. But performance concerns are another issue. Representation affects performance, thus programmers often choose to control representations in order to better control performance. Unfortunately, it is extremely difficult for a programmer of a component to get a 'big picture' view of how a representation will be used within a system... so, by controlling the representation received by a particular function within the system, the programmer might be able to optimize that function at the cost of making it more difficult or more expensive to call that function in the first place. In a system where programmers have absolute control over representation, the only way for them to change global performance costs related to representation is for them to change the representation. I.e. they could decide to change their code so it now says: this function takes a 'string-on-a-blob-on-a-rope' instead of a 'string-on-a-blob'. By doing so they might wisen up a particular codebase... but they still won't possess a global view in which to best make the decision.
Eventually, "that choice is 'good enough'... let's do something more interesting". And that choice probably is 'good enough' until along comes the guy who needs to do some realtime work and needs to re-implement all those functions for another representation. In a system where 'absolute' representation is only an interface issue on the system-IO-perimeter rather than something controlled in the implementation, however, a programmer working on implementation might 'suggest' such strategies such as 'consider string-on-a-blob and string-on-a-blob-on-a-rope' to the compiler, and give ropes and blobs a default place when searching for good ways to represent strings. But, what is ultimately chosen for the implementation won't necessarily be what the programmer suggested... it could depend on a number of strategies, global optimizations, dataflows from system-perimeter-to-perimeter, etc. Thus, these suggestions are 'soft control'. And, in a soft control system, expressions of other concerns, such as whole-system-performance (e.g. for an actor-configuration or a dataflow component), might ultimately eliminate all 'suggestions' made by the programmers during the search to achieve a particular set of goals. Of course, compilers aren't AIs (or shouldn't be). After running out of suggestions, or even before then, compilers would do well to request guided compilation from (and provide feedback to) expert systems and programmers. Rather than waving my hands about sufficiently smart compilers, I'd much prefer a stupid compiler that systematically try suggestions as guided by programmers and external expert systems. (Usefully, expert systems might use feedback and a database to 'learn' how to best compile a project, and be able to export this data for shipment with source code.) An IDE could include some useful visualization tools for searches... I think Alice has done some similar stuff for its logic programming component. "The language" If the language is free to optimize a detail that you've written as a programmer, then what you've got as a programmer is what I would call 'soft control'. One issue is, what's "the language"? If it's just the value semantics and you insist that representations be faithful, then representations aren't a part (or are a noop) of the language. This gets back to my comment about not wanting a monolithic compiler - I think there are a number of reasons to want a separate semantics that govern some rules of compilation. It's under these semantics that representation selection occurs and where I favor absolute control. Unfortunately, it is extremely difficult for a programmer of a component to get a 'big picture' view of how a representation will be used within a system... so, by controlling the representation received by a particular function within the system the programmer might be able to optimize that function at the cost of making it more difficult or more expensive to call that function in the first place. Agreed, which is why optimization work done should be commensurate with your view of the system. A library designer should probably just expose a number of representation options (and maybe provide hints or hueristics as to which to use). 
A library user might either explicitly select which representation he wants (ideally being able to write new representations for values provided by the library), or might tell the system to search for a representation statically (providing more or less help), or might employ some lightweight mechanism to pick representations at runtime.

In a system where programmers have absolute control over representation, the only way for them to change global performance costs related to representation is for them to change the representation.

You mean in a system where programmers only have absolute control.

Of course, compilers aren't AIs (or shouldn't be). After running out of suggestions, or even before then, compilers would do well to request guided compilation from (and provide feedback to) expert systems and programmers. [...]

I largely agree with everything you've written here; I also envision programmers using tools (expert systems, profilers, etc.) to help find optimal representations for their needs. But the programmer's wishes here should be followed, whether they be "use this particular representation" or just "use these heuristics to pick a best representation."

The language is...

One issue is, what's "the language"?

Ideally the same thing as "the operating system" =)

In a system where programmers have absolute control [...]

You mean in a system where programmers only have absolute control.

Nope. But I do mean: "in a system where programmers, or the programmers of any source components they link in, apply absolute control" over an implementation detail... which I believe in practice identifies the same set of systems as "in a system where programmers have absolute control."

But the programmer's wishes here should be followed, whether they be "use this particular representation" or just "use these heuristics to pick a best representation."

Programmers will be providing heuristics and strategies and suggestions for a bunch of different concerns all at once. These concerns will conflict in any non-trivial project, especially those that are expressed locally within the source code rather than less locally as part of the project (which creates association and maintenance problems). I'm cool with a programmer saying: "these are the things I need to guarantee" and "those are the features I want but am willing to compromise on, so here's a few search strategies and some heuristics". But I'm really only cool with this so long as the 'guarantees' are only demanded from a more global view of the system... e.g. a bounded-box computation space with strict definitions of expected inputs, outputs, etc. It's when these 'implementation' guarantees, e.g. of a particular intermediate representation, are embedded (or are even capable of being embedded) in myopic source code that is unaware of the greater scope into which the code shall later be applied that I become extremely skeptical. Perhaps our views are compatible.

Not incompatible

In my view, you should have total control over the artifact you're building.
In what I'm proposing, there isn't a way for a library designer to force anyone else to use a particular representation (short of building as his artifact a C-style library). Values can have multiple representations. If you don't like any of the representations provided by the library designer, build your own.

(which creates association and maintenance problems)

I agree there are problems to be solved here.

More a runtime issue

Scott was describing language implementations as typically having a fixed 'struct object' representation with slight variations. As I understand it, he is saying: 'Everything is a __fill_in_the_blank__' is how most HL languages implement structures...

In my experience this is correct, but it is less a compiler issue than a runtime issue. There is a strong need to keep the core of the garbage collector simple, and this tends to lead to designs in which there is a small number of "core" object layouts known to the collector, from which the compiler composes other representations. While more recent compilers emit code for use by the checker, doing so raises safety-validation concerns. A middle position is what .NET does: emit a checkable object-representation description, but not actual code.

I don't imagine many of those 'sheer numbers' you mention actually meet the criterion I specified of a compiler being written in and for an HLL where memory layout can't be explicitly specified.

I agree, but that wasn't your criterion. You wrote:

I've a feeling you'd see some more interesting things if more general purpose programming languages had a bootstrapped compiler written in the same language.

No mention of memory layout there. In any case, I agree with you, Pal-Kristen, and naasking that separation is an interesting idea. I can see motivating examples, so I don't feel that I really need more of those. What I would find helpful is just an example of the kind of notation you have in mind. Let me return to the topic at hand in a separate response.

Criterion stated in following sentence

I've a feeling you'd see some more interesting things if more general purpose programming languages had a bootstrapped compiler written in the same language. Languages that lack direct support for memory layout would not fall back on the memory layout of other languages; instead, they'd create systems to intelligently choose a layout for a type.

But I can see how the confusion would arise.

There is indirect data on this

The C language and its ilk are well known for giving good 'localized' performance, but I've never been impressed with the cost of that 'local' performance when considering the entire system (runtime libs, separating memory spaces, heavyweight threads, complex security, etc.)

The paper Deconstructing Process Isolation from the Singularity project offers some preliminary direct comparison data on this. I don't recall that they did anything comparable to shared libraries. That actually has pretty significant impact, but it's inconsistent with Dave Tarditi's "tree shaking" tricks. So for the moment, the best data we have is that the techniques are tied for performance. In my opinion, those results tend to support the view that language-based mechanisms will win out in the end, because they present increased opportunities for copy avoidance (via data sharing) and type-based security analysis. Things like that tend to reduce the need to copy for the sake of being safety-conservative, and copies are the ultimate death of systemic performance.
As a mature understanding of type-derived opportunities emerges, I expect language methods to come out ahead.

Actually, it's just that I'm a mugwump

I understand you're aiming to support explicit memory layout because you associate memory layout with approaches that have succeeded before. My real view on memory layout is much more cowardly than this:

• It sounds nice to have, but in some places I still need the ability to do prescriptive layout. I don't think that we have any disagreement on this.
• I am very concerned about assurance. The current complexity of compilers is overwhelming, so I am hesitant about any proposal to add new complexity to compilers. In this case, I'm not up on the literature, so I'm not clear about the degree to which this type of transform is correctness-checkable.
• In high-confidence contexts, it is often required that programs operate in tight, constant-bounded space with bounded memory-management variance. The techniques for memory-layout rewriting that I can imagine (again, not having explored the literature very much) don't seem to lend themselves to this. This leads to a requirement that I be able to turn this class of optimization off for the types of programs that I am focused on, which in turn drives me back to needing relatively rich language support for layout specification.

My view of monads is similarly mugwump-ish. I'm actually convinced that linear typing techniques are quite a good idea. What I'm not convinced about yet is that we know how to do all of the different things that we need to do using these techniques. Until we do, or until we find some more comprehensive approach, I would prefer not to be left in the position where my choice is between unsafe expressiveness and fully safe inexpressiveness. I would prefer to have a language in which I can use the stronger constructs where possible, and fall back to the weaker constructs where I must.

A tricky issue in assurance is compiler independence. It is pragmatically important that a program, when compiled by two different compilers, generate roughly comparable code. This is never perfect, of course, but it is necessary to insulate a critical-system builder from the failure of a compiler vendor. One concern I have about compiler-driven memory layout is that it seems easy for the presence or absence of a single technique to alter performance by a very large multiple. If that is true (is it?), it's a "no go" for critical systems. In our collective haste to focus on performance (my own included), we sometimes lose sight of the importance of behavioral predictability.

The point is to avoid mutation

Any sort of transaction support pretty much nullifies whatever benefits you'd hope to achieve by favoring mutable memory, which in turn diminishes the benefits of controlling memory layout. Further, I am convinced that support for transactions (as per software transactional memory or the transactional actor model) is the only viable (composable, reliable, safely allowing for priority-driven preemption, etc.) approach to programming large massively concurrent systems - including operating systems.

This is exactly backward. The whole point of extending the type system to cover physical memory layout is to avoid mutation. Transforming 20GB of memory-mapped data structure into 40GB of heap so that we may then enjoy the benefits of FP is a non-starter. What is needed is a first-class way to describe that data so that functional programs can access it without consing anything.
Transaction support and how you mutate data is a separate issue, and one that there's little point in addressing until you have a viable answer to reading data.

Further, I am convinced that support for transactions (as per software transactional memory or transactional actor model) is the only viable (composable, reliable, safely allowing for priority-driven preemption, etc.) approach to programming large massively concurrent systems - including operating systems.

You are so convinced, in spite of the fact that people who actually write massively concurrent systems have overwhelmingly chosen a different tool set?

What is needed is a first-class way to describe that data so that functional programs can access it without consing anything.

I agree an approach to achieving that is useful. Not certain it needs to be 'first-class', though.

Transaction support and how you mutate data is a separate issue and one that there's little point in addressing until you have a viable answer to reading data.

Those are only separate issues for the user of the language. For the language designer, input, output, transactions, collection, and protection tend to all tie into a neat little Gordian knot. Or at least you hope it's a neat knot... hairy knots tend to indicate an asymmetric language that might be easier to implement but is probably harder to use.

You are so convinced, in spite of the fact that people who actually write massively concurrent systems have overwhelmingly chosen a different tool set?

Quite so. But not "in spite of" anything. Those who wrote the massively concurrent systems of the past have not truly possessed the opportunity to select transaction support for coordination. To get transactions, they'd need to grab an RDBMS - an approach that is difficult to justify for both its performance hit and the translation effort. I suspect software transactional memory is of significant benefit if one has 'cells' that aren't particularly fine-grained (as per Clojure), but I'm currently pursuing transactions at the level of 'actors' and their creation, destruction, and continuation status. Of course, to avoid confusion, transactions themselves aren't the units of concurrency. But for any operating system, you'll have some unit of concurrency and some need to coordinate them. With a few exceptions (like restricted dataflow programming), all such systems have race conditions, and those that don't have race conditions are usually not the sort that may act as services (i.e. involving service discovery and arbitrary users from the outside). Transactions support ad-hoc negotiations between services and other systems... and greatly simplify certain other OS tasks. Those who wrote massively concurrent systems in the past most often chose something more equivalent to "localized best effort". They might use locks or message passing or rendezvous to increase the granularity of any race conditions, but they by no means avoid them on any larger scale. The results haven't been pretty... but sometimes worse is better. I think a language should aim to do quite a bit better than an OS. I'm reading the BitC spec; I'm a bit curious as to how it handles concurrency (... ah... it handles concurrency with the same panache that C handles concurrency... oh well.)

Umm. No. The point might be to avoid the semantics of mutation.... Any sort of mutation multiplied by billions and billions results in a system which simply doesn't scale. Like it or not, consing is mutation.
The harder you push a pure functional idiom, the less functional the resulting program behaves at the OS level. My interest is in being able to write interesting programs which have large, complex functions that really and truly are functional right down to the bare metal. To do this, I need a more expressive type system than those currently out there.

You have a funny notion of "evidence"

Within a few years of the advent of C, people had written operating systems and databases with it. "Modern" FP languages (Scheme, ML, Haskell, ...) have been around for 20 years, give or take, and have been applied to system programming tasks sporadically with no mainstream success. Memory layout is absolutely critical for optimizing cache and disk interactions.

I don't disagree that memory layout is critical to optimizations and certain hardware IO (including disk interactions, which are often memory-mapped). I disagree only with the notion that programmers need to thread data-representation stuff throughout their programs in order to optimize a few communication details on the system periphery, and I oppose tying data representation directly into the type system. The extra responsibilities in the type system complicate things for the programmers. If you're going to object, then please object to what I'm saying rather than to statements I've never made. Regarding "mainstream success", I can only say that I've no reason to believe, for better or worse, that technical proficiency of a development language is even among the top ten causes for achieving it.

Certainly that would be because C intentionally only includes features that map closely to the hardware. If you back up a bit and consider arbitrary high-level languages - Perl, Ruby, Python - then they were used for quite a lot of production work right from the start. Haskell and ML are clearly different, having taken a long time to get any traction.

There's a big difference between a "production" web application and something like a database or an OS or a language implementation. That functional languages have had little mainstream success in an application domain which consists overwhelmingly of transformations which are perfectly functional in nature (i.e. take this data from SQL and turn it into HTML) is probably indicative of something (a 5-year discussion consisting of attempts to explain monads, perhaps?), but I doubt that it can be laid at the feet of an insufficiently expressive type system. Indeed, all of the HL languages you mention offer pretty comparable control over memory layout.
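As a concrete footnote to the transactions exchange above: the coarse-grained, composable coordination being described looks roughly like this in GHC's Control.Concurrent.STM. This is illustrative only; nothing in the thread commits to Haskell's STM in particular:

    import Control.Concurrent.STM

    -- Coarse-grained "cells" coordinated by transactions: a composable
    -- transfer between two accounts, with no explicit locking.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)        -- blocks and retries if this fails
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      readTVarIO a >>= print           -- 60
      readTVarIO b >>= print           -- 40

The point of interest is composability: two transfers can be combined into one atomic action by sequencing them inside a single atomically block, which is the property lock-based coordination famously lacks.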
http://hidde.com.pl/girls-bed-cbjlt/article.php?id=d9680e-relation-between-rank-and-determinant
Change of basis. Equivalently, a matrix and its transpose span subspaces of the same dimension; in other words, column rank = row rank. Let $H_n(F)$ be the space of $n$-square symmetric matrices over the field $F$. We generalize the main result of M.H. Lim, "A note on the relation between the determinant and the permanent", Linear and Multilinear Algebra, Vol. 7, No. 2 (1979), pp. 145-147. Related work includes Marvin Marcus and Henryk Minc, "On the relation between the determinant and the permanent", Illinois J. Math., Volume 5, Issue 3 (1961), 376-381, and [7] M. Purificação Coelho and M. Antónia Duffner, "On the relation between the determinant and the permanent on symmetric matrices", Linear and Multilinear Algebra 51 (2003), 127-136. There are many different rank functions for matrices over semirings, and their properties and the relationships between them have been much studied (see, e.g., [1-3]); in this paper, we use the ϵ-determinant of Tan [4, 5] to define a new family of rank functions for matrices over semirings.

In linear algebra, the rank of a matrix is the dimension of the vector space generated (or spanned) by its columns. Equivalently, the rank of a matrix $A$ is the number of leading entries in a row-reduced form $R$ for $A$, or the order of the largest square sub-matrix whose determinant is other than 0. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and of the linear transformation encoded by the matrix. The determinant of a square matrix $A$ is denoted $\det A$ (or $|A|$), and $\det A \neq 0$ if and only if $A$ is full rank, i.e. $\operatorname{rank} A = n$. A square matrix of order $n$ is non-singular if its determinant is nonzero: its rank is then $n$, all of its rows and columns are linearly independent, and it is invertible. If the determinant is not equal to zero, the columns are linearly independent; otherwise they are linearly dependent. We have seen that an $n \times n$ matrix $A$ has an inverse if and only if $\operatorname{rank}(A) = n$; we can add another equivalent condition, namely $|A| \neq 0$. Typically, when doing any sort of adaptive beamforming, one needs to invert a (square) covariance matrix, and it needs to be full rank in order to do that. Actually, there are workarounds if it isn't full rank, and it doesn't always require a literal inversion, e.g. using rank-one updates of QR or Cholesky decompositions.

What is the relation between the eigenvalues, determinant, and trace of a matrix? The product of all the eigenvalues is the determinant of the matrix, and their sum is the trace; for this relation, see the problem "Determinant/trace and eigenvalues of a matrix". If I have the eigenvalues, can I deduce the determinant and the trace? If there are relations, prove them. A worked example: a $3 \times 3$ matrix $B$ has eigenvalues 0, 1 and 2; find the rank of $B$. Since $0$ is an eigenvalue, the rank of $B$ is less than 3; and because the three eigenvalues are distinct, $B$ is diagonalizable, so its rank equals the number of nonzero eigenvalues, namely 2. Another poster computes the characteristic polynomial $x^4 - 7x^3 - x^2 - 33x + 8$ and, solving numerically, obtains eigenvalues $\lambda_1 = 0.238$ and $\lambda_2 = 7.673$ roughly; their sum is 7.911. Also, the rank of this matrix, which is the number of nonzero rows in its echelon form, is 3. But is there any relation between the rank and the nullity of a matrix?

Some standard facts collected here: exchanging rows reverses the sign of the determinant; how determinants change (if at all) when each of the three elementary row operations is applied; the determinant of a product of two matrices and of the inverse matrix; the relationship between the determinant of a product of matrices and the determinants of the factors; the relationship between the determinant of a sum of matrices and the determinants of the terms; the relation between a determinant and its cofactor determinant; the adjugate matrix; the determinant of an endomorphism; the determinant and trace of a square matrix; the space of linear maps from $U$ to $V$ and its representation by matrices; linear maps and isomorphisms. Note that the sum of the products of the elements of any row (or column) with their corresponding cofactors is the value of the determinant. Formula for the determinant: we know that the determinant has the following three properties: 1. $\det I = 1$; 2. exchanging rows reverses the sign of the determinant; 3. the determinant is linear in each row separately. Determinant formulas and cofactors: now that we know the properties of the determinant, it's time to learn some (rather messy) formulas for computing it. Using the three elementary row operations we may rewrite $A$ in an echelon form or, continuing with additional row operations, in the reduced row-echelon form. Matrix multiplication and determinants are related; is there also a relation between convolution in group algebras and the determinant (and the permanent)? In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix that generalizes the eigendecomposition of a square normal matrix to any $m \times n$ matrix via an extension of the polar decomposition. A matrix is a rectangular array of numbers. First, the order of a square matrix is the number of rows or columns in that matrix.

4.7 Rank and Nullity. In this section we look at relationships between the row space, column space and null space of a matrix and its transpose; we will derive fundamental results which in turn give deeper insight into solving linear systems. 4.7.1 The first important result follows immediately from the previous section: the Rank Plus Nullity Theorem. The connection between the rank and nullity of a matrix, illustrated in the preceding example, actually holds for any matrix. Given that rank $A$ + the dimension of the null space of $A$ = the total number of columns, we can determine rank $A$; in the example, the sum of the nullity and the rank, 2 + 3, is equal to the number of columns of the matrix. If $A$ is an $m \times n$ matrix, to determine bases for the row space and column space of $A$ we reduce $A$ to a row-echelon form $E$; the rows of $E$ containing leading ones form a basis for the row space. From the above, the homogeneous system has a solution that can be read off in scalar or vector form. Now, two systems of equations are equivalent if they have exactly the same solution set. There's a close connection between these notions for a square matrix: if a matrix is $n$ by $n$ and all the columns are independent, then it is a square full-rank matrix.

An aside on database normalization (a different sense of "determinant"): the claim "A relation is in BCNF if, and only if, every determinant [sic] is a candidate key" should read "every non-trivial determinant [sic] is a candidate key"; that source also, unusually, defines "determinant" (in a table) as the determinant of a full functional dependency.
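The identities collected above (determinant as the product of the eigenvalues, trace as their sum, and rank plus nullity equaling the number of columns) are easy to check numerically. A small illustrative sketch; the matrix is arbitrary, and NumPy's matrix_rank stands in for counting nonzero rows of an echelon form:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eig = np.linalg.eigvals(A)

# det(A) is the product of the eigenvalues; trace(A) is their sum.
assert np.isclose(np.linalg.det(A), np.prod(eig).real)
assert np.isclose(np.trace(A), np.sum(eig).real)

# Rank-nullity: rank(A) + nullity(A) equals the number of columns.
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print(rank, nullity)              # 3 0, since det(A) != 0 (full rank)

# The worked example above: eigenvalues 0, 1, 2 force rank < 3.
B = np.diag([0.0, 1.0, 2.0])
print(np.linalg.matrix_rank(B))   # 2
```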
http://nbqj.chilidealz.de/tafel-slope-interpretation.html
# Tafel Slope Interpretation

The majority of naval ships are constructed of mild steel, so its corrosion behaviour matters. A Tafel plot is a graphical plot (usually semi-logarithmic) showing the relationship between the current generated in an electrochemical cell and the electrode potential of a specific metal. The Tafel curve is widely used in electrochemistry, especially in the study of corrosion. The current increases exponentially with overpotential, which is named Tafel behaviour. Considering the sum of the currents, ignoring the signs, and then taking the log of the current gives a plot known as a Tafel plot. Tafel plots are generated by plotting both anodic and cathodic data on semilog paper as E versus log I; this is a very easy Tafel plot measurement. In the everyday sense of "slope": an angle can represent a slope, and a slope can be measured as an angle; given a point through which a line passes and the value of its slope, you can graph the line, and by putting the -2 over a 1 we get a slope value of -2. We can likewise use the slope computed from a sample to construct a confidence interval for the population slope $\beta_1$. Engineers and scientists use data-fitting techniques, including mathematical equations and nonparametric methods, to model acquired data.

It is interesting to note that more than 25 years ago Bowden (18, 19) recognized the possibility that concentration polarization and the Tafel slope combine so that the steady-state potential of the corroding metal occurs where the total rate of oxidation equals the total rate of reduction. Polarization resistance behaves like a resistor and can be calculated by taking the inverse of the slope of the current-potential curve at the open-circuit or corrosion potential. Close to $E_{corr}$, the current-versus-voltage curve approximates a straight line, and an equation can be derived relating the slope of this linear region to the corrosion rate and the Tafel slopes. A new analysis of polarization curves in the non-Tafel region in the vicinity of the corrosion potential allows calculation of the polarization resistance ($R_p$) and the Tafel slopes ($b_a$ and $b_c$). The values of cathodic Tafel slope, corrosion potential ($E_{corr}$), anodic Tafel slope, corrosion current density ($i_{corr}$), polarization resistance ($R_p$) and corrosion rate obtained from the CPP curves are depicted in Table 1, where $E_{corr}$ and $i_{corr}$ were obtained from the extrapolation of the anodic and cathodic Tafel lines located next to the corrosion potential. $i_{corr}$ is expressed in milliamps per unit of steel surface area because corrosion is an electrical process. The units of corrosion rate can be adjusted in the Polarization Resistance Options described earlier. The ease of interpretation, together with the ability to accurately evaluate the corrosion rate by Tafel extrapolation from the curve, makes this system suitable for studies of the inhibition efficiency of organic compounds for iron corrosion. The shift in the anodic Tafel slope was due to adsorption of inhibitor molecules onto the mild steel surface (anodic sites). Meanwhile, the value of the anodic Tafel slope of stainless steel in the nanofluid increases at different scan rates. The overpotential $\eta$ depends on the Tafel slope of the alloy in the crevice-like solution.

The Tafel slope represents the increment in overpotential needed to obtain an increment of one decade in the current density; it is a key feature for the assessment of catalyst efficiencies, and these two parameters (onset overpotential and Tafel slope) are the criteria for the catalytic activity of catalysts. The Tafel slope is an inherent property of a catalyst, determined by the rate-limiting step of the HER, for instance the recombination step (Tafel reaction). Kinetic parameters such as mass and specific activities are calculated from the Tafel slope of the ORR. To maintain consistency in Tafel slope analysis, Tafel slopes are obtained in the same manner throughout. The slope of the Tafel plot corresponds to the anodic transfer coefficient $\alpha_{a,nc}$; this is classic Tafel analysis, and one can likewise look for the transfer coefficients and electrochemical reaction orders. This value of b indicated that only one electron was involved in the rate-determining step of R2 [44]. The slope of the $E_T$ versus log $i_{lim}$ plot is equal to the anodic Tafel slope, $b_a$.

Either the Heyrovsky or the Tafel reaction is rate-limiting if the adsorption of hydrogen atoms onto the electrode requires an activation energy. The HER on the individually separated $[\mathrm{Mo_3S_{13}}]^{2-}$ clusters is limited by a chemical step (Eq. s1) that is required before the electrochemical desorption step (Eq. s2) can take place. With a very high $\mathrm{H_{ads}}$ coverage ($\theta_H \approx 1$), the HER on a Pt surface is known to proceed through the recombination route. In agreement with this hypothesis, Dai and co-workers found a Tafel slope of 41 mV per decade for MoS2 nanoparticles on a graphene oxide sheet; without coupling to graphene, the Tafel slope of MoS2 was 94 mV per decade. Surprisingly, the apparent Tafel slope of the hydrogen evolution reaction is almost temperature independent. In metal-air batteries, the oxygen evolution reaction (OER) occurs on the air electrode during charging. Hickling and Hill (1) reported a Tafel slope of 2RT/F (130 mV/decade) for the OER in 1 N H2SO4 solutions at current densities from 10^-5 to 10^-3 A cm^-2. In the range 10^-3 to 10 A/m^2 the Tafel line slope is of the same order as that found for other metals (120 mV). The present work supports the existence of two distinct Tafel slopes, agrees with the low-current slope of B-A-H and the low-current stoichiometric number of unity, and adds that the value of a at low currents is 0.377, which is equivalent to a Tafel slope of 165 mV/decade. The switch from a Tafel slope of 120 mV at low overpotentials (< 50 mA cm^-2) to 70 mV at high overpotentials (the axes are reversed with respect to conventional Tafel plots) was interpreted as the formation of OH_ads or O_ads becoming the rds, since ozone formation was only observed in the region of the lower Tafel slope; as commonly done, the interpretation was given in the form of a change in RDS with overpotential. The dark electrocatalytic current density of RuOx displays near-linear dependence of log[-J] versus E over the limited potential range shown, with a Tafel slope of 140 mV dec-1; this value also agrees with the detailed overpotential measurements of [44], who note that the Tafel slope decreases as the conductivity increases. On a disk electrode, a high Tafel slope (110.5 mV/dec) is clearly observable in the low-current region where alcohol oxidation dominates, while it changes to a lower value (about 60 mV/dec) at high current. An anodic Tafel slope value of 40 mV/decade fitted their experiments appropriately; however, the slope was seen to change slightly to 60-70 mV/decade for solutions at high pH of 5 and 6, respectively. The oxide was also reported to have a 40 mV Tafel slope and a first-order dependence on hydroxide-ion concentration [15-19]. In such cases, where an unusually high Tafel slope is obtained, the anomaly has been attributed to the experimentally observed, presumably semiconducting, surface films. This current density is nearly two orders of magnitude higher than the diffusion-limited geometric current density for CO in solution-phase electrolyses. A simple geometrical interpretation can be given of the anodic transients at 0.5 V, where the reaction is not convoluted by mass transport. The open circles in Figure 2 show the Tafel plot measured simultaneously with the SEIRA spectra in Figure 1. Thickness- and slope-perturbation data for Blue Glacier, obtained by comparing the glacier in 1957-58 and 1977-78, likewise require longitudinal averaging for reasonable interpretation. After an introductory discussion emphasising the importance of electrochemistry for the so-called Green Chemical Processes, one article presents a short discussion of the classical ozone generation technologies.

The Tafel equation is of fundamental importance in electrochemical kinetics, formulating a quantitative relation between the current and the applied electrochemical potential. It is commonly written $\eta = a + b \log i$, in which $a$ is an empirical constant and $b$ is the slope, $b = \mathrm{d}\eta / \mathrm{d}\log i = \beta$. Equivalently, with the exchange current density made explicit, our starting point is the Tafel equation

$$\eta = b \, \log_{10}\!\left(\frac{i}{i_0}\right),$$

where $i_0$ is the exchange current density, $\eta = E - E_{rev}$ is the overpotential, $b$ is the Tafel slope, and $b' = b/2.303$ is its natural-logarithm counterpart; a transfer coefficient of 0.5 corresponds to the canonical 118 mV per decade. Electrode overpotentials have also been described with the Tafel equation in the form [16]

$$V_{dec} = V_{rev} + b_{eo} \ln\!\frac{I_{eo}}{r_A A_{eo} j_{0,eo}}, \qquad (3)$$

where $I_{eo}$ is the EO pump current and $b_{eo}$ and $j_{0,eo}$ are the Tafel slope and exchange current density for the EO pump, respectively; the parameter $r_A$ is the ratio of actual electrode surface area to planar electrode area.

A Simple and Reliable Setup for Monitoring Corrosion Rate of Steel Rebars in Concrete. Shamsad Ahmad, Mohammed Abdul Azeem Jibran, Abul Kalam Azad and Mohammed Maslehuddin; Civil and Environmental Engineering Department, King Fahd University of Petroleum and Minerals, P.O. Box 1403, Dhahran 31261, Saudi Arabia.

Nickel Coated Silicon Photocathode for Water Splitting in Alkaline Electrolytes. Ju Feng, Ming Gong, Michael J. …, et al.
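Extracting a Tafel slope in practice is a straight-line fit of potential against the logarithm of current density over the linear region. A hedged sketch with synthetic data; the 120 mV/decade slope and the 1e-6 A/cm^2 exchange current are made-up illustration values, not taken from any study quoted above:

```python
import numpy as np

# Synthetic polarization data generated from eta = b*log10(i/i0),
# restricted to the Tafel (linear) region.
b_true, i0_true = 0.120, 1e-6          # V/decade, A/cm^2 (illustrative)
eta = np.linspace(0.05, 0.25, 20)      # overpotential, V
i = i0_true * 10 ** (eta / b_true)     # current density, A/cm^2

# Fit eta against log10(i): the slope is the Tafel slope b, and the
# intercept recovers the exchange current density i0.
slope, intercept = np.polyfit(np.log10(i), eta, 1)
i0_fit = 10 ** (-intercept / slope)

print(f"Tafel slope: {slope*1000:.1f} mV/decade")        # ~120.0
print(f"Exchange current density: {i0_fit:.2e} A/cm^2")  # ~1e-6
```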
On the other hand, Pushnograeva et al. report otherwise. This indicates that the influence of the inhibitor on the kinetics of hydrogen evolution is more prominent than that on Fe dissolution [28, 29, 43]. However, the corrosion rate can also be determined by Tafel extrapolation of either the cathodic or the anodic polarization curve alone. The mixed-potential theory (1) consists of two simple hypotheses: (1) any electrochemical reaction can be divided into two or more partial oxidation and reduction reactions, and (2) there can be no net accumulation of electric charge during an electrochemical reaction. The cathodic Tafel slope, $b_c$, was calculated using the corresponding equation, and the B value according to its defining equation. Instead of assuming the value of the Stern-Geary constant, B, as 26 mV for actively corroding reinforcement and 52 mV for passive reinforcement [19], the Tafel slopes $\beta_a$ and $\beta_c$ should be determined from the polarization data so that an accurate value of B is obtained. In principle, the fundamental rate constant $k_0$ can be determined from the y-intercept of the fitted Tafel line, but the vast range of fitted exchange currents for the same material13, 16-19 (over seven orders of magnitude) complicates this. Under BV kinetics, the Tafel plot of ln k versus $\eta$ is a straight line of slope $-\alpha$ for $\eta < 0$ and $1 - \alpha$ for $\eta > 0$. For the 'aged' electrode, a decrease in Tafel slope from 60 mV dec-1 to 40 mV dec-1 with increasing oxide charge capacity was observed, unlike for the previously unused one. The Tafel slope for Pt(110) is two-step, starting from 55 mV dec-1 and shifting to 150 mV dec-1; Pt(110) exhibits a slope of 75 mV dec-1 that shifts to 140 mV dec-1; and Pt(111) is reported to exhibit a Tafel slope of 140-150 mV with no transition, in a 0.1 M KOH solution.

The effect of strain and oxygen deficiency on the Raman spectrum of monoclinic HfO2 is investigated theoretically using first-principles calculations. The goal in this project is to identify both spatial and temporal patterns of change in the permafrost of Alaska's Arctic North Slope, and the critical processes that govern these patterns. Prepare 3 small jars of 50 ml with dichloromethane under the fume hood. The pests ruined thousands of trees in Williamsburgh, and their webs blocked the sidewalks.
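The Stern-Geary constant just mentioned follows directly from the two Tafel slopes, B = βa·βc / (2.303·(βa + βc)), and the corrosion current then follows from the polarization resistance as i_corr = B/Rp. A minimal sketch, with illustrative numbers only:

```python
def stern_geary_B(beta_a, beta_c):
    """Stern-Geary constant B = (beta_a*beta_c) / (2.303*(beta_a+beta_c)).

    beta_a, beta_c: magnitudes of the anodic and cathodic Tafel
    slopes in V/decade. Returns B in volts.
    """
    return (beta_a * beta_c) / (2.303 * (beta_a + beta_c))

# Two 120 mV/decade slopes give B ~ 26 mV, matching the
# rule-of-thumb quoted in the text for actively corroding steel.
B = stern_geary_B(0.120, 0.120)
print(f"B = {B*1000:.1f} mV")                 # ~26.1 mV

# Corrosion current density from polarization resistance.
Rp = 5000.0                                   # ohm*cm^2, illustrative
print(f"i_corr = {B/Rp*1e6:.2f} uA/cm^2")     # ~5.21 uA/cm^2
```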
(Table residue: HER parameters of the samples in 0.5 M H2SO4 and 1 M KOH.)

Analyzed on the basis of the longitudinal coupling theory, the averaging length is on the order of 1.6 km upstream, decreasing toward the terminus. Having experimentally measured the transfer coefficient, its interpretation provides one route by which the electrode mechanism may be elucidated, as will be discussed below; the determination and interpretation of the Tafel slope are important for elucidation of the elementary steps involved. Scan rates were 1 to 5 mV/s. The current density (the Tafel behavior) is convoluted with the slope of the potential-versus-pH plot. The term $\eta$ was not significant for either of the two materials (Figure 13), although it is slightly higher for UNS S31600 (Figure 15). This large relative deviation is in agreement with the observations of the review of Song and Atrens [2]; the slopes were similar for all conditions, indicating similar electrochemical reactions. The interpretation of the polarization diagrams for the different parts of the weld showed that they actually had different anodic Tafel slopes, and this interpretation could be used to make predictions concerning preferential corrosion behavior. N-heterocyclic amine derivatives are reported as efficient corrosion inhibitors for carbon steel in acidic medium. CoP3 nanowires decorated with CuPx nanodot structures, obtained through an oxide-precursor phosphorization strategy, display superior hydrogen evolution reaction performance in alkaline medium, requiring an overpotential of 49.5 mV at 10 mA cm^-2 with a Tafel slope of only 58 mV dec-1, and also perform well in acidic and neutral conditions, in both acidic (onset potential of -48.5 mV) and basic solutions (onset potential of -69 mV, Tafel slope of 65 mV dec-1), with good catalytic and structural stability.

Basics of electrochemical impedance spectroscopy: impedance spectroscopy is also called AC impedance or just impedance spectroscopy, and electrochemical impedance spectroscopy (EIS) is a powerful technique for characterizing a wide variety of electrochemical systems. Interpretation: a Bode plot is a graph of the magnitude (in dB) or phase of the transfer function versus frequency. For a generic half-reaction $\mathrm{O} + n e^- \rightleftharpoons \mathrm{R}$, R is the reduced state and O the oxidized state; here $s_i$ is the stoichiometric coefficient of species $i$ (positive for the reduced state and negative for the oxidized state).

Measurement Techniques for the Diagnosis, Detection and Rate Estimation of Corrosion in Concrete Structures. For a single rate-limited thermally activated process, an Arrhenius plot of ln(k) versus 1/T gives a straight line, from which the activation energy and the pre-exponential factor can both be determined. The relation given for the exchange current density, $i_0 = 20396\,e^{-4143/T}$, allowed the calculation of an activation energy for the U/U3+ reaction of 34.3 kJ/mol.

On the other senses of "slope" mixed in here: the shape of a velocity-versus-time graph reveals pertinent information about an object's motion, as do the slope determined on the graph and the area under it; and one can interpret the slope and intercept in a linear regression model (Example 1). Note: this is clearly a clitic form of *kita 'we (inclusive)'. Translated from German: fear is a recognized explanatory model for the aggressive, rejecting behavior toward homosexuals not only of adolescents but also of adults; not fear of those persons, but a deep-seated, often unconscious fear of one's own...

Platinum Metals Review (www.platinummetalsreview.com, E-ISSN 1471-0676) is a quarterly journal of research on the science and technology of the platinum group metals and developments in their application in industry.
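The Arrhenius analysis just described can be reproduced in a few lines: regressing ln k on 1/T gives -Ea/R as the slope and ln A as the intercept. A sketch using the i0 = 20396·e^(-4143/T) relation quoted above for the U/U3+ couple; the temperature points themselves are arbitrary:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Sample the quoted relation i0 = 20396 * exp(-4143 / T) at a few
# arbitrary temperatures (kelvin).
T = np.array([823.0, 873.0, 923.0, 973.0])
i0 = 20396.0 * np.exp(-4143.0 / T)

# Arrhenius plot: ln(i0) vs 1/T is a straight line with slope -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(i0), 1)
Ea = -slope * R

print(f"Ea ~ {Ea/1000:.1f} kJ/mol")    # ~34.4, close to the quoted 34.3
print(f"A  ~ {np.exp(intercept):.0f}")  # ~20396, the pre-exponential factor
```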
This is because variations in the Tafel slope used for extrapolation could result in large deviations in the intrinsic exchange current density for the ORR. Steady-state Tafel plot analysis has been used to elucidate the kinetics and mechanism of oxygen evolution. For the HER, the recombination (Tafel) step is $2\mathrm{H}^{*} \rightarrow \mathrm{H}_2 + 2^{*}$, where $^{*}$ is a catalytic site and $\mathrm{H}^{*}$ is a surface-bound hydrogen. (Table III lists Tafel slopes and changes of pseudocapacitances with overpotential for various mechanisms of hydrogen evolution, with different entries according to whether the second step is rate-determining.)

The number of factors influencing the Tafel slope of MoS2 often complicates the interpretation of the observed improvement. Pt/C, PtFeCo/C and PdFeCo/C electrocatalysts have Tafel slope values (~120 mV dec-1) indicative of one-electron transfer as the rate-determining step; the mass and specific activities of PdFeCo/C show higher increments than those of Pt/C and PtFeCo/C. A 0.5 BV slope can only fit a small portion of the data [22]. This result is supported by a quantum chemical study of Goddard and coworkers, which predicts that CO2 is converted to CO on Cu(100) through the steps indicated. For a meaningful interpretation of the kinetic isotope effect of the OER catalyzed by Ni and Co, Tafel slope analysis is utilized at low overpotentials; overpotentials are obtained from Eq. 2, $\eta = E - iR - E^0$, where $\eta$ is the overpotential, $iR$ accounts for the uncompensated solution resistance, and $E^0$ is the thermodynamic potential for water oxidation at this pH. Here $\beta_a$ is the anodic Tafel slope and $\beta_c$ the cathodic Tafel slope. A careful theoretical analysis of the complex behaviour presented by the Tafel slope for electrode processes taking place at solid electrodes was presented by several authors. Proton exchange membrane (PEM) electrolysis is industrially important as a green source of high-purity hydrogen, for chemical applications as well as energy storage; it represents a new opportunity to improve overall performance. The Fuel Cell Diagnostics Matrix (FCDX™) presented below is a guide to the parameters that each Intelligent Fuel Cell Diagnostic methodology can provide; the rows of the FCDX™ represent the automated algorithms that conduct the testing. Supporting information (jp9b02819_si_001.pdf): Tafel slope calculation; quartz crystal microbalance raw-data interpretation and calculation; Auger depth-profiling results.

Three ways of analysing polarization data are distinguished:
- Slope analysis: $E_{corr}$ and $R_p$ are determined from the slope at OCP.
- Tafel analysis: the Tafel lines are constructed and their slopes evaluated.
- Numerical analysis: a non-linear fit on the complete dataset.

Interpreting the slope and intercept in a linear regression model (Example 1): data were collected on the depth of penguin dives and their duration; the following linear model is a fairly good summary of the data, where t is the duration of the dive in minutes and d is the depth of the dive in yards. 3.6: use similar triangles to explain why the slope m is the same between any two distinct points on a non-vertical line.

If you still find the material presented here difficult to understand, don't stop reading. The classically defined Tafel slope, with units of mV per decade of current, is directly related to the transfer coefficient: $b = 2.303RT/(\alpha F)$.
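That relation, b = 2.303RT/(αF), is easy to sanity-check: at 25 °C the prefactor 2.303RT/F is about 59 mV, so α = 0.5 reproduces the canonical ~118 mV per decade cited earlier. A quick check:

```python
R, F, T = 8.314, 96485.0, 298.15   # SI units, 25 C

def tafel_slope(alpha):
    """b = 2.303*R*T/(alpha*F), in volts per decade of current."""
    return 2.303 * R * T / (alpha * F)

for alpha in (0.5, 1.0):
    print(f"alpha = {alpha}: {tafel_slope(alpha)*1e3:.0f} mV/decade")
# alpha = 0.5: 118 mV/decade; alpha = 1.0: 59 mV/decade
```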
The Tafel equation is an equation in electrochemical kinetics relating the rate of an electrochemical reaction to the overpotential. The Tafel slope was observed to be indirectly related to temperature, likely due to an increase in the transfer coefficient with temperature. In Table 2 the electrochemical parameters are summarized; the sample is a commercial AA rechargeable alkaline battery. Generally, an inhibitor can be classified as a cathodic or anodic inhibitor if the shift in $E_{corr}$ in the presence of the inhibitor is greater than 85 mV with respect to the $E_{corr}$ of the blank solution. The entire data basis in the study area is classified by these random samples. An onset potential (vs. RHE) and a Tafel slope of 37 mV dec-1 were also reported. (Figure 4.1: Tafel slope from CV at 2 mV/s of Ni-Co hydroxides and oxides.) Origin offers an easy-to-use interface for beginners, combined with the ability to perform advanced customization as you become more familiar with the application.

Reaction Kinetics, Lecture 13: The Butler-Volmer Equation. Notes by ChangHoon Lim (and MZB).

Hommelhoff, "Femtosecond laser-induced electron emission from nanodiamond-coated tungsten needle tips," e-print arXiv:1903.05560 (2019).
If a prospective contractor has not put these issues on the table and they then surface later in the project, the contractor goes on record as an unreliable partner. The answer to this dilemma has not yet been given.

The life of the Tafel equation is considered briefly as an evolution in the understanding of Tafel's empirical parameters within the framework of various phenomenological and theoretical approaches. It is the charge-transfer coefficient that signifies the part of the applied potential utilized in activating the ion to the top of the free-energy barrier. The corrosion potentials for the studied samples were approximately around -200 mV, just a little higher for the sample immersed for 100 h. A linked secondary top axis is added to display temperature in degrees Celsius, using the formula Xtop = (1 / Xbottom) - 273.15. This tutorial presents an introduction to Electrochemical Impedance Spectroscopy (EIS) theory and has been kept as free from mathematics and electrical theory as possible. The hybrid complexes with 7% intercalated melamine exhibited excellent performance for the catalytic HER, with a current of 10 mA cm^-2 reached at a low overpotential. By using this trick, the slope is now reformatted so as to tell us clearly that, from whatever is our first point, we can get to the "next" point by going "down two, and over one".

The fact-checkers, whose work is more and more important for those who prefer facts over lies, police the line between fact and falsehood on a day-to-day basis, and do a great job.

Today, my small contribution is to pass along a very good overview that reflects on one of Trump's favorite overarching falsehoods. Namely: Trump describes an America in which everything was going down the tubes under Obama, which is why we needed Trump to make America great again. And he claims that this project has come to fruition, with America setting records for prosperity under his leadership and guidance. "Obama bad; Trump good" is pretty much his analysis in all areas and measurements of U.S. activity, especially economically. Even if this were true, it would reflect poorly on Trump's character, but it has the added problem of being false, a big lie made up of many small ones.

Personally, I don't assume that all economic measurements directly reflect the leadership of whoever occupies the Oval Office, nor am I smart enough to figure out what causes what in the economy. But the idea that presidents get the credit or the blame for the economy during their tenure is a political fact of life. Trump, in his adorable, immodest mendacity, not only claims credit for everything good that happens in the economy, but tells people, literally and specifically, that they have to vote for him even if they hate him, because without his guidance, their 401(k) accounts "will go down the tubes." That would be offensive even if it were true, but it is utterly false. The stock market has been on a 10-year run of steady gains that began in 2009, the year Barack Obama was inaugurated. But why would anyone care about that? It's only an unarguable, stubborn fact.
Still, speaking of facts, there are so many measurements and indicators of how the economy is doing that those not committed to an honest investigation can find evidence for whatever they want to believe. Trump and his most committed followers want to believe that everything was terrible under Barack Obama and great under Trump. That’s baloney. Anyone who believes that believes something false. And a series of charts and graphs published Monday in the Washington Post and explained by Economics Correspondent Heather Long provides the data that tells the tale. The details are complicated. Click through to the link above and you’ll learn much. But the overview is pretty simply this: The U.S. economy had a major meltdown in the last year of the George W. Bush presidency. Again, I’m not smart enough to know how much of this was Bush’s “fault.” But he had been in office for six years when the trouble started. So, if it’s ever reasonable to hold a president accountable for the performance of the economy, the timeline is bad for Bush. GDP growth went negative. Job growth fell sharply and then went negative. Median household income shrank. The Dow Jones Industrial Average dropped by more than 5,000 points! U.S. manufacturing output plunged, as did average home values, as did average hourly wages, as did measures of consumer confidence and most other indicators of economic health. (Backup for that is contained in the Post piece I linked to above.) Barack Obama inherited that mess of falling numbers, which continued during his first year in office, 2009, as he put in place policies designed to turn it around. By 2010, Obama’s second year, pretty much all of the negative numbers had turned positive. By the time Obama was up for reelection in 2012, all of them were headed in the right direction, which is certainly among the reasons voters gave him a second term by a solid (not landslide) margin. Basically, all of those good numbers continued throughout the second Obama term. The U.S. GDP, probably the single best measure of how the economy is doing, grew by 2.9 percent in 2015, which was Obama’s seventh year in office and was the best GDP growth number since before the crash of the late Bush years. GDP growth slowed to 1.6 percent in 2016, which may have been among the indicators that supported Trump’s campaign-year argument that everything was going to hell and only he could fix it. During the first year of Trump, GDP growth rose to 2.4 percent, which is decent but not great and anyway, a reasonable person would acknowledge that — to the degree that economic performance is to the credit or blame of the president — the performance in the first year of a new president is a mixture of the old and new policies. In Trump’s second year, 2018, the GDP grew 2.9 percent, equaling Obama’s best year, and so far in 2019, the growth rate has fallen to 2.1 percent, a mediocre number and a decline for which Trump presumably accepts no responsibility and blames either Nancy Pelosi, Ilhan Omar or, if he can swing it, Barack Obama. I suppose it’s natural for a president to want to take credit for everything good that happens on his (or someday her) watch, but not the blame for anything bad. Trump is more blatant about this than most.
If we judge by his bad but remarkably steady approval ratings (today, according to the average maintained by 538.com, it’s 41.9 approval/ 53.7 disapproval), the pretty-good economy is not winning him new supporters, nor is his constant exaggeration of his accomplishments costing him many old ones. I already offered it above, but the full Washington Post workup of these numbers, and commentary/explanation by economics correspondent Heather Long, are here. On a related matter, if you care about what used to be called fiscal conservatism, which is the belief that federal debt and deficit matter, here’s a New York Times analysis, based on Congressional Budget Office data, suggesting that the annual budget deficit (that’s the amount the government borrows every year, reflecting the amount by which federal spending exceeds revenues), which fell steadily during the Obama years, from a peak of $1.4 trillion at the beginning of the Obama administration to $585 billion in 2016 (Obama’s last year in office), will be back up to $960 billion this fiscal year, and back over $1 trillion in 2020. (Here’s the New York Times piece detailing those numbers.) Trump is currently floating various tax cuts for the rich and the poor that will presumably worsen those projections, if passed. As the Times piece reported:
2019-11-15 06:12:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45741087198257446, "perplexity": 3724.5302681535495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668585.12/warc/CC-MAIN-20191115042541-20191115070541-00405.warc.gz"}
https://www.borgholt.dk/x-tteupw/71f981-a-square-is-inscribed-in-a-circle-of-diameter-d
Thus, it will be true to say that the perimeter of a square circumscribing a circle of radius a cm is 8a cm.

In Fig 11.3, a square is inscribed in a circle of diameter d and another square is circumscribing the circle. Is the area of the outer square four times the area of the inner square? Give reason for your answer.

No. The side of the outer square equals the diameter of the circle, so its area is d² square units. The diagonal of the inner square is also a diameter of the circle, so it equals d; since the diagonal of a square is √2 times its side (by the Pythagoras theorem), the side of the inner square is d/√2 and its area is d²/2 square units. Hence the ratio of the area of the outer square to the area of the inner square is d² : d²/2 = 2 : 1. The outer square is twice, not four times, the inner square.

Is it true to say that the area of a square inscribed in a circle of diameter p cm is p² cm²?

False. When a square is inscribed in a circle, the diameter of the circle equals the diagonal of the square, not its side. The side is p/√2 cm, so the area is p²/2 cm².

Related facts used above:
- When a circle is inscribed in a square of side a, the diameter of the circle equals the side of the square; the radius of the inscribed circle is a/2 and its area is π(a/2)² = πa²/4.
- When a square of side a is circumscribed by a circle, the diameter of the circle equals the diagonal of the square, so the radius of the circumscribed circle is √2a/2.
- A square with diagonal 10 cm has a side of 10/√2 = 5√2 cm.
- Completing the squares in x² + 8x + y² − 10y + 5 = 0 gives (x + 4)² + (y − 5)² = 36, a circle with radius 6 and centre (−4, 5); a square inscribed in a circle of radius r has area 2r², here 72 square units.
- If the intersection of two perpendicular chords divides one chord into lengths a and b and the other into lengths c and d, then a² + b² + c² + d² equals the square of the diameter.
- By Heron's formula, the area of a triangle with sides a, b, c is $A = \sqrt{s(s-a)(s-b)(s-c)}$, where $s = \frac{a+b+c}{2}$ is the semiperimeter.

Worked examples: if the diameter of the circumscribed circle is 30 ft, the diagonal of the inscribed square is 30 ft, so 2s² = 900, s² = 450 sq. ft, and the perimeter is p = 4√450 = 4√(25 × 18) = 60√2 ft. A circle of radius 8 cm has circumference 2 × (22/7) × 8 ≈ 50.28 cm; a square with the same perimeter has side 50.28/4 ≈ 12.57 cm and area ≈ 158 cm².

To construct a square inscribed in a circle: 1. Mark a point A on the circle and draw a diameter AD through the centre O. 2. Construct the perpendicular bisector of AD and label its endpoints on the circle G and H; it is a second diameter, perpendicular to the first. 3. The four points A, G, D, H are the vertices of the inscribed square.

Other exercises collected on this page: the area of a regular hexagon inscribed in a circle is 166.28 cm²; if the circle is inscribed in a square, find the difference between the area of the square and the hexagon (view the hexagon as six equilateral triangles, and note that the diameter of the inscribed circle is the distance between the midpoints of opposite sides of the hexagon). A square of diagonal 8 cm is inscribed in a circle. A square is inscribed in a semicircle of radius 15 m with its base on the diameter. A square is inscribed in a circle of diameter d and four semicircles are constructed on its sides as diameters, creating four shaded lunes; find the ratio of the areas of the lunes to the area of the square. The radius of a circle increases uniformly at 3 cm per second; find the rate at which its area increases when the radius is 10 cm. Multiple-choice answers quoted on the page: the diameter of a circle whose area equals the sum of the areas of two circles of radii 40 cm and 9 cm is (c) 82 cm; the perimeter of a square circumscribing a circle of radius α is (c) 8α units; the area of a circle that can be inscribed in a square of side 10 cm is (d) 25π cm².
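For completeness, here is the central comparison written out in LaTeX (a restatement of the answer above, with s denoting side length):

\[
s_{\text{outer}} = d \implies A_{\text{outer}} = d^2, \qquad \sqrt{2}\, s_{\text{inner}} = d \implies A_{\text{inner}} = \left(\frac{d}{\sqrt{2}}\right)^{2} = \frac{d^2}{2}
\]
\[
\therefore \frac{A_{\text{outer}}}{A_{\text{inner}}} = 2 \neq 4
\]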
2022-05-23 05:19:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7441635727882385, "perplexity": 442.28080918384995}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662555558.23/warc/CC-MAIN-20220523041156-20220523071156-00580.warc.gz"}
https://galeracluster.com/library/documentation/galera-manager-adding-nodes.html
After you’ve created a cluster and set the defaults for nodes within Galera Manager, you’ll need to add nodes to that cluster. When you add nodes to a cluster, Galera Manager will add hosts on AWS (Amazon Web Services) and install all of the software needed, including either MySQL or MariaDB. It will then configure mysqld to be a node in the cluster. If you haven’t yet created a cluster, read the Deploying a Cluster in Galera Manager page; installing Galera Manager is covered in the Installing Galera Manager page.

Node & Host Deployment

To add nodes to a cluster, after logging into Galera Manager from a web browser, click on the cluster in the left margin. In the main panel, then click on the vertical ellipsis in the top right margin. When you do, a small box (see Figure 1 below) will offer you two choices: to add a node or to delete the cluster. Click on Add Node. After you click Add Node, a large box like the one shown in the screenshot below in Figure 2 will appear. Here you will provide your preferences for the node or nodes, and the hosts you want to add.

The first field at the top left of the Node Deployment Wizard is to enter the number of nodes you want to add, depending on the host type of the node. If the host is managed by Galera Manager (for example, the EC2 host type), then Galera Manager can automatically provision and set up several nodes at once. If the hosts for the nodes are provided by the user (the unmanaged host type), then each node will have to be added individually. In the example here, we are creating a cluster in AWS EC2, so 3 has been entered. By default, the nodes will be started automatically after the hosts have been provisioned and the nodes set up.

Node Deployment Choices

Next, you’ll enter specific information on this node or set of nodes. To make the discussion easier, below is the screenshot from Figure 2, but cropped around the default node configuration section:

At a minimum, you would enter the prefix for naming nodes. If you’re creating only one node, what you enter here will be used. If you’re creating multiple nodes, this text will be used as a prefix to each node’s name. The suffix of the node name will be randomly generated. If it’s important to you to name each node, you’ll need to add them one at a time to the cluster.

The database system and version are already set from when you created the cluster. You have to use the same database system for each node. However, although any custom database settings you added at that time will be passed to the nodes, if you’re creating nodes one at a time you may give an individual node extra settings depending on its hardware and operational purpose. You probably wouldn’t do this with the initial set of nodes, but later, when you’re temporarily adding another node because of a surge in traffic, you might want the extra node to handle more of the load. Therefore, you may want to set its buffers and other settings to higher values; you can add those settings then for that one node (see the sketch after this section).

Host Deployment Choices

The next part of the Node Deployment Wizard, shown in the cropped screenshot below, relates to configuring the hosts. By default, host settings are inherited from the cluster values, but you can change them for a particular host here. If you are adding a host that is not created by Galera Manager, you will need to provide a private SSH key here for Galera Manager root access to the host. Host defaults are explained in the Default Host Configuration section of the Deploying a Cluster in Galera Manager documentation page.
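As a purely hypothetical illustration of the per-node database settings mentioned above, the extra settings for a larger, temporary node might look like the following mysqld options. The option names are standard MySQL/MariaDB options, but the values are placeholders and this snippet is not taken from the Galera Manager documentation:

    [mysqld]
    # Illustrative overrides for a larger, temporary node (values are placeholders)
    innodb_buffer_pool_size = 8G    # bigger buffer pool for the larger instance
    max_connections         = 500   # absorb the traffic surge
    tmp_table_size          = 256M  # larger in-memory temporary tables

You would enter this kind of override only for the one node being added, leaving the cluster-wide defaults untouched.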
Being able to make different choices for the host when adding nodes is particularly useful when adding nodes to an existing and running cluster. For example, if you’re temporarily adding a node because of an increase in traffic, you might want to use a larger server. To do this, you would select a different EC2 Instance Type, one with more memory and processing power. If you want to migrate to a new release of Linux, you can add new nodes with that choice. After they’ve synchronized, you could then delete the old nodes.

Finishing Deployment

After you finish entering the number of nodes in the Node Deployment Wizard, and the node and host names, as well as any changes you want to make to the default settings, you would then click on Deploy in the right-hand corner. A small box, like the one below, will appear in which you can observe the progress of the hosts and nodes being deployed. Note that here we illustrate an example of adding nodes in AWS EC2, which involves automatic provisioning of EC2 instances for hosts, installing cluster and monitoring software, and finally starting up the nodes.

The deployment process may take some time. If it fails, you’ll see in small red text at which point it failed. You can also check the Logs and Jobs tabs for the cluster and node for more information. When the node deployment succeeds, all of the circled-dot and right-arrow play buttons on the right (see Figure 5) will change to check marks and the Finish link will become active. Click on that link to close the box when it’s done.

Finished Results

When the Node Deployment Wizard has finished running and you’ve closed the related box, you’ll see the nodes that were added listed in the left margin, under the name of the cluster. The results should look similar to the screenshot in Figure 6 below:

Notice that although a node name of noder was entered, some extra text was added to make each node name unique (e.g., noder-jfebk). As mentioned earlier, if you add one node at a time, you can name each one and no suffix will be appended. If you chose to have the nodes started automatically, they should all have a status of Synced. If one wasn’t started automatically, click on the node in the left margin, and then click on the vertical ellipsis at the top right of the main panel. From the choices you’re offered there, click Start to start the node.

Since we created our cluster in AWS EC2, Galera Manager has provisioned an EC2 instance for each node’s host. If you look in your EC2 console showing your Instances, you’ll see something like the screenshot below:

In this example, there’s one Instance on which Galera Manager is installed. There’s an Instance for each of the three nodes in the cluster (e.g., hoster-jfebk, etc.). You see the host names because that’s the physical or virtual server on which the node and its software are running.

When you click on a node in the left margin of Galera Manager, you’ll see charts for monitoring the node’s activities. To start, it will be fairly empty, like the screenshot below:

At this point, the charts are rather featureless. However, as you start to add data, which is covered in the Loading Initial Data page of the documentation, you’ll start to see some activity. You can learn more about how to use these charts, as well as how to add other charts to track metrics beyond these initial few, by reading the Monitoring a Cluster with Galera Manager page. You may also want to add other users to Galera Manager who can monitor and add clusters and nodes.
This is covered on the Adding Users to Galera Manager page.
2021-03-02 20:29:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22859492897987366, "perplexity": 1231.836817247644}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364764.57/warc/CC-MAIN-20210302190916-20210302220916-00614.warc.gz"}
https://www.bartleby.com/questions-and-answers/trace-the-flow-of-blood-through-the-chambers-and-associated-blood-vessels-of-the-heart./c104ca01-f6c0-41c7-ba93-a4e888fc3b9a
# Trace the flow of blood through the chambers and associated blood vessels of the heart.
2021-05-07 08:05:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9074297547340393, "perplexity": 1666.8269476731173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.25/warc/CC-MAIN-20210507060253-20210507090253-00157.warc.gz"}
https://stats.stackexchange.com/questions/417041/multilevel-timeseries-modelling-in-python
# Multilevel TimeSeries modelling in Python

At work I have been asked to model using time series data, and I am not very familiar with time series (I haven't done an end-to-end project). The problem at hand is to understand the company's price position with respect to the market and other economic variables. We had a simple model, log(Applications) ~ log(Price Relative to Market); this gave us a coefficient value which was previously used in an optimisation tool. We now want to bring a more sophisticated approach to this, as we know that it is not only the relative market price but many other variables that will affect this relationship, i.e. log(Applications) ~ log(Price Relative to Market) + Others... We also want to segment this at various levels. For example, we might differentiate our price by some features of a customer's profile. This could mean having to fit many separate models, but the problem then is how do you manage and explain so many of them? I am looking for something in Python, i.e. something that not only solves the problem but also lets me code within the language and environment we use at work. Any help or guidance will be highly appreciated.

## 1 Answer

To answer your main question ("Multilevel TimeSeries modelling in Python"): There is nothing equivalent to the HTS package in Python. The two things that I know of that are the closest are PyAF and htsprophet. However, they use different forecasting models than those used in HTS. PyAF uses models from scikit-learn to do forecasting, which is unusual since the sklearn models aren't usually amenable to time series problems. htsprophet uses only the FB Prophet model. By contrast, HTS uses ARIMA and ETS, which are more standard forecasting methods (although FB Prophet is increasing in popularity). From what you described in your post, though, I'm not entirely sure whether your problem is indeed a time series problem, or if it is indeed hierarchical in nature the way hierarchy is understood in HTS. Can you please clarify the details of what you are trying to do?

• Yeah, nice answer. I read the whole paper by Rob Hyndman and his associates; very well written, with an example one can find at the link here. Mar 6 '20 at 10:59
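As a concrete starting point in Python, a minimal sketch (not the HTS approach) of fitting the question's log-log model per customer segment with statsmodels' formula API. The file name and the column names (applications, rel_price, market_size, segment) are assumptions for illustration only:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical weekly panel: one row per (week, segment)
    df = pd.read_csv("applications.csv")  # assumed file and columns

    elasticities = {}
    for segment, grp in df.groupby("segment"):
        # log(Applications) ~ log(Price Relative to Market) + other drivers
        fit = smf.ols(
            "np.log(applications) ~ np.log(rel_price) + np.log(market_size)",
            data=grp,
        ).fit()
        # The coefficient on log(rel_price) is the price elasticity for this segment
        elasticities[segment] = fit.params["np.log(rel_price)"]

    print(elasticities)

Keeping the per-segment fits in a dict like this at least makes the "many models" manageable; for proper reconciliation of forecasts across a hierarchy of segments you would still need something like the HTS approach discussed in the answer.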
2021-09-24 08:59:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6091715097427368, "perplexity": 1051.512260742498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057508.83/warc/CC-MAIN-20210924080328-20210924110328-00149.warc.gz"}
https://findfilo.com/maths-question-answers/the-ratio-of-the-areas-of-two-regions-of-the-curveng7
# The ratio of the areas of two regions of the curve $C_1 \equiv 4x^2 + p$

Class 12 Math Calculus Area

The ratio of the areas of two regions of the curve divided by the curve (where sgn(x) = signum(x)) is
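For reference, sgn(x) as named in the problem statement is the standard signum function:

\[
\operatorname{sgn}(x) =
\begin{cases}
-1, & x < 0 \\
\;\;0, & x = 0 \\
\;\;1, & x > 0
\end{cases}
\]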
2021-06-21 22:37:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604429006576538, "perplexity": 888.6301927804213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00101.warc.gz"}
https://mailman.ntg.nl/pipermail/ntg-context/2015/082448.html
# [NTG-context] How to typeset differential (math)?

Procházka Lukáš Ing. - Pontex s. r. o. LPr at pontex.cz
Wed Jun 17 16:35:38 CEST 2015

Hello,

On Wed, 17 Jun 2015 16:23:01 +0200, Manuel Blanco <manuelbl at ucm.es> wrote:

> The easy way to answer that:
> \definemathcommand[dif][nolop]{\mfunction{d}} didn't work because it's
> intended for log-like functions, and you don't want a log-like
> function, but a differential, which behaves differently (for instance,
> you *want* space between the function and the argument in \sin x \cos
> y but you *don't want* spaces between the “d” and the variable in
> \dif x \dif y).
>
> About \mfunction I don't know, I think what's in the wiki is enough,
> it's the command that sets the font used for other upright functions
> (notice that some people prefer italic differentials, hence the
> definition would be \define\dif{\mathop{}\!d}).
>
> And about how does that work, well, basically what you want is a thin
> space added on the left, but not on the right, so \mathop{} gives a
> thin space on both sides, then with \! you remove the thin space on
> the right and you then leave the \mfunction{d} with normal spacing.
>
> That definition behaves correctly in every case (except if you use
> “physics-like” notation where the differential comes just after \int).
>
> I hope I don't leave anything relevant out (but I'm no expert).

... thank you for the deep explanation!

Lukas

--
Ing. Lukáš Procházka | mailto:LPr at pontex.cz
Pontex s. r. o. | mailto:pontex at pontex.cz | http://www.pontex.cz
Bezová 1658
147 14 Praha 4
Tel: +420 241 096 751
Fax: +420 244 461 038
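An untested, self-contained ConTeXt sketch assembled from the definitions quoted in the thread (the upright \mfunction variant; swap in the italic definition quoted above if preferred):

    % Differential operator: thin space on the left, none on the right
    \define\dif{\mathop{}\!\mfunction{d}}

    \starttext
    \startformula
        \int f(x,y) \dif x \dif y
    \stopformula
    \stoptext

With this, \dif x \dif y sets the two differentials tight against their variables while still separating them from the integrand, which is exactly the spacing behaviour Manuel describes.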
2022-05-20 20:05:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9665170311927795, "perplexity": 5858.9866810234735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534669.47/warc/CC-MAIN-20220520191810-20220520221810-00519.warc.gz"}
https://pytorch-geometric-temporal.readthedocs.io/en/latest/notes/introduction.html?ref=hackernoon.com
# Introduction

PyTorch Geometric Temporal is a temporal graph neural network extension library for PyTorch Geometric. It builds on open-source deep-learning and graph processing libraries. PyTorch Geometric Temporal consists of state-of-the-art deep learning and parametric learning methods to process spatio-temporal signals. It is the first open-source library for temporal deep learning on geometric structures and provides constant time difference graph neural networks on dynamic and static graphs. We make this happen with the use of discrete time graph snapshots. Implemented methods cover a wide range of data mining (WWW, KDD), artificial intelligence and machine learning (AAAI, ICONIP, ICLR) conferences, workshops, and pieces from prominent journals.

# Citing

    @inproceedings{rozemberczki2021pytorch,
      author = {Benedek Rozemberczki and Paul Scherer and Yixuan He and George Panagopoulos and Alexander Riedel and Maria Astefanoaei and Oliver Kiss and Ferenc Beres and Guzman Lopez and Nicolas Collignon and Rik Sarkar},
      title = {{PyTorch Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine Learning Models}},
      year = {2021},
      booktitle = {Proceedings of the 30th ACM International Conference on Information and Knowledge Management},
    }

We briefly overview the fundamental concepts and features of PyTorch Geometric Temporal through simple examples.

# Data Structures

PyTorch Geometric Temporal is designed to provide easy to use data iterators which are parametrized with spatiotemporal data. These iterators can serve snapshots which are formed by a single graph or multiple graphs which are batched together with the block diagonal batching trick.

## Temporal Signal Iterators

PyTorch Geometric Temporal offers data iterators for spatio-temporal datasets which contain the temporal snapshots. There are three types of data iterators:

• StaticGraphTemporalSignal - Is designed for temporal signals defined on a static graph.
• DynamicGraphTemporalSignal - Is designed for temporal signals defined on a dynamic graph.
• DynamicGraphStaticSignal - Is designed for static signals defined on a dynamic graph.

### Temporal Data Snapshots

A temporal data snapshot is a PyTorch Geometric Data object. Please take a look at this readme for the details. The returned temporal snapshot has the following attributes:

• edge_index - A PyTorch LongTensor of edge indices used for node feature aggregation (optional).
• edge_attr - A PyTorch FloatTensor of edge features used for weighting the node feature aggregation (optional).
• x - A PyTorch FloatTensor of vertex features (optional).
• y - A PyTorch FloatTensor or LongTensor of vertex targets (optional).

## Temporal Signal Iterators with Batches

PyTorch Geometric Temporal offers data iterators for batched spatiotemporal datasets which contain the batched temporal snapshots. There are three types of batched data iterators:

• StaticGraphTemporalSignalBatch - Is designed for temporal signals defined on a batch of static graphs.
• DynamicGraphTemporalSignalBatch - Is designed for temporal signals defined on a batch of dynamic graphs.
• DynamicGraphStaticSignalBatch - Is designed for static signals defined on a batch of dynamic graphs.

### Temporal Batch Snapshots

A temporal batch snapshot is a PyTorch Geometric Batch object. Please take a look at this readme for the details. The returned temporal batch snapshot has the following attributes:

• edge_index - A PyTorch LongTensor of edge indices used for node feature aggregation (optional).
## Benchmark Datasets

We released and included a number of datasets which can be used for comparing the performance of temporal graph neural network algorithms. The related machine learning tasks are node- and graph-level supervised learning.

### Newly Released Datasets

In order to benchmark graph neural networks, we released the following datasets:

### Integrated Datasets

We also integrated existing datasets for performance evaluation:

The Hungarian Chickenpox Dataset can be loaded by the following code snippet. The dataset returned by the public get_dataset method is a StaticGraphTemporalSignal object.

```python
from torch_geometric_temporal.dataset import ChickenpoxDatasetLoader

loader = ChickenpoxDatasetLoader()

dataset = loader.get_dataset()
```

## Spatiotemporal Signal Splitting

We provide functions to create temporal splits of the data iterators. These functions return train and test data iterators which split the original iterator using a fixed train-test ratio. Snapshots from the earlier time periods contribute to the training dataset, and snapshots from the later periods contribute to the test dataset. This way, temporal forecasts can be evaluated in a realistic scenario. The function temporal_signal_split takes either a StaticGraphTemporalSignal or a DynamicGraphTemporalSignal object and returns two iterators according to the split ratio specified by train_ratio.

```python
from torch_geometric_temporal.dataset import ChickenpoxDatasetLoader
from torch_geometric_temporal.signal import temporal_signal_split

loader = ChickenpoxDatasetLoader()
dataset = loader.get_dataset()

train_dataset, test_dataset = temporal_signal_split(dataset, train_ratio=0.8)
```

# Applications

In the following, we will overview two case studies where PyTorch Geometric Temporal can be used to solve real-world machine learning problems. One of them is about epidemiological forecasting; the other one is about predicting web traffic.

## Epidemiological Forecasting

We are using the Hungarian Chickenpox Cases dataset in this case study. We will train a regressor to predict the weekly cases reported by the counties using a recurrent graph convolutional network. First, we will load the dataset and create an appropriate spatio-temporal split.

```python
from torch_geometric_temporal.dataset import ChickenpoxDatasetLoader
from torch_geometric_temporal.signal import temporal_signal_split

loader = ChickenpoxDatasetLoader()
dataset = loader.get_dataset()

train_dataset, test_dataset = temporal_signal_split(dataset, train_ratio=0.2)
```

In the next steps we will define the recurrent graph neural network architecture used for solving the supervised task. The constructor defines a DCRNN layer and a feedforward layer. It is important to note that the final non-linearity is not integrated into the recurrent graph convolutional operation. This design principle is used consistently, and it was taken from PyTorch Geometric. Because of this, we defined a ReLU non-linearity between the recurrent and linear layers manually. The final linear layer is not followed by a non-linearity, as we solve a regression problem with zero-mean targets.
```python
import torch
import torch.nn.functional as F
from torch_geometric_temporal.nn.recurrent import DCRNN

class RecurrentGCN(torch.nn.Module):
    def __init__(self, node_features):
        super(RecurrentGCN, self).__init__()
        self.recurrent = DCRNN(node_features, 32, 1)
        self.linear = torch.nn.Linear(32, 1)

    def forward(self, x, edge_index, edge_weight):
        h = self.recurrent(x, edge_index, edge_weight)
        h = F.relu(h)
        h = self.linear(h)
        return h
```

Let us define a model (we have 4 node features) and train it on the training split (first 20% of the temporal snapshots) for 200 epochs. We backpropagate when the loss from every temporal snapshot is accumulated. We will use the Adam optimizer with a learning rate of 0.01. The tqdm function is used for measuring the runtime needed for each training epoch.

```python
from tqdm import tqdm

model = RecurrentGCN(node_features=4)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
model.train()

for epoch in tqdm(range(200)):
    cost = 0
    for time, snapshot in enumerate(train_dataset):
        y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
        cost = cost + torch.mean((y_hat - snapshot.y)**2)
    cost = cost / (time + 1)
    cost.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Using the holdout we will evaluate the performance of the trained recurrent graph convolutional network and calculate the mean squared error across all the spatial units and time periods.

```python
model.eval()
cost = 0
for time, snapshot in enumerate(test_dataset):
    y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
    cost = cost + torch.mean((y_hat - snapshot.y)**2)
cost = cost / (time + 1)
cost = cost.item()
print("MSE: {:.4f}".format(cost))
```

>>> MSE: 1.0232

## Web Traffic Prediction

We are using the Wikipedia Maths dataset in this case study. We will train a recurrent graph convolutional network to predict the daily views on Wikipedia pages. First, we will load the dataset and use 14 lagged traffic variables. Next, we create an appropriate spatio-temporal split using 50% of the days for training the model.

```python
from torch_geometric_temporal.dataset import WikiMathsDatasetLoader
from torch_geometric_temporal.signal import temporal_signal_split

loader = WikiMathsDatasetLoader()
dataset = loader.get_dataset(lags=14)

train_dataset, test_dataset = temporal_signal_split(dataset, train_ratio=0.5)
```

In the next steps we will define the recurrent graph neural network architecture used for solving the supervised task. The constructor defines a GConvGRU layer and a feedforward layer. It is important to note again that the non-linearity is not integrated into the recurrent graph convolutional operation. The convolutional model has a fixed number of filters (which can be parametrized) and considers 2nd-order neighborhoods.

```python
import torch
import torch.nn.functional as F
from torch_geometric_temporal.nn.recurrent import GConvGRU

class RecurrentGCN(torch.nn.Module):
    def __init__(self, node_features, filters):
        super(RecurrentGCN, self).__init__()
        self.recurrent = GConvGRU(node_features, filters, 2)
        self.linear = torch.nn.Linear(filters, 1)

    def forward(self, x, edge_index, edge_weight):
        h = self.recurrent(x, edge_index, edge_weight)
        h = F.relu(h)
        h = self.linear(h)
        return h
```

Let us define a model (we have 14 node features) and train it on the training split (first 50% of the temporal snapshots) for 50 epochs. We backpropagate the loss from every temporal snapshot individually. We will use the Adam optimizer with a learning rate of 0.01. The tqdm function is used for measuring the runtime needed for each training epoch.
```python
from tqdm import tqdm

model = RecurrentGCN(node_features=14, filters=32)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
model.train()

for epoch in tqdm(range(50)):
    for time, snapshot in enumerate(train_dataset):
        y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
        cost = torch.mean((y_hat - snapshot.y)**2)
        cost.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Using the holdout traffic data we will evaluate the performance of the trained recurrent graph convolutional network and calculate the mean squared error across all of the web pages and days.

```python
model.eval()
cost = 0
for time, snapshot in enumerate(test_dataset):
    y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
    cost = cost + torch.mean((y_hat - snapshot.y)**2)
cost = cost / (time + 1)
cost = cost.item()
print("MSE: {:.4f}".format(cost))
```

>>> MSE: 0.7760
2021-09-25 03:06:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3216722011566162, "perplexity": 5052.815636202738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057589.14/warc/CC-MAIN-20210925021713-20210925051713-00550.warc.gz"}
https://holooly.com/solutions/draw-the-free-body-diagram-of-the-spanner-wrench-subjected-to-the-force-f-the-support-at-a-can-be-considered-a-pin-and-the-surface-of-contact-at-b-is-smooth-explain-the-significa/
## Question:

Draw the free-body diagram of the "spanner wrench" subjected to the force $F$. The support at $A$ can be considered a pin, and the surface of contact at $B$ is smooth. Explain the significance of each force on the diagram.

Given: $F$ = $20$ $lb$, $a$ = $1$ $in$, $b$ = $6$ $in$

## Step-by-step

The free-body diagram contains three forces: ${ A }_{ x }$ and ${ A }_{ y }$, the two components of the pin reaction at $A$ (a pin can resist a force in any in-plane direction, so two unknown components are required), and ${ N }_{ B }$, the force of the cylinder on the wrench at $B$; because the contact surface at $B$ is smooth, this reaction must act normal to the surface.
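A hedged sketch of the equilibrium equations these three forces must satisfy; the direction angle $\theta$ of $N_B$ and the moment arms $d_B$ and $d_F$ depend on the wrench geometry in the original figure, so they are left symbolic here:

$$\sum F_x = 0:\; A_x - N_B\cos\theta = 0, \qquad \sum F_y = 0:\; A_y + N_B\sin\theta - F = 0, \qquad \sum M_A = 0:\; N_B\,d_B - F\,d_F = 0$$

Once $\theta$, $d_B$, and $d_F$ are read off the figure, these three equations determine the three unknowns $A_x$, $A_y$, and $N_B$.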
2020-10-29 04:18:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 13, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5262330174446106, "perplexity": 993.002683216382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902745.75/warc/CC-MAIN-20201029040021-20201029070021-00375.warc.gz"}
https://dsp.stackexchange.com/questions/79535/filter-for-isi-channel
Filter for ISI Channel I know that OFDM has many advantages in removing ISI. But I think there is one more way of removing ISI (an equalizing or inverting technique). Suppose there is a channel $$H(\omega)$$ and we have our message $$X(\omega)$$. So, the received signal will be $$Y(\omega) = H(\omega)X(\omega)$$ So at the receiver end, we can just use a filter with $$\frac{1}{H(\omega)}$$. Why don't we use an inverted filter here to avoid ISI? Thank You!
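As an illustration of the inverse filter being described, here is a small numpy sketch; the channel taps and BPSK message are made up for the example, and the FFT-domain division corresponds to circular convolution:

```python
# Minimal sketch of the inverse (zero-forcing) equalizer described above.
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=256)      # BPSK message symbols
h = np.array([1.0, 0.5, 0.2])              # example ISI channel impulse response

N = len(x)
H = np.fft.fft(h, N)                       # channel frequency response H(w)
Y = H * np.fft.fft(x)                      # received spectrum Y(w) = H(w) X(w)
x_hat = np.real(np.fft.ifft(Y / H))        # inverse filter 1/H(w) at the receiver

print(np.max(np.abs(x_hat - x)))           # ~0 in this noiseless setting
```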
2022-07-03 23:40:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3789173662662506, "perplexity": 573.9554008544067}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104277498.71/warc/CC-MAIN-20220703225409-20220704015409-00582.warc.gz"}
https://cstheory.stackexchange.com/questions/38288/totally-mixed-2sat-with-exact-cardinality
# totally-mixed 2SAT with exact cardinality?

Given a 2HornSAT problem, it's possible in linear time to find the minimum solution to the problem, i.e., a solution that minimizes the number of variables set to 1. Now let us consider the following restricted variant of that problem:

Input: A positive integer $K$ and a 2SAT instance in which all clauses are mixed, i.e., have a positive literal and a negative literal.

Output: Is there a satisfying assignment such that exactly $K$ variables are set to 1?

Is this problem NP-complete? I am struggling with its reduction, but it seems this might be difficult.

I prove NP-hardness below by reduction from the clique problem (given a graph and a number, does the graph have a clique of that many vertices).

## reduction

Suppose we are given a clique instance consisting of a graph $G = (V, E)$ with $m = |E|$ and $n = |V|$ and a number $k$. Then we will produce an instance of your problem consisting of a formula $\phi$ and a number $K$, as described below.

First of all, we set $K = k + (n+1) \times {k \choose 2}$.

Next, let's describe the variables used in $\phi$. For each vertex $v \in V$, $\phi$ will include a variable $x_v$. For each edge $e \in E$, $\phi$ will include $n+1$ variables: $y_e^0, y_e^1, \ldots, y_e^n$.

Finally, let's describe the clauses included in $\phi$. Each clause has the form $(a \vee \neg b)$, which is logically equivalent to $(b \to a)$, so I will write all clauses in implication form. For each edge $e \in E$, we include clauses $(y_e^n \to y_e^0)$, $(y_e^0 \to y_e^1)$, $(y_e^1 \to y_e^2)$, ..., and $(y_e^{n-1} \to y_e^n)$. The effect of these clauses is to enforce the equality of all the $y_e^i$s (for any fixed $e$) in any satisfying assignment. Next, for any edge $(u, v) \in E$, we also include clauses $(y_{(u,v)}^0 \to x_u)$ and $(y_{(u,v)}^0 \to x_v)$. The effect of these clauses is that in any satisfying assignment, if the variables $y_{(u,v)}^i$ associated with an edge are true, then the variables $x_u$ and $x_v$ associated with the endpoints must also be true.

## clique $\to$ satisfying assignment

Suppose that there is a clique $C$ of size $k$ in $G$. Then we can create a satisfying assignment for $\phi$ with exactly $K = k + (n+1) \times {k \choose 2}$ true variables. In particular, for $v \in V$, set $x_v$ to true iff $v \in C$, and for $e \in E$, set $y_e^i$ to true iff both endpoints of $e$ are in $C$. There are $n+1$ variables $y_e^i$ for each $e$, and there are exactly $k \choose 2$ edges in $G$ with both endpoints in $C$ (since $C$ is a clique). Thus there are $(n+1) \times {k \choose 2}$ variables of the form $y_e^i$ that are set to true under this assignment. Furthermore, $|C| = k$, so there are exactly $k$ variables of the form $x_v$ set to true under this assignment. As desired, this assignment has exactly $K = k + (n+1) \times {k \choose 2}$ true variables. Notice that $y_e^i = y_e^j$ for every edge $e$ and pair of indices $i, j$. Thus, the clauses of the form $(y_e^i \to y_e^{(i+1)~\text{mod}~(n+1)})$ are satisfied under this variable assignment. Next, consider any edge $(u, v)$. If $y_{(u, v)}^0$ is true, then both $u$ and $v$ are vertices in $C$, so therefore both $x_u$ and $x_v$ are also true. Thus, clauses $(y_{(u,v)}^0 \to x_u)$ and $(y_{(u,v)}^0 \to x_v)$ are also satisfied. Since all clauses are satisfied, this is a satisfying assignment (which we already noted has exactly $K$ true variables).
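Before the converse direction, the construction above can be sketched in code; this is a hypothetical helper (the names and clause encoding are my own, not the answerer's), with each clause stored as an implication pair (a, b) meaning $a \to b$, i.e., the mixed clause $(\neg a \vee b)$:

```python
# Sketch of the clique -> exact-cardinality mixed-2SAT reduction described above.
def reduce_clique_to_exact_mixed_2sat(vertices, edges, k):
    n = len(vertices)
    K = k + (n + 1) * (k * (k - 1) // 2)           # K = k + (n+1) * C(k,2)
    clauses = []
    for e in edges:
        ys = [("y", e, i) for i in range(n + 1)]   # y_e^0 ... y_e^n
        clauses.append((ys[n], ys[0]))             # cycle forces all y_e^i equal
        for i in range(n):
            clauses.append((ys[i], ys[i + 1]))
        u, v = e
        clauses.append((ys[0], ("x", u)))          # y_e^0 -> x_u
        clauses.append((ys[0], ("x", v)))          # y_e^0 -> x_v
    return clauses, K

# toy usage: triangle graph, k = 3
cls, K = reduce_clique_to_exact_mixed_2sat([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 3)
print(K, len(cls))
```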
## satisfying assignment $\to$ clique

Next suppose we have a satisfying assignment of $\phi$ with exactly $K = k + (n+1) \times {k \choose 2}$ true variables. Any satisfying assignment has $y_e^0 = y_e^1 = \cdots = y_e^n$. Then let $y_e = y_e^0$. Define $n_y$ to be the number of true $y_e$s. Similarly, define $n_x$ to be the number of true $x_v$s. Notice that the number of true variables in the assignment is equal to $n_x + (n+1) \times n_y$. Furthermore, $0 \le n_x < n+1$ since there are only $n$ different $x_v$s. Thus, we can conclude that $n_x = K~\text{mod}~(n+1) = k$ and $n_y = \lfloor \frac{K}{n+1} \rfloor = {k \choose 2}$. Let $C = \{v \in V~|~x_v~\text{is true}\}$ and let $E' = \{e \in E~|~y_e~\text{is true}\}$. Note that $|C| = n_x$ and $|E'| = n_y$ by definition. Then $E'$ is a set of ${k \choose 2}$ edges, and $C$ is a set of $k$ vertices. Notice that if $(u,v) \in E'$, then $y_{(u,v)}$ is true, and therefore $y_{(u,v)}^0$ is true; as a result, since clauses $(y_{(u,v)}^0 \to x_u)$ and $(y_{(u,v)}^0 \to x_v)$ must be satisfied, we can conclude that $x_u$ and $x_v$ are also true, and therefore that $u,v \in C$. Thus, if $e \in E'$ and $v$ is an endpoint of $e$, then $v \in C$. Thus the set of endpoints of edges in $E'$ is a subset of $C$. Then $E'$ is a set of ${k \choose 2}$ edges whose set of endpoints numbers at most $|C| = k$. A set of ${k \choose 2}$ edges has only $k$ endpoints in total only when those $k$ endpoints form a clique. In other words, it must be the case that $C$ is a clique and the edges in $E'$ are the edges of the clique. Thus we have identified a clique of size $k$ in $G$.

• Much thanks. The reduction is complicated so it will take some time for me to process it.. :) – TheoryQuest1 May 30 '17 at 7:03
2019-09-20 18:45:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8881669640541077, "perplexity": 127.9899489818968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574058.75/warc/CC-MAIN-20190920175834-20190920201834-00390.warc.gz"}
http://vmu.phys.msu.ru/en/abstract/2019/5/19-5-040/
Faculty of Physics, M.V. Lomonosov Moscow State University

Regular Article

# Magnetoresistive features of the long FeNiCo nano strip

## V. S. Shevtsov$^{1,2}$, O. P. Polyakov$^{1,2}$, V. V. Amelichev$^3$, S. I. Kasatkin$^2$, P. A. Polyakov$^1$

### Moscow University Physics Bulletin 2019. N 5.

Article Annotation:

A theory has been developed to explain the experimental dependence of the magnetoresistance of the FeNiCo nano strip on the external magnetic field. The theory is based on the assumption of a one-dimensional nonuniformity of the distribution of magnetization in a nano strip, which makes it possible to implement an effective algorithm for solving the micromagnetic equilibrium problem. It is shown that the developed theory is in good agreement with the calculations of the specialized OOMMF package, but significantly exceeds it in performance for such problems.

Approved: 2020 February 19

PACS:
75.70.Ak Magnetic properties of monolayers and thin films
75.78.Cd Micromagnetic simulations

Russian citation: В. С. Шевцов, О. П. Поляков, В. В. Амеличев, С. И. Касаткин, П. А. Поляков. Вестн. Моск. ун-та. Сер. 3. Физ. Астрон. 2019. № 5. С. 40.

Authors: V. S. Shevtsov$^{1,2}$, O. P. Polyakov$^{1,2}$, V. V. Amelichev$^3$, S. I. Kasatkin$^2$, P. A. Polyakov$^1$

$^1$Department of General Physics, Faculty of Physics, M.V. Lomonosov Moscow State University
$^2$V.A. Trapeznikov Institute of Control Sciences of the Russian Academy of Sciences
$^3$Nauchno-proizvodstvennyj kompleks «Tekhnologicheskij centr»
2020-08-14 02:17:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3318432867527008, "perplexity": 7829.131230073716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739134.49/warc/CC-MAIN-20200814011517-20200814041517-00318.warc.gz"}
http://stackoverflow.com/questions/10137566/how-to-display-a-defined-value
# How to display a defined value

In some doxygen documentation I'd like to display the content of a #define, not the tag itself. For instance, in a C file I have

```c
#define REPEAT_N_TIMES 10
```

Now in my documentation I want to display: "The action is done 10 times." If I use \ref REPEAT_N_TIMES, it displays: "The action is done REPEAT_N_TIMES times." Is there a way to display the content of a link, not the link itself, for example like \ValueOf(\ref REPEAT_N_TIMES) or \contentOf(\ref REPEAT_N_TIMES)?

Update: My Doxygen's config is:

```
# Configuration options related to the preprocessor
ENABLE_PREPROCESSING   = YES
MACRO_EXPANSION        = YES
EXPAND_ONLY_PREDEF     = YES
SEARCH_INCLUDES        = YES
INCLUDE_PATH           =
INCLUDE_FILE_PATTERNS  =
PREDEFINED             = WXUNUSED()=
EXPAND_AS_DEFINED      =
SKIP_FUNCTION_MACROS   = YES
```

The MACRO_EXPANSION setting seems to change the "details" of the macros. But I don't see a way to select either the name of the macro or its content. Using the command \ref doesn't seem to be the right way: it refers to "something", not the content of "something". Is there an operator or function I could use, possibly similar to C, where I can use something like \ref *something instead of \ref something?

- See this answer: stackoverflow.com/a/1510919/623518 Essentially, comments are replaced with a single space in the "translation phase", which happens prior to the preprocessing-directive parsing. So preprocessing cannot be used to replace directives within comments. The only way to do this is to use the input filter, as I suggest in my answer. Alternatively, just reference your define (see my update). – Chris Apr 14 '12 at 14:56
- Possible duplicate: stackoverflow.com/questions/9299608/… – Chris Apr 14 '12 at 14:56

## 1 Answer

The doxygen manual page on preprocessing seems to have all the information you need. As a first step, try setting the MACRO_EXPANSION flag in the doxygen configuration file to YES, then in your documentation include "The action is done REPEAT_N_TIMES times." As noted in the doxygen manual, this will expand all macro definitions (recursively if needed), which is often too much. Therefore you can specify exactly which macros to expand using the EXPAND_ONLY_PREDEF and EXPAND_AS_DEFINED settings in the configuration file. For example, try setting

```
EXPAND_ONLY_PREDEF = YES
EXPAND_AS_DEFINED  = REPEAT_N_TIMES
```

in the configuration file.

UPDATE: Following @spamy's comment I looked into this a bit more, and it seems that the methods I have mentioned above do not work for macros within comment blocks, i.e., only macros within the source code are expanded. See, for example, this post on the doxygen sourceforge page. According to this post, the only way to achieve macro expansion within comment blocks is to use the INPUT_FILTER configuration file setting. Use something like

```
INPUT_FILTER = "sed -e 's/REPEAT_N_TIMES/10/g'"
```

Warning: The above INPUT_FILTER has not been tested.

If you don't want to use the INPUT_FILTER, then this answer to another thread is probably your best bet. Essentially it says you can document the macro, so readers of the documentation would be able to find the real value easily. So add documentation to your #define and just \ref it elsewhere in your documentation.
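As a concrete variant of the INPUT_FILTER idea above (untested here, like the sed version): Doxygen invokes the filter as `<filter> <input-file>` and reads the transformed source from its stdout, so a small Python script (hypothetical name replace_defines.py) can perform the substitution, with INPUT_FILTER set to "python3 replace_defines.py":

```python
#!/usr/bin/env python3
# replace_defines.py - hypothetical Doxygen input filter: print the input
# file to stdout with REPEAT_N_TIMES replaced by its value.
import sys

with open(sys.argv[1]) as src:
    sys.stdout.write(src.read().replace("REPEAT_N_TIMES", "10"))
```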
- My config is:

```
#---------------------------------------------------------------------------
# Configuration options related to the preprocessor
#---------------------------------------------------------------------------
ENABLE_PREPROCESSING   = YES
MACRO_EXPANSION        = YES
EXPAND_ONLY_PREDEF     = YES
SEARCH_INCLUDES        = YES
INCLUDE_PATH           =
INCLUDE_FILE_PATTERNS  =
PREDEFINED             = WXUNUSED()=
EXPAND_AS_DEFINED      =
SKIP_FUNCTION_MACROS   = YES
```

– spamy Apr 13 '12 at 9:37
- Ok, it seems that macro expansion within comment blocks is only possible by using INPUT_FILTER, see my update. – Chris Apr 13 '12 at 10:03
2015-04-22 00:09:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8832994103431702, "perplexity": 3131.743646442004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246643815.32/warc/CC-MAIN-20150417045723-00120-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.biostars.org/p/3407/
How To Extract Reads From Bam That Overlap With Specific Regions?

6 answers · 18 votes · asked 10.9 years ago by Stew ★ 1.4k

This question is related to this one, but I would like to know if anyone knows of any methods of quickly extracting reads from a BAM file that overlap with a list of many regions (e.g. a BED file), but I would like the reads separately for each region. My eventual requirements are that for each region individually, I would like to know:

1. The number of overlapping reads
2. The location of the highest pileup
3. The height of the highest pileup

However I am happy to calculate these, I just need a tool to return the reads for each region in a way that I can parse. I am currently using Rsamtools, which does everything I require, as it returns a list, with each element in the list containing the reads that overlap each region. However, it is really slow for a large number of regions. Its run time is dependent on the number of regions and largely independent of the number of reads in the BAM. I am using Rsamtools like this currently:

```r
which <- RangedData(space=bed[,1], IRanges(bed[,2], bed[,3]))
param <- ScanBamParam(which=which, what=c("pos"))
bam <- scanBam(file=bamFile, param=param)
```

I can split the list of regions and run it as many parallel jobs and merge at the end; however, first I wanted to check if there was another tool which was quicker for this task. I have looked at intersectBed, but I can not get this to return reads in a way I can easily parse to give information for each region. The run time for this method is dependent on the number of reads in the BAM and largely independent of the number of regions, which would be good for my needs.

next-gen sequencing bam bed coverage read • 28k views

• I coded this in python using pysam. For 166k regions and a bamfile with 100 million alignments, runtime was 412 seconds to count the number of reads in each region.
• Hi, it's slow because scanBam with param is not meant to be used as repeatedly as you do it. For many regions it's advisable to use countOverlaps.
• The java code by Pierre is incomplete. Hi Pierre, is there any chance that you could post the entire java code? Thx!

8 · 10.9 years ago

Hi Stew, as Sean suggested you should take advantage of the efficient overlap computation in the IRanges package. What is slowing down your computation is probably the post-processing of the list you are making, not the bam parsing itself. I'm going to provide a little recipe here. Please have a look at the documentation of countOverlaps and Views in particular. Also note that you can import bed files using the rtracklayer package. I will edit this answer as soon as I find some more time to test the examples.

Edit: I promised some recipes in R, so here is the first one. Ingredients:

• gff file with the genomic regions
• bam file with the aligned reads
• fairly recent R & bioconductor with libraries: ShortRead, rtracklayer + dependencies

```r
library(rtracklayer)
library(GenomicAlignments)  # assumed import; the original read-loading call was cut off

genes = import("~/bsubtilis.gff")

# easiest way to make a ranged data object out of the reads;
# make sure the chromosome names in reads and genes are identical
# (check with names(genes)), otherwise you will get an error
reads = as(readGAlignments("reads.bam"), "GRanges")  # assumed equivalent of the lost line

countOverlaps(genes, reads)
```

```
CompressedIntegerList of length 1
[["AL009126.3"]] 93 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
```

Result: A list with an entry for each chromosome ('space') with the number of reads mapped for each gene in the chromosome.
• Thanks for the reply and the code. I will try this. Since posting I have tried a new strategy, which is a perl script that calculates overlap using the output from samtools view piped to stdin. In this case the overlaps are calculated as the file is streamed, which has the advantage of not using any memory and only taking as long as it takes to view the file once. It will be interesting to compare your method to my perl hack. I like the idea of your method better as I end up having to save the results of my perl code to a file and read it into R anyway. Thanks again.
• I would assume that when comparing both ways 'my' solution might be faster while also using much more memory. This is due to the R philosophy of loading everything into memory. Also, make sure you got the latest Bioconductor. import.gff has been updated recently and is now more efficient.
• This is really good. Is there any way of getting the same result for Novoalign output, that is in the native novoalign format?
• "import" is changing the order of chromosomes (not same order in genes and reads) and I am getting incorrect overlaps. Is there any way to fix this? Thanks!
• Kavita, please provide some evidence for that, then we can reproduce the problem.
• @Sameet, as long as you can convert the output of novoalign to SAM or BAM it is equally easy; isn't there a switch in novoalign that allows that? Otherwise, parsing the textual output into a GRanges object should work.
• I used the exact same code provided, however when I import the gff file it reorders the space (genes) to follow this order: names(genes) = "chr1" "chr10" "chr11" "chr12" "chr13" "chr14" "chr15" "chr16" "chr17" "chr18" "chr19" "chr2" "chr3" "chr4" "chr5" "chr6" "chr7" "chr8" "chr9" "chrX" "chrY". But my BAM file reads are in this order: names(reads) = "chr1" "chr2" "chr3" "chr4" "chr5" "chr6" "chr7" "chr8" "chr9" "chr10" "chr11" "chr12" "chr13" "chr14" "chr15" "chr16" "chr17" "chr18" "chr19" "chrX" "chrY". Upon doing countOverlaps I get correct overlap counts for chr1, X and Y; the others are wrong.
• Here is a snippet of the output:

```
CompressedIntegerList of length 21
[["chr1"]] 84 54 162 65 38 42 142 95 185 31 91 49 67 92 33 53 92 80 22 203 40 85 19 49 43 19 49 21 ... 142 711 140 365 44 119 74 202 144 85 227 67 32 44 79 125 50 63 39 53 92 473 402 168 75 45 51
[["chr10"]] 2 2 1 4 2 40 4 2 5 12 5 10 5 4 1 1 5 3 7 0 2 3 2 3 1 1 1 1 1 0 0 6 3 3 6 3 0 2 2 2 0 2 0 4 ... 1 4 3 2 4 4 3 9 1 1 5 2 1 0 0 2 4 2 0 2 2 2 1 0 2 0 3 1 4 2 2 2 3 3 3 4 1 6 1 1 1 0 0 0 0
```

• It has been fixed. The RangedData object (genes) was reordered to match the read names using the command genes[names(reads)].

6 · 10.9 years ago

Using a bamfile as the source of data will be efficient for some tasks, but for others it is actually perhaps more efficient to read in a larger chunk of the file and process it in memory. For example, you may want to consider processing all the reads and regions in each chromosome in memory:

1. Read in all regions
2. Read in large sections of the bamfile (a chromosome at a time)
3. Process all regions for the section read in in (2)
4. Repeat 2-3 for all chromosomes

Particularly since you are talking about pileups, etc., these are going to be done efficiently by the IRanges operations in R for a whole chromosome, and then you can use Views to extract your regions of interest. I would think that this method would be largely independent of the number of regions and, instead, would depend on the number of reads.
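(For reference, the pysam approach mentioned in the comments under the question might look like the following sketch; the file names are placeholders, and the BAM must be coordinate-sorted and indexed:)

```python
# Sketch: count reads overlapping each BED region with pysam.
import pysam

bam = pysam.AlignmentFile("reads.bam", "rb")       # sorted + indexed BAM
with open("regions.bed") as bed:
    for line in bed:
        chrom, start, end = line.split()[:3]
        n = bam.count(chrom, int(start), int(end)) # reads overlapping the region
        print(chrom, start, end, n, sep="\t")
```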
4 · 10.9 years ago

I'm going to answer the first part of your question by using the picard library. The following java program reads a list of chrom/start/end values from stdin or from a file and outputs the number of reads in each region:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

import net.sf.samtools.SAMFileReader;
import net.sf.samtools.SAMFileReader.ValidationStringency;
import net.sf.samtools.SAMRecord;
import net.sf.samtools.util.CloseableIterator;

public class BioStar3414 {
    private File bamFile = null;
    private SAMFileReader inputSam = null;
    private boolean containsbamRecord = false; // false: the alignment of the returned SAMRecords need only overlap the interval of interest.

    public BioStar3414() {
    }

    public void open(File bamFile) {
        close();
        this.bamFile = bamFile;
        this.inputSam = new SAMFileReader(this.bamFile); // assumed reconstruction of the truncated original
        this.inputSam.setValidationStringency(ValidationStringency.SILENT);
    }

    public void close() {
        if (inputSam != null) {
            inputSam.close();
        }
        this.inputSam = null;
        this.bamFile = null;
    }

    private int scan(String chromosome, int start, int end) throws IOException {
        int nCount = 0;
        CloseableIterator<SAMRecord> iter = null;
        try {
            iter = this.inputSam.query(chromosome, start, end, this.containsbamRecord);
            while (iter.hasNext()) {
                iter.next();
                ++nCount;
            }
            return nCount;
        } catch (Exception e) {
            throw new IOException(e);
        } finally {
            if (iter != null) iter.close();
        }
    }

    public void run(BufferedReader in) throws IOException {
        String line;
        while ((line = in.readLine()) != null) { // loop header assumed; cut off in the posted code
            if (line.isEmpty()) continue;
            String tokens[] = line.split("[\t]");
            String chrom = tokens[0];
            int start = Integer.parseInt(tokens[1]);
            int end = Integer.parseInt(tokens[2]);
            int count = scan(chrom, start, end);
            System.err.println(chrom + "\t" + start + "\t" + end + "\t" + count);
        }
    }

    public static void main(String[] args) {
        File bamFile = null;
        BioStar3414 app;
        try {
            app = new BioStar3414();
            int optind = 0;
            while (optind < args.length) { // argument parsing assumed; cut off in the posted code
                if (args[optind].equals("-bam")) {
                    bamFile = new File(args[++optind]);
                }
                ++optind;
            }
            app.open(bamFile);
            app.run(new BufferedReader(new InputStreamReader(System.in)));
            app.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Compilation:

```
javac -cp sam-1.16.jar:picard-1.16.jar BioStar3414.java
```

Execute:

```
echo -e "chr1\t1000000\t2000000\nchr1\t2000000\t3000000\n" |\
java -cp sam-1.16.jar:picard-1.16.jar:. BioStar3414 -bam my.sorted.bam
chr1 1000000 2000000 39604
chr1 2000000 3000000 14863
```

2 · 10.9 years ago · Ian 5.8k

You might find bedtools useful http://code.google.com/p/bedtools/. You can convert BAM to BED or work with the BAM file directly. There are many tools for performing intersects, calculating coverage stats, etc. But this would be outside of R!

2 · 7.5 years ago

With BEDOPS, the bedmap tool can be used to report the answer to all three questions in one command. Also, if you have access to a computational grid, BEDOPS can split work into smaller, per-node tasks very easily. This answer will review two approaches: one serial, the other parallel.

Let's assume you have a sorted BED file containing your regions-of-interest, called Regions.bed. Let's also assume that you have run a pileup operation on your BAM file. This operation converts reads to piles of tags over a given window size, formatted either as a BED file or as a highly compressed BED file in a format called Starch. Windows of bins may or may not overlap - this is up to you. We have a writeup here that explains how to collapse reads to tag counts over windows. Let's say the output of this pileup step is a compressed BED file called BinnedReads.starch.
To display the count of binned reads over each region of interest, along with the bin element with the highest read count, add the --count and --max-element operators to the following bedmap operation:

```
$ bedmap --echo --count --max-element Regions.bed BinnedReads.starch > Answer.bed
```

The file Answer.bed will be a sorted BED file containing:

```
[ region-1 ] | [ count-of-binned-reads-over-reg-1 ] | [ maximum-sized-bin-over-reg-1 ]
[ region-2 ] | [ count-of-binned-reads-over-reg-2 ] | [ maximum-sized-bin-over-reg-2 ]
...
[ region-n ] | [ count-of-binned-reads-over-reg-n ] | [ maximum-sized-bin-over-reg-n ]
```

Sorted inputs will make this operation very fast. Even further, you can parallelize this operation very easily with bedmap --chrom and bedextract --list-chr. Let's say you have a Sun Grid Engine environment. Here is some bash-like pseudocode to explain how you might use BEDOPS to split tasks up by chromosome:

```
for chromosomeName in `bedextract --list-chr Regions.bed`; do \
    qsub <options> bedmap --chrom ${chromosomeName} --echo --count --max-element Regions.bed BinnedReads.starch > Answer.${chromosomeName}.bed; \
done

qsub -hold_jid <list_of_per_chrom_bedmap_job_names> <options> bedops --everything Answer.*.bed > Answer.bed
```

This map-reduce approach splits or maps work into per-chromosome jobs, where the final job reduces or concatenates all the results into one file. This shortens the time taken to perform your analysis to the time cost of the chromosome with the largest total region space - BEDOPS can make this task an order of magnitude faster, using distributed computation.
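Finally, for the asker's second and third requirements (the location and height of the highest pileup within each region), a hypothetical pysam sketch, with placeholder file and region names and attribute names as in recent pysam releases:

```python
# Sketch: position and depth of the deepest pileup column in a region.
import pysam

bam = pysam.AlignmentFile("reads.bam", "rb")   # sorted + indexed BAM

def max_pileup(chrom, start, end):
    best_pos, best_depth = None, 0
    for col in bam.pileup(chrom, start, end, truncate=True):
        if col.nsegments > best_depth:         # nsegments = reads covering this column
            best_pos, best_depth = col.reference_pos, col.nsegments
    return best_pos, best_depth

print(max_pileup("chr1", 1000000, 2000000))
```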
2021-10-16 22:29:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2583261728286743, "perplexity": 1467.339704451097}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585025.23/warc/CC-MAIN-20211016200444-20211016230444-00687.warc.gz"}
https://asmedigitalcollection.asme.org/mechanicaldesign/article/139/10/101404/367018/Learning-an-Optimization-Algorithm-Through-Human
Solving optimal design problems through crowdsourcing faces a dilemma: On the one hand, human beings have been shown to be more effective than algorithms at searching for good solutions of certain real-world problems with high-dimensional or discrete solution spaces; on the other hand, the cost of setting up crowdsourcing environments, the uncertainty in the crowd's domain-specific competence, and the lack of commitment of the crowd contribute to the lack of real-world application of design crowdsourcing. We are thus motivated to investigate a solution-searching mechanism where an optimization algorithm is tuned based on human demonstrations of solution searching, so that the search can be continued after human participants abandon the problem. To do so, we model the iterative search process as a Bayesian optimization (BO) algorithm and propose an inverse BO (IBO) algorithm to find the maximum likelihood estimators (MLEs) of the BO parameters based on human solutions. We show through a vehicle design and control problem that the search performance of BO can be improved by recovering its parameters based on an effective human search. Thus, IBO has the potential to improve the success rate of design crowdsourcing activities, by requiring only good search strategies instead of good solutions from the crowd.

## Introduction

### Challenges and Opportunities for Design Crowdsourcing.

Optimal design problems often have large solution spaces and highly nonconvex objectives and constraints, inhibiting effective solution searching through existing optimization algorithms. Some of these problems, however, have been quite successfully (yet heuristically) solved by human beings. Notable examples include protein folding [1,2], RNA synthesis [3,4], genome sequence alignment [5], robot trajectory planning [6], and others [7–9]. The superior performance of some human beings at solving these problems demonstrates the advantages of human intelligence, which are supported by cognitive science and neuroscience findings [10] (see discussion in Sec. 5.1).

However, despite a handful of success stories, applications of crowdsourcing to real-world design problems have yet to overcome several practical barriers. The cost of setting up problem-dependent crowdsourcing environments, the lack of commitment from crowd members, and uncertainty in domain-specific crowd competence have all contributed to its lack of adoption, while the growing availability of computation resources often makes straightforward optimization or brute-force search a more convenient approach. Our earlier study [8] highlighted these challenges for design crowdsourcing: We gamified a vehicle design and control problem (called the "ecoRacer" problem in what follows) where the objective is to complete a track with minimal energy consumption within a time limit, by finding the optimal final drive ratio of the vehicle and the control policy for acceleration and regenerative braking. The game was broadcast on social media and received more than 2000 plays from 124 unique players within the first month. Results showed that (1) the marginal improvement in average game score of the crowd over an algorithm does not necessarily justify the high cost of developing crowdsourcing games and (2) only a few players were committed to the search for more than 50 iterations, and still fewer could outperform the computer-found solution at all (see summary in Fig. 1).
Nonetheless, human search results displayed a significantly different search pattern than that of the algorithm. In particular, quite a few players showed rapid early improvement in performance, beyond the average performance of the computer, before they quit the game without reaching a solution close to the theoretical optimum. This observation is consistent with existing research (see, for example, Khatib et al. [2] on a human-designed protein folding algorithm having a short-term advantage over a standard algorithm) and suggests that while few people care to actually find the "best solution," their early demonstrations of how they search for a better solution may still be valuable. Specifically, we hypothesize that if a computer algorithm can be tuned to mimic these demonstrations, it can serve as a replacement for human solvers in their absence, to search in an effective way without ever abandoning the problem.

### Learning to Search.

This paper aims to test the previously mentioned hypothesis. We model a human solver's search behavior through a Bayesian optimization algorithm (BO, also known as efficient global optimization) [12,13]. The algorithm iterates between two steps: (1) estimating the shape of the problem space, based on previous solutions and the corresponding performances, using a Gaussian process (GP) model [14] and (2) creating a new solution based on this estimate (details in Sec. 2). While BO is not provably the underlying mechanism humans use, we hypothesize that the algorithm can be tuned to mimic the results of successful human search strategies, specifically in comparison with other popular gradient- and nongradient-based optimization algorithms.

The key assumption in modeling human search behavior through BO is the use of a GP to account for human beings' learning of input-output relationships (called "function learning" in psychology). This assumption is supported by various findings: In a recent review of function-learning models, Lucas et al. [15] showed that the two major schools of models, i.e., rule- and similarity-based, can be unified through a Gaussian process. As discussed in Wilson et al. [16], the evidence that Occam's Razor plays an important role in human prediction also suggests that GP is an appropriate model for function learning, as GP reduces model complexity by construction [17]. Empirically, Borji and Itti [18] showed that BO, with the use of GP, has the closest convergence performance to human searches when applied to one-dimensional (1D) optimization problems. In fact, many higher-dimensional problems that human beings naturally solve, such as locomotion planning, have also been successfully solved through the use of GP [19–22].

Under this modeling assumption, we investigate how BO parameters can be estimated for the algorithm to best match a human solver's search trajectory, i.e., the sequence of solution-performance pairs. To this end, we introduce an inverse BO (IBO) algorithm to derive the maximum likelihood estimators (MLEs) for BO parameters and discuss challenges in its implementation (see Sec. 3). Validation of the IBO algorithm takes two steps. We first use a simulation study to show that IBO can successfully estimate BO parameters used in generating a search trajectory (Sec. 3.2). We then show through the ecoRacer problem that the search performance of BO can be improved when its parameters are modified based on observing an effective human search and implementing IBO (Sec. 4).
The results provide evidence that IBO can accelerate a search using only good search strategies, without needing a large number of good human solutions. Thus, incorporating IBO in design crowdsourcing may lower the requirement on crowd commitment and so increase its chance of success. Limitations of the current IBO implementation, and their potential relaxations, will be discussed in depth in Sec. 5.

### Related Work.

It is important to note that the focus of this paper is on the design of optimization algorithms aided by human demonstrations, rather than the derivation of qualitative explanations of the strengths and limitations of human design strategies. There have been numerous studies from the latter category in recent years (see Refs. [23–28] for example). This paper is also distinguished from studies that propose human-inspired optimization algorithms (see Refs. [29–31] for example), in that the learning of the optimization algorithm in our case is conducted by another algorithm, rather than by human researchers. From this aspect, our study is related to studies in learning-to-learn [32], where algorithms (e.g., for gradient-based optimization [33] and optimal control [34]) are tuned and controlled by a higher-level algorithm. In such work, however, the algorithms are often improved purely computationally through reinforcement learning (RL) by solving similar problems repeatedly. Due to the use of human demonstrations, our paper is also related to inverse reinforcement learning (IRL) (see discussion in Sec. 5.2), where human control strategies are used for defining and finding optimal control strategies.

## Preliminaries on Bayesian Optimization

This section provides some background knowledge on BO to facilitate the discussion on IBO in Sec. 3.

### Terminologies and Notations.

Let an optimization problem be $\min_{x \in \mathcal{X}} f(x)$, where $\mathcal{X} \subseteq \mathbb{R}^p$ is the solution space. A search trajectory with $K$ iterations can be represented by $h_K := \langle X_K, f_K \rangle$, where $X_K$ and $f_K$ represent the collection of $K$ samples in $\mathcal{X}$ and their objective values, respectively. $h_0 := \langle X_0, f_0 \rangle$ represents an initial exploration set with $K_0$ samples. Human strategy is represented by algorithmic parameters $\lambda$ that govern the search behavior: During the search, each new solution $x_{k+1}$ (for $k = 0, \ldots, K-1$) is determined by $h_k := \langle X_k, f_k \rangle$ and $\lambda$ through maximizing a merit function with respect to $x$: $x_{k+1} = \arg\max_{x \in \mathcal{X}} Q(x; h_k, \lambda)$. The functional forms of the merit function $Q(x)$ will be introduced in Secs. 2.2 and 3. We also define $\Lambda := \operatorname{diag}(\lambda)$ and its estimator as $\hat{\Lambda} := \operatorname{diag}(\hat{\lambda})$.

### The BO Algorithm.

We briefly review the BO algorithm, to explain how each new sample $x$ is drawn based on the merit function $Q(x)$, itself defined by previous samples. Knowing this procedure is necessary for understanding the inverse BO algorithm, where we estimate the most likely BO parameters for a given trajectory of samples. Bayesian optimization contains two major steps in each iteration: For a collection of samples of a black-box function, a GP model is updated; the merit function is then formulated based on the GP model, and the next sample is chosen by maximizing the merit.

Model update: It first updates a GP model to predict objective values, based on current observations $h_k$ and Gaussian parameters $\lambda$.
Without considering random noise in evaluating the objective, the GP model can be derived as $\hat{f}(x; h_k, \lambda) = b + r^T R^{-1}(f_k - \mathbf{1}b)$, where $b = (\mathbf{1}^T R^{-1} f_k)/(\mathbf{1}^T R^{-1} \mathbf{1})$, $r$ is a column vector with elements $r_i = \exp(-(x - x_i)^T \Lambda (x - x_i))$ for $i = 1, \ldots, k$, $R$ is a symmetric matrix with $R_{ij} = \exp(-(x_i - x_j)^T \Lambda (x_i - x_j))$ for $i, j = 1, \ldots, k$, and $\mathbf{1}$ is a column vector of ones. Without prior knowledge, the MLE of $\lambda$ for the GP model can be derived by solving

$$\hat{\lambda}_{GP} = \arg\min_{\lambda}\, \log\!\left(\sigma^n |R|^{\frac{1}{2}}\right) \quad (1)$$

where $\sigma^2 = (f_k - \mathbf{1}b)^T R^{-1} (f_k - \mathbf{1}b)/n$ is the MLE of the GP variance.

Sampling the solution space: The second step is to determine the next sample using the GP model. A common sampling strategy is to pick the new solution in $\mathcal{X}$ that maximizes the expected improvement over the current best objective value $f_{\min} := \min f_k$ (assuming a minimization problem):

$$Q_{EI}(x; h_k, \lambda) = (f_{\min} - \hat{f})\,\Phi\!\left(\frac{f_{\min} - \hat{f}}{\sigma}\right) + \sigma\,\phi\!\left(\frac{f_{\min} - \hat{f}}{\sigma}\right)$$

Here, $\Phi(\cdot)$ and $\phi(\cdot)$ are the cumulative distribution function and probability density function of the standard normal distribution, respectively. The new sample is thus obtained by solving

$$x_{k+1} = \arg\max_{x \in \mathcal{X}} Q_{EI}(x; h_k, \lambda) \quad (2)$$

Figure 2 demonstrates four iterations of BO in optimizing a 1D function, with the GP model and the expected improvement function updated in each iteration. Note that, similar to human searching behavior, BO is a stochastic process: First, the choice of the new design is stochastic, with better designs being more probable to be chosen; and second, the initial exploration $h_0$ can be stochastic when it is modeled by a random sampling scheme, e.g., Latin hypercube sampling (LHS, see Ref. [12] for details).
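To make the two steps above concrete, the following is a minimal numpy/scipy sketch of a single BO iteration; it is a hedged illustration rather than the authors' code: the one-dimensional toy objective, the candidate grid, and the simplified predictive variance are assumptions made for the example.

```python
# Minimal sketch of one BO iteration (model update + EI sampling), for illustration only.
import numpy as np
from scipy.stats import norm

def corr(A, B, lam):
    # R_ij = exp(-(a_i - b_j)^T Lambda (a_i - b_j)) with Lambda = diag(lam)
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-np.sum(lam * d**2, axis=2))

def bo_step(X, f, cand, lam):
    """Fit the GP on (X, f) and return the candidate maximizing expected improvement."""
    one = np.ones(len(X))
    R = corr(X, X, lam) + 1e-10 * np.eye(len(X))   # small jitter for numerical stability
    Ri = np.linalg.inv(R)
    b = (one @ Ri @ f) / (one @ Ri @ one)          # b = (1'R^-1 f)/(1'R^-1 1)
    sig2 = (f - b) @ Ri @ (f - b) / len(X)         # MLE of the GP variance
    r = corr(cand, X, lam)                         # correlations to observed points
    mu = b + r @ Ri @ (f - b)                      # GP mean prediction
    s = np.sqrt(sig2 * np.maximum(1.0 - np.sum((r @ Ri) * r, axis=1), 1e-12))
    z = (f.min() - mu) / s
    ei = (f.min() - mu) * norm.cdf(z) + s * norm.pdf(z)   # expected improvement
    return cand[np.argmax(ei)]

# toy usage: minimize f(x) = x^2 on [-2, 2]
X = np.array([[-1.5], [0.7], [1.9]])
f = (X**2).ravel()
cand = np.linspace(-2, 2, 401)[:, None]
print(bo_step(X, f, cand, lam=np.array([1.0])))
```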
## Inverse Bayesian Optimization

We consider human solution search to consist of two stages: A few exploratory searches are first conducted to acquire a preliminary understanding of the problem, before the execution of BO follows. For example, a player may spend a few trials getting familiar with a new game before thinking about strategies to improve his score. IBO minimizes the sum of two costs corresponding to the exploration and BO stages, respectively. By doing so, it finds the most likely explanation of the underlying search strategy. Specifically, IBO estimates $\lambda$, along with the size of the initial exploration set $K_0$, given the trajectory $h_K$. To do so, we introduce and minimize a cost function consisting of the exploration cost for $h_0$, denoted as $L_{INI}$, and the BO cost for the rest of $h_K$, denoted as $L_{BO}$. We define $L_{INI} := -\log(D\,p(X_0))$, where $p(X_0)$ is the joint probability of the exploration set and $D := |\mathcal{X}|$ is the size of the solution space; and $L_{BO} := -\log(D\,p(h_K - h_0 \mid h_0)) = -\sum_{k=0}^{K-1} \log(D\,p(x_{k+1} \mid h_k))$, where $p(x_{k+1} \mid h_k)$ is the density for choosing $x_{k+1}$ conditioned on $h_k$. Here, $\log(\cdot)$ stands for the natural logarithm.

The derivation of $L_{INI}$ and $L_{BO}$ is as follows: To calculate $L_{INI}$, we assume that each new sample during the exploration phase, $x_i$ for $i = 1, \ldots, K_0$, tends to maximize its minimum Euclidean distance $d(x_i, X_{1:i-1})$ to the previous samples $X_{1:i-1}$; this is referred to as the max-min sampling scheme in what follows. Let the joint probability of the exploration set be $p(X_0) = p(x_1)\,p(x_2 \mid x_1)\cdots p(x_{K_0} \mid X_{1:K_0-1})$, where each conditional probability follows a Boltzmann distribution: $p(x_i \mid X_{1:i-1}) = \exp(\alpha_{INI}\, d(x_i, X_{1:i-1}))/Z_{INI}(x_i, \alpha_{INI})$. Here, the scalar $\alpha_{INI}$ represents how strictly each sample from $X_0$ follows the max-min sampling scheme, and $Z_{INI}(x_i, \alpha_{INI}) = \int_{x \in \mathcal{X}} \exp(\alpha_{INI}\, d(x, X_{1:i-1}))\,dx$ is a partition function that ensures $\int_{\mathcal{X}} p(x_i \mid X_{1:i-1})\,dx_i = 1$. Note that the first sample in the exploration set is considered to be uniformly drawn, and thus its contribution to the cost (a constant) can be omitted.

To calculate $L_{BO}$, the conditional probability density of sampling $x \in \mathcal{X}$ based on the current $h_k$ can be similarly modeled as a Boltzmann distribution

$$p(x \mid h_k) = \exp(\alpha_{BO}\, Q_{EI}(x; h_k, \lambda))/Z_{BO}(h_k, \lambda, \alpha_{BO}) \quad (3)$$

where $Z_{BO}(h_k, \lambda, \alpha_{BO}) = \int_{x \in \mathcal{X}} \exp(\alpha_{BO}\, Q_{EI}(x; h_k, \lambda))\,dx$ is also a partition function. The parameter $\alpha_{BO}$ plays a similar role to $\alpha_{INI}$. For simplicity, we define $\tilde{l}_i := -\log(D\,p(x_i \mid X_{1:i-1}))$ and $l_k := -\log(D\,p(x_{k+1} \mid h_k))$, so that $L_{INI} = \sum_{i=1}^{K_0} \tilde{l}_i$ and $L_{BO} = \sum_{k=0}^{K-1} l_k$. A lower value of $\tilde{l}$ or $l$ represents a higher probability density for the current sample to be drawn by max-min sampling or BO, respectively, and a zero indicates that the sample can be considered as uniformly drawn. IBO solves the following problem to derive $\hat{\lambda}$:

$$\min_{\alpha_{INI},\, \alpha_{BO},\, \lambda,\, K_0}\; L := L_{INI} + L_{BO} \quad (4)$$

Note that to find the optimal $K_0$ for any given $\alpha_{INI}$, $\alpha_{BO}$, and $\lambda$, one can first calculate the optimal $\tilde{l}_i$ and $l_k$ for $i, k = 2, \ldots, K$ with respect to $\alpha_{INI}$, $\alpha_{BO}$, and $\lambda$, and then scan $K_0 = 2, \ldots, K$ to find the lowest value of $L_{INI} + L_{BO}$. The scan starts at $K_0 = 2$ because it is not meaningful to initialize BO with a single sample.

### Numerical Integration for $Z_{BO}$.

The calculation of each $l$ requires an approximation of the integral $Z_{BO}(h_k, \lambda, \alpha_{BO})$, where $Q_{EI}(x; h_k, \lambda)$ is usually a highly nonconvex function with respect to $x$, with function values dropping significantly around local maxima (see Fig. 2 for example). Thus, we propose to approximate $Z_{BO}$ with importance sampling, using a customized proposal density function that combines a uniform distribution with density $p(x) = 1/D$ and a multivariate normal distribution with density $q(x) = (\sqrt{2\pi}\,\sigma_I)^{-p} \exp(-\|x - \mu\|^2/(2\sigma_I^2))$, where $\sigma_I$ and $\mu$ are the parameters of $q(x)$. The uniform distribution is used to sample over $\mathcal{X}$, while the normal distribution helps to improve the approximation by capturing the potential peak at the current sample $x_{k+1}$; thus, we set $\mu := x_{k+1}$. Let $x_i^u \in U$ for $i = 1, \ldots, I$ and $x_j^n \in N$ for $j = 1, \ldots, J$ be samples from $p(x)$ and $q(x)$, respectively. The approximation $\hat{Z}_{BO}$ can be calculated by

$$\hat{Z}_{BO} := \sum_{U} \frac{D \exp\!\left(\alpha_{BO} Q_{EI}(x_i^u)\right)}{I\left(1 + D\,q(x_i^u)\right)} + \sum_{N} \frac{D \exp\!\left(\alpha_{BO} Q_{EI}(x_j^n)\right)}{J\left(1 + D\,q(x_j^n)\right)} \quad (5)$$

with the remaining arguments of $Q_{EI}$ omitted for simplicity. The derivation of Eq. (5) is presented in the Appendix. Note that this approximation works under the assumption that $\int_{x \in \mathcal{X}} q(x)\,dx \approx 1$, which is plausible as the normal distribution is designed to have a narrow spread to match the local peak at $x_{k+1}$. In this paper, the shape of this normal distribution is set by $\sigma_I = 0.01$ universally. While the setting of $\sigma_I$ affects the variance of the approximation of $Z_{BO}$, we found this setting to perform well in practice. For $Z_{INI}$, since the minimum Euclidean distance function in a high-dimensional space with limited samples is a relatively smooth function, we use Monte Carlo sampling for its approximation.
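As a rough numerical illustration of the per-step cost $l_k = -\log(D\,p(x_{k+1} \mid h_k))$ under the Boltzmann model of Eq. (3), the sketch below approximates $Z_{BO}$ with plain uniform Monte Carlo rather than the mixture importance sampler of Eq. (5); all inputs are made-up numbers, not values from the paper.

```python
# Illustrative-only Monte Carlo version of the IBO step cost.
import numpy as np

def ibo_step_cost(q_uniform, q_next, alpha_bo, D):
    """q_uniform: EI values at uniform draws from X; q_next: EI at the
    observed next sample; returns l = -log(D * p(x_next | h))."""
    Z = D * np.mean(np.exp(alpha_bo * q_uniform))   # Z_BO ~ D * E_uniform[exp(alpha Q)]
    p = np.exp(alpha_bo * q_next) / Z               # Boltzmann density of the observed step
    return -np.log(D * p)

rng = np.random.default_rng(0)
q_uniform = 0.1 * rng.random(5000)                  # low EI almost everywhere
print(ibo_step_cost(q_uniform, q_next=0.4, alpha_bo=10.0, D=4.0))  # negative: likelier than uniform
```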
### Simulation Studies.

As a validation step, we show that IBO can recover the parameters of a general BO given only an observed search trajectory. If IBO can determine the correct parameters (1) after a small number of iterations, (2) in a high-dimensional problem space, and (3) from a wide range of trajectory/parameter settings, then it could be used to recover parameters for matching a BO algorithm to an observed human search. We use a simulation study to show that, for a given search trajectory, IBO can correctly identify the true $\lambda$ provided the trajectory is sufficiently different from a random search. In addition, the simulation indicates that learning from already-efficient search behavior (i.e., estimating $\lambda$ through IBO of an observed effective search trajectory) can lead to better BO convergence than the more common self-improvement methods (i.e., updating $\hat{\lambda}$ by maximizing the likelihood of the observations according to the GP model).

#### Simulation Settings and Results.

The simulation study is detailed as follows: We apply BO to a 30-dimensional Rosenbrock function constrained by $\mathcal{X} := [-2, 2]^{30}$. To initialize BO, we use LHS to draw ten samples from $\mathcal{X}$. BO terminates when the expected improvement for the next iteration is less than $10^{-3}$. At each iteration, the expected improvement is maximized using a multistart gradient descent algorithm [38] with 100 LHS initial guesses. A set of BO parameters, $\Lambda = 0.01I$, $0.1I$, $1.0I$, and $10.0I$, is used to perform the search, where $I$ is the identity matrix. For each of the four settings, 30 independent trials are recorded. For each BO setting $\Lambda$, each candidate estimator $\hat{\Lambda}$, and each trajectory of length $K = 5, \ldots, 20$, we solve Eq. (4) using a grid search with $G_{\alpha_{BO}} := \{0.01, 0.1, 1.0, 10.0\}$ and $G_{K_0} := \{2, \ldots, K\}$. We fix $\alpha_{INI}$ to 1.0 and 10.0 and will discuss its influence on the estimation. Figure 3 presents the resulting minimal $L$ for all four cases and under all guesses. Each curve in each subplot shows how the minimal $L$ (with respect to $\alpha_{BO}$ and $K_0$) changes as the search continues. The means and standard deviations of $L$ are calculated using the 30 trials. $Z_{INI}$ is approximated using a sample size of 10,000. In approximating $Z_{BO}$, samples from the normal and the uniform distributions are of equal sizes ($I = J = 5000$).

#### Analysis of the Results.

Based on the results from this simulation, as summarized in Fig. 3, the major finding is that IBO can successfully recover the BO parameters in cases where BO does not resemble uniform random sampling of the design space. In the cases of $\Lambda = 0.01I$, $0.1I$, and $1.0I$, we see that the correct choices of $\hat{\Lambda}$ consistently lead to the lowest cost along the search process. After only one or two iterations, in nearly all the cases, the correct parameter has the highest likelihood of all four propositions, and this remains the case along the search. However, under large BO parameters such as $\Lambda = 10.0I$, the similarity between any two points in the design space becomes close to zero, leading to (almost) uniform uncertainty and expected improvement. Therefore, this setting reduces BO to a uniform random sampling scheme. Figure 3(d) shows that IBO does not perform well in this situation. To better understand the behavior of IBO under near-random searches, a curious reader may find a discussion on the properties of the costs $l$ and $\tilde{l}$ in the Appendix.

#### Learning From Others Versus Self-Adaptation.

The study mentioned earlier showed that the correct BO setting $\lambda$ can be learned through IBO. This subsection further demonstrates the advantage of "learning from others" (i.e., updating $\lambda$ through IBO) over "self-adaptation" (i.e., finding the MLE of $\lambda$ using $h_k$). The settings follow the study mentioned earlier, and the results are shown in Fig. 4. First, to show the significant influence of $\lambda$ on search effectiveness, we show the convergence of two fixed search strategies with $\Lambda = 0.01I$ and $10.0I$. Note that while neither converges to the optimal solution within 50 iterations, the former is significantly more effective than the latter.
#### Analysis of the Results.

As summarized in Fig. 3, the major finding from this simulation study is that IBO successfully recovers the BO parameters whenever BO does not resemble uniform random sampling of the design space. In the cases of $\Lambda = 0.01I$, $0.1I$, and $1.0I$, the correct choice of $\hat{\Lambda}$ consistently leads to the lowest cost along the search process. After only one or two iterations, in nearly all cases, the correct parameter has the highest likelihood of the four candidates, and this remains true for the rest of the search. However, under large BO parameters such as $\Lambda = 10.0I$, the similarity between any two points in the design space becomes close to zero, leading to (almost) uniform uncertainty and expected improvement. This setting therefore reduces BO to a uniform random sampling scheme, and Fig. 3(d) shows that IBO does not perform well in this situation. For a closer look at the behavior of IBO under near-random searches, see the discussion of the properties of the costs $l$ and $\tilde{l}$ in the Appendix.

#### Learning From Others Versus Self-Adaptation.

The study above showed that the correct BO setting $\lambda$ can be learned through IBO. This subsection further demonstrates the advantage of "learning from others" (i.e., updating $\lambda$ through IBO) over "self-adaptation" (i.e., finding the MLE of $\lambda$ using $h_k$). The settings follow the study above, and the results are shown in Fig. 4. First, to show the significant influence of $\lambda$ on search effectiveness, we show the convergence of two fixed search strategies with $\Lambda = 0.01I$ and $10.0I$. While neither converges to the optimal solution within 50 iterations, the former is significantly more effective than the latter. For "self-adaptive BO," we use a grid search ($G_\Lambda = \{0.01I, 0.1I, 1.0I, 10.0I\}$) to find the $\hat{\Lambda}_{GP}$ that maximizes Eq. (1) at each iteration and use $\hat{\Lambda}_{GP}$ to find the next sample. Figure 4(b) shows the percentages of the four guesses chosen as $\hat{\Lambda}_{GP}$ along the search, using $G_\Lambda$ as the initial guesses for BO. The learning-from-others case starts with $\Lambda = 10.0I$ and uses IBO to derive $\hat{\Lambda}$ from the trajectory produced by $\Lambda = 0.01I$. From Figs. 3 and 4(b), we see that $\hat{\Lambda}_{GP}$ does not converge to $\Lambda = 0.01I$ as quickly as IBO does, which explains why learning from others outperforms self-adaptation in Fig. 4(a). It is worth noting that this difference in performance may depend on the dimensionality of the problem: the two strategies were found to have similar convergence performance when applied to two-dimensional functions. One potential explanation is that, in a lower-dimensional space, an effective $\hat{\Lambda}_{GP}$ can be learned from a smaller number of samples.

## Case Study

We now investigate how IBO may improve the performance of BO when applied to a vehicle design and control problem.

### Dimension Reduction for Players' Control Signals.

The solution data from each game play consist of (1) the final gear ratio, (2) the recorded acceleration and braking signals, and (3) the corresponding game score. The length of a raw control signal matches that of the track, which has 18,160 distance steps. Encoding control signals in a low-dimensional space is feasible because common acceleration and braking patterns exist across all plays. In Ref. [11], this was done by introducing manually defined state-dependent basis functions (i.e., polynomials of the velocity of the car, the slope of the track, the distance to the terminal, the remaining battery energy, and the time spent) to parameterize the control signals. The underlying assumption that human players are aware of all the state-dependent bases is, however, untested. In this paper, we instead perform dimension reduction based on evidence that human beings often solve high-dimensional problems by performing problem abstraction and using a hierarchical search [39-43]. In the context of the ecoRacer game, we hypothesize that players segment the track into $m$ discrete sections and make separate control decisions in each section. Mathematically, this is equivalent to projecting the observed signals onto $m$ independent bases, which can be addressed by ICA [44]; a sketch of such an encoding appears below. Compared with principal component analysis, where the bases are chosen to decorrelate the data, our ICA implementation (see results in Fig. 5) maximizes the Kullback-Leibler divergence between all pairs of bases and is more suitable for non-Gaussian signals, such as the control data from this game (i.e., the acceleration/braking signals across players at each step along the track are unlikely to follow a Gaussian distribution). Much like principal component analysis, the choice of the number of ICA bases requires a balance between fidelity and practicality. While it is theoretically possible to find the "most likely" number of bases using information-theoretic criteria for model selection [45] (see footnote 3), we chose to use 30 bases because (1) over 95% of the variance is explained and (2) the resulting solution space (30 control variables and one design variable) is small enough for BO to be effective.
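As an illustration only: the paper's ICA variant maximizes pairwise KL divergence, whereas this sketch uses scikit-learn's generic FastICA as a stand-in, and the data path and shapes are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical (n_plays, 18160) matrix: one row per play, one column per
# distance step of the recorded acceleration/braking signal.
signals = np.load("control_signals.npy")

ica = FastICA(n_components=30, random_state=0)
codes = ica.fit_transform(signals)   # (n_plays, 30) low-dimensional encodings
bases = ica.mixing_                  # (18160, 30) learned control bases

# A play is approximately reconstructed from its 30 codes:
recon = codes @ bases.T + ica.mean_
```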
### Derivation of $\hat{\lambda}$ and $\hat{\lambda}_{GP}$.

We apply IBO to two players, referred to as "P2" and "P3," who achieved the second and third highest scores within 31 and 73 plays, respectively, far fewer than the 150 plays used by the achiever of the highest score. To do so, we first encode all the control solutions from the two players using the learned ICA bases. Together with the final drive ratios, all the solutions are then normalized to lie within $[-1, 1]^{31}$. IBO is performed separately on P2 and P3. We found that the probability of either player having followed the max-min sampling scheme is lower than that of following BO, as the minimal values of $\tilde{l}(x_k, \alpha_{INI})$ for $k = 2, \ldots, 31$ (with respect to $\alpha_{INI}$) are dominated by those of $l(x_k, \alpha_{BO})$. This means the players were not likely to have performed a pure exploration phase before they started trying to improve their performance. The finding is reasonable: the scoring mechanism in the ecoRacer game, as in other racing games with fairly predictable vehicle dynamics, can be understood by the player early on. Therefore, the search for $\hat{\lambda}$ is performed by solving Eq. (4) with $\lambda \in [0.01, 10.0]^{31}$, $\alpha_{BO} \in G_{\alpha_{BO}}$, and the minimal number of initial samples ($K_0 = 2$) required for BO. For comparison purposes, we obtain $\hat{\lambda}_{GP}$ using plays from P2, which represents a case where the BO parameters are fine-tuned to the observed game plays without trying to explain why these solutions were searched by the player. Due to the nonconvexity of Eqs. (4) and (1), gradient-based searches from a series of ten initial guesses are conducted to avoid inferior local solutions, with finite differences used to approximate gradients; a generic sketch follows. Both $\hat{\lambda}$ and $\hat{\lambda}_{GP}$ are calculated offline and fixed during the execution of BO.
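The following SciPy-based sketch shows one generic way to run such a multistart, finite-difference gradient search. The objective `ibo_cost` and the bounds are placeholders standing in for Eq. (4)'s cost over $\lambda$, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(ibo_cost, bounds, n_starts=10, seed=0):
    """Run L-BFGS-B from several random initial guesses and keep the best result.

    ibo_cost : callable, the nonconvex cost of Eq. (4) as a function of lambda
    bounds   : list of (low, high) pairs, e.g. [(0.01, 10.0)] * 31
    """
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
        # With jac=None, SciPy approximates the gradient by finite differences.
        res = minimize(ibo_cost, x0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best
```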
### Comparison of BO Performance.

Figure 6 compares the BO performance under $\hat{\lambda}$ (for P2 and P3), $\hat{\lambda}_{GP}$, and $\Lambda = I$. In each case, we start with the first two plays from the players and run 180 BO iterations. As in the simulation study, results are reported over repeated trials (20 here) due to the stochastic nature of BO. Because of the small number of trials, bootstrap variance estimates are reported as the shaded bands around the averages in the figure. $\hat{\lambda}$ consistently outperforms the other two settings along the search, with statistical significance, and the BO performance obtained by mimicking P2 is slightly better than that of P3. The result shows that BO can be improved noticeably by learning from P2 and P3. However, the players' search is not fully mimicked by IBO: the players improved much faster than the modified BO does, indicating that the proposed model still has room for improvement. Nevertheless, the IBO implementation achieves the closest performance to the players' among all the BO instances, and it is the only algorithm that achieved a better score than the players' best play within 100 iterations. This result demonstrates the potential of IBO to continue an effective human search after the player quits, with improved search performance over a standard BO. For completeness, we also note that in all cases BO identifies the true optimal final drive ratio by the end of the search. We also qualitatively compare the best human solution with a high-scoring BO solution and the theoretically optimal solution in Fig. 7. While these control strategies yield similar scores, they are quantitatively different, although braking toward the end is observed as a common strategy. Human search data are documented on the webpage (footnote 4), where the best players' solution strategies are published.

## Discussion

The study above provides a starting point for learning optimization algorithms from human solution-search data. Yet many pressing questions remain unanswered, and this section addresses a few notable ones. Some of the answers rely on readers' familiarity with inverse reinforcement learning [19,47,48] (IRL, also called apprenticeship learning [49,50] and inverse optimal control [51]). To familiarize readers with this topic, a discussion of the connection between IBO and IRL is provided in Sec. 5.2.

### Limitations and Potential Values of IBO.

In the case study, a strategy learned through IBO outperformed the default algorithms but has yet to reach the performance of the best human solver, indicating room to further improve the algorithm. In the following, we discuss notable limitations of IBO; these also apply to the general problem of designing optimization algorithms through human demonstrations (called DO in what follows).

Model of human search strategies: Studies in cognitive science have put forth several core ingredients of human intelligence, including intuitive physics [52-55], problem decomposition skills [42,56,57], the ability to learn-to-learn [58], and others [10]. While evidence has shown a connection between BO and human search [18], suitable models for human search strategies can be problem dependent. For example, for low-dimensional design problems, Egan et al. [59] showed that people adopting univariate search are more likely to achieve an effective search. This result is supported by earlier psychological studies on how children perform scientific reasoning and thus may help explain how people identify unfamiliar systems. However, univariate search may not reflect how people search for solutions in a familiar context (such as car driving) with a large number of control and design variables to tune, as is the case in the ecoRacer game. For such high-dimensional, physics-based design and control problems, a potentially reasonable human search model could incorporate human intuitive-physics models into the evaluation of the expected improvement. Thus, instead of estimating GP parameters, one could estimate a statistical model of the state-space equations of the dynamical system, which in turn influences the expected improvement. At a more abstract level, the fundamental challenge in modeling a human search strategy is the lack of knowledge about the functional form of the local objective (i.e., the Q-function) that governs the generation of new solutions during the search based on the current state (the cumulative knowledge learned by the human solver). As we discuss later in this section, this challenge is also a key topic in IRL; not surprisingly, one notable solution from IRL is in fact to use nonparametric models such as GPs [19,60].

Uncertainty in estimation: A limited number of demonstrations may be insufficient to estimate the BO parameters well, even when the demonstrator's underlying strategy is effective. One potential solution is to create a reward mechanism in the crowdsourcing setting, where the reward is determined both by the observed search effectiveness of each human solver and by the uncertainty in the estimation of their search strategy. In the context of BO, this uncertainty can be measured by the covariance of the estimator, which under standard maximum-likelihood asymptotics is approximated by the inverse Hessian of the cost function in Eq. (4) at its minimum; a sketch follows.
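For illustration, here is a central finite-difference sketch of that covariance estimate. This is our assumption of how one might compute it; `cost` stands in for Eq. (4)'s objective.

```python
import numpy as np

def estimator_covariance(cost, lam_hat, eps=1e-4):
    """Approximate the covariance of an MLE-style estimator as the inverse
    Hessian of the scalar cost at its minimizer, via central differences."""
    n = lam_hat.size
    H = np.zeros((n, n))
    E = np.eye(n) * eps
    for i in range(n):
        for j in range(n):
            H[i, j] = (cost(lam_hat + E[i] + E[j]) - cost(lam_hat + E[i] - E[j])
                       - cost(lam_hat - E[i] + E[j]) + cost(lam_hat - E[i] - E[j])
                       ) / (4 * eps ** 2)
    return np.linalg.inv(H)
```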
For people with an effective search yet high estimation uncertainty, we can solicit more solutions by offering rewards. It would also be interesting to understand the influence of problem properties, e.g., the size of the solution space, on the convergence of the estimation.

Knowledge transferability: The third limitation concerns the transferability of knowledge (search strategies) learned from one task (an optimization problem) to others. This limitation also raises the question of how the "effectiveness" of a search should be measured, as we are not yet able to tell under what conditions a strategy with a high rate of improvement (such as P2's in ecoRacer) will continue to produce better solutions than other strategies in the long term. The same issue exists in IRL: e.g., a control policy learned for pancake flipping does not guarantee optimal egg flipping, due to the differences in physical properties between pancakes and eggs. One solution in IRL is to allow the policy to adjust to new problem settings by correcting the state transition model according to new observations, and this solution may also be applied to IBO. In the context of ecoRacer, knowledge such as "accelerate at the beginning of the track" could be considered a universal strategy that requires little exploration, while the actual duration for executing this strategy may differ across problem settings. It could therefore be more effective for BO to adjust parameters learned from human demonstrations on a similar problem, rather than to learn from scratch.

To summarize, IBO can be a valuable tool for machines to mimic human search behavior when (1) the underlying human search mechanism follows BO, (2) the demonstration is sufficient for estimating the true BO parameters with low variance, and (3) the true optimal BO parameters for a long-term search can be estimated from an effective short-term search.

### The Difference Between Learning to Search and Learning a Solution.

The proposed IBO approach can be considered a way to design optimization algorithms with human guidance and is mathematically similar to IRL. To explain the similarities and differences between the two, we first introduce the Markov decision process (MDP) and RL, and then draw an analogy between an MDP and an optimization algorithm.

#### Preliminaries on MDP and RL.

An MDP is defined by a tuple $\langle S, A, T, R, \gamma, b_0 \rangle$, where $S$ is a set of states, $A$ is a set of actions, the state transition function $T(s, a, s')$ determines the probability of moving from state $s$ to $s'$ when action $a$ is taken, $R(s, a)$ is the instantaneous reward of taking action $a$ at state $s$, $\gamma \in [0, 1)$ is the discount factor on future reward, and $b_0(s)$ specifies the probability of starting the process at state $s$. In RL, a control policy $\pi$ is a mapping from states to actions, i.e., $\pi: S \to A$. The long-term value of $\pi$ for a starting state $s$ can be calculated by $V^\pi(s) = R(s, \pi(s)) + \gamma \sum_{s' \in S} T(s, \pi(s), s') V^\pi(s')$, and thus the value of $\pi$ over all possible starting states is the expectation $V^\pi = \sum_{s \in S} b_0(s) V^\pi(s)$. A common way to represent a control policy is to introduce a Q-function $Q(s, a; \lambda)$ with unknown control parameters $\lambda$ and let the policy be $a(s) = \arg\max_{a \in A} Q(s, a; \lambda)$. RL identifies the optimal $\lambda$ that maximizes $V^\pi$.
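To make the value recursion concrete, here is a minimal policy-evaluation sketch for a finite MDP; the arrays and sizes are illustrative assumptions, not tied to any specific problem in the paper.

```python
import numpy as np

def evaluate_policy(T, R, pi, b0, gamma=0.9, tol=1e-8):
    """Iterate V(s) <- R(s, pi(s)) + gamma * sum_s' T[s, pi(s), s'] * V(s')
    to a fixed point, then return the policy value V_pi = sum_s b0(s) V(s).

    T : (S, A, S) transition probabilities; R : (S, A) rewards;
    pi: length-S integer array of actions; b0: length-S start distribution.
    """
    n = R.shape[0]
    idx = np.arange(n)
    V = np.zeros(n)
    while True:
        V_new = R[idx, pi] + gamma * T[idx, pi, :] @ V
        if np.max(np.abs(V_new - V)) < tol:
            return b0 @ V_new  # expected value over starting states
        V = V_new
```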
#### MDP Versus Optimization Algorithm.

An optimization algorithm defines a decision process: its instantaneous reward is the improvement in the objective value achieved by each new sample, and the cumulative reward represents the total improvement in the objective within a finite number of iterations; its state contains the current solution (in $X$), the corresponding objective value, and potentially the gradient and higher-order derivatives of the objective function at the current solution; its action is the next solution to evaluate; and its state transition is governed by the optimization algorithm and its parameters. This is similar to an MDP, where the state transition is affected by the control parameters. The decision process defined by an optimization algorithm, however, is usually non-Markovian, as new solutions rely on the entire search trajectory. Note that it is still possible to cast the optimization process as an MDP by redefining the state as the continuously growing search trajectory; i.e., elements in the state set $S$ would represent all possible search trajectories, rather than samples in $X$.

#### IRL Versus IBO.

RL algorithms identify an optimal control policy for an MDP with a given reward function. Real-world applications, however, rarely come with explicit reward definitions: the reward for "driving a car" cannot be explicitly defined, even though people form control policies based on their inherent reward (preference). Control policies for such applications can therefore be learned more effectively from human demonstrations, which are assumed to be optimal with respect to the demonstrator's inherent reward. IRL techniques have thus been developed to identify the reward (and consequently the Q-function and the optimal control policy) that explains human demonstrations, either by estimating the reward parameters so that the demonstrated policy has a higher value than any other policy by a margin [47,49,61,62], or by finding the maximum likelihood control parameters directly [48,63]. The IBO approach introduced in this paper is closely related to the latter type of IRL, and more precisely to the maximum entropy method of Ziebart et al. [48]. Briefly, maximum entropy IRL proposes the following MLE of the parameters $\lambda$ based on a set of demonstrations $h$:

$\hat{\lambda} = \arg\max_\lambda \log P(h \mid \lambda) = \arg\max_\lambda \log \frac{\exp\left(\sum_{(s_i, a_i) \in h} R(s_i, a_i, \lambda)\right)}{\prod_{(s_i, a_i) \in h} Z_i(\lambda)}$ (6)

where $Z_i(\lambda)$ is a partition function for the visited state $s_i$. One can notice the similarities between Eqs. (6) and (4): (1) both are maximum likelihood parameter estimations tied to an instantaneous cost, i.e., the reward in Eq. (6) and the expected improvement in IBO; and (2) both involve partition functions that are computationally expensive and depend on the parameters $\lambda$.
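A schematic of the log-likelihood inside Eq. (6) follows; the reward model and the per-state log-partition function are left as placeholder callables, so this illustrates the objective's structure only, not Ziebart et al.'s full algorithm.

```python
def maxent_log_likelihood(demo, reward, log_partition, lam):
    """log P(h | lambda) for a demonstrated trajectory under MaxEnt IRL (Eq. (6)).

    demo          : sequence of (state, action) pairs
    reward        : callable (s, a, lam) -> instantaneous reward R(s, a, lam)
    log_partition : callable (s, lam) -> log Z_i(lam) at the visited state s
    """
    return sum(reward(s, a, lam) - log_partition(s, lam) for s, a in demo)
```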
Due to this dependency, direct Markov chain Monte Carlo sampling in the space of $\lambda$ (e.g., as in Ref. [63]) cannot be applied to optimize the likelihood function, since the partition values for two different samples of $\lambda$ do not cancel. Ziebart et al. discussed an alternative approach to this computational challenge, the "expected edge frequency calculation" algorithm, which has a complexity of $O(N|S||A|)$ for each gradient evaluation of the objective in Eq. (6), where $N$ is a large number [48]. However, this approach can be infeasible for the IBO estimation problem in Eq. (4), since (1) the space $X$ is usually continuous, and (2) even with a discretization of $X$, the enormous sizes of $S$ and $A$ can easily make the calculation intractable, based on the discussion in Sec. 5.2.2.

Further, one should notice that IRL and IBO use different assumptions about human demonstrations. Demonstrations in IRL are assumed to be near-optimal; learning from them leads to an optimal control policy for an MDP. Demonstrations in IBO, on the other hand, are assumed to come from an effective search strategy, yet are not necessarily optimal; learning from them leads to an optimization algorithm, rather than a solution. This difference affects the applications of the two: IRL can be used when the machine is meant to mimic existing solutions, by understanding why these solutions are considered good, e.g., it answers the question "why do people flip pancakes this way?"; IBO can be used when the machine is meant to mimic the process of searching for good solutions, by understanding how to evaluate the expected improvement of solutions, e.g., it answers the question "how did people figure out this way of pancake flipping?"

## Conclusions

In this paper, we attempted to address a dilemma in design crowdsourcing: while human beings show more advanced intelligence than machines in solving certain types of optimal design problems, soliciting valuable solutions through existing crowdsourcing mechanisms is not cost-effective, due to the lack of control over crowd participation and over the problem-specific qualification of the crowd. Based on the previous finding that more people acquire good search strategies than good solutions, we proposed to mimic human search demonstrations by inversely learning a Bayesian optimization algorithm, so that a long-term search can be executed more effectively by the computer even after human solvers abandon the problem. Through simulation and case studies, we showed improved performance of BO when it is equipped with parameters learned from an effective human search. However, the significant performance gap between the human demonstrator and the proposed algorithm in the case study suggests room for improvement. Future investigation will focus on closing this gap by exploring more suitable cognitive models of human solution searching for specific types of optimal design problems.

## Funding Data

• National Science Foundation (Grant No. CMMI-1266184).

### Appendix

##### Derivation of the Partition Function ($\hat{Z}_{BO}$).

Here we provide the derivation of the approximation $\hat{Z}_{BO}$ in Eq. (5). Let $p(x) = 1/D$ and $q(x)$ be a uniform and a normal density function, respectively, let $D$ be the size of $X$, and let $f(x)$ be the function to be integrated. Also, let $\mathcal{I}$ and $\mathcal{J}$ be the sample sets drawn from these two distributions, with sizes $I := |\mathcal{I}|$ and $J := |\mathcal{J}|$. We have

$\int f(x) \, dx = D \int f(x) p(x) \, dx = D \left( \int \frac{f(x) p(x)}{p(x) + q(x)} \, p(x) \, dx + \int \frac{f(x) p(x)}{p(x) + q(x)} \, q(x) \, dx \right) \approx D \left( \frac{1}{I} \sum_{\mathcal{I}} \frac{f(x) p(x)}{p(x) + q(x)} + \frac{1}{J} \sum_{\mathcal{J}} \frac{f(x) p(x)}{p(x) + q(x)} \right) = \sum_{\mathcal{I}} \frac{D f(x)}{I (1 + D q(x))} + \sum_{\mathcal{J}} \frac{D f(x)}{J (1 + D q(x))}$

where the first integral is estimated with the samples drawn from $p$ and the second with the samples drawn from $q$.
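As a numerical sanity check on the mixture importance-sampling derivation above (ours, using an arbitrary peaked test integrand whose integral is known in closed form), the estimator can be compared against the exact value:

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = -2.0, 2.0
D = hi - lo                    # size of the 1-D domain X
mu, sigma_q = 0.5, 0.01        # narrow normal proposal centered at the peak
w = 0.05                       # width of the peaked test integrand

f = lambda x: np.exp(-(x - mu) ** 2 / (2 * w ** 2))
q = lambda x: np.exp(-(x - mu) ** 2 / (2 * sigma_q ** 2)) / (np.sqrt(2 * np.pi) * sigma_q)

n_i = n_j = 5000
xi = rng.uniform(lo, hi, n_i)      # samples from p(x) = 1/D
xj = rng.normal(mu, sigma_q, n_j)  # samples from q(x)

est = (np.sum(D * f(xi) / (n_i * (1 + D * q(xi))))
       + np.sum(D * f(xj) / (n_j * (1 + D * q(xj)))))
exact = np.sqrt(2 * np.pi) * w     # integral of the Gaussian bump over the line
print(est, exact)                  # the estimate should agree to within a few percent
```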
##### IBO Behavior Under Near-Random Search

###### Properties of $l$ and $\tilde{l}$.

From Sec. 3.1, the unbiased estimate of $l(x, \alpha_{BO})$ through importance sampling is

$\hat{l}(x, \alpha_{BO}) = -\log \frac{\exp(\alpha_{BO} Q_{EI}(x))}{\hat{Z}_{BO} / D}$ (A1)

$\hat{l}(x, \alpha_{BO})$ has the following properties.

Property 1. $\alpha_{BO} = 0$ leads to $\hat{l}(x, 0) = 0$, indicating that $x$ is uniformly sampled. One can see that the optimal cost $L_{BO}$ is nonpositive, as one can always achieve $L_{BO} = 0$ by considering the samples to be uniformly drawn.

Property 2. When the expected improvement function is constant almost everywhere, i.e., $\Pr(Q_{EI}(x) = C) = 1$, we have $\Pr(\hat{l}(x, \alpha_{BO}) = 0) = 1$. This is because a uniformly drawn initial guess will almost surely satisfy the optimality condition for maximizing a constant function.

Property 3. Note that $1 + D q(x_i^u) \approx 1$ for $x_i^u \in \mathcal{U}$ due to the small $\sigma_I$ (see Sec. 3.1), and $\exp(\alpha_{BO} Q_{EI}(x_j^n)) / (1 + D q(x_j^n)) \approx 0$ for the normal-proposal samples when $D$ is large and $\alpha_{BO}$ is small. The partial derivative of $\hat{l}(x, 0)$ with respect to $\alpha_{BO}$ can then be approximated as

$\frac{\partial \hat{l}(x, 0)}{\partial \alpha_{BO}} = c(\alpha_{BO}) \sum_{\mathcal{U}} \Delta a_i$ (A2)

where $c(\alpha_{BO}) > 0$ and $\Delta a_i := Q_{EI}(x_i^u) - Q_{EI}(x)$. Here we need to introduce a conjecture: let $\bar{Q}_{EI} := \int_X Q_{EI}(x) \, dx / D$ be the average expected improvement, and let $A := \int_X \mathbb{1}(Q_{EI}(x) > \bar{Q}_{EI}) \, dx$ be the measure of the subspace where the expected improvement is higher than $\bar{Q}_{EI}$. The conjecture is that $A$ decreases from above $D/2$ to below $D/2$ as the BO sample size increases. In other words, a uniformly drawn sample has more than a 50% chance of having an expected improvement value higher than $\bar{Q}_{EI}$ at the early stage of BO, and less than 50% at the late stage. One piece of evidence for the conjecture is illustrated in Fig. 2: in the first iteration, $\bar{Q}_{EI}$ is slightly lower than 0.5 while the majority of $X$ has $Q_{EI} > \bar{Q}_{EI}$; in the fourth iteration, however, only a small region around the peak has $Q_{EI} > \bar{Q}_{EI}$. Using this conjecture, we can show that $\sum_{\mathcal{U}} \Delta a_i < 0$ when the sample size is small, and thus $\partial \hat{l}(x, 0) / \partial \alpha_{BO} < 0$. Together with Property 1, we have $\hat{l}(x, \alpha_{BO}) < 0$ for a small $\alpha_{BO}$ and a small sample size.
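The conjecture's quantity $A$ is easy to probe numerically. A simple Monte Carlo sketch (ours; the expected-improvement callable is a placeholder) estimates the measure of the region where $Q_{EI}$ exceeds its average:

```python
import numpy as np

def measure_above_mean(q_ei, lo, hi, n=20000, seed=0):
    """Monte Carlo estimate of A = |{x in X : Q_EI(x) > mean Q_EI}|.
    The conjecture says A drifts from above D/2 to below D/2 as BO proceeds."""
    rng = np.random.default_rng(seed)
    lo = np.atleast_1d(lo).astype(float)
    hi = np.atleast_1d(hi).astype(float)
    x = rng.uniform(lo, hi, size=(n, lo.size))
    q = q_ei(x)
    D = np.prod(hi - lo)
    return np.mean(q > q.mean()) * D  # fraction above the average, times |X|
```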
Property 4. We notice that, in this experiment, the discrepancy between LHS and the modeled max-min sampling scheme leads to overall high (positive) $\tilde{l}$ values, indicating that the samples are not likely to follow this scheme. This is consistent with the fact that LHS is not exactly the same as max-min sampling, at least until all of $h_0$ has been considered. We also see that negative $\tilde{l}$ values can be observed when $\alpha_{INI}$ is low, suggesting that the LHS samples are better explained by a loosely executed max-min sampling scheme than by a strict one.

###### Discussion on Findings From Fig. 3.

We now summarize a complete list of findings based on these properties.

Finding 1. A comparison between $\alpha_{INI} = 1.0$ and $10.0$ leads to a finding consistent with Property 4. Since the samples are not likely to be drawn from a strictly executed max-min sampling scheme, the entire search trajectory is considered to be created by BO in the case of $\alpha_{INI} = 10.0$. While the early samples (fewer than 10) can be considered as coming from max-min sampling when $\alpha_{INI} = 1.0$ ($\tilde{l} < 0$), the low magnitude of $\tilde{l}$ makes this difference visible only in the case of $\Lambda = 10.0I$, where the magnitude of $l$ is also low.

Finding 2. IBO correctly identifies the true $\Lambda$ within a few iterations after the initial exploration, except in the case of $\Lambda = 10.0I$. To explain this exception, we first note that $\Lambda = 10.0I$ leads to an expected improvement function that is constant almost everywhere (except at the sampled locations, where $Q_{EI} = 0$), and thus BO reduces to uniform sampling. From Property 2, $L_{BO} = 0$ almost surely when we have the correct guess of $\Lambda$. Also, recall from Property 4 that $L_{INI} > 0$ when $\alpha_{INI}$ is high. These two facts together explain why, with the correct guess of $\Lambda = 10.0I$, we have $L$ close to zero when $\alpha_{INI} = 10.0$ and slightly negative when $\alpha_{INI} = 1.0$ (see footnote 5). To explain the negative $L$ values for the incorrect guesses of $\Lambda$, we use Property 3, which shows that when the sample size is small and the expected improvement function is not flat, $L_{BO} < 0$ for a small $\alpha_{BO}$, and thus $L < 0$. To summarize, Finding 2 suggests that a search trajectory of limited length that resembles a random search will be attributed by IBO, when solving Eq. (4), to a BO that only loosely maximizes the expected improvement (i.e., one with a small $\alpha_{BO}$). This caveat is of little practical concern, however, since (1) a random search rarely outperforms BO with nontrivial settings and (2) a BO with a low $\alpha_{BO}$ (instead of a high $\Lambda$) can equally well simulate a random search.

1. To be more accurate, the discussion in Ref. [15] concerns function learning with continuous variables. While our case study involves discrete variables (acceleration and braking signals), the dimension reduction process converts these variables to continuous ones. See Sec. 4.

2. Numerically, this is because optimizing the nonconvex function $Q_{EI}$ requires a nested global optimization routine, such as a genetic algorithm, CMA-ES [35], DIRECT [36], or BARON [37]. Some implementations of these, e.g., genetic algorithms and CMA-ES, can be stochastic.

3. For completeness, we used principal component analysis (1000 components) as preprocessing to obtain the most likely numbers of ICA components under three suitable criteria, minimum description length, the Akaike information criterion, and the Kullback information criterion, as 187, 464, and 373, respectively, using the method from Ref. [45]. While these dimensionalities could make sense from a neurological perspective (e.g., given that the game takes 36 s, a decision interval of 36 s / 187 = 192 ms is close to the time frame of the attentional blink, which is 200–500 ms [46]), the resulting high-dimensional solution spaces are unfavorable for BO.

5. But why does the guess of $\Lambda = 10.0I$ lead to significantly decreasing $L$ in the other three cases? This is because in those cases BO does not resemble random sampling, i.e., the sequences of samples are more clustered. When a new sample falls within such a cluster, its similarities to existing samples are nonzero even when a large $\Lambda$ is assumed, due to the small Euclidean distances among the pairs. In turn, the expected improvement function has peaks within the clusters and remains constant far away from them, rather than being constant almost everywhere. As a result, the optimal value of $\hat{l}(x, \alpha_{BO})$ with respect to $\alpha_{BO}$ becomes negative, even when $\Lambda$ is incorrectly guessed as $10.0I$.

## References

1. Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., Leaver-Fay, A., Baker, D., and Popović, Z., 2010, "Solve Puzzle for Science," Foldit, University of Washington, Seattle, WA, accessed July 26, 2017, http://fold.it
2. Khatib, F., Cooper, S., Tyka, M. D., Xu, K., Makedon, I., Popović, Z., Baker, D., and Players, F., 2011, "Algorithm Discovery by Protein Folding Game Players," 108(47), pp. 18949–18953.
3. Lee, J., , W., Lee, M., Cantu, D., Azizyan, M., Kim, H., Limpaecher, A., Yoon, S., Treuille, A., and Das, R., 2014, "Solve Puzzle. Invent Medicine," Eterna, Carnegie Mellon University/Stanford University, Pittsburgh, PA/Stanford, CA, accessed July 26, 2017, http://eterna.cmu.edu
4. Lee, J., , W., Lee, M., Cantu, D., Azizyan, M., Kim, H., Limpaecher, A., Yoon, S., Treuille, A., and Das, R., 2014, "RNA Design Rules From a Massive Open Laboratory," 111(6), pp. 2122–2127.
5. Kawrykow, A., Roumanis, G., Kam, A., Kwak, D., Leung, C., Wu, C., Zarour, E., Sarmenta, L., Blanchette, M., and Waldispühl, J., 2012, "Phylo: A Citizen Science Approach for Improving Multiple Sequence Alignment," PLoS One, 7(3), p. e31362.
6. Sung, J., Jin, S. H., and Saxena, A., 2015, "Robobarista: Object Part Based Transfer of Manipulation Trajectories From Crowd-Sourcing in 3D Pointclouds," preprint arXiv:1504.03071. https://arxiv.org/abs/1504.03071
7. Le Bras, R., Bernstein, R., Gomes, C. P., Selman, B., and Van Dover, R. B., 2013, "Crowdsourcing Backdoor Identification for Combinatorial Optimization," 23rd International Joint Conference on Artificial Intelligence (IJCAI), Beijing, China, Aug. 3–9, pp. 2840–2847. https://pdfs.semanticscholar.org/fdfb/1a3e026b8d57487c1e54ea044494a1056df6.pdf
8. Ren, Y., Bayrak, A. E., and Papalambros, P. Y., 2016, "Ecoracer: Game-Based Optimal Electric Vehicle Design and Driver Control Using Human Players," ASME J. Mech. Des., 138(6), p. 061407.
9. Schrope, M., 2013, "Solving Tough Problems With Games," 110(18), pp. 7104–7106.
10. Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J., 2016, "Building Machines That Learn and Think Like People," preprint arXiv:1604.00289. https://arxiv.org/abs/1604.00289
11. Ren, Y., Bayrak, A. E., and Papalambros, P. Y., 2015, "EcoRacer: Game-Based Optimal Electric Vehicle Design and Driver Control Using Human Players," ASME Paper No. DETC2015-46836.
12. Jones, D., Schonlau, M., and Welch, W., 1998, "Efficient Global Optimization of Expensive Black-Box Functions," J. Global Optim., 13(4), pp. 455–492.
13. Brochu, E., Cora, V. M., and De Freitas, N., 2010, "A Tutorial on Bayesian Optimization of Expensive Cost Functions, With Application to Active User Modeling and Hierarchical Reinforcement Learning," preprint arXiv:1012.2599. https://arxiv.org/abs/1012.2599
14. Rasmussen, C. E., and Williams, C. K. I., 2006, Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA.
15. Lucas, C. G., Griffiths, T. L., Williams, J. J., and Kalish, M. L., 2015, "A Rational Model of Function Learning," Psychon. Bull. Rev., 22(5), pp. 1193–1215.
16. Wilson, A. G., Dann, C., Lucas, C., and Xing, E. P., 2015, "The Human Kernel," Advances in Neural Information Processing Systems (NIPS), Montreal, QC, Canada, Dec. 7–12, pp. 2854–2862. https://papers.nips.cc/paper/5765-the-human-kernel.pdf
17. Rasmussen, C. E., and Ghahramani, Z., 2001, "Occam's Razor," Advances in Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, Dec. 3–8, pp. 294–300. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.5075
18. Borji, A., and Itti, L., 2013, "Bayesian Optimization Explains Human Active Search," Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, Dec. 5–10, pp. 55–63. http://dl.acm.org/citation.cfm?id=2999611.2999618
19. Levine, S., Popovic, Z., and Koltun, V., 2011, "Nonlinear Inverse Reinforcement Learning With Gaussian Processes," Advances in Neural Information Processing Systems, pp. 19–27.
20. Deisenroth, M. P., Neumann, G., and Peters, J., 2013, "A Survey on Policy Search for Robotics," Found. Trends Rob., 2(1–2), pp. 1–142.
21. Calandra, R., Gopalan, N., Seyfarth, A., Peters, J., and Deisenroth, M. P., 2014, "Bayesian Gait Optimization for Bipedal Locomotion," International Conference on Learning and Intelligent Optimization (LION), Gainesville, FL, Feb. 16–21, pp. 274–290.
22. Cully, A., Clune, J., Tarapore, D., and Mouret, J.-B., 2015, "Robots That Can Adapt Like Animals," Nature, 521(7553), pp. 503–507.
23. Pretz, J. E., 2008, "Intuition Versus Analysis: Strategy and Experience in Complex Everyday Problem Solving," Mem. Cognit., 36(3), pp. 554–566.
24. Linsey, J. S., Tseng, I., Fu, K., Cagan, J., Wood, K. L., and Schunn, C., 2010, "A Study of Design Fixation, Its Mitigation and Perception in Engineering Design Faculty," ASME J. Mech. Des., 132(4), p. 041003.
25. Daly, S. R., Yilmaz, S., Christian, J. L., Seifert, C. M., and Gonzalez, R., 2012, "Design Heuristics in Engineering Concept Generation," J. Eng. Educ., 101(4), pp. 601–629.
26. Cagan, J., Dinar, M., Shah, J. J., Leifer, L., Linsey, J., Smith, S., and Vargas-Hernandez, N., 2013, "Empirical Studies of Design Thinking: Past, Present, Future," ASME Paper No. DETC2013-13302.
27. Björklund, T. A., 2013, "Initial Mental Representations of Design Problems: Differences Between Experts and Novices," Des. Stud., 34(2), pp. 135–160.
28. Egan, P., and Cagan, J., 2016, "Human and Computational Approaches for Design Problem-Solving," Experimental Design Research, Springer, Cham, Switzerland, pp. 187–205.
29. Cagan, J., and Kotovsky, K., 1997, "Simulated Annealing and the Generation of the Objective Function: A Model of Learning During Problem Solving," Comput. Intell., 13(4), pp. 534–581.
30. Landry, L. H., and Cagan, J., 2011, "Protocol-Based Multi-Agent Systems: Examining the Effect of Diversity, Dynamism, and Cooperation in Heuristic Optimization Approaches," ASME J. Mech. Des., 133(2), p. 021001.
31. McComb, C., Cagan, J., and Kotovsky, K., 2016, "Drawing Inspiration From Human Design Teams for Better Search and Optimization: The Heterogeneous Simulated Annealing Teams Algorithm," ASME J. Mech. Des., 138(4), p. 044501.
32. Thrun, S., and Pratt, L., 1998, "Learning to Learn: Introduction and Overview," Learning to Learn, Springer, Boston, MA, pp. 3–17.
33. Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., and Botvinick, M., 2016, "Learning to Reinforcement Learn," preprint arXiv:1611.05763. https://arxiv.org/abs/1611.05763
34. Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., and de Freitas, N., 2016, Advances in Neural Information Processing Systems (NIPS), Barcelona, Spain, Dec. 5–10, pp. 3981–3989.
35. Hansen, N., Müller, S. D., and Koumoutsakos, P., 2003, "Reducing the Time Complexity of the Derandomized Evolution Strategy With Covariance Matrix Adaptation (CMA-ES)," Evol. Comput., 11(1), pp. 1–18.
36. Jones, D. R., Perttunen, C. D., and Stuckman, B. E., 1993, "Lipschitzian Optimization Without the Lipschitz Constant," J. Optim. Theory Appl., 79(1), pp. 157–181.
37. Sahinidis, N. V., 1996, "BARON: A General Purpose Global Optimization Software Package," J. Global Optim., 8(2), pp. 201–205.
38. Zhu, C., Byrd, R. H., Lu, P., and Nocedal, J., 1994, "L-BFGS-B: Fortran Subroutines for Large Scale Bound Constrained Optimization," Northwestern University, Evanston, IL, Report No. NAM-11. http://people.sc.fsu.edu/~inavon/5420a/lbfgsb.pdf
39. McGovern, A., Sutton, R. S., and Fagg, A. H., 1997, "Roles of Macro-Actions in Accelerating Reinforcement Learning," Grace Hopper Celebration of Women in Computing (GHC), San Jose, CA, Sept. 19–21, Vol. 1317. https://pdfs.semanticscholar.org/6c42/70b9ca7cc63a02ddae8974322ec5ea082743.pdf
40. McGovern, A., and Barto, A. G., 2001, "Automatic Discovery of Subgoals in Reinforcement Learning Using Diverse Density (Computer Science Department Faculty Publication Series)," International Conference on Machine Learning (ICML), Williamstown, MA, June 28–July 1, p. 8. https://pdfs.semanticscholar.org/7eca/3acd1a4239d8a299478885c7c0548f3900a8.pdf
41. Dietterich, T. G., 1998, "The MAXQ Method for Hierarchical Reinforcement Learning," 15th International Conference on Machine Learning (ICML), Madison, WI, July 24–27, pp. 118–126. https://pdfs.semanticscholar.org/fdc7/c1e10d935e4b648a32938f13368906864ab3.pdf
42. Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., and Tenenbaum, J. B., 2016, "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation," preprint arXiv:1604.06057. https://arxiv.org/abs/1604.06057
43. Botvinick, M., and Weinstein, A., 2014, "Model-Based Hierarchical Reinforcement Learning and Human Action Control," Philos. Trans. R. Soc. B, 369(1655), p. 20130480.
44. Stone, J. V., 2004, Independent Component Analysis, Wiley, Hoboken, NJ.
45. Hui, M., Li, J., Wen, X., Yao, L., and Long, Z., 2011, "An Empirical Comparison of Information-Theoretic Criteria in Estimating the Number of Independent Components of fMRI Data," PLoS One, 6(12), p. e29274.
46. Tombu, M. N., Asplund, C. L., Dux, P. E., Godwin, D., Martin, J. W., and Marois, R., 2011, "A Unified Attentional Bottleneck in the Human Brain," 108(33), pp. 13426–13431.
47. Ng, A. Y., and Russell, S. J., 2000, "Algorithms for Inverse Reinforcement Learning," 17th International Conference on Machine Learning (ICML), Stanford, CA, June 29–July 2, pp. 663–670. http://ai.stanford.edu/~ang/papers/icml00-irl.pdf
48. Ziebart, B. D., Maas, A. L., Bagnell, J. A., and Dey, A. K., 2008, "Maximum Entropy Inverse Reinforcement Learning," 23rd National Conference on Artificial Intelligence (AAAI), Chicago, IL, July 13–17, pp. 1433–1438. https://www.aaai.org/Papers/AAAI/2008/AAAI08-227.pdf
49. Abbeel, P., and Ng, A. Y., 2004, "Apprenticeship Learning Via Inverse Reinforcement Learning," 21st International Conference on Machine Learning (ICML), Banff, AB, Canada, July 4–8, p. 1. http://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf
50. Abbeel, P., Coates, A., and Ng, A. Y., 2010, "Autonomous Helicopter Aerobatics Through Apprenticeship Learning," Int. J. Rob. Res., 29(13), pp. 1608–1639.
51. Dvijotham, K., and Todorov, E., 2010, "Inverse Optimal Control With Linearly-Solvable MDPs," 27th International Conference on Machine Learning (ICML), Haifa, Israel, June 21–24, pp. 335–342. https://homes.cs.washington.edu/~todorov/papers/DvijothamICML10.pdf
52. Spelke, E. S., Gutheil, G., Van de Walle, G., and Osherson, D., 1995, "The Development of Object Perception," An Invitation to Cognitive Science, Vol. 2, 2nd ed., MIT Press, Cambridge, MA.
53. Baillargeon, R., Li, J., Ng, W., and Yuan, S., 2009, "An Account of Infants' Physical Reasoning," Learning and the Infant Mind, Oxford University Press, New York, pp. 66–116.
54. Bates, C. J., Yildirim, I., Tenenbaum, J. B., and Battaglia, P. W., 2015, "Humans Predict Liquid Dynamics Using Probabilistic Simulation," 37th Annual Conference of the Cognitive Science Society (COGSCI), Pasadena, CA, July 22–25, pp. 172–177. http://www.mit.edu/~ilkery/papers/probabilistic-simulation-model.pdf
55. Gershman, S. J., Horvitz, E. J., and Tenenbaum, J. B., 2015, "Computational Rationality: A Converging Paradigm for Intelligence in Brains, Minds, and Machines," Science, 349(6245), pp. 273–278.
56. Fodor, J. A., 1975, The Language of Thought, Vol. 5, Harvard University Press, Cambridge, MA.
57. Biederman, I., 1987, "Recognition-by-Components: A Theory of Human Image Understanding," Psychol. Rev., 94(2), p. 115.
58. Harlow, H. F., 1949, "The Formation of Learning Sets," Psychol. Rev., 56(1), p. 51.
59. Egan, P., Cagan, J., Schunn, C., and LeDuc, P., 2015, "Synergistic Human-Agent Methods for Deriving Effective Search Strategies: The Case of Nanoscale Design," Res. Eng. Des., 26(2), pp. 145–169.
60. Choi, J., and Kim, K.-E., 2012, "Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions," Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, Dec. 3–8, pp. 305–313. https://papers.nips.cc/paper/4737-nonparametric-bayesian-inverse-reinforcement-learning-for-multiple-reward-functions
61. Ratliff, N. D., Bagnell, J. A., and Zinkevich, M. A., 2006, "Maximum Margin Planning," 23rd International Conference on Machine Learning (ICML), Pittsburgh, PA, June 25–29, pp. 729–736. http://martin.zinkevich.org/publications/maximummarginplanning.pdf
62. Syed, U., and Schapire, R. E., 2007, "A Game-Theoretic Approach to Apprenticeship Learning," Advances in Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, Dec. 3–6, pp. 1449–1456. https://papers.nips.cc/paper/3293-a-game-theoretic-approach-to-apprenticeship-learning
63. Ramachandran, D., and Amir, E., 2007, "Bayesian Inverse Reinforcement Learning," Urbana, 51(61801), pp. 1–4. https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-416.pdf
2019-10-22 16:55:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 259, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6799229383468628, "perplexity": 1507.0805944637277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822458.91/warc/CC-MAIN-20191022155241-20191022182741-00117.warc.gz"}
https://eng.libretexts.org/Bookshelves/Computer_Science/Programming_Languages/Think_Java_-_How_to_Think_Like_a_Computer_Scientist_(Downey)/10%3A_Variables_and_Operators/10.07%3A_Rounding_Errors
# 10.7: Rounding Errors

Most floating-point numbers are only approximately correct. Some numbers, like reasonably-sized integers, can be represented exactly. But repeating fractions, like 1/3, and irrational numbers, like $$\pi$$, cannot. To represent these numbers, computers have to round off to the nearest floating-point number. The difference between the number we want and the floating-point number we get is called rounding error. For example, the following two statements should be equivalent:

```java
System.out.println(0.1 * 10);
System.out.println(0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1);
```

But on many machines, the output is:

```
1.0
0.9999999999999999
```

For many applications, like computer graphics, encryption, statistical analysis, and multimedia rendering, floating-point arithmetic has benefits that outweigh the costs. But if you need absolute precision, use integers instead. For example, consider a bank account with a balance of $123.45:

```java
double balance = 123.45;  // potential rounding error
```

The problem is that 0.1, which is a terminating fraction in decimal, is a repeating fraction in binary, so its floating-point representation is only approximate. When we add up the approximations, the rounding errors accumulate. In this example, the balance becomes inaccurate over time as the variable is used in arithmetic operations like deposits and withdrawals. The result would be angry customers and potential lawsuits. You can avoid the problem by representing the balance as an integer:

```java
int balance = 12345;  // total number of cents
```

This solution works as long as the number of cents doesn't exceed the largest int, which is about 2 billion.

This page titled 10.7: Rounding Errors is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Allen B. Downey (Green Tea Press).
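The same IEEE-754 behavior appears in any language that uses binary doubles. For instance, this Python snippet (not part of the original Java lesson) reproduces the accumulation error and the integer-cents fix:

```python
print(0.1 * 10)             # 1.0
print(sum([0.1] * 10))      # 0.9999999999999999: ten rounding errors accumulate

balance_cents = 12345       # $123.45 held exactly as an integer number of cents
balance_cents += 1005       # deposit $10.05 using exact integer arithmetic
print(balance_cents / 100)  # 133.5
```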
2023-03-27 19:10:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8364923596382141, "perplexity": 980.136937288868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00320.warc.gz"}
http://gmatclub.com/forum/what-is-the-remainder-when-positive-integer-x-is-divided-by-68790.html?kudos=1
# What is the remainder when positive integer x is divided by 6?

**mba9now** (12 Aug 2008): What is the remainder when positive integer x is divided by 6?

1. When x is divided by 2, the remainder is 1; and when divided by 3, the remainder is 0.
2. When x is divided by 12, the remainder is 3.

**jallenmorris** (12 Aug 2008): D. Each statement alone is sufficient.

#1 - The numbers that leave remainder 1 when divided by 2 are the odd numbers: {1, 3, 5, 7, 9, 11, 13, 15, ...}. Among these, the ones that also leave remainder 0 when divided by 3 are {3, 9, 15, 21, 27, 33, ...}. Every number in that set leaves remainder 3 when divided by 6 (e.g., 15 = 2 × 6 + 3), so #1 is SUFFICIENT.

#2 - The numbers that leave remainder 3 when divided by 12 form the set {3, 15, 27, 39, 51, ...}. Dividing each of these by 6 always leaves remainder 3 (3 = 0 × 6 + 3, 15 = 2 × 6 + 3), so #2 is SUFFICIENT.
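A quick brute-force check of both statements (not part of the original thread) confirms the remainder is always 3:

```python
# Statement 1: remainder 1 mod 2 and remainder 0 mod 3
s1 = {x % 6 for x in range(1, 1000) if x % 2 == 1 and x % 3 == 0}
# Statement 2: remainder 3 mod 12
s2 = {x % 6 for x in range(1, 1000) if x % 12 == 3}
print(s1, s2)  # {3} {3}: each statement alone pins down the remainder, so D
```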
2015-05-30 18:40:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2307644635438919, "perplexity": 1195.3599329197805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932596.84/warc/CC-MAIN-20150521113212-00221-ip-10-180-206-219.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/19543/existence-of-np-problems-with-complexity-intermediate-between-p-and-np-hard
# Existence of NP problems with complexity intermediate between P and NP-hard

Assuming P != NP, there is a result that there are decision problems intermediate between P and NP-complete; that is, NP is not simply the disjoint union of P and the NP-complete problems. I could never quite understand the proof of this result. The proof I saw in a textbook started with the assumption that one can enumerate all P and NP-hard problems, and then proceeded to construct a language that fits in neither class. This construction seemed a bit fishy to me; in particular, the assumption that one can start with an enumerated set of problems in a particular class such as NP. Could you refer me to a clear, self-contained proof of the statement in the first paragraph? More generally, what would be a good reference for proofs of such results?

## 1 Answer

The result you're describing is called Ladner's theorem. The Wikipedia article on 'NP-intermediate' is probably a good place to start; you can find references to Ladner's original paper there. There's also an extensive list of such problems on the cstheory Stack Exchange. For a couple of proofs of Ladner's theorem, check out this note adapted from Downey & Fortnow's paper 'Uniformly Hard Languages'. Regarding the enumeration step that bothered you: the proof does not enumerate "problems" abstractly, but rather enumerates polynomial-time Turing machines (and polynomial-time reductions), which are countable syntactic objects, and then diagonalizes against both lists; this is what makes the construction legitimate.
2022-01-27 18:39:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8423318862915039, "perplexity": 414.53124082259586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305277.88/warc/CC-MAIN-20220127163150-20220127193150-00295.warc.gz"}
https://baru2.tenderdb.com/how-to-yuvbzes/3vhi3y9.php?page=29d347-ca-oh-2-acid-base-reaction
## ca oh 2 acid base reaction

In an acid-base (or neutralization) reaction, the H+ ions from the acid and the OH- ions from the base react to create water (H2O). The production of water is the driving force for the reaction, and the second product is an ionic compound called a salt. The acid-base reaction class has been studied for quite some time.

Example: Write the balanced chemical equation for the neutralization reaction between H2SO4 and KOH. The general reaction is H2SO4 + KOH → H2O + salt. Because the acid has two H+ ions in its formula, we need two OH- ions to react with it, making two H2O molecules as product: H2SO4(aq) + 2KOH(aq) → K2SO4(aq) + 2H2O(l).

Example: Nitric acid [HNO3(aq)] can be neutralized by calcium hydroxide [Ca(OH)2(aq)]. This is an acid (HNO3) plus base (Ca(OH)2) reaction, so the products will always be a salt plus water. Note that HBr is likewise a strong acid and Ca(OH)2 is a strong base, and that in some such reactions the salt formed is insoluble in water.

Question: Which of the following is an acid-base reaction?

- C(s) + O2(g) → CO2(g)
- 2HClO4(aq) + Ca(OH)2(aq) → 2H2O(l) + Ca(ClO4)2(aq)
- Fe(s) + 2AgNO3(aq) → 2Ag(s) + Fe(NO3)2(aq)
- MgSO4(aq) + Ba(NO3)2(aq) → Mg(NO3)2(aq) + BaSO4(s)
- None of these

The acid-base reaction is the second one: the acid HClO4 and the base Ca(OH)2 produce water and a salt.

Citric acid (H3C6H5O7) has three hydrogen atoms that can form hydrogen ions in solution.

Practice prompts collected on this page (identify all of the phases in your answers):

- Write the net ionic equation for this acid-base reaction: 2CsOH(aq) + H2SO4(aq) →
- Write a balanced ionic equation for this acid-base reaction: Ca(OH)2(aq) + 2CH3CO2H(aq) →
- Balance the reaction Ca(OH)2 + H3PO4 → Ca3(PO4)2 + H2O.
- For one reaction, 16.8 g of HNO3 is present initially.
- An acid-base reaction occurs when Ca(OH)2(aq) and HCl(aq) are mixed together; how many moles of base were required to react completely with the acid in this reaction?
- Hydrazoic acid (HN3) can be neutralized by a base. How many milliliters of 0.0245 M Ca(OH)2 are needed to neutralize 0.564 g of HN3? (A worked sketch follows below.)
- Test yourself: write the balanced chemical equation for the dissociation of hydrazoic acid (HN3) and indicate whether it proceeds 100% to products or not.
- Identify the following as acids, bases, or neutral solutions: something with a sour taste (acid), something that turns blue litmus paper red (acid), a solution containing more hydronium ions than hydroxide ions (acid), H2O (neutral), Ca(OH)2 (base).
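A worked sketch for the HN3 question above (our arithmetic, using standard molar masses; the original page does not show the solution): the molar mass of HN3 is about 43.03 g/mol, so 0.564 g × (1 mol / 43.03 g) ≈ 0.0131 mol HN3. The neutralization is 2HN3(aq) + Ca(OH)2(aq) → Ca(N3)2(aq) + 2H2O(l), so the moles of Ca(OH)2 needed are 0.0131 / 2 ≈ 0.00655 mol, and the required volume is 0.00655 mol / 0.0245 mol/L ≈ 0.267 L, i.e., about 267 mL.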
A strong base is a basic chemical compound that can remove a proton (H+) from (or deprotonate) a molecule of even a very weak acid (such as water) in an acid-base reaction. Common examples of strong bases include the hydroxides of alkali metals and alkaline earth metals, like NaOH and Ca(OH)2. Because Ca(OH)2 is listed in Table 12.2 "Strong Acids and Bases," its neutralization reactions proceed 100% to products.

When calcium hydroxide (base) and sulfuric acid (acid) are mixed, the same thing happens as whenever an acid and a base are mixed: we get a salt and water. Here the salt is calcium sulfate: Ca(OH)2 + H2SO4 → CaSO4 + 2H2O. Similarly, the balanced equation for nitric acid and calcium hydroxide is 2HNO3(aq) + Ca(OH)2(aq) → 2H2O(l) + Ca(NO3)2(aq); the salt formed is calcium nitrate. A related prompt: write the net ionic equation for the complete reaction of barium hydroxide and hydrochloric acid.

Example: How many moles of HNO3 are neutralized by 0.805 L of 0.672 M Ca(OH)2? First, (0.672 M Ca(OH)2) × (0.805 L soln) = 0.541 mol Ca(OH)2. We combine this with the proper 2:1 ratio from the balanced chemical equation above to determine the number of moles of HNO3 needed: 2 × 0.541 = 1.08 mol HNO3.

Example: The reaction of propionic acid with calcium hydroxide, 2CH3CH2CO2H(aq) + Ca(OH)2(aq) → (CH3CH2CO2)2Ca(aq) + 2H2O(l), is the reaction of a weak acid with a strong base and so goes to completion; it is therefore reasonable to prepare calcium propionate by mixing solutions of propionic acid and calcium hydroxide in a 2:1 mole ratio.

Demonstration: In one reaction setup, lime water, a dilute calcium hydroxide (Ca(OH)2) solution, is poured into one of the test tubes and sealed with a stopper, and a small amount of hydrochloric acid is carefully poured into the remaining test tube.

Example: Write the neutralization reactions between each acid and base: (a) HNO3(aq) and Ba(OH)2(aq); (b) H3PO4(aq) and Ca(OH)2(aq). Solution: First, write the chemical equation with the formulas of the reactants and the expected products (a salt and water); then balance it. The completed equations follow below.
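Following that procedure, the completed balanced equations (standard results, supplied here because the page's solution text is cut off) are:

(a) 2HNO3(aq) + Ba(OH)2(aq) → Ba(NO3)2(aq) + 2H2O(l)

(b) 2H3PO4(aq) + 3Ca(OH)2(aq) → Ca3(PO4)2(s) + 6H2O(l)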
In 1680, Robert Boyle reported traits of acid solutions that included their ability to dissolve many substances, to change the colors of certain natural dyes, and to lose these traits after coming in contact with alkali (base) solutions. In the book, the titration is explained by the reaction: $$\ce{2HCl + Ca(OH)2 = CaCl2 + 2H2O}$$

Write the balanced chemical equation for the reaction between hydrazoic acid and calcium hydroxide. In an acid-base reaction, acids and bases react with each other to form salt and water; no gas is formed in this reaction. Question: the acid-base reaction between phosphoric acid, H3PO4, and calcium hydroxide, Ca(OH)2, yields water and calcium phosphate. Note that calcium hydroxide is Ca(OH)2 and acetic acid is CH3COOH. An acid-base reaction occurs when Ca(OH)2(aq) and HCl(aq) are mixed together; how many moles of HCl were present in the original 25.00 mL of acid? For nitric acid the balanced equation is 2HNO3 + Ca(OH)2 → Ca(NO3)2 + 2H2O. Identify the Bronsted-Lowry acid in the following reaction. Another reason that acid-base reactions are so prevalent is that they are often used to determine quantitative amounts of one reactant or the other.
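The volume question above ("how many milliliters of 0.0245 M Ca(OH)2 to neutralize 0.564 g of HN3?") is a three-step unit conversion. A minimal Python sketch, assuming the usual neutralization Ca(OH)2 + 2HN3 → Ca(N3)2 + 2H2O and standard atomic weights (everything else comes from the problem statements above):

```python
# Neutralization stoichiometry: Ca(OH)2 + 2 HN3 -> Ca(N3)2 + 2 H2O
M_HN3 = 1.008 + 3 * 14.007             # molar mass of hydrazoic acid, ~43.03 g/mol

mol_acid = 0.564 / M_HN3               # moles of HN3 in 0.564 g
mol_base = mol_acid / 2                # 1 mol Ca(OH)2 neutralizes 2 mol HN3
volume_mL = 1000 * mol_base / 0.0245   # volume of 0.0245 M Ca(OH)2 solution
print(f"{volume_mL:.0f} mL of 0.0245 M Ca(OH)2 needed")  # ~267 mL

# Worked molarity step from the text above:
print(f"{0.672 * 0.805:.3f} mol Ca(OH)2 in 0.805 L of 0.672 M solution")  # 0.541
```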
2021-04-19 08:40:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5011524558067322, "perplexity": 4817.204194338539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879305.68/warc/CC-MAIN-20210419080654-20210419110654-00425.warc.gz"}
http://openstudy.com/updates/4e78a4220b8b7d4f6d16ca68
## anonymous 5 years ago

What is the solution to this equation? 12a = 24

1. anonymous: 12a/12 = 24/12, so a = 2
2. anonymous: divide both sides by 12
3. anonymous: $a=\frac{24}{12}=2$
4. anonymous: 12a = 24, so a = 24/12 = 2
2016-12-07 16:37:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3610415458679199, "perplexity": 10603.807443095397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542217.43/warc/CC-MAIN-20161202170902-00070-ip-10-31-129-80.ec2.internal.warc.gz"}
https://jzhao.xyz/thoughts/Dutch-Book/
# Dutch Book

Last updated Oct 19, 2022

A Dutch Book is a set of bets that you consider individually fair, but which collectively guarantee a loss. This usually happens when people commit probabilistic fallacies (e.g. the conjunction fallacy, believing $P(A \land B \mid E) > P(A \mid E)$ when this can never be the case). Another common mistake is double counting probabilities.

For example, if J believes that $P(\text{heads}) = P(\text{tails}) = \frac{2}{3}$, we can propose two bets:

1. Pay $2; win $3 if heads, $0 if tails.
2. Pay $2; win $3 if tails, $0 if heads.

Both bets make sense for J. However, if J takes both bets, then he faces a guaranteed loss of $1.

To construct a Dutch Book, have the agent bet for propositions with credences (or FBQs) that are too high, and against propositions with credences (or FBQs) that are too low. For any given bet with stake $S$ bought at credence $p$ (set $p$ to be $1-p$ for the against case), the net payoffs are:

| Player wins bet | Player loses bet |
| --- | --- |
| $S - pS$ | $-pS$ |

# Dutch Book Theorem

Based on the Kolmogorov probability axioms:

1. If any axiom is violated, a Dutch Book can be made.
2. If no axiom is violated, then no Dutch Book can be made.
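A quick way to verify the guaranteed loss in J's example is to enumerate both outcomes; a minimal sketch (my own illustration, not from the original note):

```python
# J's credence P(heads) = P(tails) = 2/3 makes a $2 price look fair for a
# $3-payout bet on either outcome: E[payout] = (2/3) * 3 = 2.
price, payout = 2.0, 3.0

for outcome in ("heads", "tails"):
    heads_bet = payout if outcome == "heads" else 0.0   # bet 1 pays on heads
    tails_bet = payout if outcome == "tails" else 0.0   # bet 2 pays on tails
    net = (heads_bet - price) + (tails_bet - price)
    print(f"{outcome}: net = {net:+.0f}")               # -1 either way
```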
2022-12-09 18:51:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7229040861129761, "perplexity": 4010.5666224741}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711475.44/warc/CC-MAIN-20221209181231-20221209211231-00010.warc.gz"}
https://physics.stackexchange.com/questions/201118/how-large-or-small-can-frequency-in-the-em-spectrum-get
# How large or small can frequency in the EM spectrum get?

The largest frequency range is gamma rays, but does the EM spectrum 'stop' somewhere? Is there a limit to how large a frequency can get, or how small a frequency can get? Is it one of those things where theoretically nothing stops it, but nothing in the universe can produce waves of such a frequency, or beyond a certain limit? Does light from galaxies redshift to the point where the wavelength is just insanely long? Long enough that we can't see them?

• duplicate: physics.stackexchange.com/q/43063 – Paul Aug 18 '15 at 14:46
• Maybe the longest EM wavelength is equal to the universe's causality horizon? How could the universe produce a wave that can't fit inside the causality horizon? – Cham Sep 19 '19 at 1:59

I'll start with the second of your questions. Yes, light from very distant galaxies gets redshifted to such long wavelengths that there practically isn't any light to see. The lower limit on frequency is zero, obviously. Technically one could say there is no signal at exactly $0\,\text{Hz}$, but zero is still a lower bound on the frequency. Objects on the edge of our cosmological horizon have their light redshifted almost infinitely by the time it reaches us.
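The redshift claim can be made quantitative with the standard cosmological relation (added here for reference; it is not stated in the thread):

$$\lambda_{\text{obs}} = (1+z)\,\lambda_{\text{emit}}, \qquad \nu_{\text{obs}} = \frac{\nu_{\text{emit}}}{1+z},$$

so as the redshift $z$ grows without bound toward the cosmological horizon, the observed frequency of any emitted signal is driven toward zero.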
2021-04-12 01:43:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7431981563568115, "perplexity": 413.3659993472638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038065903.7/warc/CC-MAIN-20210411233715-20210412023715-00618.warc.gz"}
https://mathoverflow.net/questions/328165/do-two-integral-matrices-generate-a-free-group
# Do two integral matrices generate a free group?

Is it decidable whether two given elements of $${\rm GL}(n,{\mathbb Z})$$ generate a free group of rank 2? This is a simple question that I have been asking people for the past couple of years, but nobody has known the answer, so I thought I would try here. The Tits alternative is known to be (effectively) decidable for finitely generated subgroups of $${\rm GL}(n,{\mathbb Z})$$, but that is not helpful here.

Added: From what Misha says, the answer to the general problem might be unknown, but it is likely to be undecidable. An easier question would be, assuming that the group in question is not virtually solvable, can we find a nonabelian free subgroup (with proof)? I think the answer to that might be yes, using pingpong. This second question is answered positively here

• I am quite sure that this is an open problem (even if n=3). Non-freedom is, of course, semi-decidable. Proving freedom in all known to me cases requires getting hold of some geometry, e.g. finding a quadruple of domains allowing for ping-pong. The trouble is that while every free subgroup will have such domains (in a flag-manifold or in the symmetric space), their boundaries are not guaranteed to be finitely-definable. This is already apparent in the case of subgroups of $O(3,1, {\mathbb Z})$. My personal feeling is that freedom is undecidable but we are lacking tools for proving this. – Misha Apr 16 at 2:16
• Derek: Your second question is much easier, this is the area known as "quantitative Tits' alternative". – Misha Apr 16 at 2:28
• Some evidence that such questions might be undecidable: see Corollary D of this paper ems-ph.org/journals/… – Ian Agol Apr 16 at 5:23
• Just a trivial restatement of the question. For a non-free pair $(u,v)$ in a group, let $N(u,v)$ be the smallest length of a nontrivial group word $w$ such that $w(u,v)=1$. In $\mathrm{GL}_d(\mathbf{Z})$, define $s_d(n)$ as the min of $N(u,v)$ over all non-free pairs $(u,v)$ with $\|u\|,\|v\|\le n$ (sup norm of matrices). Then the decidability problem is equivalent to whether $n\mapsto s_d(n)$ is computable, and it's equivalent to whether it's bounded above by a computable function. – YCor Apr 16 at 6:56
• It was shown in Klarner, Birget and Satterfield that freeness of the semigroup generated by 3x3 integer matrices is undecidable. Probably this is a bad sign for groups, though it doesn't really imply it. – Benjamin Steinberg Apr 16 at 18:51
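As Misha's comment notes, non-freedom is semi-decidable: enumerate reduced words in the generators and their inverses and check whether any evaluates to the identity matrix. A minimal Python sketch with exact integer arithmetic (my own illustration; the SL(2,Z) pair below is deliberately non-free, since the group it generates has torsion):

```python
from itertools import product

def matmul(A, B):
    """Multiply square integer matrices stored as tuples of tuples."""
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def is_identity(A):
    n = len(A)
    return all(A[i][j] == (1 if i == j else 0) for i in range(n) for j in range(n))

def find_relation(a, a_inv, b, b_inv, max_len=8):
    """Search for a reduced word w with w(a, b) = I, up to length max_len.

    This only semi-decides non-freedom: if the pair does generate a free
    group, no finite max_len can ever certify that fact this way.
    """
    gens = (a, a_inv, b, b_inv)
    inverse_of = {0: 1, 1: 0, 2: 3, 3: 2}
    for length in range(1, max_len + 1):
        for word in product(range(4), repeat=length):
            # Skip words that are not reduced (contain x x^{-1}).
            if any(word[i + 1] == inverse_of[word[i]] for i in range(length - 1)):
                continue
            m = gens[word[0]]
            for letter in word[1:]:
                m = matmul(m, gens[letter])
            if is_identity(m):
                return word
    return None

# Illustration: S and T generate SL(2, Z), which has torsion, so it is not free.
S, S_inv = ((0, -1), (1, 0)), ((0, 1), (-1, 0))
T, T_inv = ((1, 1), (0, 1)), ((1, -1), (0, 1))
print(find_relation(S, S_inv, T, T_inv))  # (0, 0, 0, 0), i.e. S^4 = I
```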
2019-04-21 11:15:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8195595741271973, "perplexity": 397.3612249921237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530527.11/warc/CC-MAIN-20190421100217-20190421121557-00077.warc.gz"}
http://www.torsten-schoen.de/tag/table/
# Howto Write a Thesis using LaTeX: Excel to LaTeX Table

Generating nice tables in plain LaTeX can be really annoying, as it is very hard to keep an overview of columns and rows in raw text. One possibility is to use a WYSIWYG editor, which comes with many development environments; for example, TexMaker offers a nice tool for generating tables under Quick Tabular in the Wizard menu. But in most cases we do not want to insert tables manually. Instead, most of the data already exists in some other program, and we would like to generate a table from our existing data. There are also much more specialised applications for editing and generating tables, the most common one probably being Microsoft Excel. Fortunately, there is a great tool that lets you export your existing Excel table to LaTeX code, called Excel2LaTeX!

# 1. Install Software

Go to http://www.ctan.org/tex-archive/support/excel2latex/ and download the latest Excel2LaTeX.xla file. Next, open the file with Excel. You might get asked whether you want to activate macros, which is a potential security issue; as we know what we are about to do, we accept macros. And that's it, the add-on is already installed! Note that for some Excel versions the Add-Ins tab needs to be activated separately!

Now comes the easy part: select the area of your table you want to export to LaTeX and click the Convert table to LaTeX button. The following dialog pops up:

Click either the Copy to Clipboard button to copy the LaTeX text, or save it to a file by choosing Save to File:. For some reason, copying the text snippet did not work for me on Windows 8, so I had to copy it manually! Next, as we have the table as LaTeX code in our clipboard, we only need to paste it into our LaTeX file. Navigate to the position where you want to insert the table in your TexMaker file and paste the content. Note that you might need to load the following packages in the preamble, depending on how fancy your table's styling is:

\usepackage{booktabs}
\usepackage{color}

The generated code for the example table looks like:

% Table generated by Excel2LaTeX from sheet 'Tabelle1'
\begin{table}[htbp]
  \centering
  \begin{tabular}{rrr}
    \toprule
    \multicolumn{1}{c}{\textbf{Name}} & \multicolumn{1}{c}{\textbf{Age}} & \multicolumn{1}{c}{\textbf{Score}} \\
    \midrule
    Maria & 23 & 1 \\
    Thomas & 21 & 0.78 \\
    \textit{Alicia} & 19 & 0.27 \\
    Mark & 31 & 0.45 \\
     &  &  \\
    \bottomrule
  \end{tabular}%
\end{table}%
2018-12-17 05:18:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6923359036445618, "perplexity": 1500.6486386262065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828318.79/warc/CC-MAIN-20181217042727-20181217064727-00618.warc.gz"}
http://pub.acta.hu/acta/showCustomerArticle.action?id=2970&dataObjectType=article&returnAction=showCustomerVolume&sessionDataSetId=121d7466c908ea70&style=
Strongly harmonic operators

Janko Bračič

Acta Sci. Math. (Szeged) 68:3-4 (2002), 797-813

Abstract. A bounded linear operator $T$, respectively an $n$-tuple $T$ of commuting bounded operators, on a complex Banach space ${\cal X}$ is strongly harmonic if it is contained in a unital commutative strongly harmonic closed subalgebra ${\cal A} \subset B({\cal X})$. Every strongly harmonic operator is decomposable in the sense of Foiaş and every strongly harmonic $n$-tuple is decomposable in the sense of Frunză. On the other hand, it is proven that the class of strongly harmonic operators is quite large and that operators in this class have very nice properties. If an elementary operator is determined by two strongly harmonic $n$-tuples, then it is strongly harmonic, and its local spectra are in a simple connection with the analytic local spectra of the $2n$-tuple of the coefficients.

AMS Subject Classification (1991): 47B40, 47B47, 47B48

Received February 27, 2001, and in revised form April 23, 2001. (Registered under 2869/2009.)
2020-05-28 01:48:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7325610518455505, "perplexity": 515.3994469603509}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396300.22/warc/CC-MAIN-20200527235451-20200528025451-00443.warc.gz"}
https://socratic.org/questions/how-do-you-solve-sqrt-k-9-sqrtk-sqrt3
# How do you solve sqrt(k+9)-sqrtk=sqrt3?

May 14, 2017

$k = 3$

#### Explanation:

The feasibility conditions are $k + 9 \ge 0$ and $k \ge 0$, so together $k \ge 0$. By inspection, the solution is $k = 3$, because then $\sqrt{3 + 9} - \sqrt{3} = \sqrt{4 \times 3} - \sqrt{3} = 2\sqrt{3} - \sqrt{3} = \sqrt{3}$.
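The inspection step can be backed by a short derivation (added here; the original answer verifies the value only by substitution): isolate one radical and square both sides,

$$\sqrt{k+9} = \sqrt{3} + \sqrt{k} \;\Rightarrow\; k + 9 = 3 + 2\sqrt{3k} + k \;\Rightarrow\; \sqrt{3k} = 3 \;\Rightarrow\; k = 3,$$

which also satisfies the feasibility condition $k \ge 0$, so $k = 3$ is the unique solution.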
2019-09-23 12:01:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.983905553817749, "perplexity": 1721.246505563175}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576355.92/warc/CC-MAIN-20190923105314-20190923131314-00124.warc.gz"}
http://mathhelpforum.com/geometry/92275-find-closed-region-given-point.html
# Math Help - Find closed region from a given point

1. ## Find closed region from a given point

Dear All, I want to find the closed area related to a user-given pick point. Please refer to the link below for a clearer picture: Kodakgallery.com: Slideshow

2. Originally Posted by mrajee: "I want to find the closed area related to a user-given pick point..." Is it that sector of the circle you want to find? If it is, then find the area of the whole circle using $\pi r^2$ and divide by 4.

3. Originally Posted by mrajee: "I want to find the closed area related to a user-given pick point..." There is only 1 picture (no slideshow), which shows a circle divided into quadrants. Is there supposed to be more?

4. Hi, I updated one more picture in my previous link for a clearer view. In that picture I drew a line, a circle, an arc, a spline and a rectangle. When selecting a pick point on the objects, I want to find the closed area around the pick point (the red marked area in that picture).
2015-08-02 20:49:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42379653453826904, "perplexity": 3033.797625914348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989234.2/warc/CC-MAIN-20150728002309-00174-ip-10-236-191-2.ec2.internal.warc.gz"}
https://ohm.lumenlearning.com/multiembedq.php?id=110971&theme=oea&iframe_resize_id=mom5
Write the equation of a line perpendicular to the line y = (4/3)x - 5 that goes through the point (-6, -4). Write your answer in slope-intercept form, using simplified fractions for the slope and intercept.
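A worked solution (added here; the page itself only poses the question): the perpendicular slope is the negative reciprocal of $\frac{4}{3}$, namely $-\frac{3}{4}$. Using point-slope form through $(-6,-4)$:

$$y + 4 = -\tfrac{3}{4}(x + 6) \;\Rightarrow\; y = -\tfrac{3}{4}x - \tfrac{9}{2} - 4 = -\tfrac{3}{4}x - \tfrac{17}{2}.$$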
2023-03-27 17:07:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42672091722488403, "perplexity": 1439.6593828464902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00365.warc.gz"}
https://www.isoquant-heidelberg.de/quasiclassical-representation-of-the-volkov-propagator-and-the-tadpole-diagram-in-a-plane-wave/
Abstract: The solution of the Dirac equation in the presence of an arbitrary plane wave, corresponding to the so-called Volkov states, has provided enormous insight into strong-field QED. In [Phys. Rev. A 103, 076011 (2021)] a new "fully quasiclassical" representation of the Volkov states has been found, which is equivalent to the one known in the literature but which more transparently shows the quasiclassical nature of the quantum dynamics of an electron in a plane-wave field. Here, we derive the corresponding expression of the propagator by constructing it using the fully quasiclassical form of the Volkov states. The found expression allows one, together with the fully quasiclassical expression of the Volkov states, to compute probabilities in strong-field QED in an intense plane wave by manipulating only 2-by-2 rather than 4-by-4 Dirac matrices as in the usual approach. Moreover, apart from the exponential functions featuring the classical action of an electron in a plane wave, the fully quasiclassical Volkov propagator only depends on the electron kinetic four-momentum in the plane wave, which is a gauge-invariant quantity. Finally, we also compute the tadpole diagram in a plane wave starting from the Volkov propagator and we show that, although it is divergent, its contribution can always be absorbed via a renormalization of the external field.

A. Di Piazza, F. P. Fronimos: "Quasiclassical representation of the Volkov propagator and the tadpole diagram in a plane wave", arXiv:2201.08101 (2022). https://journals.aps.org/prd/abstract/10.1103/PhysRevD.105.116019

Related to Project B02
2023-03-29 22:33:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804974377155304, "perplexity": 758.0410494597597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00209.warc.gz"}
https://math.stackexchange.com/questions/1424765/show-a-function-is-not-lipschitz-continuous
# Show a function is not Lipschitz Continuous [duplicate]

First I constructed the negation of Lipschitz continuity: $\forall L > 0, \exists x, y \in [a,b]:\ |f(x) - f(y)| > L|x - y|$.

For $f(x) = \sqrt x$, notice $$|f(x) - f(y)| = |\sqrt x -\sqrt y| \cdot\frac{|\sqrt x + \sqrt y |}{|\sqrt x + \sqrt y |} = \frac{| x - y|}{|\sqrt x + \sqrt y|},$$ and the goal is to show this exceeds $L |x - y|$.

The problem is that on $I = [0,1]$ the factor $\frac{1}{|\sqrt x + \sqrt y|}$ assumes values in $\left(\frac{1}{2},\infty\right)$, and we need $\frac{1}{|\sqrt x + \sqrt y|} \ge L$. So for sufficiently large $L$, the desired inequality for a function not being Lipschitz continuous seemingly cannot hold. Can someone explain the issue?

(marked as duplicate by Clayton, user91500, Tom-Tom, Davide Giraudo)

Answer: Let $L>0$ be given. Then choose $x,y\in (0,\infty)$ so that $\frac{1}{\sqrt{x}+\sqrt{y}}>L$. This is possible since $\sqrt x,\sqrt y\to 0$ as $x,y\to 0$. Then $|f(x) - f(y)|=\frac{| x - y|}{|\sqrt x + \sqrt y|}>L\vert x-y\vert$.
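A quick numerical check of the answer's point (my addition, not from the thread): take $y = 0$, so the difference quotient is $1/\sqrt{x}$, which exceeds any fixed $L$ once $x$ is small enough.

```python
import math

# |f(x) - f(0)| / |x - 0| = sqrt(x)/x = 1/sqrt(x) is unbounded as x -> 0+,
# so no single Lipschitz constant L works for f(x) = sqrt(x) on [0, 1].
for x in (1e-2, 1e-4, 1e-6, 1e-8):
    quotient = math.sqrt(x) / x
    print(f"x = {x:.0e}: difference quotient = {quotient:.0f}")
```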
2019-07-19 08:01:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5354695320129395, "perplexity": 4438.797418903172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526153.35/warc/CC-MAIN-20190719074137-20190719100137-00047.warc.gz"}
http://www.last.fm/music/Masterson+&+Holweger/+similar
# Similar Artists

Atalanta is a screamo band from Chicago. They rule.
2016-05-24 17:53:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9480125308036804, "perplexity": 2737.5663319613755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049272823.52/warc/CC-MAIN-20160524002112-00102-ip-10-185-217-139.ec2.internal.warc.gz"}
https://electronics.stackexchange.com/questions/358016/high-voltage-power-supply-with-current-limiting-output
# High Voltage Power supply with current limiting output

I found this simple PSU design on the web. I tried to use it, but already five MOSFETs and two NPN transistors have blown. I use IRF840 MOSFETs and BC546 or KSD1616AGBU transistors, with fixed-value resistors in place of VR1. The load is a resistance wire of about 60 Ohm, and I measured the current at 4 Amps, so it is not being limited. The voltage is somewhere between 250 and 300 Volts. When the current through R2 rises, it is supposed to turn on transistor Q2, which would pull the MOSFET's gate and source to the same potential and hence shut it down, but that doesn't happen. After the failure I measured the MOSFET, and source-drain is shorted (I assume the body diode is gone), but I'm not sure what's going on. Can anyone help me with what may possibly be going wrong, or just recommend a proper circuit that does the job? I searched the web and found some circuits, but they seem to be just more complex variations of this one using the same current-sensing technique, so I'm a little uncertain.

• Draw a schematic of your circuit. – winny Feb 23 '18 at 20:48
• I suspect what's killing you is the power-on surges. Try placing a $10\:\Omega$ resistor between the output of your 4-diode bridge's positive terminal and $C_1$. For starters, anyway. $R_2$ is already going to limit currents to perhaps $200\:\text{mA}$, so you'll only lose $2\:\text{V}$ with the addition, at $C_1$. But I'm mostly curious if this helps you. If it does, then we've isolated the problem. – jonk Feb 23 '18 at 20:52
• Q1 needs a big heat sink. – Jasen Feb 23 '18 at 21:42
• @Jasen Somehow, I guess I sort of assumed there was a heat sink. But perhaps you nailed it! – jonk Feb 23 '18 at 22:33
• For a resistive wire you should really use PWM to control the power. It makes no sense to use a linear regulator. – peufeu Feb 23 '18 at 23:08
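A rough dissipation estimate supports the heat-sink comments (my numbers, using only values quoted in the question and comments): a limiter of this type engages at roughly $I \approx V_{BE}/R_2 \approx 0.65\:\text{V}/R_2$. If that limit is around $200\:\text{mA}$, as jonk's comment suggests, the $60\:\Omega$ load drops only $0.2\:\text{A} \times 60\:\Omega = 12\:\text{V}$, so the pass MOSFET has to absorb the rest: roughly $(300 - 12)\:\text{V} \times 0.2\:\text{A} \approx 58\:\text{W}$. That is far more than an IRF840 can dissipate without a substantial heat sink, quite apart from any power-on surges.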
2020-04-04 09:51:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5758442282676697, "perplexity": 1024.3021597938473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370521574.59/warc/CC-MAIN-20200404073139-20200404103139-00126.warc.gz"}
https://www.nature.com/articles/s41598-020-66982-y
Touching the social robot PARO reduces pain perception and salivary oxytocin levels

Abstract

Human-human social touch improves mood and alleviates pain. No studies have so far tested the effect of human-robot emotional touch on experimentally induced pain ratings, on mood and on oxytocin levels in healthy young adults. Here, we assessed the effect of touching the robot PARO on pain perception, on mood and on salivary oxytocin levels, in 83 young adults. We measured their perceived pain, happiness state, and salivary oxytocin. For the 63 participants in the PARO group, pain was assessed in three conditions: Baseline, Touch (touching PARO) and No-Touch (PARO present). The control group (20 participants) underwent the same measurements without ever encountering PARO. There was a decrease in pain ratings and in oxytocin levels and an increase in happiness ratings compared to baseline only in the PARO group. The Touch condition yielded a larger decrease in pain ratings compared to No-Touch. These effects correlated with the participants' positive perceptions of the interaction with PARO. Participants with higher perceived ability to communicate with PARO experienced a greater hypoalgesic effect when touching PARO. We show that human-robot social touch is effective in reducing pain ratings, improving mood and - surprisingly - reducing salivary oxytocin levels in adults.

Introduction

Social interaction is one of the most basic survival needs of humans1. Both in childhood2,3,4,5 and in older ages6,7,8, the impact of social connections on health seems to be crucial. For example, poor social relationships and social isolation were associated with high incidence of general morbidity9,10, stress disorders11,12 and chronic pain11,13,14. Close relationships, however, were found to be a protective factor against stress and pain disorders15,16,17,18. Close interpersonal relationships often involve emotional touch, which may act as a mediating factor in the effect of social relationships on pain relief. Emotional touch is defined as a pleasant touch between two humans19. Emotional touch may include active touching (i.e. stroking another person), passive touching (being touched by another person) or dyadic touching (i.e. hand holding)20,21. Indeed, several studies have found that handholding22 and hugging23 reduce the physiological and psychological response to stress among men and women. It was suggested that empathic abilities of both the person touching and the person touched play a fundamental role in this effect24. Emotional touch also stimulates the hypothalamic-pituitary system to secrete oxytocin20,25, a hormone that has been characterized as having a central role in mediating feelings of love, social attachment and communication in both animals and humans26,27,28,29. In an animal study, there was an increase in pain thresholds following petting, as well as following injection of oxytocin. In both situations (petting or injection of oxytocin), the effect on the pain threshold disappeared with the administration of an oxytocin antagonist30.
Similarly, in humans, Kreuder et al.31 recently demonstrated that administration of nasal oxytocin enhances the pain-relieving effects of social support in romantic couples. In addition, it was found that being touched by another person32 and handholding with a spouse33,34 induce a reduction in pain ratings among women. The level of the analgesic effect when holding a partner's hand was associated with the toucher's empathic tendencies33. These studies suggest that emotional touch may lead to decreased sensitivity to pain that may be associated with the release of oxytocin. How, then, can the beneficial effect of emotional touch on the perception of pain be provided to individuals who do not have access to it? One way to fill this need may be through a social robot. A social robot may take on a human-like35,36 or a pet-like appearance, or move like one, e.g.37,38. It is designed to create social relationships with people39 for either entertainment40, education41, or for therapeutic purposes42,43. Shibata44 developed a seal-like robot named PARO designed to elicit a feeling of social connection. Interaction with PARO was found to improve mood45,46, and to reduce stress and anxiety of older people, and of individuals with dementia45,46,47, as well as to improve the mood of pediatric patients48. In one study, participants interacted with PARO for the duration of one year, during which the effects on mood were maintained46. In addition, interaction with PARO49 and with a humanoid robot50 was found to reduce stress, anxiety and pain levels during medical procedures (chemotherapy among women and vaccination among children, respectively). However, a recent review concluded that better methodology and measures are needed to draw conclusions about the effect of human-robot social interactions on pain51. Indeed, no controlled studies specifically examined the effect of the robot's touch as opposed to the robot's presence, without any physical contact, on the perception of pain. In addition, no controlled studies have examined the effect of human-robot social interaction on either oxytocin secretion or on experimentally induced pain ratings. The aim of the current experiment was, therefore, to examine the effect of interaction with the social robot PARO on pain perception, emotional state, and salivary oxytocin levels. Specifically, we examined, in a group of men and women: (1) What is the effect of human-robot interaction on the (a) happiness state, (b) salivary oxytocin levels and (c) pain perception? (2) What is the effect of the social robot's touch vs. the social robot's presence on pain perception? (3) Are there correlations between pain perception and (a) the level of salivary oxytocin; and (b) the participant's perception of the interaction with the robot?

Methods

Participants

Eighty-three healthy adults (42 female, 41 male; age: 25.1 ± 2.7 years old (mean ± STD)) were allocated, using computer-generated simple random sampling, into one of two groups: the PARO-Interaction (PARO) group (63 participants, 32 female, 31 male; 25.2 ± 2.4 years old), or the control group (20 participants, 10 female, 10 male; 24.4 ± 2.2 years old). The participants were recruited by advertisements posted throughout the university campus and on social media.
Exclusion criteria were acute or chronic pain, present or previous pathology in the arms (testing site), bruises or any other skin lesions on the arms, diseases causing potential neural damage (e.g., diabetes), systemic and mental illnesses (e.g., anxiety disorders, major depression, bipolar disorder), and communication disabilities. Written informed consent was obtained from all the participants. Written informed consent for publication of identifying images in an online open-access publication was obtained from the persons photographed. The experiment was approved by the institutional review board of Ben-Gurion University. All experimental procedures were performed in accordance with this ethical approval.

Study design

The 63 participants enrolled in the PARO group comprised the main study group. These participants interacted with PARO and were tested before, during and after the interaction in a within-subjects design. In order to rule out any carry-over effects, that is, the effects of repeated pain measurements on participants' pain perception, we included a control group of 20 participants, who did not interact with PARO. The control group's size was informed by52,53,54; the PARO group was larger, to account for the randomized allocation into different experimental sequences within it (see details below). Differences between participants in the PARO and the control groups were calculated as a between-subject analysis.

Equipment

PARO robot

PARO is a therapeutic robot baby harp seal, manufactured by the Intelligent System Research Institute of Japan's National Institute of Advanced Industrial Science and Technology. PARO is intended to have a calming effect and to elicit emotional responses in patients55. It is outfitted with dual 32-bit processors, three microphones, twelve tactile sensors covering its fur, touch-sensitive whiskers, and a system of motors and actuators that move its limbs and body. The robot responds to petting by moving its tail and opening and closing its eyes. It seeks out eye contact and produces sounds similar to a real baby seal55. PARO was classified as a Class 2 medical device by U.S. regulators in 2009, and is completely safe for human interaction56.

Thermal stimulator

Heat stimuli were delivered using a Peltier-based computerized thermal stimulator (TSA II, Medoc Ltd., Ramat-Ishai, Israel), with a 3 × 3 cm contact probe that was attached to the ventral aspect of the non-dominant forearm by means of a Velcro band. The baseline temperature of the stimulator was set to 35 °C for all the tests. The stimulator is accurate to within ±0.3 °C.

Visual analog scale (VAS)

The visual analog scale (VAS) is a form of direct scaling technique, in which line length is the response continuum53. The VAS has been reported as a valid and reliable measure for the intensity of pain53. Pain ratings were recorded by a custom-made application, installed on a mobile device, that digitally records the participants' VAS responses. Sliding the finger on the screen of the mobile device from left to right covers the corresponding portion of the screen in red (see Fig. 1), which, in turn, corresponds to the extent to which the participant experiences the stimulus as painful. The left end of the screen was defined as corresponding to 'no pain sensation', and the right end of the screen corresponded to 'the most intense pain sensation imaginable'. The custom-made application translated the final horizontal finger location to a number on a scale from zero to 10.
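The paper does not give the app's internals, but the slider-to-score mapping it describes is simple to sketch; a minimal illustration (function name and screen-width parameter are my own, purely hypothetical):

```python
def vas_score(final_x: float, screen_width: float) -> float:
    """Map the final horizontal finger position to a 0-10 VAS rating.

    The left screen edge means 'no pain sensation'; the right edge means
    'the most intense pain sensation imaginable'.
    """
    x = min(max(final_x, 0.0), screen_width)  # clamp to the screen
    return 10.0 * x / screen_width

print(vas_score(540.0, 1080.0))  # a finger ending mid-screen -> 5.0
```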
Measurements

Happiness state

Perceived happiness was evaluated using a 10-cm VAS line, printed on a sheet of paper, with 2 anchor points at its extremes, set as "not at all" (= 0) and "the most" (= 10), and participants were asked to mark on that scale, using a pen, how happy they felt. This method was previously found reliable and valid for measuring emotional state, including happiness57.

Empathic concern

Empathic concern was measured by the Empathic Concern subscale of the Interpersonal Reactivity Index (IRI). This is a 7-item questionnaire which assesses "other-oriented" feelings of sympathy and concern for others, found to have good reliability and sensitivity58.

Salivary oxytocin

Saliva samples for oxytocin were collected with Salivettes (Sarstedt, Rommelsdorf, Germany). Participants were asked to place a roll of cotton in their mouths, chew on it for a minute until it became saturated, and place it in a Salivette tube. The samples were stored at −20 °C for approximately a week and then transported to the oxytocin laboratory, where they were stored at −80 °C until they were assayed. Samples were thawed at room temperature for 10 minutes, followed by centrifugation (15 min, 3500 g, 4 °C). Next, 1 ml of saliva was acidified with 1 ml of 0.1% trifluoroacetic acid (TFA) and centrifuged at 17000 ×g for 15 min at 4 °C. C18 Sep-Pak columns (Waters, Ireland) were assembled onto a vacuum manifold system (Waters, Ireland) and equilibrated with 1 ml of acetonitrile. Washing of the columns was performed 5 times using TFA-H2O (in total: 15 ml), followed by applying the supernatant onto the Sep-Pak vacuum manifold system, without vacuum, then an additional wash, as described. Elution of the samples was performed by applying 2 mL of an elution solution (95% acetonitrile, 5% of 0.1% TFA-H2O) onto each column. Following extraction, collection and processing of saliva, human oxytocin concentrations were determined by an Enzyme-Linked Immunosorbent Assay (ELISA), using the Oxytocin ELISA kit (Abcam, Cambridge, UK). The ELISA plate was read at O.D. absorbance of 570 and 590 for oxytocin (ELx808, Bio Tec Industries, VT). All samples were assayed and compared to a standard curve. Saliva concentration of the biomarkers was expressed as pg/ml.

Pain perception

Pain measurements were conducted at three time points during the experiments. The stimuli administered during these measurements were determined as follows:

Calibrating heat-pain intensity

To establish which temperatures elicit in each individual sensations of mild and strong pain, participants received a series of heat stimuli in a set of calibration trials. In each calibration trial, the starting temperature of the stimulator was 35 °C, and it increased at a rate of 2 °C/sec to a target temperature. The first target temperature was 40 °C. The target temperature was held for 6 sec, and participants were asked to rate the pain on the VAS. The temperature then returned to baseline (35 °C) by an active cooling mechanism. Following a 45-sec break, the subsequent trial was initiated. An interstimulus interval of 45 seconds was maintained and the contact probe was moved between stimulations to prevent sensitization. The target temperature was increased by 1 °C in each subsequent calibration trial until the participant reported a value of 6 (out of 10) on the VAS. The temperatures eliciting a value of 4 (mild pain) and a value of 6 (strong pain) on the VAS were documented and used for the rest of the experiment.
Pain measurements

In each of the three pain measurements, the temperatures eliciting a value of 4 and a value of 6 on the VAS were administered for 50 seconds with an inter-stimulus interval of 2 minutes. VAS pain ratings at the end of each stimulus were the outcome measure.

Structured interaction with PARO

During the 10 minutes of interaction with PARO, participants were asked to respond to questions which encouraged them to examine PARO's reactions (for example, indicate PARO's reaction to petting it, to calling it by its name, etc.; see Supplementary Materials S1 for the full questionnaire). The goal of asking participants to fill out this questionnaire was to ensure that they spent the 10-minute session actively engaging with PARO.

Perceptions of the interaction with PARO

PARO's perceived feelings, and participants' feelings during the interaction with PARO, were evaluated using a 12-item custom-made questionnaire, to which participants responded using a 10-cm VAS line with 2 anchor points at its extremes, set to "not at all" (= 0) and "the most" (= 10) (see the full questionnaire in Supplementary Materials S2). The questionnaire was administered to the PARO group at the end of the experiment, at T4.

Procedure

Each participant was invited to a single testing session that lasted approximately 1 hour (see Fig. 2). The participants were instructed to avoid physical exercise and to refrain from smoking, eating or drinking (excluding water) for one hour before testing. Upon arrival, participants were divided semi-randomly into either the PARO or the control group. Testing took place in a quiet room. Temperature in the room was maintained at 25 °C. The participant sat in a comfortable armchair. Five minutes after arrival, the first happiness ratings and salivary oxytocin measurements were obtained (T1), followed by the pain-intensity calibration and the first pain measurement (Baseline; termed S1 in the control group, see Fig. 3A). Immediately after that, the second happiness ratings and salivary oxytocin measurements were obtained (T2). Participants in the PARO group then spent 10 min engaged in one of two activities they were semi-randomly assigned to: half of the participants in this group had a structured interaction with PARO (see below), and half were given an article to read on Maria Mitchell, an American astronomer. In the control group, all participants were given the article on Maria Mitchell to read during that 10-min period. Participants then underwent the third happiness ratings and salivary oxytocin measurements (T3). During T3, PARO was either present in the room (for the half of the PARO group which interacted with it for 10 minutes) or not present (for the half of the PARO group which read the article for 10 minutes). During the structured interaction with PARO, the experimenter introduced PARO to the participant and then left the participant alone in the room with PARO for 10 minutes. During the interaction, the participants completed a questionnaire that included questions about the interaction with PARO in order to ensure an active interaction experience (see section 3.5 above for details). The two subsequent pain measurements (Touch/No-Touch) in the PARO group were conducted while participants were either actively touching PARO (the 'Touch' condition, see Fig. 3C), or while PARO was co-present in the room with them, but with no physical touch between the participant and PARO (the 'No-Touch' condition, see Fig. 3B).
The order in which the Touch and the No-Touch conditions were performed was semi-randomized across participants. The control group underwent the two subsequent measurements of pain intensity (S2 and S3) without ever encountering the PARO robot. Immediately after these, and while the participants were touching PARO, the fourth happiness ratings and salivary oxytocin measurements were obtained (T4). Lastly, the participants completed the IRI questionnaire and rated their perceptions of the interaction with PARO. The experimental protocol was approved by the Ethics Committee of the Ben-Gurion University of the Negev.

Data analysis
Data were analyzed using IBM SPSS statistics software, version 25 (IBM, Armonk, NY, USA). Continuous variables are described as means ± SD. Sample size was calculated using G*Power59: for a sample size of 83 individuals and α = 0.05, the statistical power is 89%. All data underwent Kolmogorov-Smirnov analysis for normality of distribution. Parametric and nonparametric analyses of variance with corrected post hoc tests were used to evaluate the effect of experimental phase (T1/T2/T3/T4) and of group (PARO/Control) on perceived happiness and on oxytocin levels, and the effect of condition (Baseline/Touch/No-Touch in the PARO group and S1/S2/S3 in the control group) on pain ratings. Correlations between pairs of variables were calculated with Pearson's r; p < 0.05 was considered significant. The Bonferroni correction was applied to multiple comparisons, where needed.
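The reported power figure can be reproduced, approximately, outside of G*Power. Below is a minimal sketch in Python using statsmodels; the effect size is an assumption chosen for illustration (the paper reports only the sample size, α, and the resulting power, not the effect size it assumed).

    # Rough reconstruction of the power analysis with statsmodels instead of
    # G*Power. The Cohen's f below is an assumed value, not from the paper.
    from statsmodels.stats.power import FTestAnovaPower

    analysis = FTestAnovaPower()
    power = analysis.power(effect_size=0.35,  # assumed Cohen's f (illustrative)
                           nobs=83,           # total sample size, as reported
                           alpha=0.05,
                           k_groups=2)        # PARO vs. control
    print(f"Achieved power: {power:.2f}")     # close to the reported 89%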
Results

Participants' perceptions of the interaction with PARO

PARO's perceived feelings during the interaction
The participants perceived PARO's feelings during the interaction as happy (6.6 ± 2.2), satisfied (6.3 ± 2.2), wanting to be petted (7.3 ± 2.2) and wanting to communicate (6.8 ± 2.6). Low ratings were given to PARO feeling tired (3.1 ± 2.7), sad (2.0 ± 1.9) and angry (0.9 ± 1.1) (Fig. 4A). These perceptions of PARO's feelings were recorded once, at T4.

Participants' feelings during the interaction with PARO
The participants gave high ratings to feeling good in the presence of PARO (7.6 ± 1.8), to a pleasant sensation while touching PARO (7.7 ± 2.0) and to their willingness to meet PARO again (6.9 ± 2.7). Intermediate ratings were given to the question whether PARO helped to reduce pain (5.2 ± 2.7) and to the question whether they were able to communicate with PARO (4.7 ± 3.0) (Fig. 4B). There were significant correlations between the participants' empathic concern and: (1) their good feelings in the presence of PARO (r = 0.27, p = 0.021) and (2) their pleasant sensation while touching PARO (r = 0.30, p = 0.012).

The effect of the interaction with PARO on the participants' emotional state
A significant main effect of the experimental phase (T1/T2/T3/T4) was found for happiness ratings [F(3,83)=4.84, p < 0.05]. The effect of group (PARO/Control) was not significant [F(1,83)=1.71, p = 0.19]. However, the phase*group interaction was significant [F(3,83)=3.73, p < 0.05]. Post hoc comparisons revealed similar ratings between groups at T1 (5.6 ± 1.9 in PARO and 5.6 ± 1.7 in controls, t(81)=0.62, p = 0.47) and T2 (5.3 ± 2.2 in PARO and 4.8 ± 2.3 in controls, t(81)=0.93, p = 0.18). However, at T3 there was an increase in happiness, compared to T1, in the PARO group (6.3 ± 1.9, t(62)=3.52, p < 0.001) but not in the control group (5.2 ± 2.3, t(19)=1.28, p = 0.20). As noted above, the PARO group included both those who spent the 10 minutes interacting with PARO and those who read the article during the 10-min period. The difference between groups at T3 was significant (t(81)=2.08, p < 0.05). At T4 the happiness ratings remained higher than at T1 in the PARO group (5.9 ± 2.2, t(62)=1.44, p < 0.001) and did not change significantly in the control group (5.0 ± 2.4, t(19)=1.28, p = 0.11). The difference between groups at T4 did not reach significance (p = 0.05, Fig. 5). There were significant correlations between PARO's perceived feelings and the change in happiness from T1 to T4 (see Table 1): the more positive the participants perceived PARO's feelings to be, the happier they reported feeling themselves.

The effect of the interaction with PARO on salivary oxytocin levels
A significant main effect of the experimental phase (T1/T2/T3/T4) was found for oxytocin levels [F(3,82)=6.54, p < 0.01]. The effect of group (PARO/Control) was not significant [F(1,82)=0.49, p = 0.49]. However, the phase*group interaction was significant [F(3,82)=5.16, p < 0.01]. Post hoc comparisons revealed that the levels of oxytocin were similar at T1 in the PARO group (29.0 ± 11.4 pg/ml) and in the control group (28.8 ± 8.0 pg/ml, t(81)=0.72, p = 0.47). In both groups, oxytocin levels did not change at T2. However, in the PARO group, oxytocin levels decreased significantly at T3 to 26.6 ± 8.8 pg/ml (t(61)=2.57, p < 0.01; a decrease of 2.8 ± 8.3 pg/ml) and decreased significantly further at T4, compared to T1, to 23.1 ± 10.2 pg/ml (t(62)=5.73, p < 0.0001; a decrease of 5.9 ± 8.1 pg/ml). In the control group, however, oxytocin levels did not change significantly at T3 (28.4 ± 10.5 pg/ml, t(19)=0.29, p = 0.39) or at T4 (28.5 pg/ml, t(19)=0.26, p = 0.40). The difference between groups in the reduction of oxytocin levels was significant at T3 (t(81)=0.75, p < 0.001) and at T4 (t(81)=2.14, p < 0.05; see Fig. 6). As noted above, the PARO group included both those who spent the 10 minutes interacting with PARO and those who read the article during the 10-min period. There was a significant negative correlation between oxytocin levels at T4 and the participants' willingness to meet PARO again (r = −0.46, p < 0.05).

The effect of the interaction with PARO on pain perception

Mild pain
A significant effect of condition (Baseline/Touch/No-Touch) was found in the PARO group (F(2,62)=4.33, p < 0.05), while no effect of condition (S1/S2/S3) was found in the control group (F(2,20)=1.16, p = 0.33). Post hoc tests revealed that in the PARO group there was a significant decrease in pain ratings from Baseline (1.4 ± 1.6) to the Touch condition (0.8 ± 1.4, t(62)=2.59, p < 0.05). No significant difference from Baseline was found in the No-Touch condition (1.1 ± 1.6, t(61)=1.74, p = 0.87, Fig. 7A). In other words, participants rated their pain sensation as significantly lower when touching the PARO robot, compared to Baseline pain ratings; when the robot was only co-present in the room with them, with no physical contact, their pain ratings were not significantly different from Baseline. The decrease in mild-pain ratings from Baseline to the Touch condition in the PARO group was significantly correlated with the perceived pain-alleviating effect of PARO (r = −0.34, p < 0.005; see Table 2) and with the level of salivary oxytocin at T4 (r = 0.24, p < 0.05).

Strong pain
A significant effect of condition (Baseline/Touch/No-Touch) was found in the PARO group (F(2,62)=17.87, p < 0.0001).
The effect of condition (S1/S2/S3) was also significant in the control group (F(2,20)=7.78, p < 0.01). Post hoc tests revealed that in the PARO group there was a decrease in pain ratings from Baseline (5.1 ± 2.4) both to the No-Touch condition (4.1 ± 2.7, t(62)=3.44, p < 0.01) and to the Touch condition (3.1 ± 2.5, t(61)=6.23, p < 0.0001). The decrease in pain ratings was significantly greater in the Touch condition than in the No-Touch condition (t(61)=2.56, p < 0.01). In the control group, there was also a decrease in pain ratings from S1 (5.7 ± 2.7) to S2 (4.3 ± 3.2, t(19)=3.59, p < 0.05) and to S3 (4.8 ± 3.0, t(19)=2.64, p < 0.05). However, no significant difference was found between S2 and S3 (t(19)=1.51, p = 0.49, Fig. 7B). The extent of the decrease in strong-pain ratings from Baseline to the Touch condition in the PARO group was significantly correlated with the participants' perceived pain-alleviating effect of PARO (r = −0.33, p < 0.005), their positive feelings with respect to PARO (r = −0.31, p < 0.01) and their wish to meet PARO again (r = −0.37, p < 0.005; see Table 2).

High and low communication with PARO
In order to further investigate the effect of the interaction with PARO on emotions and pain perception, we divided the participants in the PARO group into high communicators (HC) and low communicators (LC). The division into the two subgroups was made using the median value of the perceived ability to communicate with PARO (4.7). The mean communication ratings of HC (n = 31) and LC (n = 32) were 7.2 ± 1.9 and 2.1 ± 1.9, respectively (p < 0.0001; Fig. 8A). There was no significant difference between the subgroups in happiness ratings. However, oxytocin levels were lower in HC than in LC. The difference between subgroups was significant at T1 (31.6 ± 13.4 pg/ml in LC and 26.4 ± 8.6 pg/ml in HC; t(61)=1.83, p < 0.05) and at T3 (28.6 ± 10.2 pg/ml in LC and 24.7 ± 7.0 pg/ml in HC; t(61)=1.75, p < 0.05; Fig. 8B). There was no difference between the HC and the LC in mild-pain ratings. However, the decrease in strong-pain ratings from Baseline to the Touch condition was significantly greater in HC (2.5 ± 2.7) than in LC (1.3 ± 1.8, t(61)=0.85, p < 0.05; Fig. 8C).

Discussion
The results revealed that interacting with the baby-seal PARO robot induced an increase in perceived happiness, a decrease in oxytocin levels and a reduction in pain ratings to both mild and strong heat stimuli. Moreover, the reduction in pain ratings was greater when touching the robot than when merely being in its presence. The reduction in pain ratings was correlated with the participants' positive perceptions of the interaction with PARO, and with oxytocin levels. This is the first study, to the best of our knowledge, to demonstrate a decrease in salivary oxytocin during social interaction.

The effect of the interaction with PARO on emotions
Happiness ratings increased significantly after the interaction with PARO. Happiness is considered to be a central human goal across cultures60 and is an important determinant of well-being61. As humans are inherently social, happiness is often related to social interaction. An extensive body of research emphasizes the key role of positive social connections in humans' perceived happiness, satisfaction and stress buffering (e.g.,62,63,64,65). It appears that the effect of social connections on happiness is not exclusive to human-human interactions.
Positive emotions, including happiness, were also found to be associated with interaction with companion animals66,67,68 and, in the past two decades, with interactions with social robots69,70. Although there is broad evidence on the link between human-animal and human-robot social interactions and perceived happiness, stress and well-being, the majority of the studies examined either children, the elderly or hospitalized populations, while only a few focused on healthy adult populations71 (for reviews, see67,72). Moreover, we could find only two controlled studies examining, as we did, the effect of a controlled social interaction on the emotional state. Both of these investigated the effect of interaction with a social entity on the emotional state of children, using a pet dog73 or the PARO robot74, and both showed that the interaction with the social entity increased positive emotions, including happiness. Our current study adds to the existing body of knowledge by demonstrating that interaction with the PARO robot is effective in increasing perceived happiness also in healthy adults. The correlation we found between participants' positive perceptions of PARO's feelings and the increase in happiness further supports this finding.

The effect of the interaction with PARO on oxytocin levels
In this study, oxytocin levels decreased only in the PARO group, while no change in oxytocin levels was found in the control group throughout the experiment. In the PARO group, there was an inverse correlation between oxytocin levels and the sense of connection with PARO: the lower the oxytocin levels at T4, the higher the participant's willingness to meet PARO again. This is the first study to examine endogenous oxytocin levels during human-robot interaction (HRI). Over the last decade, several studies have examined the role of oxytocin in human relations. The strongest relationship was found with stress, with several studies showing that an increase in physiological and psychological stress is associated with an increase in endogenous oxytocin, whether in saliva or plasma75,76,77,78,79. Similarly, removal of a stressor induces a rapid decrease in oxytocin75,76,77. Moreover, a recent meta-analysis concluded that even a novel laboratory context may induce a significant oxytocin increase80. The results of these studies suggest that participants arriving at the novel laboratory setting in the current study may have experienced an increase in oxytocin levels by the time of the first oxytocin measurement (T1). The reduction in oxytocin levels at T3 and T4 in the PARO group may have resulted from participants feeling more at ease due to the interaction with PARO. That is, it appears that the interaction with PARO led to a decrease in stress and an accompanying rapid decrease in oxytocin levels. In contrast, control participants appear to have remained at higher levels of alertness throughout the experimental session, as evidenced by their unchanged salivary oxytocin levels. These findings support previous reports on the role of oxytocin as an important hormone in the stress system, showing a positive association with cortisol80, which is known to respond to social stimuli. Another line of research points to a positive association between oxytocin and social interaction, which at first seems to be at odds with the current findings.
These studies focus on positive interactions with romantic partners or with family members, such as during parent-infant bonding25,81 and romantic relationships20,31,82. However, interactions with non-close others seem to be less effective in stimulating oxytocin release. For example, a study conducted with chimpanzees found that oxytocin elevation was specific to grooming kin or potential mating partners, while no increase in oxytocin was found when grooming chimpanzees with whom there was no strong social bond83. Among humans, Feldman et al.25 demonstrated an increase in both salivary and plasma oxytocin only among mothers displaying high levels of affectionate contact during mother-infant interaction. Thus, it appears that there is a U-shaped relationship between oxytocin secretion, stress, and social bonding. The interaction with PARO appears to have reduced the stress level of participants, leading to a reduction in salivary oxytocin levels, compared to controls, who did not meet PARO and whose salivary oxytocin levels remained constant. Indeed, several studies show that the effect of oxytocin on behavior is context-dependent, and that it may induce, at the same time, bonding and trust toward in-group members, while increasing aggression and mistrust toward out-group members84,85,86,87,88,89,90,91,92 (for a review, see93). For example, administration of nasal oxytocin enhanced cautious behavior and feelings of mistrust during a social dilemma88 and promoted aggressive behavior during a social game89. These results suggest that a decrease in oxytocin levels may facilitate trust and sociability with members of an out-group. Since individuals may identify robots as out-group members94, the observed decrease in oxytocin levels might be related to the participants' inclination to lower their aggression toward the robot and to establish their trust in it. The negative correlation found between oxytocin levels and participants' willingness to meet PARO again further supports this explanation. In the current study, we had nearly equal numbers of males and females in both the PARO and the control groups, and we did not test the effects of gender on oxytocin levels. The observed effect of the interaction with PARO on oxytocin levels thus appears to be in addition to any gender effects, if those exist. As endogenous oxytocin levels appear to depend on a variety of factors95, it would be instructive to test, in a future study, whether gender plays a role in endogenous oxytocin levels when interacting with a social robot.

The effect of the interaction with PARO on pain perception
The results reveal diminished levels of pain during the interaction with PARO, compared to baseline and compared to the control group. The decrease in pain ratings was more pronounced in the Touch condition than in the No-Touch condition. Thus, this study highlights remarkable benefits of human-robot social interaction for pain perception. In accordance with our findings, previous data indicate that interaction with PARO or with a humanoid robot reduces clinical pain among pediatric patients96, cancer patients49 and children undergoing medical procedures50,97. It is important to note that this is the first study to examine the effect of HRI on pain perception among healthy adults. Moreover, it is the first to examine the effect of HRI on pain in a controlled laboratory setting. There are several possible explanations for our findings.
First, our finding that touching PARO had the strongest pain-alleviating effect, compared to its presence in the room without any physical contact and compared to the control condition, in which participants did not meet PARO at all, highlights the effect of social touch on pain alleviation. This is the first study to examine the effect of touching a robot on experimentally induced pain perception. Among humans, however, previous studies have found that holding a partner's hand decreased pain ratings compared to the mere presence of the partner in the room, a stranger's touch or no interaction, and compared to squeezing a ball33,34. In the current study, the participants gave high ratings to their positive feelings towards PARO. Research indeed suggests that social HRI, and particularly touching a robot, induces positive feelings towards it98,99. Thus, we speculate that touching PARO enabled participants to form an emotional connection with it, which led to beneficial outcomes on pain perception similar to those found during a partner's touch33,34. It can also be speculated that the interaction with PARO attenuated pain by promoting relaxation. Indeed, recent evidence suggests that touching a robot can reduce stress99. Moreover, Robinson et al.100 showed that stroking PARO reduced blood pressure and heart rate and was accompanied by feelings of happiness and relaxation. Furthermore, there is evidence that high psychosocial stress enhances pain101,102, and some relaxation techniques were found effective in attenuating pain103,104,105. Taken together, it is possible that the interaction with PARO led to a more relaxed state of mind and thus reduced pain perception. Another possible explanation of our findings is that the interaction with PARO distracted the participants away from the pain. Shifting the focus of attention away from painful stimuli was shown to be efficacious in altering pain perception106,107,108. Thus, it is possible that having a novel stimulus like PARO in the room distracted the participants away from the pain, leading to reduced pain ratings. However, the presence of PARO in the room without any physical contact did not affect mild-pain ratings, and affected strong-pain ratings to a lesser extent than the condition in which participants touched PARO. Notably, PARO was active in the No-Touch condition: participants looked at it and were aware of the sounds and movements it made. Thus, if distraction is at play here, then touching PARO provides a more effective distraction than its mere presence in the room. Furthermore, it is likely that more than mere distraction is at play here, as evidenced by the significantly more pronounced effect that the interaction with PARO had on pain perception in the high-communicators group, suggesting that the social aspect of the interaction played a role in modulating pain perception. One may also speculate that the effect of touching PARO's fur on pain perception stems from the tactile stimulation of touching a soft object. It was previously demonstrated that tactile stimulation can decrease nociceptive input109,110. This effect is attributed to the capability of sensory fibers to suppress the transmission of nociceptive input111,112. However, this analgesic effect strongly depends on the relative spatial location of the tactile and nociceptive stimuli within the same dermatome: in general, the closer the nociceptive and tactile stimuli, the more powerful the analgesia109,110.
In the current study, participants touched PARO in an area remote from the nociceptive stimuli (the arm they used for petting PARO was the opposite arm from the one on which the heat stimuli were applied); thus, this explanation is less probable. Willemse & van Erp99 found that touching a robot is effective in increasing the perceived intimacy with the robot and in reducing the physiological stress response, and that this effect does not depend on a prior session of interaction with the robot. Thus, it is likely that, in our experiment, the social touch with PARO induced an effect on both emotions and pain perception regardless of the presence of a preliminary bonding session. This is clinically important, since using a robot in the clinical field to improve positive emotions and reduce pain does not seem to require prior acquaintance, and hence is easier to implement. Moreover, the effect of touching PARO on pain reduction was demonstrated for both mild and strong pain intensities. This finding further illustrates the clinical potential of human-robot social touch for pain management. The control group also experienced a reduction in pain ratings, compared to baseline levels, after reading the article. However, this reduction was significantly smaller than the reduction in pain perception in the PARO group; it can be explained by regression to the mean113, or by the article inducing a certain level of relaxation and hence somewhat reducing pain ratings. We further found positive correlations between the empathic concern scores and the participants' positive perceptions of the interaction with PARO. It has been shown that activation of brain networks involved in the perception of empathy is associated with both pain and social touch24,114. This suggests an interesting basis for future exploration of the connection between the level of empathic concern and the pain-alleviating effect of touch in human-robot social interactions. It is possible that high empathic ability enables participants to form a positive social relationship with PARO, which would amplify the pain-alleviating effect of touching it. This research direction would dovetail with a recent study showing that the empathic abilities of the partner predict the magnitude of pain reduction during touch between partners33. Further exploring interpersonal traits, the division of the PARO group according to the perceived ability to communicate with PARO revealed that participants classified as "high communicators" exhibited greater pain reduction, as well as lower oxytocin levels, compared to "low communicators". These results, along with the significant correlations between participants' perceptions of the interaction with PARO and the change in pain ratings, demonstrate that the effect of touching PARO on pain perception largely depends on the participant's ability to form a social connection with PARO. The ability to communicate was found to contribute to the extraversion personality trait115, and it was further shown that a short human-robot social interaction can predict extraversion in a way comparable to the predictive power of human-human interactions116. Other studies demonstrated that highly extraverted people exhibit higher affective trust117 and obtain greater benefits from social connection64,117, particularly from social touch117. Our findings thus add to the current literature in demonstrating that high communicators reap greater benefits from the interaction with PARO.
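The median-split analysis described in this last paragraph is straightforward to express in code. The sketch below uses Python/pandas with randomly generated stand-in data; the column names and values are illustrative assumptions, not the study's data.

    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 63  # PARO-group size in the paper

    # Hypothetical stand-in data; column names are assumptions for illustration.
    df = pd.DataFrame({
        "communication": rng.uniform(0, 10, n),    # perceived communication (VAS)
        "pain_baseline": rng.normal(5.1, 2.4, n),  # strong pain at Baseline
        "pain_touch":    rng.normal(3.1, 2.5, n),  # strong pain in Touch condition
    })

    # Median split into high (HC) and low (LC) communicators, mirroring the
    # split at the median rating (4.7) reported in the paper.
    median = df["communication"].median()
    hc = df[df["communication"] > median]
    lc = df[df["communication"] <= median]

    # Decrease in strong-pain ratings from Baseline to Touch, per subgroup,
    # compared with an independent-samples t-test.
    hc_drop = hc["pain_baseline"] - hc["pain_touch"]
    lc_drop = lc["pain_baseline"] - lc["pain_touch"]
    t, p = stats.ttest_ind(hc_drop, lc_drop)
    print(f"HC mean drop {hc_drop.mean():.1f}, LC mean drop {lc_drop.mean():.1f}, "
          f"t = {t:.2f}, p = {p:.3f}")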
In summary, this study indicates that social touch with the PARO robot alleviates pain, increases perceived happiness and decreases oxytocin levels. Participants with a higher perceived ability to communicate with PARO displayed greater pain alleviation, as well as lower oxytocin levels. These findings reveal a profound effect of human-robot social interaction on pain and emotions, extend the current knowledge on the impact of social touch on pain and emotions, and offer new strategies for pain management and for improving well-being.

Data availability
The datasets analyzed during the current study are available from the corresponding author on reasonable request.

References
1. Berscheid, E. The human's greatest strength: Other humans. 37–47 (American Psychological Association, 2003).
2. Frank, D. A., Klass, P. E., Earls, F. & Eisenberg, L. Infants and young children in orphanages: One view from pediatrics and child psychiatry. Pediatrics 97, 569–578 (1996).
3. Rutter, M. Maternal deprivation. Handbook of Parenting Volume 4: Social Conditions and Applied Parenting, 181 (2002).
4. Rutter, M. & O'Connor, T. G. Are there biological programming effects for psychological development? Findings from a study of Romanian adoptees. Developmental psychology 40, 81 (2004).
5. Thompson, R. A. Social support and child protection: Lessons learned and learning. Child Abuse & Neglect 41, 19–29 (2015).
6. Holt-Lunstad, J. Why social relationships are important for physical health: A systems approach to understanding and modifying risk and protection. Annual review of psychology 69, 437–458 (2018).
7. Holt-Lunstad, J., Robles, T. F. & Sbarra, D. A. Advancing social connection as a public health priority in the United States. American Psychologist 72, 517 (2017).
8. DeWall, C. N. & Bushman, B. J. Social acceptance and rejection: The sweet and the bitter. Current Directions in Psychological Science 20, 256–260 (2011).
9. Cohen, S. Social relationships and health. American psychologist 59, 676 (2004).
10. Umberson, D. & Karas Montez, J. Social relationships and health: A flashpoint for health policy. Journal of health and social behavior 51, S54–S66 (2010).
11. Ciechanowski, P., Sullivan, M., Jensen, M., Romano, J. & Summers, H. The relationship of attachment style to depression, catastrophizing and health care utilization in patients with chronic pain. Pain 104, 627–637 (2003).
12. Galovski, T. & Lyons, J. A. Psychological sequelae of combat violence: A review of the impact of PTSD on the veteran's family and possible interventions. Aggression and violent behavior 9, 477–501 (2004).
13. Forgeron, P. A. et al. Social functioning and peer relationships in children and adolescents with chronic pain: A systematic review. Pain Research and Management 15, 27–41 (2010).
14. Rintala, D. H., Hart, K. A. & Priebe, M. M. Predicting consistency of pain over a 10-year period in persons with spinal cord injury. Journal of Rehabilitation Research & Development 41 (2004).
15. Åslund, C., Larm, P., Starrin, B. & Nilsson, K. W. The buffering effect of tangible social support on financial stress: influence on psychological well-being and psychosomatic symptoms in a large sample of the adult general population. International journal for equity in health 13, 85 (2014).
16. Divney, A. A. et al. Depression during pregnancy among young couples: the effect of personal and partner experiences of stressors and the buffering effects of social relationships.
Journal of pediatric and adolescent gynecology 25, 201–207 (2012).
17. Koopman, C., Hermanson, K., Diamond, S., Angell, K. & Spiegel, D. Social support, life stress, pain and emotional adjustment to advanced breast cancer. Psycho-Oncology: Journal of the Psychological, Social and Behavioral Dimensions of Cancer 7, 101–111 (1998).
18. Turner, J. B. & Turner, R. J. In Handbook of the sociology of mental health 341–356 (Springer, 2013).
19. Gliga, T., Farroni, T. & Cascio, C. J. Social touch: A new vista for developmental cognitive neuroscience? Developmental cognitive neuroscience 35, 1 (2019).
20. Schneiderman, I., Zagoory-Sharon, O., Leckman, J. F. & Feldman, R. Oxytocin during the initial stages of romantic attachment: relations to couples' interactive reciprocity. Psychoneuroendocrinology 37, 1277–1285 (2012).
21. Prescott, T. J., Diamond, M. E. & Wing, A. M. Active touch sensing. Philosophical Transactions of the Royal Society B 366, 2989–2995 (2011).
22. Coan, J. A., Schaefer, H. S. & Davidson, R. J. Lending a hand: Social regulation of the neural response to threat. Psychological science 17, 1032–1039 (2006).
23. Cohen, S., Janicki-Deverts, D., Turner, R. B. & Doyle, W. J. Does hugging provide stress-buffering social support? A study of susceptibility to upper respiratory infection and illness. Psychological science 26, 135–147 (2015).
24. Bufalari, I. & Ionta, S. The social and personality neuroscience of empathy for pain and touch. Frontiers in human neuroscience 7, 393 (2013).
25. Feldman, R., Gordon, I., Schneiderman, I., Weisman, O. & Zagoory-Sharon, O. Natural variations in maternal and paternal care are associated with systematic changes in oxytocin following parent–infant contact. Psychoneuroendocrinology 35, 1133–1141 (2010).
26. Barrett, C., Arambula, S. & Young, L. The oxytocin system promotes resilience to the effects of neonatal isolation on adult social attachment in female prairie voles. Translational psychiatry 5, e606 (2015).
27. Carter, C. S. Neuroendocrine perspectives on social attachment and love. Psychoneuroendocrinology 23, 779–818 (1998).
28. Moberg, K. U. & Moberg, K. The oxytocin factor: Tapping the hormone of calm, love, and healing. (Da Capo Press, 2003).
29. Uvnäs-Moberg, K., Arn, I. & Magnusson, D. The psychobiology of emotion: the role of the oxytocinergic system. International journal of behavioral medicine 12, 59–65 (2005).
30. Agren, G., Lundeberg, T., Uvnäs-Moberg, K. & Sato, A. The oxytocin antagonist 1-deamino-2-D-Tyr-(Oet)-4-Thr-8-Orn-oxytocin reverses the increase in the withdrawal response latency to thermal, but not mechanical nociceptive stimuli following oxytocin administration or massage-like stroking in rats. Neuroscience letters 187, 49–52 (1995).
31. Kreuder, A. K. et al. Oxytocin enhances the pain-relieving effects of social support in romantic couples. Human brain mapping 40, 242–251 (2019).
32. Krahé, C., Drabek, M. M., Paloyelis, Y. & Fotopoulou, A. Affective touch and attachment style modulate pain: a laser-evoked potentials study. Philosophical Transactions of the Royal Society B: Biological Sciences 371, 20160009 (2016).
33. Goldstein, P., Weissman-Fogel, I. & Shamay-Tsoory, S. G. The role of touch in regulating inter-partner physiological coupling during empathy for pain. Scientific reports 7, 3252 (2017).
34. Master, S. L. et al. A picture's worth: Partner photographs reduce experimentally induced pain. Psychological Science 20, 1316–1318 (2009).
35. Feingold Polak, R. et al. Differences between young and old users when interacting with a humanoid robot: a qualitative usability study. Paladyn, Journal of Behavioral Robotics 9, 183–192 (2018).
36. Feingold Polak, R. & Levy-Tzedek, S. A Social Robot for Rehabilitation: Expert Clinicians and Post-Stroke Patients' Evaluation Following a Long-Term Intervention. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 151–160.
37. Eizicovits, D., Edan, Y., Tabak, I. & Levy-Tzedek, S. Robotic gaming prototype for upper limb exercise: Effects of age and embodiment on user preferences and movement. Restorative neurology and neuroscience 36, 261–274 (2018).
38. Kashi, S. & Levy-Tzedek, S. Smooth leader or sharp follower? Playing the mirror game with a robot. Restorative neurology and neuroscience 36, 147–159 (2018).
39. Li, J. & Chignell, M. Communication of emotion in social robots through simple head and arm movements. International Journal of Social Robotics 3, 125–142 (2011).
40. Hoffman, G. & Weinberg, G. Interactive improvisation with a robotic marimba player. Autonomous Robots 31, 133–153 (2011).
41. Clabaugh, C., Tsiakas, K. & Mataric, M. In Proceedings of the Synergies between Learning and Interaction Workshop, IROS, Vancouver, BC, Canada, 24–28.
42. Kellmeyer, P., Mueller, O., Feingold-Polak, R. & Levy-Tzedek, S. Social robots in rehabilitation: A question of trust. Sci. Robot. 3, eaat1587 (2018).
43. Langer, A., Feingold-Polak, R., Mueller, O., Kellmeyer, P. & Levy-Tzedek, S. Trust in Socially Assistive Robots: Considerations for use in Rehabilitation. Neuroscience & Biobehavioral Reviews 104, 231–239 (2019).
44. Shibata, T. Integration of therapeutic robot, Paro, into welfare systems. Proceedings of the 28th Annual European Conference on Cognitive Ergonomics 3 (2010).
45. Moyle, W. et al. Social robots helping people with dementia: Assessing efficacy of social robots in the nursing home environment. 6th International Conference on Human System Interactions (HSI), 608–613 (2013).
46. Wada, K., Shibata, T., Saito, T., Sakamoto, K. & Tanie, K. Psychological and social effects of one year robot assisted activity on elderly people at a health service facility for the aged. Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2785–2790 (2005).
47. Wada, K. & Shibata, T. Living with seal robots—its sociopsychological and physiological influences on the elderly at a care house. IEEE Transactions on Robotics 23, 972–980 (2007).
48. Shibata, T. et al. Mental commit robot and its application to therapy of children. IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 1053–1058 (2001).
49. Eskander, R., Tewari, K., Osann, K. & Shibata, T. Pilot study of the PARO therapeutic robot demonstrates decreased pain, fatigue, and anxiety among patients with recurrent ovarian carcinoma. Gynecologic Oncology 130, e144–e145 (2013).
50. Beran, T. N., Ramirez-Serrano, A., Vanderkooi, O. G. & Kuhn, S. Humanoid robotics in health care: An exploration of children's and parents' emotional reactions. Journal of health psychology 20, 984–989 (2015).
51. Trost, M. J., Ford, A. R., Kysh, L., Gold, J. I. & Matarić, M. Socially Assistive Robots for Helping Pediatric Distress and Pain. The Clinical Journal of Pain 35, 451–458 (2019).
52. Koyama, Y., Koyama, T., Kroncke, A. P. & Coghill, R. C. Effects of stimulus duration on heat induced pain: the relationship between real-time and post-stimulus pain ratings. Pain 107, 256–266 (2004).
53. Price, D. D., McGrath, P. A., Rafii, A. & Buckingham, B. The validation of visual analogue scales as ratio scale measures for chronic and experimental pain. Pain 17, 45–56 (1983).
54. Sue Carter, C. et al. Oxytocin: Behavioral Associations and Potential as a Salivary Biomarker. Annals of the New York Academy of Sciences 1098, 312–322 (2007).
55. Wada, K., Shibata, T., Saito, T. & Tanie, K. Effects of robot assisted activity for elderly people at day service center and analysis of its factors. Proceedings of the 4th World Congress on Intelligent Control and Automation, 1301–1305 (2002).
56. Calo, C. J., Hunt-Bull, N., Lewis, L. & Metzler, T. Ethical implications of using the Paro robot, with a focus on dementia patient care. Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence (2011).
57. Monk, T. H. A visual analogue scale technique to measure global vigor and affect. Psychiatry research 27, 89–99 (1989).
58. Davis, M. H. Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of personality and social psychology 44, 113 (1983).
59. Faul, F., Erdfelder, E., Lang, A.-G. & Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior research methods 39, 175–191 (2007).
60. Veenhoven, R. Informed pursuit of happiness: What we should know, do know and can get to know. Journal of Happiness Studies 16, 1035–1071 (2015).
61. Steptoe, A. Happiness and health. Annual review of public health 40, 339–359 (2019).
62. Hooghe, M. & Vanhoutte, B. Subjective well-being and social capital in Belgian communities. The impact of community characteristics on subjective well-being indicators in Belgium. Social Indicators Research 100, 17–36 (2011).
63. Hsu, H.-C. & Chang, W.-C. Social connections and happiness among the elder population of Taiwan. Aging & mental health 19, 1131–1137 (2015).
64. Oerlemans, W. G., Bakker, A. B. & Veenhoven, R. Finding the key to happy aging: A day reconstruction study of happiness. Journals of Gerontology Series B: Psychological Sciences and Social Sciences 66, 665–674 (2011).
65. Requena, F. Welfare systems, support networks and subjective well-being among retired persons. Social Indicators Research 99, 511–529 (2010).
66. Bao, K. J. & Schreer, G. Pets and happiness: Examining the association between pet ownership and wellbeing. Anthrozoös 29, 283–296 (2016).
67. Beetz, A., Uvnäs-Moberg, K., Julius, H. & Kotrschal, K. Psychosocial and psychophysiological effects of human-animal interactions: the possible role of oxytocin. Frontiers in psychology 3, 234 (2012).
68. Walsh, F. Human-animal bonds I: The relational significance of companion animals. Family process 48, 462–480 (2009).
69. Góngora Alonso, S. et al. Social Robots for People with Aging and Dementia: A Systematic Review of Literature. Telemedicine and e-Health 25, 533–540 (2018).
70. Moerman, C. J., van der Heide, L. & Heerink, M. Social robots to support children's well-being under medical treatment: A systematic state-of-the-art review. Journal of Child Health Care 23, 596–612 (2019).
71. Fiocco, A. & Hunse, A. The buffer effect of therapy dog exposure on stress reactivity in undergraduate students.
International journal of environmental research and public health 14, 707 (2017).
72. Broadbent, E. Interactions with robots: The truths we reveal about ourselves. Annual review of psychology 68, 627–652 (2017).
73. Kerns, K. A., Stuart-Parrigon, K. L., Coifman, K. G., van Dulmen, M. H. & Koehn, A. Pet dogs: Does their presence influence preadolescents' emotional responses to a social stressor? Social Development 27, 34–44 (2018).
74. Crossman, M. K., Kazdin, A. E. & Kitt, E. R. The influence of a socially assistive robot on mood, anxiety, and arousal in children. Professional Psychology: Research and Practice 49, 48 (2018).
75. Bernhard, A. et al. Adolescent oxytocin response to stress and its behavioral and endocrine correlates. Hormones and behavior 105, 157–165 (2018).
76. de Jong, T. R. et al. Salivary oxytocin concentrations in response to running, sexual self-stimulation, breastfeeding and the TSST: The Regensburg Oxytocin Challenge (ROC) study. Psychoneuroendocrinology 62, 381–388 (2015).
77. Jurek, B. & Neumann, I. D. The oxytocin receptor: from intracellular signaling to behavior. Physiological reviews 98, 1805–1908 (2018).
78. Pierrehumbert, B. et al. Oxytocin response to an experimental psychosocial challenge in adults exposed to traumatic experiences during childhood or adolescence. Neuroscience 166, 168–177 (2010).
79. Taylor, S. E. et al. Relation of oxytocin to psychological stress responses and hypothalamic-pituitary-adrenocortical axis activity in older women. Psychosomatic medicine 68, 238–245 (2006).
80. Brown, C. A., Cardoso, C. & Ellenbogen, M. A. A meta-analytic review of the correlation between peripheral oxytocin and cortisol concentrations. Frontiers in neuroendocrinology 43, 19–27 (2016).
81. Feldman, R., Gordon, I. & Zagoory-Sharon, O. Maternal and paternal plasma, salivary, and urinary oxytocin and parent–infant synchrony: considering stress and affiliation components of human bonding. Developmental science 14, 752–761 (2011).
82. Schneiderman, I., Kanat-Maymon, Y., Zagoory-Sharon, O. & Feldman, R. Mutual influences between partners' hormones shape conflict dialog and relationship duration at the initiation of romantic love. Social Neuroscience 9, 337–351 (2014).
83. Crockford, C. et al. Urinary oxytocin and social bonding in related and unrelated wild chimpanzees. Proceedings of the Royal Society B: Biological Sciences 280, 20122765 (2013).
84. Campbell, A. Attachment, aggression and affiliation: the role of oxytocin in female social behavior. Biological psychology 77, 1–10 (2008).
85. Campbell, P., Ophir, A. G. & Phelps, S. M. Central vasopressin and oxytocin receptor distributions in two species of singing mice. Journal of Comparative Neurology 516, 321–333 (2009).
86. De Dreu, C. K. et al. The neuropeptide oxytocin regulates parochial altruism in intergroup conflict among humans. Science 328, 1408–1411 (2010).
87. Dębiec, J. Peptides of love and fear: vasopressin and oxytocin modulate the integration of information in the amygdala. Bioessays 27, 869–873 (2005).
88. Declerck, C. H., Boone, C. & Kiyonari, T. The effect of oxytocin on cooperation in a prisoner's dilemma depends on the social context and a person's social value orientation. Social cognitive and affective neuroscience 9, 802–809 (2013).
89. Ne'eman, R., Perach-Barzilay, N., Fischer-Shofty, M., Atias, A. & Shamay-Tsoory, S. Intranasal administration of oxytocin increases human aggressive behavior.
Hormones and behavior 80, 125–131 (2016).
90. Pedersen, C. A. Biological aspects of social bonding and the roots of human violence. Annals of the New York Academy of Sciences 1036, 106–127 (2004).
91. Pfundmair, M., Reinelt, A., DeWall, C. N. & Feldmann, L. Oxytocin strengthens the link between provocation and aggression among low anxiety people. Psychoneuroendocrinology 93, 124–132 (2018).
92. Romney, C., Hahn-Holbrook, J., Norman, G. J., Moore, A. & Holt-Lunstad, J. Where is the love? A double-blind, randomized study of the effects of intranasal oxytocin on stress regulation and aggression. International journal of psychophysiology 136, 15–21 (2019).
93. Shamay-Tsoory, S. G. & Abu-Akel, A. The social salience hypothesis of oxytocin. Biological psychiatry 79, 194–202 (2016).
94. Edwards, C., Edwards, A., Stoll, B., Lin, X. & Massey, N. Evaluations of an artificial intelligence instructor's voice: Social Identity Theory in human-robot interactions. Computers in Human Behavior 90, 357–362 (2019).
95. Quintana, D. S. & Guastella, A. An Allostatic Theory of Oxytocin Signaling. (2019).
96. Okita, S. Y. Self–Other's Perspective Taking: The Use of Therapeutic Robot Companions as Social Agents for Reducing Pain and Anxiety in Pediatric Patients. Cyberpsychology, Behavior, and Social Networking 16, 436–441 (2013).
97. Stinson, J., Jibb, L., Nathan, P., Beran, T. & Hum, V. Using a humanoid robot to reduce procedural pain in children with cancer: a pilot randomized controlled trial. Pediatric Blood & Cancer, S54–S55 (Wiley-Blackwell, NJ, USA).
98. Nie, J., Park, M., Marin, A. L. & Sundar, S. S. Can you hold my hand? Physical warmth in human-robot interaction. 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 201–202 (2012).
99. Willemse, C. J. & van Erp, J. B. Social Touch in Human–Robot Interaction: Robot-Initiated Touches can Induce Positive Responses without Extensive Prior Bonding. International journal of social robotics, 1–20 (2018).
100. Robinson, H., MacDonald, B. & Broadbent, E. Physiological effects of a companion robot on blood pressure of older people in residential care facility: A pilot study. Australasian journal on ageing 34, 27–32 (2015).
101. Geva, N., Pruessner, J. & Defrin, R. Acute psychosocial stress reduces pain modulation capabilities in healthy men. PAIN 155, 2418–2425 (2014).
102. Geva, N., Pruessner, J. & Defrin, R. Triathletes lose their advantageous pain modulation under acute psychosocial stress. Medicine & Science in Sports & Exercise 49, 333–341 (2017).
103. Dunford, E. & Thompson, M. Relaxation and mindfulness in pain: A review. Reviews in pain 4, 18–22 (2010).
104. Kwekkeboom, K. L. & Gretarsdottir, E. Systematic review of relaxation interventions for pain. Journal of nursing scholarship 38, 269–277 (2006).
105. Smith, C. A. et al. Relaxation techniques for pain management in labour. Cochrane Database of Systematic Reviews 3 (2018).
106. Chayadi, E. & McConnell, B. L. Gaining insights on the influence of attention, anxiety, and anticipation on pain perception. Journal of Pain Research 12, 851 (2019).
107. DeMore, M. & Cohen, L. L. Distraction for pediatric immunization pain: A critical review. Journal of Clinical Psychology in Medical Settings 12, 281–291 (2005).
108. Johnson, M. H., Breakwell, G., Douglas, W. & Humphries, S. The effects of imagery and sensory detection distractors on different measures of pain: how does distraction work?
British Journal of Clinical Psychology 37, 141–154 (1998).
109. Mancini, F., Beaumont, A.-L., Hu, L., Haggard, P. & Iannetti, G. D. Touch inhibits subcortical and cortical nociceptive responses. Pain 156, 1936 (2015).
110. Mancini, F., Nash, T., Iannetti, G. D. & Haggard, P. Pain relief by touch: a quantitative approach. PAIN 155, 635–642 (2014).
111. Bourne, S., Machado, A. G. & Nagel, S. J. Basic anatomy and physiology of pain pathways. Neurosurgery Clinics 25, 629–638 (2014).
112. Melzack, R. From the gate to the neuromatrix. Pain 82, S121–S126 (1999).
113. Barnett, A. G., Van Der Pols, J. C. & Dobson, A. J. Regression to the mean: what it is and how to deal with it. International journal of epidemiology 34, 215–220 (2004).
114. De Vignemont, F. & Singer, T. The empathic brain: how, when and why? Trends in cognitive sciences 10, 435–441 (2006).
115. Akert, R. M. & Panter, A. T. Extraversion and the ability to decode nonverbal communication. Personality and Individual Differences 9, 965–972 (1988).
116. Rahbar, F. et al. In International Conference on Social Robotics, 543–553 (Springer).
117. Erk, S. M., Toet, A. & Van Erp, J. B. Effects of mediated social touch on affective experiences and trust. PeerJ 3, e1297 (2015).

Acknowledgements
The authors would like to thank Alona Kuzmina for analyzing the salivary oxytocin samples, and Ariel Bistritsky for developing the mobile VAS application. The research was partially supported by the Helmsley Charitable Trust through the Agricultural, Biological and Cognitive Robotics Initiative and by the Marcus Endowment Fund, both at the Ben-Gurion University of the Negev. Financial support was provided by the Rosetrees Trust, the Borten Family Foundation, and the Consolidated Anti-Aging Foundation grants. This research was also supported by the Israel Science Foundation (grants No. 535/16 and 2166/16), the Israel Pain Association, and received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 754340.

Author information
Authors' contributions: N.G. and S.L. designed the experiment. N.G. carried out the experiment and analyzed the data. F.U. performed the lab analysis of the oxytocin samples and commented on the manuscript. N.G. and S.L. interpreted the results and wrote the manuscript. S.L. supervised the project.

Corresponding author
Correspondence to Shelly Levy-Tzedek.

Ethics declarations
Competing interests: The authors have no competing interests as defined by Nature Research, or other interests that might be perceived to influence the results and/or discussion reported in this paper.

Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
Geva, N., Uzefovsky, F. & Levy-Tzedek, S. Touching the social robot PARO reduces pain perception and salivary oxytocin levels. Sci Rep 10, 9814 (2020). https://doi.org/10.1038/s41598-020-66982-y
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4026913642883301, "perplexity": 7663.983610989814}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053717.37/warc/CC-MAIN-20210916174455-20210916204455-00027.warc.gz"}
https://mathematica.stackexchange.com/questions/123353/solving-equation-of-motion-for-position-dependent-force
# Solving equation of motion for position dependent force

Consider the functions

U[x_] = 1/x^2 - 2/x;
F[x_] = -1*U'[x];

But here comes the tricky part: Write down the object's equation of motion, i.e., Newton's second law, in the form of a differential equation involving x[t] and its derivatives. To solve this equation numerically we use NDSolve. Complete the code for NDSolve below. Let the particle begin from rest at x[0] = x0 = 0.55. It has a mass of m = 0.01.

m = 0.01;
x0 = 0.55;
v0 = 0;
x[t_] := (F[x]/(2*m))*t^2 + x'[t]*t + x0
ClearAll[x];
solution = NDSolve[{x[t] == x'[t] + x0, x'[0] = 0 }, x, {t, 0, 18}];
x = x /. solution[[1]];

• So you define x[t_] := (F[x]/(2*m))*t^2 + x'[t]*t + x0 and then you ClearAll[x]? This clears your definition, or am I lost? – mattiav27 Aug 7 '16 at 11:52
• Also, if this is homework, you should add the appropriate tag... – mattiav27 Aug 7 '16 at 11:53
• Cool. Not homework. I have free time and am trying to learn Mathematica. Found this physics problem but it seems a bit hard for me. Didn't realize ClearAll[x] does this. Just saw it being used... – Dylan Solms Aug 7 '16 at 12:03
• You obviously don't want to define x before you solve for it anyway. Your equation is second order, F = m x''[t]; hope that helps. (I assume F is force, though you don't actually say that.) I'll add that your attempt looks like you are trying to make use of some constant-acceleration expressions. Your acceleration is most definitely not constant; you can't use those. – george2079 Aug 7 '16 at 12:56
• So, you are trying to solve a differential equation where a particle undergoes a position-dependent force? – Feyre Aug 7 '16 at 12:59

Set the force:

u = x[t]^-2 - 2/x[t];
f = -D[u, x[t]];

Initial conditions:

{m, x0, v0} = {0.01, 0.55, 0};

Solve and plot:

s = NDSolve[{x''[t] == f/m, x[0] == x0, x'[0] == v0}, x[t], {t, 0, 3}];
Plot[x[t] /. s, {t, 0, 3}]

You can find the period with a FindMinimum[]:

FindMinimum[x[t] /. s, {t, 2.3}]

{0.55, {t -> 2.33751}}

Note that this is the same as the initial condition; the velocity is near 0:

D[x[t] /. s, t] /. %[[2, 1]]

-9.42075*10^-6

As @michael-e2 suggested, the total energy is negative. Plotting the total energy over three periods reveals a small numerical error, but the energy is clearly conserved:

Plot[(0.5 m D[x[t] /. s, t]^2 + u /. s) /. t -> ti, {ti, 0, 3 t /. %5[[2]]}, PlotRange -> {-0.33058, -0.330578}, Ticks -> {Automatic, {-0.330579, -0.330578}}]

And for the fun:

b = Table[VectorPlot3D[{0, 0, 2/x^3 - 2/x^2}, {a, -1, 1}, {b, -1, 1}, {x, i, 6}, VectorPoints -> {3, 3, 9}], {i, -0.5, 1, 1.5}];
Manipulate[Show[{Graphics3D[{Sphere[{0, 0, Evaluate[x[t] /. s][[1]]}, 0.4]}, PlotRange -> {{-1, 1}, {-1, 1}, {-0, 6}}] /. t -> tl}[[1]], b], {tl, 0, 2.3, 0.05}]

• What is the meaning of the arrows on the last plot? Is it possible to make them 3D, to be consistent with the 3D ball? – yarchik Aug 7 '16 at 16:01
• @yarchik I added a VectorPlot3D of the force; those are the default arrows. I could, but I threw away the code and it's dinner time. Feel free to change the gif if you feel like it. – Feyre Aug 7 '16 at 17:23
• @Feyre I think you should include the code for the fun part. Pretty please. – dearN Aug 7 '16 at 21:15
• @drN Because you asked so nicely. – Feyre Aug 8 '16 at 8:39

The plan here is to set up, step by step, the problem in terms of Hamiltonian mechanics.
The principal ideas are that the Hamiltonian $H$ represents the energy of the system $$H = T + U \quad \text{(kinetic + potential energy)}$$ and that the differential equations governing the motion are derived from $H$ as follows: $$dp/dt = - \partial H/\partial x, \quad dx/dt = \partial H/\partial p$$ where $x$ and $p$ are the position and momentum variables respectively. In the problem at hand the kinetic energy is given by $T = p^2/(2m)$. One feature of Hamiltonian systems is that $H$ is invariant. It is a so-called "first integral" (which just means it is constant as the system evolves). When the system comes from a physical system, the invariance of $H$ is equivalent to the conservation of energy. Consequently, the solution to an initial value problem $x(t_0)=x_0,\ p(t_0)=p_0$ traces the curve $$H(x,p) = H_0 \buildrel {\rm DEF} \over = H(x_0,p_0)$$ in phase space. If the curve is closed, then the solution must be periodic (assuming $H$ is nice enough).

U[x_] := x^-2 - 2/x; (* normally single/initial capital letters are *)
H[x_, p_] := p^2/(2 m) + U[x]; (* to be avoided, but I'm going to break that rule *)
Plot[U[x], {x, 0, 7}, GridLines -> {None, {H[0.55, 0.]}}]

An initial condition that gives a negative value for $H_0$ leads to a periodic solution, because the total energy $H=p^2/(2m)+U$ cannot be less than the potential energy $U$:

H[0.55, 0.] (* -0.330579 *)

Block[{m = 0.01}, ContourPlot[H[x, p] == H[0.55, 0.], {x, 0, 6}, {p, -0.2, 0.2}, FrameLabel -> Automatic, PlotLabel -> HoldForm[H == Style[H0, Italic]]]]

One can calculate the period by integrating $dt$ over the curve, where in the Hamiltonian system we have $$dt = {dx \over dx/dt} = {1 \over \partial H/\partial p} \; dx = {m \over p}\;dx$$ In our case, if $\gamma$ is the curve $H = H_0$, we have $$\int_\gamma {1 \over \partial H / \partial p} \; dx = 2 \int_{x_{\rm min}}^{x_{\rm max}} {m \over \sqrt{2 m \, (H_0 - U(x))}} \; dx$$ If we solve the differential equation over one period, then we can extend the solution to all time by periodicity. All this can be done in Mathematica.

• Input: u, m, x0, v0 or p0. I Rationalize[] the numeric parameters because I used some exact solvers (Solve[], Integrate[]). Numeric solvers NSolve[], NIntegrate[] could be substituted.
• Construct the Hamiltonian ham.
• Set up the IVP system (ODE & ICs). I found it convenient to solve these equations for their principal variables. This lets us substitute the "right-hand sides" of the system into expressions like the Hamiltonian and the period integral.
• Calculate the invariant value of the Hamiltonian ham0 ($H_0$).
• Solve ham0 - u == 0 for the range of $x$, {xmin, xmax}. For a general solver, this step would need some fixing: it's possible that there is more than one loop, that the curve is unbounded, or that it consists of a single point.
• Construct dt in terms of x.
• Integrate dt to get period.
• Solve the IVP with NDSolve[] over {t, 0, period}.
• The periodic solution can be coded as x[Mod[t, period]] /. sol. One potential issue is that the period will be perturbed by the numeric method in NDSolve[]; it's not a problem in this case.

I've commented out some localizations so one can examine what the results are after executing. One can localize some or all of them as desired. One could make a function that would take u, m, x0, v0 as inputs, provided one decides some user-interface questions.
Module[{m, x0, v0, u, f(*,rhsSub,xmin,xmax,sys,ham,h0,period,dt,x,p*)}, (* Inputs *) u = U[x[t]]; (* potential (given) *) {m, x0, v0} = Rationalize@{0.01, 0.55, 0}; (* parameters/conditions (given) *) (* Set-up *) ham = u + p[t]^2/(2 m); (* Hamiltonian *) f = -D[u, x[t]]; (* force -- unnecessary *) sys = { (* ivp system *) p'[t] == -D[ham, x[t]], (* force does appear here, too *) x'[t] == D[ham, p[t]], x[0] == x0, p[0] == m*v0}; (* initial conditions *) rhsSub = First@Solve[sys, {x'[t], p'[t], x[0], p[0]}]; (* solution rules for sys *) ham0 = ham /. t -> 0 /. rhsSub; (* initial value of the Hamiltonian *) {xmin, xmax} = x /. Solve[ (* range of x on (closed) trajectory in phase space *) ham0 - u == 0 /. x[t] -> x]; dt = (* differential of time *) 1/x'[t](* times dx*)/. rhsSub /. {p[t] -> Sqrt[2 m (ham0 - u)]}; period = 2 Integrate[dt /. x[t] -> x, {x, xmin, xmax}]; {ndsol} = NDSolve[sys, {x, p}, {t, 0, period} (*opts*), InterpolationOrder -> All] (* Use InterpolationOrder -> All if you want more accuracy in the InterpolatingFunction[] *between* the steps *) ] A plot over three periods: Plot[x[Mod[t, period]] /. ndsol, {t, 0, 3 period}] Another view of the evolution of the system: the position X (x[Mod[t, period]]), a plot of the potential energy U at time t (U[x[Mod[t, period]]]), and a plot of the point {x[Mod[t, period]], p[Mod[t, period]]} tracing out the integral curve H == H0 (ham == ham0). Uncompress@"1:eJzlV81uEzEQTlrSQqnopRckDvAGvEEIgbaREoiyqeCIs+vNWvWuF9urJu2Vl4A34MIrcEaCC2/QR+CKkMBjZ+PsZjdBIfxE+DCxxzPj8cw3s869Aev51UqlIvYVOWHUa7Iwplhifwu41xR5RmRgV20ipFltK+JgKWrqN4gpm7Bh+ZAy98zYLdaBrZBfHr47vrp4Vfd3Ye+mIk0WSZbwbsba45cJomYJeieajjSNZ9j8zWsYV3VemQx/e9YDo0Zgg+wW78YTGx/ep5O6vXkvoVjcUJNWiIbYIReYfAUzWYH9VKCLPI9Ew6LIpStyG3zZyhjR/APg1+aNw+kQmx6KhniRZXuPTx9h3HlQtl9yT0hGQ8TYVWdJwmwy+iTEglSnvgGvy84xJ6BLvqsxD49Ew0Nn+RbII45CLDlxbaKzWTLYAXI6YRSkkn97++VzZ9Ctm3TOh+qYE69NIixyR8DkCYtwEULLoLTu8PxhRNXWjyit20gkC1UIXCO9p0gHRSROKEp7yI6ua5qEUTaJPgiL6zpPKA6IK3JJhlUPeyZL2lFGIglh0mFtIz6cpBDy/2gcoVD5UbVhV9LWrR6OKXJxg9LcNadQG1ngdpinWVJ7H6v0MU93Dr0rGM2FESaNERZ60ucJ/r1hntqFex5BMZUdfJAK9Il7Jhw5pguPt+F8GiOXyLG+dAk3W1Sm7yN1ynSW82Uv48tyNwDLzQBxqUD/wnERxV6ZuoZRy8ORVN5lFus0tVE+/gPNZYUuCRgp65JQQG00wNTEC64NL5YjxkNdrM/9nVTHCdj5zJdnKr8ZnWbm6/fTOiWNaZUGsbYCPpxH9N+vi42q3WWQP/Vrc5APNhvy8Uo4/49h/KvuLXxgLAFh7k+aeT1bpskDLE/u6xy2JKLEdUDsbsGTXs48LOx3QstZBPwAu4ZrPw==" Here is a check on how the Hamiltonian (energy) is conserved. (This is the reason for InterpolationOrder -> All. Without it, there would be greater interpolation error between the steps.) Plot[ham - ham0 /. t -> Mod[t, period] /. ndsol // RealExponent // Evaluate, {t, 0, 2 period}, PlotPoints -> 201, PlotRange -> {-16, 0}, GridLines -> {None, {-7, -8}}] Other available tools 1. When a system has an invariant, one can use the projection method in NDSolve[]. In our case the option code would be something like either of these: Method -> {"Projection", "Invariants" -> {ham}} Method -> {"Projection", "Invariants" -> {ham}, Method -> "ExplicitRungeKutta"} After each step, the solution is adjusted by projecting it onto curve ham == <constant>, where the constant is calculated by NDSolve[] during initialization (it will be equal to ham0). The first one actually has a couple of glitches; one can try another method, as shown in the second option. The plot of the invariant for "ExplicitRungeKutta" is shown below. It is quite a bit better. Plot[ham - ham0 /. t -> Mod[t, period] /. 
Plot[ham - ham0 /. t -> Mod[t, period] /. ndsol // RealExponent // Evaluate,
 {t, 0, period}, PlotPoints -> 201, PlotRange -> {-18, 0},
 GridLines -> {None, {-7, -8}}]

2. Another method, designed specifically for Hamiltonian systems, is "SymplecticPartitionedRungeKutta". The option would look something like this:

Method -> {"SymplecticPartitionedRungeKutta", "DifferenceOrder" -> 10,
  "PositionVariables" -> {x[t]}}

The one requirement is that the Hamiltonian be separable, that is, of the form $H(p,q,t) = T(p) + V(q,t)$. There is a bug, unfortunately still present in V10.4.1, for which a fix can be found in my answer here: SymplecticPartitionedRungeKutta shows strange error. Using the function cs[] found in it, we can invoke the "SymplecticPartitionedRungeKutta" method as follows. It turns out that if the default StartingStepSize is reduced to period/1000, we get excellent accuracy.

Block[{NDSolve`SPRKDump`CheckSeparability = cs},
 Module[{m, x0, v0, u, f (* ... *)},
  (* ...same as before... *)
  {ndsol} = NDSolve[sys, {x, p}, {t, 0, period},
    Method -> {"SymplecticPartitionedRungeKutta", "DifferenceOrder" -> 10,
      "PositionVariables" -> {x[t]}},
    InterpolationOrder -> All (*, StartingStepSize -> period/1000*)]
  ]]

Plot[ham - ham0 /. t -> Mod[t, period] /. ndsol // RealExponent // Evaluate,
 {t, 0, period}, PlotPoints -> 201, PlotRange -> {-18, 0},
 GridLines -> {None, {-7, -8}}]

• Nice summary of what works and doesn't work in the Hamiltonian formulation - probably overkill for this question, though... I felt tempted to close it, but now I can't bring myself to vote for that. – Jens Aug 8 '16 at 3:23
• I learned a lot. +1 – LLlAMnYP Aug 8 '16 at 8:53
• "x[t] and its derivatives" I take that to mean standard classical mechanics. Still, such a long post makes me feel rather outdone. – Feyre Aug 8 '16 at 11:51
• @Feyre, Jens, et al. -- Feyre's original bouncing ball is what inspired me. And I've been wanting to work up a Hamiltonian mechanics example for class, and this seemed like a good choice. It's like a Lennard-Jones potential, which I might use instead. Why not share my explorations, even if they are overkill? (Sometimes, I don't have the time.) Also, the OP's "I have free time and am trying to learn Mathematica" was an inspiration, too, more so than "x[t] and its derivatives", obviously. :) – Michael E2 Aug 8 '16 at 16:13
• @MichaelE2 I agree, we have to be able to enjoy coding here. – Feyre Aug 8 '16 at 17:27
http://www.acmerblog.com/hdu-4218-imba-7227.html
2015 05-23

# IMBA?

As a kind problem setter, I should warn you, as you may have already noticed from the sample output: it's a horrible problem and very IMBA (imbalanced).

Today we learn the circle. A circle is a simple shape of Euclidean geometry consisting of those points in a plane that are at a given distance from a given point, the center. The distance between any of the points and the center is called the radius. Circles are simple closed curves which divide the plane into two regions: an interior and an exterior. In everyday use, the term "circle" may be used interchangeably to refer to either the boundary of the figure, or to the whole figure including its interior; in strict technical usage, the circle is the former and the latter is called a disk. A circle can be defined as the curve traced out by a point that moves so that its distance from a given point is constant. A circle may also be defined as a special ellipse in which the two foci are coincident and the eccentricity is 0. Circles are conic sections attained when a right circular cone is intersected by a plane perpendicular to the axis of the cone.

Now let's draw a circle with your program. Given a radius R, we draw a circle in a (2*R + 1) * (2*R + 1) rectangle, with the center of the circle at (R, R) (0-based). For every point, if the square root of the difference between the square of its distance to the center and the square of the radius is less than 1.732050807568877293527446341505872366942805253810380628055806979451933016908800037081146186757248575675626141415406703029969945094998952478811655512094373648528093231902305582067974820101084674923265015312343266903322886650672254668921837971227047131660367861588019049986537379859389467650347506576051 (which is just $\sqrt 3$ written to absurd precision), draw a star ('*'); otherwise draw a blank. Refer to the output for more details. To avoid Presentation Error, you should output exactly the same characters in each row of one test case. The sample output is not completely standard because of some non-shown blanks; take care.

The first line contains a single integer T, indicating the number of test cases. Each test case contains one integer R.

Technical Specification

1. 1 <= T <= 10
2. 3 <= R <= 18

Sample input:

2
10
13

Sample output (blanks not fully shown by this page):

Case 1: *** * * * * * * * * * * * * * * * * * * * * * * * * * * ***
Case 2: *** * * * * * * * * * * * * * * * * * * * * * * ***

Hint

If you can't output the sample output, or you get a Wrong Answer and then find your program didn't output as the sample, please don't ask me why or talk to your teammate ("IS iSea a SX? Obviously wrong sample!"); think, and think again.

#include <stdio.h>
#include <string.h>
#include <math.h>
#include <iostream>
#include <algorithm>
using namespace std;

int main() {
    int T, r;
    cin >> T;
    for (int t = 1; t <= T; t++) {
        cin >> r;
        cout << "Case " << t << ":" << endl;
        // scan the (2r+1) x (2r+1) grid centered at (r, r)
        for (int i = 0; i < 2*r + 1; i++) {
            for (int j = 0; j < 2*r + 1; j++) {
                // draw a star wherever |dist^2 - r^2| is small enough
                if (abs(r*r - ((i-r)*(i-r) + (j-r)*(j-r))) <= 3)
                    cout << "*";
                else
                    cout << " ";
            }
            cout << endl;
        }
    }
    return 0;
}
https://www.vedantu.com/question-answer/distinguish-between-a-sigma-and-pi-bond-class-11-chemistry-cbse-5f62644f01faef2daa4e25d0
# Question

Distinguish between a sigma and a pi bond.

| Sigma bond | Pi bond |
|---|---|
| It is formed by the axial (head-on) overlapping of orbitals of the reacting atoms. | It is formed by the lateral (side-by-side) overlapping of orbitals of the reacting atoms. |
| The orbitals which undergo overlapping can be both hybrid, both pure, or one hybrid and one pure orbital. | The orbitals which undergo overlapping are pure orbitals, i.e. they are not hybridised. |
| Its existence does not depend on any other bond. | Its existence depends on a sigma bond. |
| Free rotation of the atoms about the bond can take place. | It does not allow free rotation of the atoms. |
| A sigma bond is much stronger than a pi bond. | A pi bond is weaker than a sigma bond. |
| It is denoted by the symbol $\sigma$. | It is denoted by the symbol $\pi$. |
| Sigma bonds are comparatively less reactive than pi bonds. | Pi bonds are comparatively more reactive than sigma bonds. |
| It helps determine the shape of a molecule. | It cannot determine the shape of the molecule. |
| The sigma bond is also known as a localised bond. | The pi bond is also known as a delocalised bond. |
https://meangreenmath.com/2021/09/
# Thoughts on Numerical Integration (Part 7): Implementation with Excel

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:

• Why is numerical integration necessary in the first place?
• Where do these formulas come from (especially Simpson's Rule)?
• How can I do all of these formulas quickly?
• Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
• Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
• Is there a reason why Simpson's Rule converges like the fourth power of the number of subintervals?

In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn't yet taken real analysis or numerical analysis.

In the previous post in this series, I discussed three different ways of numerically approximating the definite integral $\displaystyle \int_a^b f(x) \, dx$, the area under a curve $f(x)$ between $x=a$ and $x=b$. In this series, we'll choose equal-sized subintervals of the interval $[a,b]$. If $h = (b-a)/n$ is the width of each subinterval so that $x_k = x_0 + kh$, then the integral may be approximated as

$\int_a^b f(x) \, dx \approx h \left[f(x_0) + f(x_1) + \dots + f(x_{n-1}) \right] \equiv L_n$

using left endpoints,

$\int_a^b f(x) \, dx \approx h \left[f(x_1) + f(x_2) + \dots + f(x_n) \right] \equiv R_n$

using right endpoints, and

$\int_a^b f(x) \, dx \approx h \left[f(c_1) + f(c_2) + \dots + f(c_n) \right] \equiv M_n$

using the midpoints of the subintervals. We have also derived the Trapezoid Rule

$\int_a^b f(x) \, dx \approx \displaystyle \frac{h}{2} [f(x_0) + 2f(x_1) + \dots + 2f(x_{n-1}) + f(x_n)] \equiv T_n$

and Simpson's Rule (if $n$ is even)

$\int_a^b f(x) \, dx \approx \displaystyle \frac{h}{3} \left[y_0 + 4 y_1 + 2 y_2 + 4 y_3 + \dots + 2y_{n-2} + 4 y_{n-1} + y_{n} \right] \equiv S_n$.

Computing any of the above formulas on a hand-held calculator can tax the patience of even the most error-conscious student. Indeed, I prefer that my students, when first learning these concepts, use a spreadsheet instead of a calculator or even a computer program, as I think that the visual layout of the spreadsheet aids in understanding how the formula works. In what follows, I implement the above formulas for the integral $\displaystyle \int_1^2 x^9 \, dx$ using $n=10$ subintervals, so that $h = (2-1)/10 = 0.1$.

To implement the left-endpoint rule, I enter the labels "x" and "x^9" in cells A1 and B1 of a spreadsheet. I then enter 1 (the left endpoint) in cell A2. In cell A3, I enter "=A2+0.1", instructing the spreadsheet to add 0.1 to the value in cell A2. Then, instead of typing all of the other values of $x_k$, I use the fill-down feature to repeat this pattern for cells A3 through A11.

In cell B2, I enter "=A2^9", applying the function $f(x) = x^9$ to the $x$-coordinate in cell A2. Again, I use the fill-down feature to repeat this pattern for cells B3-B11. The fill-down feature saves a lot of time!

Finally, in cell B13, I enter "=0.1*SUM(B2:B11)", adding the values in cells B2 through B11 and multiplying the sum by $h$. The result, $78.6581$, is the approximation using the left-endpoint rule with 10 subintervals.
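For students who would rather script these rules than build a spreadsheet, here is a minimal Python sketch of the five formulas above, applied to the same integral (whose exact value is $102.3$). The function names are mine, not the blog's; the two assertions check identities established in Parts 4 and 6 of this series, which appear further down this page: $T_n = (L_n+R_n)/2$ and $S_{2n} = \frac{2}{3}M_n + \frac{1}{3}T_n$.

# Left/right/midpoint/trapezoid/Simpson rules for f over [a,b] with n subintervals
def left(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + k*h) for k in range(n))

def right(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + k*h) for k in range(1, n + 1))

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5)*h) for k in range(n))

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return (h/2) * (f(a) + 2*sum(f(a + k*h) for k in range(1, n)) + f(b))

def simpson(f, a, b, n):  # n must be even
    h = (b - a) / n
    return (h/3) * (f(a) + f(b)
                    + 4*sum(f(a + k*h) for k in range(1, n, 2))
                    + 2*sum(f(a + k*h) for k in range(2, n, 2)))

f = lambda x: x**9
print(left(f, 1, 2, 10))   # ~78.6581, matching the spreadsheet above
# T_n is the average of L_n and R_n (Part 4 below):
assert abs(trapezoid(f, 1, 2, 10) - (left(f, 1, 2, 10) + right(f, 1, 2, 10))/2) < 1e-9
# S_{2n} = (2/3) M_n + (1/3) T_n (Part 6 below):
assert abs(simpson(f, 1, 2, 20) - (2*midpoint(f, 1, 2, 10) + trapezoid(f, 1, 2, 10))/3) < 1e-9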
Once this is done, the right-endpoint rule can be obtained almost for free. The only change is to change the value of cell A2 from 1 to 1.1. Everything else should automatically update.

The midpoint rule is also obtained quickly, by changing the value of cell A2 from 1 to 1.05, the midpoint of the first subinterval $[1,1.1]$.

Implementing the Trapezoid Rule requires a little more work. We reset the value of A2 back to 1, the value of the left endpoint. We also fill down the pattern one extra row (in this case, row 12). To implement the Trapezoid Rule, we have to multiply all function values (except for those at the endpoints) by 2. To implement this, I introduce column C. These weights can be typed by hand, but again the fill-down feature can speed things up. Then, in column D, I multiply the values in columns B and C. For example, the result in cell D2 is obtained by typing "=B2*C2". Once again, the fill-down feature is used for all rows. Finally, the approximation itself is obtained by typing "=0.1/2*SUM(D2:D12)" in cell D13.

After implementing the Trapezoid Rule, Simpson's Rule is not much more effort. The biggest change is the alternating weights, so that the endpoints have weight 1 while the others oscillate between 4 and 2, ending on 4 at the second-to-last value of $x$. Again, these could be typed by hand, but it's easiest to enter 4 in cell C3, 2 in cell C4, and then "=C3" in cell C5. The fill-down feature can take care of the rest of the weights. The Simpson's Rule approximation is obtained by typing "=0.1/3*SUM(D2:D12)" in cell D13, with a new denominator of 3.

# Engaging students: Computing trigonometric functions using a unit circle

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission comes from my former student Alizee Garcia. Her topic, from Precalculus: computing trigonometric functions using a unit circle.

How can this topic be used in your students' future courses in mathematics or science?

Being able to compute trig functions using a unit circle will be the base of knowledge for all further calculus classes, as well as others. Being able to understand and use a unit circle will also allow students to start to memorize the trigonometric functions. One of the most important skills carried from pre-calculus into all other calculus classes is being able to evaluate trig functions, and having the unit circle memorized makes that much easier. Although there are trig functions and values outside of the unit circle, the unit circle is almost like the foundation for trigonometry. Most, if not all, calculus classes after pre-calculus will expect students to have the unit circle memorized. Although the values can be found with a calculator, knowing the unit circle allows equations and problems to be solved more easily, with less thought. Even outside of calculus classes, the unit circle is one of many important aspects in math classes.

How does this topic extend what your students should have learned in previous courses?
Before students learn how to compute trigonometric functions using a unit circle, they learn about the trig functions by themselves. This usually starts in high school geometry, where students learn sine, cosine, and tangent, yet they do not use them in the way a unit circle does. Most schools only teach the students how to use the calculator to compute the functions to solve for sides or angles of triangles. As students enter pre-calculus, they use what they have learned about the trig functions in order to apply them to the unit circle. This allows students to see that trig functions can still be used to solve triangles, but that they can also be used to solve many other things. Once they learn the unit circle, they will see more examples in which they will apply the functions and make connections to real-world scenarios.

How can technology (YouTube, Khan Academy [khanacademy.org], Vi Hart, Geometers Sketchpad, graphing calculators, etc.) be used to effectively engage students with this topic?

There are probably many simulations and websites that can help students compute trig functions using the unit circle, but I think something that will engage the students is a Kahoot or Quizziz that will help them memorize the unit circle. Giving students an opportunity to apply what they learned in a friendly competition not only gives them practice but also keeps them engaged. Other technology resources, such as videos or a website that teaches the lesson, do not really let students apply what they know; they are just being lectured. Although some websites and technology can be useful, I personally enjoy giving students the opportunity to work out problems as well as to be engaged. Also, calculators could be helpful for checking answers, but they might not be necessary when students have the unit circle in front of them.

# Thoughts on Numerical Integration (Part 6): Connection between Simpson's Rule, Trapezoid Rule, and Midpoint Rule

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:

• Why is numerical integration necessary in the first place?
• Where do these formulas come from (especially Simpson's Rule)?
• How can I do all of these formulas quickly?
• Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
• Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
• Is there a reason why Simpson's Rule converges like the fourth power of the number of subintervals?

In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn't yet taken real analysis or numerical analysis.

In the previous post in this series, I discussed three different ways of numerically approximating the definite integral $\displaystyle \int_a^b f(x) \, dx$, the area under a curve $f(x)$ between $x=a$ and $x=b$. In this series, we'll choose equal-sized subintervals of the interval $[a,b]$.
If $h = (b-a)/n$ is the width of each subinterval so that $x_k = x_0 + kh$, then the integral may be approximated as

$\int_a^b f(x) \, dx \approx h \left[f(x_0) + f(x_1) + \dots + f(x_{n-1}) \right] \equiv L_n$

using left endpoints,

$\int_a^b f(x) \, dx \approx h \left[f(x_1) + f(x_2) + \dots + f(x_n) \right] \equiv R_n$

using right endpoints, and

$\int_a^b f(x) \, dx \approx h \left[f(c_1) + f(c_2) + \dots + f(c_n) \right] \equiv M_n$

using the midpoints of the subintervals. We have also derived the Trapezoid Rule

$\int_a^b f(x) \, dx \approx \displaystyle \frac{h}{2} [f(x_0) + 2f(x_1) + \dots + 2f(x_{n-1}) + f(x_n)] \equiv T_n$

and Simpson's Rule (if $n$ is even)

$\int_a^b f(x) \, dx \approx \displaystyle \frac{h}{3} \left[y_0 + 4 y_1 + 2 y_2 + 4 y_3 + \dots + 2y_{n-2} + 4 y_{n-1} + y_{n} \right] \equiv S_n$.

There is a somewhat surprising connection between the last three formulas. Let's divide the interval $[a,b]$ into $2n$ subintervals with $h = (b-a)/(2n)$ and $x_0 = a$, $x_1 = x_0 + h$, $x_2 = x_0 + 2h$, and so on. Then Simpson's Rule becomes

$S_{2n} = \displaystyle \frac{h}{3} \left[y_0 + 4 y_1 + 2 y_2 + 4 y_3 + \dots + 2y_{2n-2} + 4 y_{2n-1} + y_{2n} \right]$.

Next, let's divide the interval $[a,b]$ into $n$ subintervals, but let's not redefine the values of $h$ and the $x_k$. Instead, the width of each subinterval will be $(b-a)/n$, which is equal to $2h$. (In other words, since there are half as many subintervals, each one is twice as long.) Also, the endpoints of these subintervals will be $x_0 = a$, $x_2 = x_0 + 2h$, $x_4 = x_0 + 4h$, and so on. So, keeping the same labeling convention as with Simpson's Rule, the Trapezoid Rule becomes

$T_n = \displaystyle \frac{2h}{2} [f(x_0) + 2f(x_2) + 2f(x_4) + \dots + 2f(x_{2n-2}) + f(x_{2n})]$

$= h [f(x_0) + 2f(x_2) + 2f(x_4) + \dots + 2f(x_{2n-2}) + f(x_{2n})]$.

(Again, the width of the subintervals in this case is $2h$, where $h = (b-a)/2n$.) Furthermore, the midpoint of subinterval $[x_0, x_2]$ will be $x_1$, the midpoint of subinterval $[x_2,x_4]$ will be $x_3$, and so on. Therefore, keeping the same labeling convention, the Midpoint Rule becomes

$M_n = \displaystyle 2h [f(x_1) + f(x_3) + f(x_5) + \dots + f(x_{2n-1}) ]$.

It turns out that $\displaystyle \frac{2}{3} M_n + \frac{1}{3} T_n$, a certain weighted average of $T_n$ and $M_n$, is equal to

$\displaystyle \frac{4h}{3} [f(x_1) + f(x_3) + \dots + f(x_{2n-1}) ] + \frac{h}{3} [f(x_0) + 2f(x_2) + \dots + 2f(x_{2n-2}) + f(x_{2n})]$

$= \displaystyle \frac{h}{3} [f(x_0) + 4 f(x_1) + 2f(x_2) + \dots + 2f(x_{2n-2}) + 4 f(x_{2n-1}) + f(x_{2n})]$

$= S_{2n}$.

So, if the Midpoint Rule and the Trapezoid Rule have already been computed for $n$ subintervals, then Simpson's Rule for $2n$ subintervals can be computed with almost no additional effort.

# Engaging students: Using a recursively defined sequence

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission comes from my former student Enrique Alegria. His topic, from Precalculus: using a recursively defined sequence.
How can this topic be used in your students' future courses in mathematics or science?

Recursion is heavily emphasized within the branches of computer science. The technique can be used for much more than finding the next term of an arithmetic or geometric sequence. Within computer science, recursion techniques can be utilized for sorting algorithms, so the content will transfer easily. Instead of using the previous term to find the current term, a sorting algorithm chunks a set of numbers into smaller and smaller sets in such a way that the original set of numbers becomes sorted.

We can take a deeper look at Merge Sort, which is a recursive sorting algorithm. The set of numbers repeatedly gets cut in half until each list contains only one element. From there the elements are merged back together in increasing order; arriving back at the original size of the list, with all of the elements contained, the final output is the list in increasing order. Students can inspect the algorithm visually and need not understand the code implementation to comprehend how recursion functions: guide the students towards the smallest part of the process, the single element, and from there to the rearranging of the elements of the list.

How has this topic appeared in high culture (art, classical music, theatre, etc.)?

Recursively defined sequences influenced the renowned artist M.C. Escher. The concept of a sequence beginning at one point and continuing infinitely is how Escher exhibits recursion. Escher challenges the viewer of his work to determine the patterns in the artistic series. For example, when observing the piece Drawing Hands, a student can predict what the 'base case' of the artwork would be, followed by the next steps of the drawing. The spectator of this piece can break it apart into smaller and smaller partitions of the whole, and once they reach a starting point, they can put together the whole picture once again. Similarly, students can view the piece titled Two Birds to follow the patterns. Without saying the name of the piece, students can again predict the base case and determine how recursion techniques would be used for this sequence. Students can begin to learn how recursively defined sequences are applied through visual representations of M.C. Escher's artwork.

How can technology be used to effectively engage students with this topic?

Technology can be used to effectively engage students with recursion by showcasing the YouTube video "Recursion: The Music Videos of Michel Gondry" by Polyphonic. Through this video, students can compare recursively defined sequences to music they listen to. The video starts with singular notes, which are then repeated to create a rhythm, compiling the initial sounds into something familiar through loops of samples and sound bites. The video then shows how the repetitive patterns of these small chunks of sound are given visual representations in the music videos of Michel Gondry. In the music video "Star Guitar" by The Chemical Brothers, the video starts off with the listener on a train ride going through a landscape. Slowly, patterns emerge as buildings uniquely correspond to the notes and rhythms within the song. With this YouTube video, students get a great introduction to recursion and hopefully continue to find patterns of recursion in the music they listen to in the future.

References

Greenberg I., Xu D., Kumar D. (2013). Drawing with Recursion. In: Processing.
Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4302-4465-3_8

Miller, B., & Ranum, D. (2020). 6.11. The Merge Sort — Problem Solving with Algorithms and Data Structures. Runestone.academy. https://runestone.academy/runestone/books/published/pythonds/SortSearch/TheMergeSort.html

# Thoughts on Numerical Integration (Part 5): Derivation of Simpson's Rule

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:

• Why is numerical integration necessary in the first place?
• Where do these formulas come from (especially Simpson's Rule)?
• How can I do all of these formulas quickly?
• Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
• Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
• Is there a reason why Simpson's Rule converges like the fourth power of the number of subintervals?

In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn't yet taken real analysis or numerical analysis.

In the previous post in this series, I discussed three different ways of numerically approximating the definite integral $\displaystyle \int_a^b f(x) \, dx$, the area under a curve $f(x)$ between $x=a$ and $x=b$. In this series, we'll choose equal-sized subintervals of the interval $[a,b]$. If $h = (b-a)/n$ is the width of each subinterval so that $x_k = x_0 + kh$, then the integral may be approximated as

$\int_a^b f(x) \, dx \approx h \left[f(x_0) + f(x_1) + \dots + f(x_{n-1}) \right] \equiv L_n$

using left endpoints,

$\int_a^b f(x) \, dx \approx h \left[f(x_1) + f(x_2) + \dots + f(x_n) \right] \equiv R_n$

using right endpoints, and

$\int_a^b f(x) \, dx \approx h \left[f(c_1) + f(c_2) + \dots + f(c_n) \right] \equiv M_n$

using the midpoints of the subintervals. We have also derived the Trapezoid Rule:

$\int_a^b f(x) \, dx \approx \displaystyle \frac{h}{2} [f(x_0) + 2f(x_1) + \dots + 2f(x_{n-1}) + f(x_n)] \equiv T_n$

This last approximation was obtained by connecting adjacent points on the curve by line segments, creating trapezoids. In this post, we will derive Simpson's Rule. Instead of connecting two adjacent points with line segments, we will connect three adjacent points with a parabola. In the picture below, the points $(x_0, f(x_0))$, $(x_1, f(x_1))$ and $(x_2,f(x_2))$ are connected with one parabola, while the points $(x_2, f(x_2))$, $(x_3, f(x_3))$ and $(x_4,f(x_4))$ are connected with a second parabola. Clearly, for this to work, there has to be an even number of subintervals. (By contrast, for the Trapezoid Rule, the Midpoint Rule, or the endpoint rules, the number of subintervals could be even or odd.)

The derivation of Simpson's Rule is more complicated than the derivation of the Trapezoid Rule because we need to use calculus to find the area under these parabolas. To begin, we make the simplifying assumption that $x_1 = 0$. Since each subinterval has width $h$, this means that $x_0 = -h$ and $x_2 = h$. To find the area under this parabola, we first need to find the equation of the parabola $y = ax^2 + bx + c$ connecting the three points $(-h,y_0)$, $(0,y_1)$, and $(h,y_2)$.
This entails solving a system of three equations in three unknowns:

$a(-h)^2 + b(-h) + c = y_0$
$a(0)^2 + b(0) + c = y_1$
$ah^2 + bh + c = y_2$,

or

$ah^2 - bh + c = y_0$
$c = y_1$
$ah^2 + bh + c = y_2$.

While most 3×3 systems are cumbersome to solve, this system is straightforward. Clearly, $c = y_1$. Also, subtracting the first equation from the third equation yields $2bh = y_2 - y_0$, or

$b = \displaystyle \frac{y_2 - y_0}{2h}$.

Finally, we solve for $a$ by substituting into the third equation:

$ah^2 + \displaystyle \frac{y_2 - y_0}{2h} h + y_1 = y_2$
$ah^2 + \displaystyle \frac{y_2 - y_0}{2} + y_1 = y_2$
$ah^2 = \displaystyle \frac{y_0 - y_2}{2} - \frac{2y_1}{2} + \frac{2y_2}{2}$
$ah^2 = \displaystyle \frac{y_0 - 2y_1 + y_2}{2}$
$a = \displaystyle \frac{y_0 - 2y_1 + y_2}{2h^2}$.

Next, we find the integral of $y = ax^2 + bx + c$ between $x = -h$ and $x = h$:

$\displaystyle \int_{-h}^h (ax^2 + bx + c) \, dx = \left[ \frac{ax^3}{3} + \frac{bx^2}{2} + cx \right]^h_{-h}$
$= \displaystyle \left[ \frac{ah^3}{3} + \frac{bh^2}{2} + ch \right] - \left[ -\frac{ah^3}{3} + \frac{bh^2}{2} - ch \right]$
$= \displaystyle \frac{2ah^3}{3} + 2ch$
$= \displaystyle \frac{(y_0 - 2y_1 + y_2)h}{3} + 2y_1h$
$= \displaystyle \frac{h(y_0 + 4y_1 + y_2)}{3}$.

We now turn to the more general case of finding the area under the parabola passing through $(x_0,y_0)$, $(x_1,y_1)$, and $(x_2,y_2)$, where $x_1 = x_0 + h$ and $x_2 = x_1 + h$. Geometrically, it should be clear that this parabola can be obtained from the above parabola by a horizontal translation. Since the area under the curve is not changed by a horizontal translation, the area (and the formula) will be the same. More formally, if $y = ax^2 + bx + c$ passes through the points $(-h,y_0)$, $(0,y_1)$, and $(h,y_2)$, then $y = a(x-x_1)^2 + b(x-x_1) + c$ will pass through the points $(x_0,y_0)$, $(x_1,y_1)$, and $(x_2,y_2)$. The area under this curve is

$\displaystyle \int_{x_0}^{x_2} \left[ a(x-x_1)^2 + b(x-x_1) + c \right] \, dx$.

After using the substitution $u = x-x_1$, this becomes

$\displaystyle \int_{-h}^h (au^2 + bu + c) \, du$,

which is the same integral that we saw earlier. Therefore,

$\displaystyle \int_{x_0}^{x_2} \left[ a(x-x_1)^2 + b(x-x_1) + c \right] \, dx = \displaystyle \frac{h(y_0 + 4y_1 + y_2)}{3}$.

Finally, we need to find the sum of the areas under all of these parabolas. Similarly, the area under the parabola passing through $(x_2,y_2)$, $(x_3,y_3)$, and $(x_4,y_4)$ will be

$\displaystyle \frac{h(y_2 + 4y_3 + y_4)}{3}$.

So, for the particular example shown above, the total area under the parabolas will be

$\displaystyle \frac{h(y_0 + 4y_1 + y_2)}{3} + \frac{h(y_2 + 4y_3 + y_4)}{3} = \frac{h}{3} (y_0 + 4 y_1 + 2 y_2 + 4 y_3 + y_4)$.

The coefficients of 4 arose from the above integrals, while the coefficient of 2 came from combining the two areas. In general, if there are $n$ subintervals and $n$ is even, then Simpson's Rule gives the approximation

$S_n = \displaystyle \frac{h}{3} \left(y_0 + 4 y_1 + 2 y_2 + 4 y_3 + \dots + 2y_{n-2} + 4 y_{n-1} + y_{n} \right)$.

# Engaging students: Powers and exponents

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place.
I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission comes from my former student Austin Stone. His topic, from Pre-Algebra: powers and exponents.

What interesting (i.e., uncontrived) word problems using this topic can your students do now?

"The number of people who are infected with COVID-19 can double each day. If it does double every day, and one person was infected on day 0, how many people would be infected after 20 days?" This problem can be a current real-life word problem that all students can relate to, given the times we are in. It would be a good introduction for students to see how quickly the numbers grow when using exponents (here, $2^{20} = 1{,}048{,}576$ people by day 20). This would be an engaging introduction to exponents, and will get the students interested because they can easily see that exponents are used in current problems facing the world. This problem could also work later in Algebra if you ask how many days it would take to infect "blank" amount of people. That makes the question more of a challenge, because students would have to solve for "x" (days), which is the exponent.

How has this topic appeared in the news?

This topic has been the news so far in 2020, if we are being honest. COVID-19 is a virus that has an exponential infection rate, just like any virus. When talking about COVID-19, news reporters and doctors usually use graphs to depict the infection rate. These graphs start off small but then grow exponentially until the growth slows down, due either to people being more aware of their hygiene habits and/or to the human immune system getting more familiar with the virus. Knowing how exponents work helps people better understand the seriousness of viruses such as COVID-19 and the everlasting impact they can have on the world. Doctors study the best ways to slow down the exponential growth so that a limited number of people contract and potentially die from the virus. To do this, they predict the exponential growth, keeping in mind the regulations that may be enforced. Whatever regulation(s) slow down the virus the most are the ones they try to enforce.

How can technology (YouTube, Khan Academy [khanacademy.org], Vi Hart, Geometers Sketchpad, graphing calculators, etc.) be used to effectively engage students with this topic?

An easy way to introduce students who have never seen exponents or exponential growth before is to use a graphing calculator. By plugging an exponential function into the calculator, viewing the graph, and zooming out, students can easily see how quickly the numbers start to get massively large. A teacher can set this up by giving the students a problem to think about, such as "how many people would be infected with the virus after 'blank' amount of days?" Students then could guess what they believe it would be. After revealing the graph and the actual number, students will probably be surprised at how big the number is in just a short amount of time. After that, the teacher could show a video on YouTube about exponential growth and/or infection rates of viruses, and how quickly a small virus can turn into a pandemic. This also has very current real-world applications.

# Thoughts on Numerical Integration (Part 4): Derivation of Trapezoid Rule

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:

• Why is numerical integration necessary in the first place?
• Where do these formulas come from (especially Simpson's Rule)?
• How can I do all of these formulas quickly?
• Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
• Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
• Is there a reason why Simpson's Rule converges like the fourth power of the number of subintervals?

In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn't yet taken real analysis or numerical analysis.

In the previous post in this series, I discussed three different ways of numerically approximating the definite integral $\displaystyle \int_a^b f(x) \, dx$, the area under a curve $f(x)$ between $x=a$ and $x=b$. In this series, we'll choose equal-sized subintervals of the interval $[a,b]$. If $h = (b-a)/n$ is the width of each subinterval so that $x_k = x_0 + kh$, then the integral may be approximated as

$\int_a^b f(x) \, dx \approx h \left[f(x_0) + f(x_1) + \dots + f(x_{n-1}) \right] \equiv L_n$

using left endpoints,

$\int_a^b f(x) \, dx \approx h \left[f(x_1) + f(x_2) + \dots + f(x_n) \right] \equiv R_n$

using right endpoints, and

$\int_a^b f(x) \, dx \approx h \left[f(c_1) + f(c_2) + \dots + f(c_n) \right] \equiv M_n$

using the midpoints of the subintervals.

All three of these approximations were obtained by approximating the above shaded region by rectangles. However, perhaps it might be better to use some other shape besides rectangles. In the Trapezoidal Rule, we approximate the area by using (surprise!) trapezoids, as in the figure below. The first trapezoid has height $h$ and bases $f(x_0)$ and $f(x_1)$, and so the area of the first trapezoid is $\frac{1}{2} h[ f(x_0) + f(x_1) ]$. The other areas are found similarly. Adding these together, we get the approximation

$T_n = \displaystyle \frac{h}{2}[f(x_0) + f(x_1)] + \frac{h}{2} [f(x_1) + f(x_2)] + \dots$
$+ \displaystyle \frac{h}{2} [f(x_{n-2})+f(x_{n-1})] + \frac{h}{2} [f(x_{n-1})+f(x_n)]$
$= \displaystyle \frac{h}{2} [f(x_0) + 2f(x_1) + 2f(x_2) + \dots + 2f(x_{n-2}) + 2f(x_{n-1}) + f(x_n)].$

Interestingly, $T_n$ is the average of the two endpoint approximations $L_n$ and $R_n$:

$\displaystyle \frac{L_n+R_n}{2} = \frac{L_n}{2} + \frac{R_n}{2}$
$= \displaystyle \frac{h}{2} \left[f(x_0) + f(x_1) + f(x_2) + \dots + f(x_{n-1}) \right] + \frac{h}{2} \left[f(x_1) + f(x_2) + \dots + f(x_{n-1}) + f(x_{n}) \right]$
$= \displaystyle \frac{h}{2} \left[f(x_0) + 2f(x_1) + \dots + 2f(x_{n-1}) + f(x_n) \right]$
$= T_n$.

Of course, as a matter of computation, it's a lot quicker to compute $T_n$ directly instead of computing $L_n$ and $R_n$ separately and then averaging.

# Engaging students: Using Pascal's triangle

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course).
This student submission comes from my former student Jaeda Ransom. Her topic, from Precalculus: using Pascal's triangle.

How could you as a teacher create an activity or project that involves your topic?

A great activity that involves Pascal's Triangle would be the sticky note triangle activity. For this activity, students will be recreating an enlarged version of Pascal's Triangle. To complete this activity, students will need a poster of Pascal's Triangle, poster board, markers, sticky notes, a classroom wall (optional), and tape (optional). The teacher's role is to show students Pascal's Triangle, along with an explanation of how it was made. Students will be working in pairs and grabbing the necessary materials to complete this activity. On the poster board the students will recreate Pascal's Triangle: they will write a number 1 on a sticky note and place it at the top of the poster board, then write two more 1's on sticky notes and place them directly underneath. The students will continue recreating the triangle on their poster board until they run out of space. You can also consider having students use smaller sticky notes so that students stay engaged creating more rows.

What interesting things can you say about the people who contributed to the discovery and/or the development of this topic?

Pascal's Triangle was named after the French mathematician Blaise Pascal. At just 16 years old, Pascal wrote a significant treatise on the subject of projective geometry, marking him as a child prodigy. Pascal also corresponded with other mathematicians on probability theory, which vastly encouraged the development of modern economics and social science. In addition, Pascal was one of the first two inventors of the mechanical calculator when he started pioneering work on calculating machines; these were called Pascal's calculators and later Pascalines. Pascal impressively created and invented all of this as a teenager. Though the triangle was named after Blaise Pascal, the idea was established well before Pascal in India, Persia, China, Germany, and Italy. As a matter of fact, in China they still call it Yang Hui's triangle, named after the Chinese mathematician Yang Hui, who presented the triangle in the 13th century, though the triangle had been known in China since the early 11th century.

How can this topic be used in your students' future courses in mathematics or science?

This topic can be used in my students' future mathematics courses to introduce binomial expansions, where it is known that Pascal's Triangle determines the coefficients that arise in a binomial expansion. The coefficients $a_i$ in a binomial expansion are the entries of row $n$ of Pascal's Triangle; thus, $a_i = \displaystyle {n \choose i}$. Another useful application of this topic is in the calculation of combinations. The formula for a combination is also the formula for a cell of Pascal's Triangle, so instead of performing the calculation with the formula, a student can simply read it off Pascal's Triangle. In doing this, you can continue a lesson on probability, or even do an activity using Pascal's Triangle that incorporates probability questions.

Resources: https://en.wikipedia.org/wiki/Pascal%27s_triangle#Formula
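Since the combination connection comes up twice above, here is a small Python sketch (mine, not from the post; math.comb needs Python 3.8+) that builds rows of Pascal's Triangle from the additive rule, where each interior entry is the sum of the two entries above it, and checks a row against $\displaystyle {n \choose i}$:

import math

def pascal_row(n):
    """Row n of Pascal's Triangle, built by the additive rule."""
    row = [1]
    for _ in range(n):
        # interior entries are pairwise sums of the previous row
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

for n in range(6):
    print(pascal_row(n))

# row n reproduces the binomial coefficients C(n, i):
assert pascal_row(5) == [math.comb(5, i) for i in range(6)]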
https://maria.climent-pommeret.red/en/blog/?page=3
Blog

### LaTeX: the first document

Posted by: maria | in LaTeX | 2 years, 5 months ago | 0 comments

So let's start! You need a basic tutorial to get your hands a bit dirty with LaTeX but you don't know where to start? I'm here for you.
https://questions.examside.com/past-years/jee/question/the-pair-of-compounds-having-metals-in-their-highest-oxidati-jee-main-chemistry-some-basic-concepts-of-chemistry-eb5qvcgmzg3xilbw
1

### JEE Main 2017 (Online) 8th April Morning Slot

The pair of compounds having metals in their highest oxidation state is:

A  MnO2 and CrO2Cl2
B  [NiCl4]2$-$ and [CoCl4]2$-$
C  [Fe(CN)6]3$-$ and [Cu(CN)4]2$-$
D  [FeCl4]$-$ and Co2O3

2

### JEE Main 2017 (Online) 9th April Morning Slot

[Co2(CO)8] displays:

A  one Co−Co bond, six terminal CO and two bridging CO
B  one Co−Co bond, four terminal CO and four bridging CO
C  no Co−Co bond, six terminal CO and two bridging CO
D  no Co−Co bond, four terminal CO and four bridging CO

3

### JEE Main 2018 (Offline)

The oxidation states of Cr in [Cr(H2O)6]Cl3, [Cr(C6H6)2] and K2[Cr(CN)2(O)2(O2)(NH3)] respectively are

A  +3, 0 and +4
B  +3, +4 and +6
C  +3, +2 and +4
D  +3, 0 and +6

## Explanation

Assume the oxidation state of Cr in each compound is $x$.

(i) In [Cr(H2O)6]Cl3, the oxidation state of Cr satisfies $x + (0 \times 6) + (-1 \times 3) = 0 \Rightarrow x - 3 = 0 \Rightarrow x = +3$.

(ii) In [Cr(C6H6)2], the oxidation state of Cr satisfies $x + (0 \times 2) = 0 \Rightarrow x = 0$.

(iii) In K2[Cr(CN)2(O)2(O2)(NH3)], the oxidation state of Cr satisfies $(1 \times 2) + x + (-1 \times 2) + (-2 \times 2) + (-2) + 0 = 0 \Rightarrow x = +6$.

$\therefore$ +3, 0 and +6 is the correct answer.

Note: The O2 unit can have oxidation state 0, $-$1, or $-$2, but in K2[Cr(CN)2(O)2(O2)(NH3)], if we choose zero as the oxidation state of O2, then the oxidation state of Cr comes out as +4. Since the +4 oxidation state of Cr is unstable and +6 is the most stable, we choose the $-$2 oxidation state for O2.

4

### JEE Main 2018 (Offline)

Consider the following reaction and statements:

[Co(NH3)4Br2]+ + Br- $\to$ [Co(NH3)3Br3] + NH3

(I) Two isomers are produced if the reactant complex ion is a cis-isomer
(II) Two isomers are produced if the reactant complex ion is a trans-isomer
(III) Only one isomer is produced if the reactant complex ion is a trans-isomer
(IV) Only one isomer is produced if the reactant complex ion is a cis-isomer

The correct statements are

A  (II) and (IV)
B  (I) and (II)
C  (I) and (III)
D  (III) and (IV)

## Explanation

When the reactant is the cis isomer, the following reaction takes place. So, if the reactant is the cis isomer, two isomers are produced. When the reactant is the trans isomer, the following reaction takes place. So, if the reactant is the trans isomer, only one isomer is produced.
https://www.computer.org/csdl/trans/tc/1987/03/01676907-abs.html
Issue No. 03 - March (1987, vol. 36), pp. 356-359

Y.S. Kuo, Institute of Information Science

ABSTRACT

Detecting essential primes is important in multiple-valued logic minimization. In this correspondence, we present a fast algorithm that can generate all essential primes without generating a prime cover of the Boolean function. A new consensus operation called asymmetric consensus (acons) is defined. In terms of acons, we prove a necessary and sufficient condition for detecting essential primes for a Boolean function with multiple-valued inputs. The detection of essential primes can be performed by using a tautology checking algorithm. We exploit the unateness of a Boolean function to speed up tautology checking. The notion of unateness considered is more general than what has appeared in the literature.

INDEX TERMS

unate function, Boolean function with multiple-valued inputs, consensus, essential prime implicant, logic minimization, tautology checking

CITATION

Y.S. Kuo, "Generating Essential Primes for a Boolean Function with Multiple-Valued Inputs", IEEE Transactions on Computers, vol. 36, no. 3, pp. 356-359, March 1987, doi:10.1109/TC.1987.1676907
http://math.stackexchange.com/questions/497883/which-of-the-non-euclidean-planes-can-we-embed-into-non-euclidean-3-space
# Which of the (non-)Euclidean planes can we embed into non-Euclidean 3-space? I read the answers to this very interesting question and saw that we can in fact embed the Euclidean plane into hyperbolic 3-space using what is called a horosphere. However, as Hilbert showed us, the reverse is not true; we cannot embed the hyperbolic plane into Euclidean 3-space. This made me interested in considering the other non-Euclidean geometry: elliptic geometry. We can embed the plane from elliptic geometry into Euclidean 3-space - the result is spherical geometry - but: a) is the reverse true: can we embed the Euclidean plane into elliptic 3-space? b) Furthermore, is it possible to embed the elliptic plane into hyperbolic 3-space? c) What about the reverse: can we embed the hyperbolic plane into elliptic 3-space? - For elliptic in hyperbolic, take any sphere (in the hyperbolic metric) in the hyperbolic space (the relation between radius and curvature is of course different than in the Euclidean space) –  user8268 Sep 18 '13 at 20:42 Perhaps someone should remark the horosphere will not be isometric to the Euclidean plane, only equivalent as a Riemannian manifold. –  user641 Sep 18 '13 at 21:07 @SteveD: There are two notions of an isometric embedding $f: (M_1,g_1)\to (M_2,g_2)$ in Riemannian geometry. One just requires $f^*(g_2)=g_1$. The second requires an isometric embedding of the associated metric spaces. It seems clear that OP was asking for the first one (since this is what Hilbert's theorem is about), but one can never be completely sure, of course... –  studiosus Sep 19 '13 at 8:35 Here is the detailed answer to eliminate the confusion. Let $H^n, R^n, S^n$ denote the $n$-dimensional spaces of sectional curvature $-1, 0$ and $1$ respectively. Then the following hold (Items 1 and 2 are immediate, but items 3 and 4 are not): 1. For every $n$, $S^n$ embeds isometrically in $R^{n+1}$ and $H^{n+1}$ as a metric sphere of certain radius. 2. $R^n$ isometrically embeds in $H^{n+1}$ as a horosphere. 3. $H^2$ does not isometrically embed in $R^3$ (Hilbert's theorem). However, $H^2$ does embed (isometrically) in $R^6$. 4. $R^n$ and $H^n$ do not isometrically embed in $S^k$ for any $n$ and $k$. David Brander wrote a UPenn thesis in 2003 summarizing the results on isometric embeddings between various constant curvature spaces. See http://davidbrander.org/penn.pdf for details (in particular, he explains what happens if one considers other dimensions and other constant curvature values, including embeddings between spaces with the same curvature sign). - Okay, so the horosphere is not isometric to the Euclidean plane then as Steve D remarked? –  Sid Sep 18 '13 at 21:17 Of course they are isometric, however, hyperbolic space is not isometric to the round sphere (they are not even homeomorphic). –  studiosus Sep 19 '13 at 6:27
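A quick computation behind user8268's remark about spheres in hyperbolic space (standard Riemannian geometry, not from the thread itself): in geodesic polar coordinates the hyperbolic metric is

$$ds^2 = dr^2 + \sinh^2(r)\, d\Omega^2,$$

so a metric sphere of radius $r$ in $H^3$ carries the induced metric $\sinh^2(r)\, g_{S^2}$, a round metric of constant curvature $1/\sinh^2(r)$. Choosing $r$ with $\sinh(r) = 1$, i.e. $r = \ln(1+\sqrt{2})$, gives an isometric copy of the unit sphere, the spherical geometry referred to in the question.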
https://spivey.oriel.ox.ac.uk/corner/Tenth
# Tenth

I don't know what terminology to use ('direct threaded' or 'indirect threaded' or 'token threaded'), but execution in Tenth works like this:

• The body of a defined word is (primarily) a sequence of 16-bit tokens, each being the unique representation of a Tenth word.
• From such a token, we can find the address of the word's definition by splitting the token into two parts: a segment number (which may be a single bit) and an offset. At the minimum, there are two segments, one pre-compiled and resident in ROM, and the other in RAM to allow for dynamic definitions; there's a small page table that gives the fixed base addresses of the segments. The offset is multiplied by 4 and added onto one of these base addresses. With two segments, that leaves 15 bits for the offset, which can be used to address 128kB of ROM and 128kB of RAM. (A sketch of this decoding appears at the end of this note.)
• At a fixed offset in the definition of a word is its action, an address that the inner interpreter jumps to when executing the word.
• Some words have actions written in machine code. Such actions are able to access additional halfwords from the body and adjust machine registers.
• Others are defined words, for which there is an action enter that saves the ip on the R-stack and sets it to point to the body.
• Others are primitives implemented as subroutines, for which there is an action call that invokes the routine.
• The definition of each word contains a 32-bit data field that gives the address of the body in the case of defined words, and the address of the subroutine in the case of primitives.
• Definitions are chained together to allow finding words from their names. This can be done by letting each definition contain the 16-bit token for the next definition.
• Token zero naturally corresponds to the action end that returns from a word to its caller by popping the R-stack. Then the bodies of defined words can be zero-terminated, and that has the expected effect. That means that page 0 will be the ROM in a system that has definitions preloaded.
• Putting definition headers as well as bodies in ROM means that these definitions cannot be changed at runtime. Perhaps a new kind of word with a hook in RAM is needed. The existing mechanism for allocating RAM variables during bootstrapping is a bit clunky.
• In RAM, def headers, strings, and body code will be intermingled. That doesn't have to happen in ROM if it's convenient to have several regions with different alignment constraints. Only the region containing the def headers need be accessible via tokens.

## Bootstrapping

As much as possible of any application should be written in Tenth and preloaded. That means running Tenth on the host in order to load definitions. Two options:

• Use emulation and the interpreter core that is written in Thumb assembly language.
• Use a portable core (with the 'big switch'), and an enumeration for the actions in place of the machine code addresses.

In either case, a dumping routine is needed that recovers enough symbolic information to create source code for the ROM image.
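Here is a minimal Python sketch of the token-to-address mapping described in the list above. The two base addresses are made-up placeholders (in the real system they come from the page table), and it assumes the simplest layout: one segment bit and a 15-bit offset.

    # Hypothetical base addresses; the real values live in the small page table.
    ROM_BASE = 0x00000000   # segment 0: pre-compiled definitions in ROM
    RAM_BASE = 0x20000000   # segment 1: dynamic definitions in RAM
    SEGMENT_BASE = [ROM_BASE, RAM_BASE]

    def token_to_address(token):
        """Map a 16-bit word token to the address of its definition."""
        segment = token >> 15        # top bit selects ROM or RAM
        offset = token & 0x7FFF      # remaining 15 bits, in units of 4 bytes
        return SEGMENT_BASE[segment] + offset * 4

    # 2**15 offsets * 4 bytes = 128kB addressable per segment, as stated above.
    assert token_to_address(0x0000) == ROM_BASE      # token zero: the 'end' action
    assert token_to_address(0x8001) == RAM_BASE + 4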
2023-03-27 01:48:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4058210253715515, "perplexity": 1544.9059696964434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00232.warc.gz"}
https://physics.stackexchange.com/questions/175919/using-tetrads-to-glue-local-currents-into-global-currents/182534
# using tetrads to glue local currents into global currents

According to John Baez it is possible to take a locally conserved tensor

$\nabla_\mu\: T^{\mu\nu}(x)=0\ \ \ \ \ \mbox{(locally)}$

and convert it to a globally conserved tensor by "patching" together small regions of spacetime and gluing each local current together. The problem is that there is no unique way to do this. In fact there is one for each way to parallel transport a tensor in region $dV_1$ to a nearby region $dV_2$, i.e. the gluing depends on the choice of coordinates.

Could someone help me better understand what Baez is talking about with a concrete example? (Ref: see the last two paragraphs of http://math.ucr.edu/home/baez/physics/Relativity/GR/energy_gr.html)

Suppose we have an orthonormal tetrad $e^\mu_a(x)$ and a geodesic $\beta$ such that $\frac{D e^\mu_a}{d\lambda}=0$ and $g^{\mu\nu}(x)=\eta^{ab}e^\mu_a(x)e^\nu_b(x)$. How would I glue a locally conserved current $T_{ab}(x)=e^\mu_a(x)e^\nu_b(x)T_{\mu\nu}(x)$ in region $dV_1$ with $T_{ab}(x')=e^\mu_a(x')e^\nu_b(x')T_{\mu\nu}(x')$ in region $dV_2$ such that it is globally conserved?

I seem to recall that $De^\mu_a/d\lambda=0$ implies the parallel transport equation

$T^{(PT)}_{cd}(x')=e^\mu_c(x')e^\nu_d(x')e_\mu^a(x)e_\nu^b(x)T_{ab}(x)$

as long as $x'$ and $x$ are on the geodesic $\beta$. We can now glue this to $T_{ab}(x')$ by the gluing function $\phi^{cd}_{ab}$:

$T_{ab}(x')=\phi^{cd}_{ab}\ T^{(PT)}_{cd}(x')$

According to Baez this can be done in such a way that the covariant derivative becomes a partial derivative, i.e. such that $\Gamma^b_{ca}T^{ac}(x)=0$, but I don't see how to impose this condition in any kind of generic spacetime.

• Comment to the question (v8): If $j^{\mu}$ with $\nabla_{\mu}j^{\mu}=0$ is supposed to be a (1,0) tensor, then there is no problem in constructing an integrated conserved quantity via a 4-dimensional divergence theorem. The link by John Baez, on the other hand, is talking about the stress-energy-momentum pseudotensor, not $j^{\mu}$. – Qmechanic Apr 13 '15 at 19:38
• You are correct. I've replaced the vectors with tensors. – alphanzo Apr 13 '15 at 19:50
• Could someone help me better understand what Baez is talking about with a concrete example? Yes. Send a 511 keV photon into a black hole, and the black hole mass increases by 511 keV/c². So energy is conserved in GR. What you think of as a blueshifted photon hasn't really gained any energy. In a similar vein, in SR when you accelerate towards a light source you measure a blueshift, but the photons haven't changed; instead, you have. Baez's articles are usually pretty good, but IMHO this one is unclear, and you should either forget about it or email Don Koks, the Physics FAQ editor, with a query. – John Duffield Apr 13 '15 at 22:03
• I agree with you that it's a very sketchy article. He talks about gluing in the last few paragraphs. Unfortunately I can't let it go, because the construction he is talking about is precisely one that I need. – alphanzo Apr 13 '15 at 23:30
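For a concrete picture of why the gluing cannot be canonical, here is a standard holonomy fact, added as an illustration (it is not taken from Baez's article): parallel transport is path-dependent whenever curvature is nonzero. On the unit sphere $S^2$ (Gaussian curvature $K=1$), transporting a tangent vector around a closed loop bounding a region $R$ rotates it by the angle

$$\Delta\theta = \int_R K\, dA = \operatorname{Area}(R),$$

so, for example, around a geodesic triangle with three right angles the vector returns rotated by $\pi/2$. Consequently a tensor carried from $dV_1$ to a non-adjacent $dV_2$ along two different chains of patches arrives differing by a rotation, and any gluing prescription must single out preferred paths (or, equivalently, a preferred tetrad field). That choice is exactly the non-uniqueness described above.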
2019-12-08 02:26:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8070396184921265, "perplexity": 324.72238925246194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540504338.31/warc/CC-MAIN-20191208021121-20191208045121-00208.warc.gz"}
https://studysoup.com/tsg/12738/introductory-chemistry-5-edition-chapter-3-problem-58p
Introductory Chemistry, 5th Edition (ISBN 9780321910295), Chapter 3, Problem 58P

Problem 58P: A television uses 32 kWh of energy per year. How many joules does it use?

Solution 58P: The energy unit for electricity is the kilowatt-hour, and $1\ \text{kWh} = 3.6\times 10^{6}\ \text{J}$. So

$$32\ \text{kWh} = 32 \times 3.6\times 10^{6}\ \text{J} = 115.2\times 10^{6}\ \text{J} = 1.152\times 10^{8}\ \text{J}$$

The television has used $115.2\times 10^{6}\ \text{J}$ of energy per year.
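A quick sanity check of the arithmetic in Python (nothing beyond the standard conversion factor is assumed):

    kwh = 32                    # energy used per year, in kWh
    joules = kwh * 3.6e6        # 1 kWh = 3.6 x 10^6 J
    print("%.4g J" % joules)    # 1.152e+08 J, i.e. 115.2 x 10^6 J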
2022-06-26 19:28:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19742390513420105, "perplexity": 6002.727130303822}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00215.warc.gz"}
https://stats.stackexchange.com/questions/261704/training-a-neural-network-for-regression-always-predicts-the-mean
# Training a neural network for regression always predicts the mean

I am training a simple convolutional neural network for regression, where the task is to predict the (x,y) location of a box in an image, e.g.:

The output of the network has two nodes, one for x and one for y. The rest of the network is a standard convolutional neural network. The loss is a standard mean squared error between the predicted position of the box and the ground-truth position.

I am training on 10000 of these images, and validating on 2000.

The problem I am having is that even after significant training, the loss does not really decrease. After observing the output of the network, I notice that the network tends to output values close to zero, for both output nodes. As such, the prediction of the box's location is always the centre of the image. There is some deviation in the predictions, but always around zero. Below shows the loss:

I have run this for many more epochs than shown in this graph, and the loss still never decreases. Interestingly, the loss actually increases at one point.

So, it seems that the network is just predicting the average of the training data, rather than learning a good fit. Any ideas on why this may be? I am using Adam as the optimizer, with an initial learning rate of 0.01, and ReLUs as activations.

If you are interested in some of my code (Keras), it is below. The output layers and the compile call did not survive in the posted snippet, so the lines marked "reconstructed" are filled in from the description above (two output nodes, MSE loss, Adam with learning rate 0.01):

    # Create the model
    from keras.models import Sequential
    from keras.layers import Convolution2D, Flatten, Dense
    from keras.optimizers import Adam

    model = Sequential()
    model.add(Convolution2D(32, 5, 5, border_mode='same', subsample=(2, 2),
                            activation='relu',
                            input_shape=(3, image_width, image_height)))
    model.add(Convolution2D(64, 5, 5, border_mode='same', subsample=(2, 2),
                            activation='relu'))
    model.add(Convolution2D(128, 5, 5, border_mode='same', subsample=(2, 2),
                            activation='relu'))
    model.add(Flatten())   # reconstructed
    model.add(Dense(2))    # reconstructed: two linear outputs for (x, y)

    # Compile the model (reconstructed: MSE loss, Adam with lr 0.01)
    model.compile(loss='mse', optimizer=Adam(lr=0.01))

    # Fit the model
    model.fit(images, targets, batch_size=128, nb_epoch=1000, verbose=1,
              callbacks=[plot_callback], validation_split=0.2, shuffle=True)

• Are the images on top examples of your actual samples? Is that 5 separate samples? There appears to be no information in the images that would help generalize. I mean, you don't need a neural net to find the x,y location of the white square; you can just parse the image and look for a white pixel. Explain a bit more about your vision for this model. Is there some temporal pattern, whereby you are predicting the next location? – photox Feb 14 '17 at 2:06
• Hi, and yes, the images are 5 separate samples. I'm not sure how they are rendered for you, but they should be 5 individual square images (I've changed the layout a little to help...). Yes, I realise that you don't need a neural network for this task, but it is just a test experiment to help me learn how to do regression with a neural network. I don't understand what you mean by there being no information to help generalize... Each training pair consists of a square image and a two-dimensional vector of the (x, y) location of the square. Thanks :) – Karnivaurus Feb 14 '17 at 11:57
• 1) Your input shape on the first conv layer is using 3 (rgb) channels, but your data are greyscale (1 channel). 2) You don't need that many conv layers and filters; in fact I think a single layer and a handful of small kernels will be fine. – photox Feb 14 '17 at 12:04
• Like @photox says, you do not need the conv layers. Adding these makes it more difficult for the optimizer to find a good solution. If you remove the 3 conv layers I suspect your "model" will work.
– Pieter Feb 14 '17 at 22:02
• Convolutional layers help with translational invariance due to weight sharing. This doesn't help you at all. As others have said before, you would get the result you expect without them. – Firebug May 16 '18 at 14:05

I am facing the same problem with my data set. It turns out that in my case the predictors are highly concentrated, with a very small variance. You should check out the variance of your prediction variables and see how they are distributed.

However, some transformations on the output variable can be performed to modify or change its scale. This might result in a more uniform type of distribution. For example, in image recognition tasks histogram equalization or contrast enhancement sometimes works in favor of correct decision making.

Having the same issue, my data is also "imbalanced" in the sense that the outputs are highly concentrated (as pointed out by Ravi Shankar). I am not sure if this is the real problem. In my case, more complex models get the training error down (to 0), but the test error remains at about the standard deviation, i.e. at test time the prediction is essentially no better than using the mean. Thus, overfitting is a concern. At this point, I am not sure whether regularization or other tricks actually help or whether some more severe model changes are needed.

Remark: Pieter's answer is interesting, but I agree with JacKeown's comment that it might need a few clarifications.

I am going to contradict @Pieter's answer and say that your problem is that you have too much bias and too little variance. To see this, let $$Y$$ be the true and correct output that your network should return (the target), and let $$\hat{Y}$$ be the output that your network actually returns. Your loss function is the mean squared error, averaged over all examples in your training dataset $$\mathcal{D}$$:

$$\mathbb{E}_{\mathcal{D}}\left[(Y - \hat{Y})^2\right]$$

In this loss function, using your network, we are trying to adjust the probability distribution of $$\hat{Y}$$ such that it matches the probability distribution of $$Y$$. In other words, we are trying to make $$Y=\hat{Y}$$, such that the mean squared error is $$0$$. This is the lowest possible value of the mean squared error:

$$\mathbb{E}_{\mathcal{D}}\left[(Y - \hat{Y})^2\right] \geq 0$$

However, from the question "How can I prove mathematically that the mean of a distribution is the measure that minimizes the variance?", we know that the mean squared error actually has a tighter lower bound, attained when $$\hat{Y} = \mathbb{E}_{\mathcal{D}}[Y]$$, at which point the mean squared error loss function becomes

$$\mathbb{E}_{\mathcal{D}}\left[(Y - \mathbb{E}_{\mathcal{D}}[Y])^2\right] = \text{Var}(Y)$$

Since we know that the variance of $$Y$$ is non-negative, the mean squared error loss function has the following lower bounds:

$$\mathbb{E}_{\mathcal{D}}\left[(Y - \hat{Y})^2\right] \geq \text{Var}(Y) \geq 0$$

In your case, you have reached the lower bound $$\text{Var}(Y)$$, since you observe that $$\hat{Y} = \mathbb{E}_{\mathcal{D}}[Y]$$. This means that the bias of $$\hat{Y}$$ is

$$(Y - \hat{Y})^2 = (Y - \mathbb{E}_{\mathcal{D}}[Y])^2$$

Strictly speaking, this is not the traditional definition of bias, but it gets the point across.
The variance of $$\hat{Y}$$ is

$$\mathbb{E}_{\mathcal{D}}\left[\left(\hat{Y} - \mathbb{E}_{\mathcal{D}}\left[\hat{Y}\right]\right)^2\right] = \mathbb{E}_{\mathcal{D}}\left[\left(\mathbb{E}_{\mathcal{D}}[Y] - \mathbb{E}_{\mathcal{D}}[\mathbb{E}_{\mathcal{D}}[Y]]\right)^2\right] = 0$$

Clearly, you have too much bias and too little variance.

So, how do we reach the true lower bound of $$0$$? We need to increase the variance of $$\hat{Y}$$ by either adding more parameters to the network or adjusting the network architecture. As discussed in "What should I do when my neural network doesn't learn?" (a highly recommended read), consider over-fitting and then testing your network on a single example, by adding many more parameters or by adjusting the network architecture. If the network no longer predicts the mean on a single example, then you can scale up slowly and start over-fitting and testing the network on two examples, then three examples, and so on. Otherwise, you need to keep adding more parameters/adjusting the network architecture until your network no longer predicts the mean.

Eventually, once you reach a dataset size of around 100 examples, you can start to split your data into training and testing to evaluate the generalization performance of your network. At this point, if it starts to predict the mean again, then make sure that the examples you are adding to the dataset are similar to the examples you already worked through in the smaller datasets. In other words, they are normalized and "look" similar. Also, keep in mind that as you add more data to the dataset, you will more likely need to add more parameters for better generalization performance.

Another modification that helps in practice, though not as much as what I stated above, is to slightly adjust the mean squared error loss function itself. If your mean squared error loss function is

$$\mathcal{L}(y,\hat{y}) = \frac{1}{N} \sum_{i=1}^N (y_i-\hat{y}_i)^2$$

where $$N$$ is the dataset size, then consider using the following loss function instead:

$$\mathcal{L}(y,\hat{y}) = \left[\frac{1}{N} \sum_{i=1}^N (y_i-\hat{y}_i)^2\right] + \alpha \cdot \left[\frac{1}{N} \sum_{i=1}^N (\log(y_i)-\log(\hat{y}_i))^2\right]$$

where $$\alpha$$ is a hyperparameter that can be tuned via trial and error. A starting value could be $$\alpha=5$$. The advantage of this loss function over the plain mean squared error loss function is that the $$\log(\cdot)$$ function stretches small values in the interval $$[0,1]$$ away from each other, which means that small differences between $$y$$ and $$\hat{y}$$ are amplified, leading to larger gradients. I have personally found this modified loss function to be very helpful in practice. For this to work well, it is recommended (but not necessary) that $$y$$ and $$\hat{y}$$ are each scaled to have values in the interval $$[0,1]$$. Also, since $$\log(0)=-\infty$$, and since it is likely that $$y$$ and $$\hat{y}$$ will have values very close to $$0$$, it is recommended to add a small value $$\epsilon$$, such as $$\epsilon=10^{-9}$$, to $$y$$ and $$\hat{y}$$ in the loss function as follows:

$$\mathcal{L}(y,\hat{y}) = \left[\frac{1}{N} \sum_{i=1}^N (y_i-\hat{y}_i)^2\right] + \alpha \cdot \left[\frac{1}{N} \sum_{i=1}^N (\log(y_i + \epsilon)-\log(\hat{y}_i + \epsilon))^2\right]$$

This loss function may be thought of as a mean squared log-scaled error loss.

It looks like a typical overfitting problem. Your data does not provide enough information to get a better result.
You chose a complex NN, which you train to memorize all the nuances of the training data. The loss can never be zero, as it appears to be on your graph. BTW, it seems your validation has a bug, or the validation set is not good for validation, because the validation loss is also getting to zero.

• The question says the network almost always outputs zero. That would be a case of severe underfitting, not overfitting. There's also no gap between training and validation error on the learning curve, indicating that overfitting isn't the problem (the error isn't zero; the scale is logarithmic). – user20160 Oct 2 '17 at 8:28

I was actually working on a very similar problem. Basically, I had a bunch of dots on a white background and I was training a NN to recognize the dot that was placed on the background first. The way I found to work was to just use one fully-connected layer of neurons (so a 1-layer NN). For example, for a 100x100 image, I would have 10,000 input neurons (the pixels) directly connected to 2 output neurons (the coordinates).

In PyTorch, when I converted the pixel values to a tensor, it was normalizing my data automatically, by subtracting the mean and dividing by the standard deviation. In normal machine learning problems this is fine, but not for an image where there might be a disparity in the number of colored pixels (i.e. yours, where there are only a few white pixels). So, I manually normalized by dividing all pixel intensity values by 255 (so they're now in the range 0-1, without the typical normalization technique that tries to fit all the intensity values to a normal distribution).

Then, I still had issues because the network was predicting the average coordinate of the pixels in the training set. So, my solution was to set the learning rate very high, which goes against almost all ML instructors and tutorials. Instead of using 1e-3, 1e-4, or 1e-5, like most people say, I was using a learning rate of 1 or 0.1 with stochastic gradient descent. This fixed my issues and my network finally learned to memorize my training set. It doesn't generalize to a testing set too well, but at least it somewhat works, which is a better solution than most everybody else suggested on your question.
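To make the modified loss from the longer answer above concrete, here is a minimal NumPy sketch (the function name and the example arrays are invented for illustration; alpha = 5 and eps = 1e-9 are the starting values suggested in that answer):

    import numpy as np

    def log_scaled_mse(y_true, y_pred, alpha=5.0, eps=1e-9):
        """MSE plus an alpha-weighted MSE of the log-scaled values."""
        mse = np.mean((y_true - y_pred) ** 2)
        log_mse = np.mean((np.log(y_true + eps) - np.log(y_pred + eps)) ** 2)
        return mse + alpha * log_mse

    # Targets scaled to [0, 1], as the answer recommends.
    y_true = np.array([[0.10, 0.90], [0.80, 0.20]])
    y_pred = np.array([[0.45, 0.55], [0.45, 0.55]])  # a "predicts the mean" failure mode
    print(log_scaled_mse(y_true, y_pred))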
2021-05-15 08:27:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 43, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7840856313705444, "perplexity": 396.46368472949536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991378.48/warc/CC-MAIN-20210515070344-20210515100344-00449.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/dcds.2016.36.3677
# Neumann homogenization via integro-differential operators

In this note we describe how the Neumann homogenization of fully nonlinear elliptic equations can be recast as the study of nonlocal (integro-differential) equations involving elliptic integro-differential operators on the boundary. This is motivated by a new integro-differential representation for nonlinear operators with a comparison principle, which we also introduce. In the simple case that the original domain is an infinite strip with almost periodic Neumann data, this leads to an almost periodic homogenization problem involving a fully nonlinear integro-differential operator on the Neumann boundary. This method gives a new proof -- which was left as an open question in the earlier work of Barles-Da Lio-Lions-Souganidis (2008) -- of the result obtained recently by Choi-Kim-Lee (2013), and we anticipate that it will generalize to other contexts.

Mathematics Subject Classification: 35B27, 35J60, 35J99, 35R09, 45K05, 47G20, 49L25, 49N70, 60J75, 93E20.

## References

[1] L. Alvarez, F. Guichard, P.-L. Lions and J.-M. Morel, Axioms and fundamental equations of image processing, Arch. Rational Mech. Anal., 123 (1993), 199-257. doi: 10.1007/BF00375127.
[2] M. Arisawa, Long time averaged reflection force and homogenization of oscillating Neumann boundary conditions, Ann. Inst. H. Poincaré Anal. Non Linéaire, 20 (2003), 293-332. doi: 10.1016/S0294-1449(02)00025-2.
[3] I. Babuška, Solution of interface problems by homogenization. I, SIAM J. Math. Anal., 7 (1976), 603-634. doi: 10.1137/0507048.
[4] G. Barles, F. Da Lio, P.-L. Lions and P. E. Souganidis, Ergodic problems and periodic homogenization for fully nonlinear equations in half-space type domains with Neumann boundary conditions, Indiana Univ. Math. J., 57 (2008), 2355-2375. doi: 10.1512/iumj.2008.57.3363.
[5] G. Barles, Nonlinear Neumann boundary conditions for quasilinear degenerate elliptic equations and applications, J. Differential Equations, 154 (1999), 191-224. doi: 10.1006/jdeq.1998.3568.
[6] G. Barles and F. Da Lio, Local $C^{0,\alpha}$ estimates for viscosity solutions of Neumann-type boundary value problems, J. Differential Equations, 225 (2006), 202-241. doi: 10.1016/j.jde.2005.09.004.
[7] G. Barles and P. E. Souganidis, A new approach to front propagation problems: Theory and applications, Arch. Rational Mech. Anal., 141 (1998), 237-296. doi: 10.1007/s002050050077.
[8] A. Bensoussan, J.-L. Lions and G. Papanicolaou, Asymptotic Analysis for Periodic Structures, volume 5 of Studies in Mathematics and its Applications, North-Holland Publishing Co., Amsterdam, 1978.
[9] S. Biton, Nonlinear monotone semigroups and viscosity solutions, Ann. Inst. H. Poincaré Anal. Non Linéaire, 18 (2001), 383-402. doi: 10.1016/S0294-1449(00)00057-3.
[10] L. Caffarelli, M. G. Crandall, M. Kocan and A. Święch, On viscosity solutions of fully nonlinear equations with measurable ingredients, Comm. Pure Appl. Math., 49 (1996), 365-397. doi: 10.1002/(SICI)1097-0312(199604)49:4<365::AID-CPA3>3.0.CO;2-A.
[11] L. Caffarelli and L. Silvestre, Regularity theory for fully nonlinear integro-differential equations, Comm. Pure Appl. Math., 62 (2009), 597-638. doi: 10.1002/cpa.20274.
[12] L. A. Caffarelli and X. Cabré, Fully Nonlinear Elliptic Equations, volume 43 of American Mathematical Society Colloquium Publications, American Mathematical Society, Providence, RI, 1995.
[13] H. Chang Lara, Regularity for fully non linear equations with non local drift, arXiv:1210.4242, 2012.
[14] H. Chang Lara and G. Dávila, Regularity for solutions of nonlocal, nonsymmetric equations, Ann. Inst. H. Poincaré Anal. Non Linéaire, 29 (2012), 833-859. doi: 10.1016/j.anihpc.2012.04.006.
[15] S. Choi and I. Kim, Homogenization for nonlinear PDEs in general domains with oscillatory Neumann boundary data, J. Math. Pures Appl., 102 (2014), 419-448; arXiv:1302.5386 [math.AP], 2013. doi: 10.1016/j.matpur.2013.11.015.
[16] S. Choi, I. Kim and K.-A. Lee, Homogenization of Neumann boundary data with fully nonlinear operator, Anal. PDE, 6 (2013), 951-972. doi: 10.2140/apde.2013.6.951.
[17] F. H. Clarke, Optimization and Nonsmooth Analysis, volume 5, SIAM, 1990. doi: 10.1137/1.9781611971309.
[18] E. D. Conway and E. Hopf, Hamilton's theory and generalized solutions of the Hamilton-Jacobi equation, J. Math. Mech., 13 (1964), 939-986.
[19] P. Courrège, Sur la forme intégro-différentielle des opérateurs de $c^{\infty}_k$ dans $c$ satisfaisant au principe du maximum, Séminaire Brelot-Choquet-Deny. Théorie du Potentiel, 10 (1965), 1-38.
[20] M. G. Crandall, H. Ishii and P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.), 27 (1992), 1-67. doi: 10.1090/S0273-0979-1992-00266-5.
[21] B. Engquist and P. E. Souganidis, Asymptotic and numerical homogenization, Acta Numer., 17 (2008), 147-190. doi: 10.1017/S0962492906360011.
[22] L. C. Evans, On solving certain nonlinear partial differential equations by accretive operator methods, Israel J. Math., 36 (1980), 225-247. doi: 10.1007/BF02762047.
[23] L. C. Evans, Some min-max methods for the Hamilton-Jacobi equation, Indiana Univ. Math. J., 33 (1984), 31-50. doi: 10.1512/iumj.1984.33.33002.
[24] L. C. Evans, Periodic homogenisation of certain fully nonlinear partial differential equations, Proc. Roy. Soc. Edinburgh Sect. A, 120 (1992), 245-265. doi: 10.1017/S0308210500032121.
[25] W. H. Fleming, The Cauchy problem for degenerate parabolic equations, J. Math. Mech., 13 (1964), 987-1008.
[26] W. H. Fleming, The Cauchy problem for a nonlinear first order partial differential equation, J. Differential Equations, 5 (1969), 515-530. doi: 10.1016/0022-0396(69)90091-6.
[27] N. Guillen and R. W. Schwab, Aleksandrov-Bakelman-Pucci type estimates for integro-differential equations, Arch. Rational Mech. Anal., 206 (2012), 111-157. doi: 10.1007/s00205-012-0529-0.
[28] E. Hopf, The partial differential equation $u_t + u u_x=\mu u_{xx}$, Comm. Pure Appl. Math., 3 (1950), 201-230.
[29] P. Hsu, On excursions of reflecting Brownian motion, Trans. Amer. Math. Soc., 296 (1986), 239-264. doi: 10.1090/S0002-9947-1986-0837810-X.
[30] H. Ishii and P.-L. Lions, Viscosity solutions of fully nonlinear second-order elliptic partial differential equations, J. Differential Equations, 83 (1990), 26-78. doi: 10.1016/0022-0396(90)90068-Z.
[31] H. Ishii, Almost periodic homogenization of Hamilton-Jacobi equations, in International Conference on Differential Equations, Vol. 1, 2 (Berlin, 1999), 600-605, World Sci. Publ., River Edge, NJ, 2000.
[32] V. V. Jikov, S. M. Kozlov and O. A. Oleĭnik, Homogenization of Differential Operators and Integral Functionals, Springer-Verlag, Berlin, 1994. Translated from the Russian by G. A. Yosifian. doi: 10.1007/978-3-642-84659-5.
[33] M. Kassmann and A. Mimica, Intrinsic scaling properties for nonlocal operators, J. Eur. Math. Soc. (JEMS), to appear.
[34] M. Kassmann, M. Rang and R. W. Schwab, Hölder regularity for integro-differential equations with nonlinear directional dependence, Indiana Univ. Math. J., 63 (2014), 1467-1498. doi: 10.1512/iumj.2014.63.5394.
[35] M. A. Katsoulakis, A representation formula and regularizing properties for viscosity solutions of second-order fully nonlinear degenerate parabolic equations, Nonlinear Anal., 24 (1995), 147-158. doi: 10.1016/0362-546X(94)00170-M.
[36] N. V. Krylov and M. V. Safonov, An estimate for the probability of a diffusion process hitting a set of positive measure, Dokl. Akad. Nauk SSSR, 245 (1979), 18-20.
[37] N. V. Krylov and M. V. Safonov, A property of the solutions of parabolic equations with measurable coefficients, Izv. Akad. Nauk SSSR Ser. Mat., 44 (1980), 161-175, 239.
[38] P.-L. Lions, N. S. Trudinger and J. I. E. Urbas, The Neumann problem for equations of Monge-Ampère type, Comm. Pure Appl. Math., 39 (1986), 539-563. doi: 10.1002/cpa.3160390405.
[39] P.-L. Lions and N. S. Trudinger, Linear oblique derivative problems for the uniformly elliptic Hamilton-Jacobi-Bellman equation, Math. Z., 191 (1986), 1-15. doi: 10.1007/BF01163605.
[40] E. Milakis and L. E. Silvestre, Regularity for fully nonlinear elliptic equations with Neumann boundary data, Comm. Partial Differential Equations, 31 (2006), 1227-1252. doi: 10.1080/03605300600634999.
[41] R. W. Schwab, Periodic homogenization for nonlinear integro-differential equations, SIAM J. Math. Anal., 42 (2010), 2652-2680. doi: 10.1137/080737897.
[42] M. A. Shubin, Almost periodic functions and partial differential operators, Russian Math. Surveys, 33 (1978), 1-52.
[43] L. Silvestre, Hölder estimates for solutions of integro-differential equations like the fractional Laplace, Indiana Univ. Math. J., 55 (2006), 1155-1174. doi: 10.1512/iumj.2006.55.2706.
[44] P. E. Souganidis, Personal communication.
[45] P. E. Souganidis, Max-min representations and product formulas for the viscosity solutions of Hamilton-Jacobi equations with applications to differential games, Nonlinear Anal., 9 (1985), 217-257. doi: 10.1016/0362-546X(85)90062-8.
[46] H. Tanaka, Homogenization of diffusion processes with boundary conditions, in Stochastic Analysis and Applications, volume 7 of Adv. Probab. Related Topics, 411-437, Dekker, New York, 1984.
2023-04-01 17:06:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7805861234664917, "perplexity": 1317.9422328507642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00703.warc.gz"}