https://motls.blogspot.com/2009/06/distance-matters-facebook-contacts-have.html?m=1
## Friday, June 19, 2009

### Distance matters: Facebook contacts have a scale-invariant spectrum

The physics arXiv blog has brought our attention to a paper that seems pretty interesting to me:

Goldenberg, Levy: Distance Is Not Dead: Social Interaction and Geographical Distance in the Internet Era

We often like to say that the Internet has made our blue planet smaller by simplifying geographically distant relationships. We live in a global village, and so on. However, what these popular clichés often neglect is that the Internet has also simplified local relationships. In fact, computers have also sharpened our ability to distinguish nearby contacts from distant ones. We have good reasons to prefer the local ones in many contexts because they can be coupled to physical interactions and to shared interests in local events. As a result, our focus has actually become relatively more local and less global ever since communication over arbitrary distances first became acceptably easy and cheap.

The figure above summarizes the best power-law fit for the number of Facebook contacts, written as a distribution "f(r)dr" with respect to the distance "r" in miles. You can see that the distribution goes essentially as "1/r": their exponent is "-1.03", which is very close to minus one. This is an interesting exponent because "dr/r" is the same thing as "d ln(r)". It means that the number of contacts is the same for every "decade". One has N contacts between 1 and 10 miles, N contacts between 10 and 100 miles, N contacts between 100 and 1,000 miles, and perhaps N contacts between 1,000 and 10,000 miles. In this sense, the map of the contacts is statistically self-similar, at least if you neglect far-infrared effects such as the finiteness of the Earth, which fails to be flat. ;-) We are thinking hierarchically, and we tend to be equally interested in every level of the hierarchy that we belong to.
If you wish, a part of my interest goes to Pilsen, an equally large part to the rest of Western Bohemia, another part to the rest of Bohemia, another part to Central Europe, another part to Europe, and so on. (No, I haven't yet created a Facebook account. With Twitter, I would like to be a canonical example of their reactionary "traitor" users, for reasons that look very sensible to me.)
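The equal-number-of-contacts-per-decade claim is easy to check numerically: sampling distances from a 1/r density and binning by decade gives near-equal counts. A minimal sketch (the sampler and the distance range are hypothetical choices for illustration, not the paper's data):

```python
import math
import random

random.seed(0)
r_min, r_max = 1.0, 10_000.0  # miles; hypothetical range, not the paper's data

# For a density f(r) ∝ 1/r on [r_min, r_max], inverse-CDF sampling
# gives r = r_min * (r_max / r_min) ** u with u uniform on [0, 1).
samples = [r_min * (r_max / r_min) ** random.random() for _ in range(100_000)]

# Bin the contacts by decade of distance: [1,10), [10,100), [100,1000), [1000,10000)
decades = [0, 0, 0, 0]
for r in samples:
    decades[min(int(math.log10(r)), 3)] += 1

print(decades)  # four nearly equal counts: the same N per decade
```

Each decade ends up with roughly a quarter of the contacts, which is exactly the "dr/r = d ln(r)" scale invariance described above.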
http://conceptmap.cfapps.io/wikipage?lang=en&name=Talk:Special_relativity/Archive_22
# Talk:Special relativity/Archive 22

## Discussion of Postulates

The section on the postulates of Special Relativity seems to need some rethinking. In section 1 of his original 1905 paper Einstein states the first postulate as "The laws by which the states of physical systems undergo change are unaffected, whether these changes of state be referred to the one or the other of two systems of coordinates in uniform translatory motion." The second postulate states the invariance of the speed of light. These seem clear enough and we expect that the Lorentz transformation will be derived from them. In Section 3 he derives the Lorentz transformation from two postulates, first that the speed of light is invariant and second that the relation between the observers is symmetrical. There is no mention of the invariance of physical laws; the invariance of the speed of light gets as far as the LT with an arbitrary constant multiplier, and the symmetry postulate settles its value as unity. In section 6, Electrodynamical Part, he shows that the equations of electrodynamics are invariant with respect to the Lorentz transformation just derived, so the laws of physics may be a consequence of the LT but are not used as a postulate. This seems to be a very common mistake; none of the derivations of the LT I have seen use the invariance of physical laws as a postulate. ColinG 03:53, 26 April 2013 (UTC)

To Colingordon (talk · contribs): The invariance of physical laws is needed to show that everything in physics (not just electromagnetism) is preserved by Lorentz transformations. It is not the derivation of LT that requires the full power of the first postulate, rather it is the application of LT which requires that full power. JRSpriggs (talk) 09:19, 26 April 2013 (UTC)

## "How far can one travel from the Earth" section inconsistent?
The final example indicates a 28-year trip is sufficient to reach 2 million light years (Andromeda), but earlier examples seem to take longer at the same acceleration to reach only 148,000 light years. What am I missing? --Jrm2007 (talk) 08:50, 17 October 2013 (UTC)

140,000 lyrs is for a round trip: 10 years accelerating, 10 years decelerating, turn around, repeat. 28 years to Andromeda is one way. — Reatlas (talk) 09:31, 17 October 2013 (UTC)

Even though some authors quoted round-trip and others one-way, can someone convert all of one to the other, or simply remove some of them? Maybe drop the round-trip examples; it's interesting enough that one could reach Andromeda in one's lifetime, regardless of whether you (or your children) come back. Coming back is different because you'd be coming back to an Earth that had aged hundreds of millions of years, in which time humans had presumably evolved into god-knows-what or gone extinct. DavRosen (talk) 18:16, 5 November 2013 (UTC)

## Time, Proper time, and Standard time

It is useful to understand more closely the mutual relation between the three closely related quantities named above (time, proper time, and standard time), which may be relevant for this article, or at least helpful for understanding. As I see it at present, the standard time is defined by the "National Institute of Standards and Technology" of the country in question, ideally as the proper time of a certain free-falling atom (i.e. the velocity is exactly compensated by the local gravity forces, see General Relativity). The "ticks" of this time correspond to the periods of the quantum radiation of this atom. This standard time is then propagated from the institute to railway stations, airports, and elsewhere.
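As a numerical footnote to the round-trip/one-way question in the preceding section: the one-way figure follows from the standard constant-proper-acceleration result d = (c²/a)(cosh(aτ/c) − 1), applied to a profile that accelerates for half the proper time and decelerates for the rest. A minimal sketch (my own check, not a worked example from the article under discussion):

```python
import math

c = 2.998e8      # speed of light, m/s
g = 9.81         # proper acceleration, m/s^2
LY = 9.461e15    # metres per light year
YEAR = 3.156e7   # seconds per year

def distance_ly(tau_years):
    """One-way distance: accelerate at g for half the proper time, then decelerate."""
    tau_half = tau_years / 2 * YEAR
    d = 2 * (c**2 / g) * (math.cosh(g * tau_half / c) - 1)
    return d / LY

print(distance_ly(28))  # roughly 1.8 million light years, i.e. Andromeda territory
```

So 28 years of shipboard time at 1 g does indeed reach intergalactic distances one way, consistent with Reatlas's reading.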
Finally, this standard time also corresponds to the time of the watches that the passengers of Einstein's "fast trains" or "fast aircraft" may carry on their arms, if they are sitting at rest in their seats in the vehicle. In particular this standard time, ideally also the time presented by the watches, corresponds to Einstein's $\mathrm{d}t$, i.e. to the (external) clocks on the platforms of the railway station, and not to the (internal) "proper time" $\mathrm{d}\tau$ of the well-seated passengers in the train or airplane. In fact, a re-standardization from $\mathrm{d}t$ to the last-mentioned $\mathrm{d}\tau$, according to $\mathrm{d}\tau = \mathrm{d}t\,\sqrt{1 - (v^2/c^2)}$, would show only a slightly lower result, corresponding to some kind of Hafele–Keating experiment, since $v \ll c$. But for $v \to c$ the internal quantity $\mathrm{d}\tau$ would diverge according to $\gamma$. Maybe you would like to comment on this, e.g. for my own better understanding? - With regards, Meier99 (talk) 17:17, 5 November 2013 (UTC)

Perhaps you could try the wp:reference desk/science. This is where we discuss the article, not the subject, see wp:talk page guidelines. Good luck. - DVdm (talk) 10:58, 6 November 2013 (UTC)

The Earth is not an inertial frame of reference. So there will necessarily be distortions involved in any system of standardized time on Earth due to its rotation, revolution about the Sun, and the Earth's gravity. JRSpriggs (talk) 08:38, 7 November 2013 (UTC)

## Wave-Theoretical-Insight into the Relativistic-Length-Contraction, and Time-Dilation of Super-Novae light-curves

In a recent paper titled "Wave-Theoretical-Insight into the Relativistic-Length-Contraction, and Time-Dilation of Super-Novae light-curves", published in Advanced Studies in Theoretical Physics, Vol. 7, 2013, no. 20, 971-976, http://dx.doi.org/10.12988/astp.2013.39102, Hasmukh K. Tank has attempted to understand relativistic length contraction and time dilation in terms of the Fourier transform. This may be the beginning of a unification of relativity with quantum mechanics. 180.87.227.236 (talk) 08:34, 12 February 2014 (UTC)

## Intro article

I removed the link to the intro article from the main article because right now the main article is far more accessible to the general reader. The intro is too technical and too incoherent. It's more like a garbled intro for advanced physics students, which makes the intro pointless. 109.186.38.41 (talk) 07:12, 29 November 2014 (UTC)

## Special Relativity is Wrong!

Two twin boys staying in two inertial reference frames with a constant relative speed will never find the other boy younger than himself, as they are in completely symmetric positions. This twin paradox actually denies the existence of time dilation as predicted by special relativity. Here is my paper proving the contradiction: Special Relativity Contradicts to Itself. Xinhangshen (talk) 11:21, 7 July 2013 (UTC)

Whatever validity you might feel that your results might have, this is not the forum in which to self-publish. An encyclopaedia must be compiled from secondary sources. — Quondum 14:23, 7 July 2013 (UTC)

They are not in symmetrical positions, since the travelling twin undergoes acceleration and deceleration which the stay-at-home twin does not. It's all explained in the twin paradox article. MFlet1 (talk) 11:44, 23 July 2013 (UTC)

Another inconsistency in special relativity: SR is a constant-velocity theory, and any attempts to take it out of that limitation are wrong and subject to weird results. Acceleration is not allowed. Gravity is not allowed. http://brokenelevator.weebly.com/ That site attempts to show that free fall in gravity is still insufficient for the theory of SR. BTW, SR doesn't apply anywhere.
— Preceding unsigned comment added by W2einstein (talk · contribs) 03:46, 6 April 2014 (UTC)

It does not require too much intellectual effort to see that length contraction and time dilation are only illusory, not real. I wonder why this aspect of SR is not discussed anywhere? — Preceding unsigned comment added by 176.63.161.109 (talk) 11:23, 26 January 2015 (UTC)

You may actually be delusional. SR and GR have been tested over and over, with very accurate results, from particle physics experiments to interplanetary spacecraft. Of course GR will break down at some point, and in the future some new theory will replace it, including GR plus new corrections and predictions. No theory can be truly correct. But SR can allow acceleration and even rotations and angular momentum (see for example relativistic mechanics); this is not the same as gravitation, which corresponds to curvature of spacetime and motion along geodesics. SR places inertial and non-inertial frames on different footing and accelerations are absolute; GR places all frames on equal footing and accelerations are relative. There is no contradiction in having time dilation either, since coordinate time is relative. SR applies to any moving objects when gravity is negligible. See reputable texts like Goldstein's Classical Mechanics; Misner, Thorne and Wheeler's Gravitation; and Landau and Lifshitz volume 2, The Classical Theory of Fields. M∧Ŝc2ħεИτlk 07:56, 6 April 2014 (UTC)

@Maschen: Per wp:TPG, please try to avoid engaging in this kind of subject discussion. They tend to go nowhere. I have put a second-level chat warning on their user talk page. Cheers. - DVdm (talk) 08:57, 6 April 2014 (UTC)

@DVdm: Sorry, I'll stop here. M∧Ŝc2ħεИτlk 08:59, 6 April 2014 (UTC)

## The meaning of "classical" remains ambiguous

This article uses the word "classical" several times without adequately defining it. (User:Dr Greg clearly agrees.)
The adjective "classical" is often used to mean "nonrelativistic" (equivalent to c → ∞) and/or "assuming the nonquantum limit" (ħ → 0), but not always both. The article Classical mechanics assumes Galilean relativity; and Classical electromagnetism assumes the nonquantum limit, but it assumes special relativity. I'd suggest therefore that either each use of the term "classical" be disambiguated within the article, or the term should be replaced. —Quondum 15:53, 2 August 2014 (UTC)

Problematic indeed, as even the article Newtonian mechanics redirects to Classical mechanics. - DVdm (talk) 21:27, 2 August 2014 (UTC)

Standard usage is that "classical" is in contradistinction to "quantum-mechanical." Special relativity is a classical theory. --76.169.116.244 (talk) 03:52, 9 June 2015 (UTC)

## Citations, Quality and Personal Theories

Firstly, this article seems a little light on citations considering its assertions; for example, the intro paragraph makes the following assertion without any references: "The inconsistency of Newtonian mechanics with Maxwell's equations of electromagnetism and the inability to discover Earth's motion through a luminiferous aether led to the development of special relativity". In the following section it says something a little different: "There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment." The current reference list runs to about 45 sources, which is not the longest by any means, so I propose that we back up each assertion (or group of assertions if applicable) with a citation. Secondly, the current status is "Delisted good article". Is there any guidance available as to why it was delisted, or what work is required to improve the article?
Thirdly, this really should be an encyclopedic article summarising current scientific consensus; so should Talk be a place to discuss personal beliefs or presentation of critiques of the theory (as opposed to critiques of the article)? If any of the more experienced editors would like to comment on these points and/or provide some guidance, then I would be happy to contribute to the maintenance of this article. PennyDarling (talk) 17:55, 6 August 2015 (UTC)

Wikipedia is a DIY encyclopedia, so the best place to start is by doing the work you consider necessary yourself. Just apply WP:BRD and see how it goes. Relativity is, unfortunately, a theory that attracts many people with their own idiosyncratic opinions and beliefs, so there will always be those who want to promote their personal theories on this talk page. You are quite right in saying that that is not the purpose of the page. All we can do is politely ask these people to take their theories elsewhere, which is what is usually done here. You are welcome to help. Martin Hogbin (talk) 18:50, 6 August 2015 (UTC)

OK, thanks - will do! — Preceding unsigned comment added by PennyDarling (talk · contribs) 19:02, 6 August 2015 (UTC)

## Distinction between SR and GR

The lead included an antiquated and incorrect description of the distinction between SR and GR. Einstein originally believed that the relevant generalization was from inertial to noninertial frames of reference. This is not the way any professional relativist or any GR textbook has defined the distinction for many decades. It has been known for many decades that SR is perfectly capable of handling accelerated frames of reference and arbitrary coordinate systems, and that doing so does not lead to a theory of gravity. The distinction universally agreed upon by modern relativists is that SR deals with flat spacetime, GR with curved spacetime.
I've fixed the error in the lead, and I've added references to two standard textbooks on GR, one by Carroll and one by Wald. (The reference to Carroll is to the shorter online version of the book.) --76.169.116.244 (talk) 20:02, 14 August 2015 (UTC)

## Time dilation description seems unclear

It says: "To find the relation between the times between these ticks as measured in both systems, the first equation can be used to find: Δt' = γΔt for events satisfying Δx = 0." How can there be a time dilation without movement between the two events? It seems it would make more sense to define x1 and x2 as being separated by some distance (say x1 = 0 meters and x2 = 1 meter), and also have t1 = 0 seconds (start) and t2 = 1 second. But I'm not a physicist. Jacob81 (talk) 22:36, 18 September 2015 (UTC)

An observer sits (or stands) at one spatial point, then measures time intervals. M∧Ŝc2ħεИτlk 00:27, 19 September 2015 (UTC)

The time dilation (of Δt') is seen in the primed coordinate system, in which the clock measuring Δt is moving (Δx' ≠ 0), not in the unprimed coordinate system, in which that clock is at rest (Δx = 0). JRSpriggs (talk) 02:54, 19 September 2015 (UTC)

## The article is unintelligible to anyone that does not already understand Special Relativity

(And people who already understand Special Relativity do not need to read the article.) It needs a basic explanation that is absolutely missing: how it compares with Newtonian physics in simple cases. Sure, this is a sophisticated idea that cannot be introduced trivially. But it should be intelligible to someone with high school physics, at least. The article does not even make an attempt. Talk of Galilean relativity in the introduction shows how clever the authors are, but is worthless to anyone else. Tuntable (talk) 02:04, 4 December 2015 (UTC)

There are articles like Introduction to general relativity and Introduction to quantum mechanics, but Introduction to special relativity was turned into a redirect to the article here last July.
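JRSpriggs's answer to the time-dilation question above can be made concrete by pushing two ticks of a stationary clock through the Lorentz transformation: with Δx = 0 in the clock's rest frame, the primed frame sees a longer Δt' and a nonzero Δx'. A minimal sketch (my own illustration, not text from the article):

```python
import math

def lorentz(t, x, v, c=1.0):
    """Lorentz-transform an event (t, x) into a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

v = 0.6  # relative frame velocity, in units of c

# Two ticks of a clock at rest in the unprimed frame: Δx = 0, Δt = 1
t1p, x1p = lorentz(0.0, 0.0, v)
t2p, x2p = lorentz(1.0, 0.0, v)

print(t2p - t1p)  # Δt' = γ·Δt ≈ 1.25: dilated
print(x2p - x1p)  # Δx' = -γ·v·Δt ≈ -0.75: the clock moves in the primed frame
```

Δx = 0 is precisely the statement that it is one clock ticking at one place; the "movement between the two events" appears only in the primed coordinates.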
[1] was the last version. At Wikipedia:Articles for deletion/Introduction to special relativity it says there is consensus for the view that we should not have a separate article about this. I'm not familiar with that discussion, but you could read the AfD and the original talk page to find out what was wrong with that or why a separate article was deemed not a good idea. Either way, when that introduction article was removed on July 7, 2015, no significant (introductory) content was added to the article here. Maybe an introductory section could be added. Gap9551 (talk) 04:29, 4 December 2015 (UTC)

## This might be a new way to explain causality violation of superluminal communication

I am a teacher (not a researcher, but with 21 refereed pubs) who was trying to find the best way to explain how superluminal communication violates causality (as you might guess, I also like science fiction). I realized that if we simplify the question to instantaneous communication, the space-time diagram is much simpler, as shown in figure 1 above. I realize the image of a moving train is not a "proper" thing to do in a space-time diagram, but the entire area between the F and R lines is the history of this one-dimensional line segment, and figure 2 reflects this fact. I was not allowed to insert figure 1 into Superluminal communication because it was declared new research (which I hope it is). So my question is this: Has anybody seen a diagram like this used to explain the "impossibility" of superluminal communication? For more discussion, go to Wikiversity:Minkowski diagram --Guy vandegrift (talk) 08:49, 18 May 2016 (UTC)

And some more discussion at … Please have a look at wp:CANVAS sometime. - DVdm (talk) 09:00, 18 May 2016 (UTC)

Einstein writes in the article "Zum Ehrenfestschen Paradoxon": "The question of whether the Lorentz contraction is real or not makes no sense.
The contraction is not real, because it does not exist for an observer moving with the body; however, it is verifiable, since it can be demonstrated for an observer not moving with the body." (Zum Ehrenfestschen Paradoxon, Phys. Z., 1911, 12, 169). This statement can be clearly demonstrated in a model of special relativity, see below:

Model of special relativity

A model of special relativity is a system of two observers and two rods (Figure 1a). Here $AB$ and $A'B'$ are rods with length $l_0$. At points $D$ and $D'$ are observers. $R$ is a fixed distance, $R_1$ a variable distance. Thus each observer is associated with a respective rod (own reference system indicated in red or blue). From Figure 1a it is easy to obtain equations that are valid with respect to both observers:

$$l' = l_0\left(1 - \frac{R_1}{R}\right) \qquad (1)$$

$$\tan\alpha' = \frac{\tan\alpha}{1 - R_1/R} \qquad (2)$$

$$R\tan\alpha = \tan\alpha'\,(R - R_1) = \text{invariant} \qquad (3)$$

Suppose that the light signal travels from point $A$ to point $B$ and returns to the point $A$.
Then formulas (1), (2), (3) take the form

$$l' = l_0\sqrt{1 - \frac{v^2}{c^2}} \qquad (4)$$

$$\Delta t' = \frac{\Delta t_0}{\sqrt{1 - \frac{v^2}{c^2}}} \qquad (5)$$

$$c\,\Delta t_0 = c'\,\Delta t' = (c^2 - v^2)^{1/2}\,\Delta t' = \left(c^2\,\Delta t'^2 - \Delta x'^2\right)^{1/2} = \Delta S = \text{invariant} \qquad (6)$$

Here $l'$ is the projection of the light beam on the rod $A'B'$; $\Delta t_0 = 2\tan\alpha\,(R/c)$ and $\Delta t' = 2\tan\alpha'\,(R/c)$ are the forth-and-back travel times of the light signal; $c$ is the speed of light; $\Delta S$ is an invariant. Formulas (4), (5) and (6) are similar to the formulas of special relativity. Therefore all the conclusions of special relativity are clearly displayed in the model. Alexander Klimets (talk) 11:00, 2 September 2016 (UTC)

Please note that article talk pages are to be used for discussions about the article, not about the subject. See wp:Talk page guidelines. - DVdm (talk) 15:58, 2 September 2016 (UTC)

In my opinion, the reduction of the length of the rod appears even more clearly in the model. Let light be radiated from $M$ to $A$ and $B$, where point $M$ is the midpoint of the rod $AB$. The light from $M$ reaches the points $A$ and $B$ simultaneously (for an observer at $D$). On the other hand, the light arrives at points $A'$ and $B''$ simultaneously (for an observer at $D$). But $A'B'' = l'$ is a shortened length. Alexander Klimets (talk) 08:34, 3 September 2016 (UTC)

We cannot discuss this here. See wp:Talk page guidelines. - DVdm (talk) 08:40, 3 September 2016 (UTC)

## break up the intro ?
Could the introductory section (the first few paragraphs) be broken up into headings or simplified? Maybe separate out a 'history' section with a 'main article' template pointing at history of special relativity, and a section 'applicability' (relativistic velocity, low masses, contrast with Galileo / GR), and/or an 'overview' below the initial statement of 'what it is'. Fmadd (talk) 03:54, 31 January 2017 (UTC)

## Edit request: lede is factually wrong

The lead contains the following false claim: "As of today, special relativity is the most accurate model of motion at any speed." It should be obvious that for macroscopic motion (that is, where/when quantum mechanics isn't needed) GENERAL relativity is the "most accurate model of motion" (at all possible speeds). 71.29.172.222 (talk) 15:39, 3 July 2016 (UTC)

Not done. Special relativity can handle all possible speeds; see the article: "The theory is "special" in that it only applies in the special case where the curvature of spacetime due to gravity is negligible.[5][6] In order to include gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some outdated descriptions, is capable of handling accelerated frames of reference.[1][2]"

References:

1. Koks, Don (2006). Explorations in Mathematical Physics: The Concepts Behind an Elegant Language (illustrated ed.). Springer Science & Business Media. p. 234. ISBN 978-0-387-32793-8.
2. Steane, Andrew M. (2012). Relativity Made Relatively Easy (illustrated ed.). OUP Oxford. p. 226. ISBN 978-0-19-966286-9.

- DVdm (talk) 18:07, 3 July 2016 (UTC)

• I agree that it is factually wrong. For example, the motion of a very heavy body in the neighbourhood of another very heavy body is not correctly predicted by special relativity, whereas it is correctly predicted by general relativity. An example would be two neutron stars.
Hence special relativity is less accurate than general relativity in this case, meaning that special relativity is not the most accurate model of motion. The text could be changed to "special relativity is the most accurate model of motion at any speed when gravitational effects are negligible". I don't think the statement by DVdm addresses this issue. Ian Hinder (talk) 12:42, 20 May 2017 (UTC)

No problem with that: [2]. My objection was to the wording of the request. - DVdm (talk) 13:15, 20 May 2017 (UTC)

## Wrong answer in the section "How far can one travel from the Earth?"?

I got $v \approx 0.72c$, not $v = 0.77c$, given this equation and these variables:

$$v(t) = \frac{at}{\sqrt{1 + \frac{a^2 t^2}{c^2}}}, \quad a = 9.81\ \mathrm{m/s^2}, \quad t = 3.1536 \times 10^7\ \mathrm{s}.$$

Am I missing something? Anyone else got the same result? Ximalas (talk) 20:40, 21 February 2018 (UTC)

Yep. Feel free to change. This kind of thing should have a good source. - DVdm (talk) 21:11, 21 February 2018 (UTC)

## Thought experiments

In his popular and semi-popular writings, Einstein was well known for illustrating basic concepts of relativity with the aid of thought experiments. Am I simply missing it, or does there not exist an article in Wikipedia devoted to "Special relativity thought experiments"? Would creation of such an article be desirable? Or would such an article violate wp:NOTTEXTBOOK? Prokaryotic Caspase Homolog (talk) 03:24, 5 April 2018 (UTC)

I think that it would be a good article to have, if it is framed as an article about the history of relativity and limited to sourced thought experiments devised by Einstein himself. JRSpriggs (talk) 04:24, 5 April 2018 (UTC)

Definitely it needs to be a sourced article.
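As an aside on the arithmetic in the "Wrong answer" section above: plugging the quoted values into v(t) does reproduce Ximalas's figure of about 0.72c. A minimal sketch (my own check, not from the article under discussion):

```python
import math

a = 9.81        # proper acceleration, m/s^2
c = 2.998e8     # speed of light, m/s
t = 3.1536e7    # s, one year of coordinate time

v = a * t / math.sqrt(1 + (a * t / c) ** 2)
print(v / c)    # ≈ 0.718, i.e. about 0.72c rather than 0.77c
```

So the reported value of 0.77c in that section does appear to be wrong for these inputs.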
If we wish to make it an historical article strictly about Einstein's unique approach to conceptualizing complex scientific ideas, then the article name could be "Einstein's thought experiments", which would describe the ones that he devised not just for special relativity, but also the ones that he devised for general relativity and for quantum mechanics. Prokaryotic Caspase Homolog (talk) 10:11, 5 April 2018 (UTC)

I have created Einstein's thought experiments. I hope you find it decent. Prokaryotic Caspase Homolog (talk) 17:14, 28 April 2018 (UTC)

Yes, thank you. JRSpriggs (talk) 01:39, 29 April 2018 (UTC)

## Measurement versus visual appearance

Triggered by recent edits ... While I have no (perceived) problem in the original, probably terse version with identifying the "measured shape" of an object as a collection of 3d-space coordinates, obtained from a section of spacetime coordinates and appropriately associated to corresponding object-inherent coordinates, revealing the length contraction in the direction of the velocity, I am unsure about the term "snapshot" in the current version. I think a "snapshot" is "taking a picture", which inherently involves propagation of light, and that is carefully excluded in "observing", i.e. taking spacetime coordinates. I must admit that the notion of "visual appearance" is a bit bewildering to me in both versions. I think this is now about taking a "snapshot", which involves a central projection, including dependencies on the distance between the object and the observer, the direction of the velocity, and what not. I think that the presented material is excellent, but the presentation is not fully rigorous and sufficiently explicative, and I am unsure whether the edits constitute an improvement. 12:29, 10 September 2018 (UTC)

Can you suggest a better wording? Now that you bring it up, I can see the problem that you might have with the word "snapshot" as a means of describing the "measured shape" of an object.
Prokaryotic Caspase Homolog (talk) 02:14, 12 September 2018 (UTC) I am sorry, but my reservations, and only sometimes direct suggestions for marginal improvements, are all I can provide. I am an intuition-less non-expert in STR, heavily suffering from the total collapse of the concept of rigid body in STR already in 1 dimension (rockets with string). Additionally, I disagree with certain adhered to concepts (necessity of talking about moving observers vs. light sources) claiming to be based on Einstein, and I do not feel adequately versed in the use of this non-native tongue, to express such delicate matters. Purgy (talk) 08:02, 12 September 2018 (UTC) Hmmm... You bring up a variety of issues unrelated to your original concern. Born rigidity and Bell's spaceship paradox are not covered in the article as presently written, but one could argue that they need to be covered. One could also argue that coverage of those topics would represent unnecessary digressions, given the article's other deficiencies. The collective authorship paradigm that Wikipedia follows, while very good for developing articles in history, biography, etc. has not proven itself very well adapted to the development of technical articles like special relativity. In common with most other technical articles, the current article is a hodgepodge of parts with widely differing levels of difficulty. It needs a thorough overhaul by a person with a clear vision of how the article should be structured and what the target audience is supposed to be. However, giving this article a thorough overhaul is beyond my competency. I can only focus on the little bits and pieces that I myself have added. The best that I can promise is that I'll continue to think about the points that you raised. Maybe somebody else will find a better wording. 
Prokaryotic Caspase Homolog (talk) 23:39, 12 September 2018 (UTC) I really had no intention to bring up these topics as issues of this article (in need of covering), but only as prominent in causing me troubles in developing a good intuition about STR. I feel quite similar to the description of your last paragraph, just additionally handicapped by the necessity to use a non-native language. Triggered by your remarks, I want to mention a thorough attempt -not too long ago- to deal with this article in the perspective you mentioned, which seems to have failed the target, but certainly has brought about significant improvements. BTW, I strongly object to the collective authorship paradigm being any good for questionable articles in history or biography. All the best, Purgy (talk) 07:13, 13 September 2018 (UTC) This article? The last really major revamping that I recollect was the decision in mid-2015 to delete Introduction to special relativity as being an even worse hodgepodge than the main article. Prokaryotic Caspase Homolog (talk) 07:52, 14 September 2018 (UTC) I am deeply concerned by me sloppily mixing up this article with Spacetime, which encountered heavy efforts of targeted improvement in 2017. My attention here was by far too focused on the "snapshooting" of "spacetime vectors", i.e., just on the local changes, being related to STR. Pardon! Purgy (talk) 09:36, 14 September 2018 (UTC) The strength (and weakness!) of Spacetime as currently written was the principal editor's determination to adhere, as much as possible, to a purely geometric approach to presenting the material. There are neither trains nor lightning bolts in Spacetime. For the most part, the geometric demonstrations are logically presented, but by their very nature, the demonstrations are somewhat divorced from intuitive understanding. Most people, including myself, are rather more comfortable with a kinematic approach, i.e. with railway cars and spaceships. 
The problem is, how to add this introductory material? Most "Introduction to" articles get only a few percent of the readership of their associated main articles. Instead of trying to resurrect Introduction to special relativity (which needs to stay dead), I wonder if such material could be added as an extended introductory section to the current article? Against this idea would be the following objections: 1) Such material could very easily violate wp:NOTTEXTBOOK. 2) Such an introductory section could easily double the size of this article. 3) A featured wikibook exists on Special Relativity which has the merits of being principally authored by a single knowledgeable editor. It has a consistent presentation and relatively clear focus, and as a wikibook, it was allowed to take on textbook aspects. Despite this, I'm not very happy with it. Could somebody like myself do any better? Absolutely not. Thoughts? Prokaryotic Caspase Homolog (talk) 15:17, 14 September 2018 (UTC) ... thinking ... Purgy (talk) 08:35, 15 September 2018 (UTC) I take back part of what I said about the wikibook. I'm very unhappy with it. If you're going to write a textbook on special relativity, you need problems with solutions, or at least lots of example scenarios. Prokaryotic Caspase Homolog (talk) 11:10, 15 September 2018 (UTC) To start with the result of my thinking: I have none. I agree with your verdict: the wikibook does not make me happy either. I do not cling to the WP:NOTTEXTBOOK beyond not allowing for collections(!) of examples (paradigmata are a core necessity in WP! imho), yes, the danger of doubling the length is dangling, and finally, given my engagement and eruditeness on this matter, I am convinced I could not do half as well as you. As an aside, I am very skeptical about the usual intuition on kinetics. All this rubbish about "moving observers" stems imho from "intuitively" "observing" TWO reference frames, thereby silently introducing a third frame, leaving the uninitiated confused.
Sorry, I think the best I can do is comment from the sidelines sometimes. Please never assume any malevolence from my side. Purgy (talk) 07:36, 18 September 2018 (UTC)

## Rearranging the sections, and now I'm stuck

I've been rearranging the sections of this article so as to put them into a more rational order, and now I'm stuck. There are a variety of approaches to teaching relativity:

• The dominant approach found in most college textbooks is to begin with the "two postulates" (almost always starting with a stronger, less intuitive form of the second postulate than that adopted by Einstein in his 1905 paper) and to proceed through relativity of simultaneity, time dilation, length contraction etc. to the Lorentz transformations. While traditional, this principle-based approach has many issues. As Miller has noted, "Teaching STR that way is especially problematic because, unlike the case of classical thermodynamics which is also taught as a principle theory, the two postulates or principles in the case of STR are strongly counterintuitive when taken together."
• Several textbooks begin with Minkowski spacetime as the central focus, often approaching Minkowski spacetime through constructive arguments. This, for instance, is the approach adopted by Taylor & Wheeler's Spacetime Physics. The article Spacetime attempts consistently to follow this approach; how successfully, I'm not sure.
• Some authors advocate beginning with the Lorentz transformations as the first principle. I know of no introductory college textbook that teaches special relativity this way.

This article starts off as if it were following a two-postulates presentation, and then suddenly switches over to presenting the Lorentz transformations as the first principle, from which everything else derives. How should I go from here? Any suggestions?
Prokaryotic Caspase Homolog (talk) 08:53, 28 October 2018 (UTC) I think that I've managed a kludgy fix by adding some transitional commentary about different approaches to presenting special relativity. Prokaryotic Caspase Homolog (talk) 10:00, 28 October 2018 (UTC) I personally am not happy with basing special relativity on the single postulate of universal Lorentz covariance, but that's the way the article appears to have been written. Prokaryotic Caspase Homolog (talk) 15:36, 28 October 2018 (UTC)

## I need help here

Does this section really provide a comprehensible explanation of why FTL is impossible? All it does is state that "one can show" that causal paradoxes can be constructed. Prokaryotic Caspase Homolog (talk) 03:24, 29 October 2018 (UTC) It makes sense to me. It explains how FTL travel would violate causality. There’s no proof of causality, but it’s intuitively very appealing, as without causality paradoxes arise, so it is widely accepted as being true. And if you accept causality, then FTL travel must be impossible.--JohnBlackburnewordsdeeds 03:47, 29 October 2018 (UTC) I restored the section, but I still don't like it. At the very least, it needs a second Minkowski diagram showing how, through the exchange of FTL signals, one can generate a causality-violating scenario. As currently written, it demands an act of faith on the part of the reader. I suppose I could draw the necessary diagram and modify the text to work with the new figure. I don't see a ready-made figure on Commons that will do. Prokaryotic Caspase Homolog (talk) 04:57, 29 October 2018 (UTC) It’s alright to me. It’s the sort of thing that’s hard to draw as it quickly gets cluttered, but if you’ve looked at enough such diagrams you can visualise it in your head. Or follow the logic of the text, which does not really depend on the diagram except to initially establish the relationship between A, B and C.
I’m removing the text too now it’s back in the article; it’s still in the page history if there’s any need to refer to it.--JohnBlackburnewordsdeeds 09:44, 29 October 2018 (UTC) I suggest getting rid of one spatial dimension in the relevant pic. An (x/t + x'/t')-diagram would do the job (there is already a comment about this in the article text) better than this fancy x/y/t-cone. Maybe it is helpful to hint at the trivial fact that any line in the upper half of the first quadrant through the origin and an arbitrary event represents a t'-axis, with an appropriate x'-axis (symmetric to x = t), whereas any line through an event in the lower half can only be an x'-axis, because reverse interpretations have no real solutions within the Lorentz transformation (for the less mathy inclined: there is no meaningful place to put the respective other axis). As an aside: maybe Occam's razor can be considered applicable not only in reducing the number of dimensions in diagrams but also in reducing the number of premises to derive STR from. ;) Purgy (talk) 15:44, 29 October 2018 (UTC) I was already considering replacing the fancy x-y-t light cone diagram with an x-t diagram. The extra dimensions don't add anything to the presentation, and the current figure occupies a disproportionate amount of real estate. John makes a good point about how cluttered a spacetime diagram illustrating causality violation can be. Use of color can be helpful, but I have to be careful not to rely too much on color. Also, since we have already established that the article begins with universal Lorentz covariance as the central principle for its development of special relativity, it would be better to show this using the LTs rather than with a spacetime diagram. However, I have to mind wp:NOR. Anything I add has to be sourced, and the references that I have in mind to use to source my additions all use spacetime diagrams as the simplest way to illustrate their discussion.
Prokaryotic Caspase Homolog (talk) 16:12, 29 October 2018 (UTC) I am heavily concerned that I cause so many sighs, really. I was aware that my Spacetime effort would have some superior formulation, but I was convinced that the "1800s" are no good either. ... and now I gave some more chance to sigh deeply about this here article, but again I am convinced that I implemented some hints to a substantial improvement of the status quo ante. BTW, I left the latter part of the paragraph untouched. Simply throw it all away ... I do not mind too much. Purgy (talk) 08:49, 30 October 2018 (UTC) Let's talk about your proposed changes. I'm also working on changes at the same time. Prokaryotic Caspase Homolog (talk) 12:45, 30 October 2018 (UTC)

### Causality and prohibition of motion faster than light

Figure 10-4. Light cone

In Fig. 10‑4 the interval $\text{AB}$ is 'time-like'; i.e., the line connecting $\text{A} = (x = 0, ct = 0)$[note 1] and $\text{B} = (x = x_\text{B}, ct = t_\text{B})$[note 2] can be taken as a $ct'$-axis, which establishes with the line symmetric to it about $ct = x$ an $x'/ct'$-frame,[note 3] in which the events $\text{A}$ and $\text{B}$ occur in the primed frame at the same spatial coordinate $x' = 0$, separated by a time interval of length $t'_\text{B}$.[note 4] The event $\text{A}$ precedes $\text{B}$ in all frames possible under Lorentz transformation ($ct'$-axis within the light cone).[note 5] It is feasible to observe from the $x/ct$-frame a matter-/information-transport from $\text{A}$ to $\text{B}$[note 6] at some speed smaller than lightspeed,[note 7] so the event $\text{A}$ can cause the event $\text{B}$, if this speed can be achieved by the transport.[note 7]

The interval $\text{AC}$ is 'space-like'. Since the Lorentz transformation prohibits a $ct'$-axis within the shaded cone, the line connecting $\text{A}$ and $\text{C}$ cannot be taken for this, but only as an $x'$-axis.[note 3] The suitable $x'/ct'$-frame is again symmetric about the $ct = x$ line,[note 3] and in this frame the events $\text{A}$ and $\text{C}$ occur at the same temporal coordinate $ct' = 0$. So for all events $\text{E}$ within the shaded cone there exists a primed frame in which $\text{A}$ and $\text{E}$ are simultaneous, separated by some spatial distance.[note 4] Besides this frame with simultaneity, there are frames in which $\text{A}$ precedes $\text{C}$ ($t'_\text{C} > 0$), but also frames in which $\text{C}$ precedes $\text{A}$ ($t'_\text{C} < 0$). Naively, some speed above lightspeed (determined by the slope of the line connecting $\text{A}$ and $\text{C}$) would allow for the latter frames a transport between the spatial coordinates of $\text{A}$ and $\text{C}$ that triggers an event there, prior to the transport's departure as observed in the $x/ct$-frame, thereby violating causality.[note 4] However, the Lorentz transformation does not yield a solution for such frames.[note 8]

Notes

1. ^ Unnecessary to explain that A is at the center of the unprimed coordinate system.
2. ^ Unnecessary to explain that B is at the coordinates of B.
3. ^ a b c Not illustrated. You force the reader to have to draw the scenario, either in his/her head or on paper.
4. ^ a b c Verbose restatement of the original.
5.
^ Unnecessary additions to "A precedes B in all frames". 6. ^ Jargonese rewording of "It is hypothetically possible for matter (or information) to travel from A to B". 7. ^ a b Are you implying here that FTL is possible? 8. ^ Why not? What happens to the LT that prohibits this?

I care about my wordings, but I do not add much to a stranded investment; I just stand prepared to answer any follow-up. I plan for more global remarks in the reply to the second set of notes.

1. It's about making a coordinate explicit, even when it is 0, not about explaining the origin.
2. It's (again) about making the coordinate $x_\text{B}$ explicit; soon afterwards there will be an $x'_\text{B}$, too. It's a flaw of math to rely on microscopic differences in notation.
3. I hoped for getting it illustrated.
4. (a) I'd say it's about different frames. (b) No, E is a totally new event. (c) ??? It's the first occurrence of "causality".
5. They're intended as an introduction to the non-existence of $ct'$-axes in the shaded cone.
6. Jargon in math is at the core of unique meaning, avoiding the "lyrics" of popular introduction (lyrics = much emotion, no precision, just a good feeling for high-volume intuition pumping).
7. By no means! Should I have mentioned that worldlines in the first quadrant with slopes greater than 1 represent speeds below lightspeed, and that $\infty$ is rest = observer?
8. No real solutions exist, because $\gamma$ isn't a real factor anymore.

I'll take some time for the announced second part, since I want to avoid a TL;DR, but want to say sooo much. :) Purgy (talk) 17:36, 31 October 2018 (UTC)

### I'm concurrently working on additions to the text based on this diagram

Figure 10-5. Causality violation

The narrative will go more or less like this: C and D are on a high speed train. A and B are on the ground. D passes B just as the lottery winning numbers are announced. B tells D the winning numbers.
D uses his ansible to instantaneously inform his partner, C, of the lottery results. C, who is passing A at that moment, informs A of the winning numbers. A uses her ansible to instantly flash the numbers to B, who writes the numbers on his lottery ticket, submitting it before the drawing. Prokaryotic Caspase Homolog (talk) 13:26, 30 October 2018 (UTC) For the sake of simplification, you could just omit B and C, and have the whole thing go between A and D. Here I made a rather crude simplified version of the diagram, adding a marker for an example location of the lottery draw. --uKER (talk) 14:02, 30 October 2018 (UTC) Looking at your diagram, the lottery event needs to be placed on D's world line, otherwise there would be a communications delay. There is another issue, however, and that is wp:NOR. I based my diagrams on a published reference. Too much deviation from the diagrams and narrative that I used as my source would constitute original research. Prokaryotic Caspase Homolog (talk) 14:45, 30 October 2018 (UTC) The location of the lottery is fine as long as 1. it's in the past light cone of the moment the moving subject sends the info backwards, and 2. it's in the future light cone of the moment when the stationary subject receives it. About keeping your diagrams, yeah, you probably have a point. --uKER (talk) 18:56, 30 October 2018 (UTC) There are lots of spacetime diagrams to choose from to illustrate the paradox. I've seen this one in multiple contexts. I first came across it several years ago in "The Einstein Paradox and other Science Mysteries Solved by Sherlock Holmes" by Colin Bruce, in "The Case of the Faster Businessman". Then I encountered the same diagram in David Morin's book, and now, doing a web search, I see it in the lecture notes for a course taught at Virginia Tech. Each source accompanies what is essentially the same diagram with a different narrative.
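uKER's two placement conditions above lend themselves to a quick numerical check. The following Python sketch is illustrative only: the event coordinates are invented for the example and appear in none of the diagrams under discussion.

```python
# Hedged sketch of uKER's two conditions: the lottery-draw event works for
# the paradox iff it lies in the past light cone of the event where the info
# is sent backwards AND in the future light cone of the event where it is
# received. All coordinates (x, ct) below are illustrative assumptions.

def in_past_light_cone(event, of):
    """True if `event` can causally influence `of` (timelike or lightlike)."""
    dx, dct = of[0] - event[0], of[1] - event[1]
    return dct > 0 and dct >= abs(dx)

def in_future_light_cone(event, of):
    """True if `of` can causally influence `event`."""
    return in_past_light_cone(of, event)

draw    = (0.9, 0.4)   # lottery numbers announced (assumed coordinates)
send    = (1.0, 0.8)   # D sends the info backwards
receive = (1.0, 0.0)   # B receives it

print(in_past_light_cone(draw, send))       # draw can reach the send event
print(in_future_light_cone(draw, receive))  # B can still act on the draw
```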
On Quora, I remember a soldier being warned just in time not to step on an IED after the driver of a passing troop carrier witnesses the soldier getting his foot blown off. Maybe a Sherlock Holmes mystery would be more appealing and less gory? Prokaryotic Caspase Homolog (talk) 22:49, 30 October 2018 (UTC)

### Purgy's comment

• I trimmed my thoughts above and removed the paragraphs at the end to which I deny any comment. I hope you did allow for this.
• I cannot comment much on my suggestion, beyond what I stated already: I am convinced that it is by far more consistent than the status quo, and I concede that it may be hard to read for the details,[note 1] which I consider to be necessary for a thorough understanding.[note 2] To my taste there is way too much lyrics around about STR.[note 3]
• I would gladly defend my proposition, but do not know against what, and I also would enjoy seeing it improved; I would clarify all I can[note 4] and I can even accept it being ignored, BUT:
• May I plead for reconsidering the inclusion of -say- fictional devices in an explanation intended to be serious? I object with all my argumentative strength against including this "ansible"-story. This is explosion (ex falso quodlibet), but no serious argumentation. I cannot accept a claim being refuted because some contradictory device had led to a contradiction. This is abuse of spacetime diagrams, rape of LT, cheap baiting with fraudulent gambling, ... Primarily, I fully accept and support your prerogative on this article. Purgy (talk) 16:01, 30 October 2018 (UTC)

Thought experiments invoke particulars that are irrelevant to the generality of their conclusions. You object to the use of these fictional devices. However, it is precisely the invocation of these particulars that gives thought experiments their experiment-like appearance. A thought experiment can always be reconstructed as a straightforward argument, without the irrelevant particulars. John D.
Norton, a well-known philosopher of science, has noted that "a good thought experiment is a good argument; a bad thought experiment is a bad argument."[1] When effectively used, the irrelevant particulars that convert a straightforward argument into a thought experiment can act as "intuition pumps" that stimulate readers' ability to apply their intuitions to their understanding of a scenario.[2] I could use the spacetime diagrams to support a straightforward argument demonstrating that FTL communications implies violation of causality, but the ensuing description would be verbose and relatively nonintuitive. Prokaryotic Caspase Homolog (talk)

References

1. ^ Norton, John (1991). "Thought Experiments in Einstein's Work". In Horowitz, Tamara; Massey, Gerald J. (eds.). Thought Experiments in Science and Philosophy (PDF). Rowman & Littlefield. pp. 129–148. ISBN 9780847677061. Archived from the original (PDF) on June 1, 2012.
2. ^ Brendel, Elke (2004). "Intuition Pumps and the Proper Use of Thought Experiments" (PDF). Dialectica. 58 (1): 89–108. Archived from the original (PDF) on 28 Apr 2018. Retrieved 28 April 2018.

Notes

1. ^ VERY hard to read!
2. ^ In general, encyclopedia entries are expected to provide a sketch of a proof, not attempt to provide the entire proof in detail, which is usually impossible within the space limitations of an article.
3. ^ What do you mean by "lyrics"?
4. ^ Excessive detail does not clarify, but obscures.

Well, protest as announced: I neither buy the necessity of "irrelevant particulars", nor even their usefulness, nor do I believe that they "always" can be removed later on, leaving something that is a real argument. I resort to the opinion that this thought experiment is a bad argument (Norton is fine), and I do not want to see "intuition pumps" (Brendel is rubbish) included in WP, but rather -especially in scientific articles- valid arguments, doubly checked for their validity.
However, since you seem to be set on including this subspace communication, ... I do not bother for your arguments how to render those "ansibles" as irrelevant for the story or how to reconstruct it as a straightforward argument, when it is just a silly story (there are far more respectable time travels in the pertinent literature), as well as I do not ask any longer for any specific leaks or obscurities in my formulations (that you wanted to discuss!?). May you find a fake that looks like an experiment that helps others. Purgy (talk) 19:03, 30 October 2018 (UTC) This obviously cannot be a matter of my strong opinion against your strong opinion, but will require consensus with others' inputs. By the way, I am looking closely at your revised proposal. I just can't pay a lot of attention to it right at the moment, since I work for a living. I'll return to examining it tonight. The results of our previous discussions have always been improvements in the articles in question. I value your comments, even when I disagree wholeheartedly!   Prokaryotic Caspase Homolog (talk) 21:07, 30 October 2018 (UTC) If you are against colorful descriptions with multi-million dollar lotteries, Sherlock Holmes mysteries, soldiers on tour in Iraq and the like, how about this: Consider the spacetime diagrams in Fig. 10‑5. A and B stand beside railroad tracks. A high speed train passes by with C riding in the last car of the train and D riding in the leading car. The world lines of A and B are vertical, reflecting the stationary position of these observers on the ground, while the world lines of C and D are tilted forwards, reflecting the rapid motion of these observers in the train. 1. Fig. 10‑5a. B flashes a message to D as the leading car passes by. 2. D passes the message back to C using an instantaneous communication device. The signal follows along the $x'$ axis, which is a line of simultaneity between C and D. 3. Fig. 10‑5b.
C flashes the message to A who is standing by the railroad tracks. 4. A passes the message forwards to B via the instantaneous communication device. The signal follows along the $x$ axis, which is a line of simultaneity between A and B. Such a description is not very far off from a straightforward argument with none of the "irrelevant particulars" to which you take offense. Prokaryotic Caspase Homolog (talk) 00:49, 31 October 2018 (UTC) Edit conflict with the creation of the newest section, will reply there separately. It's easy to refer to single notes: 1. Yes, it does not belong to my strengths to write directly to the heart, I'm more the nitpicking type, but I am convinced that deep understanding needs deep arguments. 2. I'm not requesting formal proofs (Four color theorem), but I oppose, as strongly as is possible to me, pseudo-explanations for "some thing", like The assumption of the "existence of ansibles" (a logical constant 'False' in the theory) explains "some thing". 3. See previous comment of mine. 4. See #1.: "deep" As regards my general impression about explanations of counter-intuitive consequences in STR, I perceive a desire to flesh these out with most spectacular details (twins, pole in a barn, rockets and ropes, ansibles, Holmes, Iraq war, riches, ...), even when the coexistence of arbitrarily many observers (~frames), all of them at rest in their respective frames, is not fully appreciated by a good deal of the audience. Oblique coordinate systems, unacquainted by themselves, and, additionally, describing a spatially just one-dimensional world and its temporal sections, should be treated with greater care (=detail!), imho. I appreciate the remark that an ansible works along the x-axis, but I miss the emphasis that it is about simultaneity within the -say- unprimed frame, only, and that there is no connection possible to the primed frame supported by LT.
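The four-step signal loop described above can be traced with explicit Lorentz transformations. The following Python sketch is a hedged illustration: the train speed and event coordinates are arbitrary assumptions, chosen only to make the numbers concrete.

```python
# Hedged sketch (not from any cited source): trace the round trip of
# Fig. 10-5 with explicit Lorentz transformations. Frame S is the ground
# (A, B); frame S' is the train (C, D), moving at beta = v/c. The values
# beta = 0.6 and x_B = 1.0 are illustrative assumptions.
import math

beta = 0.6                       # train speed as a fraction of c (assumed)
gamma = 1 / math.sqrt(1 - beta**2)

def to_train(x, ct):
    """Lorentz transform ground coordinates (x, ct) into the train frame."""
    return gamma * (x - beta * ct), gamma * (ct - beta * x)

def to_ground(xp, ctp):
    """Inverse transform from the train frame back to the ground frame."""
    return gamma * (xp + beta * ctp), gamma * (ctp + beta * xp)

# Step 1: B (at x = x_B) hands D the message at event E, chosen on the
# x'-axis (ct = beta * x) so that D and C regard E as simultaneous with
# the common origin of the two frames.
x_B = 1.0
E = (x_B, beta * x_B)

# Step 2: D's "instantaneous" signal runs along the x'-axis (constant ct')
# back to C at x' = 0, which is the common origin.
E_train = to_train(*E)
C_receives = to_ground(0.0, E_train[1])   # = (0, 0): C meets A here

# Steps 3-4: C hands the message to A; A's instantaneous signal runs along
# the ground x-axis (constant ct) back out to B at x = x_B.
B_receives = (x_B, C_receives[1])

print(f"E (message sent):  x = {E[0]:.3f}, ct = {E[1]:.3f}")
print(f"B gets it back at: x = {B_receives[0]:.3f}, ct = {B_receives[1]:.3f}")
print(f"B receives the message ct = {E[1] - B_receives[1]:.3f} before sending it")
```

The design choice here is to put event E on the x'-axis, which matches the "line of simultaneity between C and D" in step 2 of the narrative; with that choice the loop closes at B strictly earlier than E, which is the causality violation.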
I understand that me being satisfied by the non-existence of a world line from A to C does not pertain to all readers, but I am really convinced that this is the core of the story, and all involvements of additional frames and actors, stories and whatnot impossibilities, only blur the core fact: NO worldline from A to C! As is usual in logic, any assumption to the contrary (e.g.: existence and use of ansibles) allows derivation of all claims, true ones and false ones (I referred to this in my proposal by using the word "naively"). I deeply regret that my idiomatic abilities cannot provide a text with the necessary ease of readability. To my understanding an encyclopedia might (should!) contain information about the most wide spread and most surprising, most funny, ..., stories of counter-intuitivities, but should not(!) involve them in attempts of explanations. Here I am with my personal POV that I only alter for arguments, but not for just "consensus in WP". However, as said already, this won't cause any effects in WP-texts. No edit-warring for me, just TP-skirmishes. You may pump as you might, I won't develop any intuition about ansibles. I do however enjoy any of your appreciations. Purgy (talk) 09:24, 1 November 2018 (UTC) ## LT vs. two postulates vs spacetime approaches to understanding SR Mathematically, it is extremely simple to establish the non-existence of a world line from A to C, end of story, no need to go any further. But for most, the pure mathematical demonstration doesn't satisfy the need for an intuitive understanding of why that should be so. That is why I am so unhappy about the decision of the original authors of this article to develop special relativity starting with the single postulate of universal Lorentz covariance. The appeal of the two-postulates development of special relativity is how, starting with these intuitive principles, one can arrive at all sorts of fantastic results, including the Lorentz transformation. 
But many people just don't get the deductive style of the two-postulates approach. They get lost at the very start trying to understand relativity of simultaneity, and if one gets stuck there, there is no going forwards. Then there is the spacetime approach, which is frequently taught from a constructive standpoint through analogies with Euclidean geometry. If you buy the analogies and accept the results of experiment, that is the best approach for many people. What I am trying to get at is that your reservations seem, to me anyway, mostly to reflect that you are most comfortable with a pure mathematics approach to understanding special relativity, which is why you are so dead set against irrelevant particulars, intuition pumps etc. You are not a thought experiments sort of guy. Prokaryotic Caspase Homolog (talk) 23:14, 1 November 2018 (UTC) Using the LTs, Fig. 10-5 can be explained in just three lines. Let S and S' be two frames in standard configuration, and let $\text{E}$ be the event corresponding to the crossing of the B and D world lines. Then $\beta = ct_\text{E}/x_\text{E} = v/c$. The event coordinates $(x_\text{E}, ct_\text{E})$ in frame S transform to $(x_\text{E}/\gamma, 0)$ in frame S'. An instantaneous signal from $\text{E}$ in frame S' intersects $(0, 0)$, and an instantaneous signal from the origin intersects $(x_\text{E}, 0)$ in frame S, preceding event $\text{E}$ by $t_\text{E}$. This is totally trivial math, but it does not leave me with any sense of satisfaction that I understand what is going on. It's just symbol manipulation. The spacetime diagram, however, is different. I get a visual handle on the transformation that I simply do not get working the symbols.
I can see the effect of increasing the speed of the train, and I understand visually why even a speed infinitesimally greater than $c$ can result in causality violation. Prokaryotic Caspase Homolog (talk) 03:22, 2 November 2018 (UTC) I frankly admit to be "most comfortable with a pure mathematics approach" (especially as long as the math is simple enough to my abilities), but I strongly refuse not being accessible by thought experiments. Any of the many indirect proofs I valuate much are such (Let blabla, then ..., therefore ¬blabla.), and I often see essential gain in accessibility by affixing a funny hat (irrelevant particular) to some entity (Four color theorem - not very funny, but famous). For reasons given already by elementary formal logic, I am, however, strongly averse to introducing evident antinomies, like ansibles, in any proof of any claim. (Assume "False", then "Anything". –is a tautology.) I would agree to disproving the existence of ansibles, relying on geometry or LT and causality, but I disallow for calling any disquisition on anything, which involves the use of an ansible, a proof of anything (repeating: ex falso quodlibet). I accept the path, leading from the assumption of a speed, infinitesimally (yuck) larger than lightspeed, to violation of causality, but I am in serious doubt, whether this path is easier to describe and(!) to follow, than the attempt of getting familiar with ct'-axes being restricted to the light cone, and x'-axes to the dark cone, converging with increasing speed to the useful limiting case of ct' and x' coinciding along the propagation of light in all frames (with common origin), the ubiquitous simultaneity. Personally, I perceive the introduction of two additional comoving frames as making things more complex (BTW, the Twin paradox in STR gains its life from the frame change of the twin.)
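The algebraic core of the "no worldline from A to C" point, namely that the Lorentz factor ceases to be real for speeds above lightspeed, can be checked directly. A minimal sketch with illustrative speeds:

```python
# Hedged sketch: for |beta| < 1 the Lorentz factor gamma is real, so a boost
# to that frame exists; for |beta| > 1 it comes out purely imaginary, which
# is the algebraic face of "no ct'-axis outside the light cone". The sample
# speeds below are illustrative.
import cmath

def gamma(beta):
    """Lorentz factor 1/sqrt(1 - beta^2); real for |beta| < 1, imaginary beyond."""
    return 1 / cmath.sqrt(1 - beta**2)

for beta in (0.5, 0.9, 0.99, 1.5, 2.0):
    g = gamma(beta)
    kind = "real" if abs(g.imag) < 1e-12 else "imaginary"
    print(f"beta = {beta:4.2f}: gamma = {g:.3f} ({kind})")
```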
The following is less apodictic and more personal: Dealing with verbally formulated principles is a language game, hard to join in non-native languages, so my primary effort is to create formal tools, independent of natural language, applicable also in hard to understand, in counter-intuitive, in surprising, ... situations. To me these tools are the LT. I confess getting lost along so-called derivations –often enriched with irrelevant particulars, intuition pumps, and other distracting stuff, just there to hide the leaks or even flaws (like using ansibles!) within logic– but with the help of LT I am able to overcome my resentments and arrive at a stable understanding(?)/manipulation, reinforcible at any time by some calculation, even of matters like lost simultaneity. (As an aside, I am not sure if an "intuitive" understanding of "relative simultaneity" is possible at all.) Imho, there is no "Euclidean" geometry in the spacetime diagrams: it's about the difference of squares and not their sum. The connecting line of two events in these diagrams is quite misleading wrt their spanned spacetime interval. (I recall having had a hard time myself to get rid of the Euclidean intuition.) E.g.: Summing the two legs in the twin paradox is "shorter" than the direct connection. I would rather fight Euclidean associations within the x/ct-frame than further them. As for your second part, I am perfectly aware that neither my scaring efforts nor your discouraging formulations are very invitational to read, but I rather accept frightening hardship than logical disaster. Maybe there is an intuitively viable path from the slopes of the axes representing the speed and its reciprocal, converging at light propagation, where I started with "Naively, some speed above lightspeed ..."? Anyhow, you decided to take your road. Anticipating your objection regarding NOR, may I report that I was moved almost to tears by the sad comment by Krea.
For heaven's sake, what is done here, in this context, at this level, is NO research, so it cannot be original research. All this is just a "making explicit" of reproducible, trivial math, not to be published for higher academic merit, but to aid the interested reader in his struggle to understand physics beyond falling apples. Are the publishers of books about popular interest topics afraid of losing their clientele to WP? There is so much published rubbish, why shouldn't WP contain some coherent information, without calling it research. Purgy (talk) 16:30, 2 November 2018 (UTC) Your opinion is valuable, even if we disagree a lot. Note now the narrative to the FTL spacetime diagram does not mention lotteries or Sherlock Holmes or soldiers getting their feet blown off by an IED. Your doing!   And I agree that the section is better because of your pushing. In regards to NOR, see my reply to Krea here. Prokaryotic Caspase Homolog (talk) 22:57, 2 November 2018 (UTC) It's not the first time while being around in WP that I enjoy disagreement, but it's a rare moment still - thanks. Thanks also for correcting my link; I am sorry not to have checked it myself. I would not have bothered to notify Krea myself, but I understand that it could be considered appropriate, and misspelling the name was certainly inappropriate. Just for completeness' sake, I assume that we also disagree about the level above which even well-written paragraphs in scientific articles are to be removed for being not properly sourced. Finally, I announce some boldness of mine, and humbly ask for it being kindly checked. Cheers, Purgy (talk) 09:01, 3 November 2018 (UTC) Looks fine to me. You've achieved the greater precision that you wanted without loading it down with the excessive parenthetical digressions that, to my mind, made your previous attempt unreadable.
By the way, the other reason for pushing Krea's contribution to Talk was that it was not written in any sort of encyclopedic style, violating wp:NOTTEXTBOOK in a rather extreme fashion. I also disagreed with some sections that, while not technically incorrect, made somewhat misleading points. You can read his writing here and judge for yourself. Prokaryotic Caspase Homolog (talk) 11:52, 3 November 2018 (UTC)

## Proposed section revision

The proposed section revision below represents a distinct issue from Purgy's proposed rewording of the opening two paragraphs for greater precision at the cost of lesser clarity, which Purgy considers a good trade-off. Hence, it can be deployed separately from any decision regarding Purgy's proposed revisions. Issues concerning rewording have been resolved. Prokaryotic Caspase Homolog (talk) 14:13, 3 November 2018 (UTC)

I do have a question about placement. The article as written uses the single postulate of universal Lorentz covariance as its basic starting principle.
• In terms of subject matter, it belongs in Other consequences.
• However, it uses Minkowski diagrams to perform the demonstration rather than Lorentz transforms. Therefore, in terms of presentation, it belongs in Spacetime.
Where should it go? Prokaryotic Caspase Homolog (talk) 08:27, 1 November 2018 (UTC)
• Spacetime: "No FTL", even if less important, is as elementary as contraction and dilation. The current "Other consequences", while certainly worth mentioning, are by no means elementary in the same way. This is not to say that I perceive the structure of this article as perfect. The mass-energy equivalence is most certainly not derived from the LT, what is the difference between "derived" and "other" consequences, the LT takes good care of the whole spacetime-space, ... Purgy (talk) 14:38, 1 November 2018 (UTC)

OK. I'll keep it there. I could easily have recast the whole argument in terms of the LT, except that would have constituted original research.
There is already too much original research in this article that I have been reluctant to throw away. Prokaryotic Caspase Homolog (talk) 15:03, 1 November 2018 (UTC)

Also, it appears that there is plenty of real estate if we want to reinstate the 3-dimensional light cone diagram that was originally Fig. 10‑4. Do we want to revert? Prokaryotic Caspase Homolog (talk) 08:31, 1 November 2018 (UTC)
• No revert: A flat spatial geometry in two dimensions offers no additional effects relevant to the question of FTL, when compared to a one-dimensional geometry. Since any picture is a projection to two spatial dimensions, the thereby induced spatial ambiguities do not pay the rent, and a higher artificial appeal is just cheating. The trouble of staying aware of a temporal dimension with different metric properties being projected to a spatial dimension is sufficiently bewildering. Purgy (talk) 14:38, 1 November 2018 (UTC)

OK. I had no preference for the 2D drawing just because I drew it. Rather, I want what is best for the article. Prokaryotic Caspase Homolog (talk) 15:03, 1 November 2018 (UTC)

### Causality and prohibition of motion faster than light

Section has been transferred to the article via this edit. Besides the reservations rolled out in previous comments, this version has run to fat for explaining that no infinite speed is necessary to derive a contradiction from a contradiction. Sorry, I had to. ;) Purgy (talk) 14:38, 1 November 2018 (UTC)

It is not obvious from the figure that a slightly faster-than-light signal would lead to paradox. Either I had to draw a new figure, or I had to explain how a revised figure would look. Since how a revised figure would look is documented in the supplied reference, that appeared to be the preferable route.
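The point under discussion, that even a slightly faster-than-light signal (not only an infinitely fast one) already reverses cause and effect in some legitimate frame, can also be checked with a line of arithmetic rather than a redrawn figure. A hedged sketch, in units with c = 1; the signal speed 1.25 and frame speed 0.9 are illustrative choices, not values from the cited reference:

```python
import math

def boost(t, x, v):
    """Lorentz boost to a frame moving at speed v (standard configuration, c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

# A hypothetical signal only 25% faster than light: emitted at the origin
# event (0, 0), received at (t, x) = (1, 1.25).
t_rx, x_rx = 1.0, 1.25

# Viewed from a perfectly legal sub-light frame (v = 0.9), the reception
# carries a negative coordinate time, i.e. it precedes the emission:
t_prime, _ = boost(t_rx, x_rx, 0.9)
print(t_prime < 0)  # True
```

Relayed back by a second such observer, the signal would return to the sender's own past, which is the paradox the section's figure is meant to illustrate.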
Prokaryotic Caspase Homolog (talk) 15:03, 1 November 2018 (UTC)

## I'm going to have to squeeze in an "Introduction to spacetime diagrams" somewhere

Although everything in special relativity can be derived from the Lorentz transforms, spacetime diagrams are a highly useful tool for visualization. The two big elephants in the room are the two sections Geometry of spacetime and Physics in spacetime, neither of which was written at a level appropriate for what I deem the prime target audience for this article, high school through lower-division college students. The two sections are inadequately sourced, and some of the writing may represent original research. For these reasons, I pushed these sections to the end. I have, however, been exceedingly reluctant to delete them, since technically I have found nothing wrong with them. Somehow or other, I'm going to have to squeeze in a quick "Introduction to spacetime diagrams", since five spacetime diagrams are used in this article without adequate explanation of how they may be interpreted, and as I continue to edit this article, I may introduce more spacetime diagrams. Yes, there will be redundant overlap with the Spacetime section, but that seems an unavoidable evil with the article in its current state. (Despite my efforts so far, I personally would rate the article, in its current state, as C-class because it does not meet the "Readers are not left wanting" criterion necessary to meet B-class.) Prokaryotic Caspase Homolog (talk) 23:13, 5 November 2018 (UTC)

Some fringe thoughts, triggered by the above concerns.
- There is an article Minkowski diagram, a redirect from "spacetime diagram", that also does not emphasize that the diagrams are just visualizations of the LT.
- I dispute the general pedagogic value of deriving STR from principles beyond deriving the LT, as well as of a "constructive" approach to STR.
E.g., only the most hardcore intuitive physicists develop an intuition of squished EM-fields and how to express them, imho.
- WP is not very apt to address a specific cross-section/level-set of its readership, so I would shed some tears over shooting the elephants. Maybe a title referring to greater advancedness, and leaving them to the end of the article, would help already.
- Classifying any article with a scientific topic is hard (WP-rules are contradictory, incoherent, rudimentary, ... rubbish). I do not argue for or against any capital letter: to suggest the worst, let WMF decide. Purgy (talk) 10:49, 6 November 2018 (UTC)

It is possible to develop STR starting from the single postulate of Minkowski spacetime, treating the LTs as a derived principle. Given the emphasis of this article, starting with universal Lorentz covariance as the fundamental principle underlying STR, the approach taken by Minkowski diagram would be completely incorrect. I fully intend to mention that practical computations in STR usually start with the LTs and/or the fundamental effects immediately derived from the LTs. Minkowski diagrams are most useful as a tool for visualization. They are less useful as a tool for computation. Spacetime diagrams will be treated as derived from the LTs. I intend only a bare-minimum introduction, with wikilinks to other articles developing them in greater detail. Thanks for the reference to Minkowski diagram, by the way! Shuttling legacy and/or limited-interest advanced topics beyond freshman-sophomore college level to the end and clearly labeling them as "advanced" is the strategy employed in a variety of articles on Wikipedia. For example, see Quadratic equation#Advanced topics and Spacetime#Technical topics. Rather than euthanization, that would probably be my choice of what to do with these sections, except that Special relativity#Causality and prohibition of motion faster than light is not an advanced topic.
This section should not be caged with the others. It needs a separate home. Special_relativity#Consequences derived from the Lorentz transformation and Special_relativity#Other consequences are illogical separations of topic. I'm thinking of the following reorganization, taking some inspiration from Rindler:
Kinematics: RoS, TD, LC, Thomas rotation, causality and prohibition of FTL
Optical effects: Doppler, measurement vs visual appearance
Dynamics: mass-energy equivalence, how far can one travel
That allows me to cage the two old beasts separately from the others. I can only do this reorganization after developing the "Introduction to spacetime diagrams" section, because of how much use both the RoS and Causality sections make of spacetime diagrams. Different people learn differently. Constructive and deductive approaches are both important. Prokaryotic Caspase Homolog (talk) 12:25, 6 November 2018 (UTC)

## Doubting ...

I am disturbed by the last preparatory edits. I did not like the previous setting either, but I sense further blurring.
- The transformation equations do not relate arbitrary 'measurements', but strictly spacetime coordinates. Hopefully, the measurements are 'covariant', and the appropriately formulated laws of physics confirm this.
- May I suggest getting rid of the "relatively moving observers", at least in new edits? I see an effort to introduce a "standard frame", that is exactly the observer (at rest!) and his frame. It is this frame in which further frames move and can be said to move relatively wrt each other. (I tried to emphasize this less ambiguous POV in my last edit of a caption.) Referring to an observer in one of these further frames makes this frame the new standard frame, in which coordinates wrt the former standard frame are calculated via the inverse LT of the LT transforming from the old standard to the new (embarrassing).
- It should be made explicit that the parallel orientation of the spatial axes and the orientation of the velocity along the x-axis is a simplifying assumption. (I would not dig deep into the hyperbolic rotation, imho applicable only under this restriction.)
- BTW, what is the origin in spacetime diagrams? Is it the spatial origin only? Does it make sense to talk about "whereabouts" of an origin at some time? Just noise from the off. Purgy (talk) 08:24, 8 November 2018 (UTC)

I made some changes in the wording. Re your other comment, I thought I was already being very explicit that use of standard configuration represents a simplifying assumption, which with care would allow simpler math without invalidating the generality of the conclusions. Prokaryotic Caspase Homolog (talk) 08:59, 8 November 2018 (UTC)

Spacetime diagrams usually compare two frames in standard configuration. So the origin represents ${\displaystyle t=t'=0}$ where all the spatial coordinates line up. The preparatory work is to (1) enable a bare-minimum introduction to spacetime diagrams, since they are currently used in the article with no explanation; (2) allow derivation of the invariant interval from the LTs for the simple case of frames in standard configuration. Prokaryotic Caspase Homolog (talk) 09:22, 8 November 2018 (UTC)

I was primarily triggered by the diff-display and expected some immediate treatment of spacetime diagrams (x/ct), and so I think I misunderstood not only the term "standard configuration", in wrongly binding it to a frame with orthogonal temporal and spatial axes, but also the term "origin", as the event with full-blown coordinates (0,0,0,0), and not as the worldline of (0,0,0). Looking at the whole section, I understand your hint to "already"; nevertheless, I still think that S and S' are depicted as spatial coordinates, whereas the section deals with spacetime coordinates. I experience this as potentially misleading. Let me know, please, when I become a nuisance.
Purgy (talk) 13:42, 8 November 2018 (UTC)

The treatment of Minkowski spacetime diagrams in progress begins with the spatial diagram as a starting point. It probably won't be ready to upload until the weekend. I have been delayed by libsvg bugs. You wouldn't believe how much trouble I had trying to draw a simple green line! I finally gave up on green lines, in favor of another color. Prokaryotic Caspase Homolog (talk) 15:41, 8 November 2018 (UTC)

## Let's discuss

I don't mind adding in motivations, etc., but your additions need a bit of work. Will add edits with notes in a bit. Mostly considerations of language and target audience, etc. Prokaryotic Caspase Homolog (talk) 10:39, 10 November 2018 (UTC)

Well, I don't mind being given the chance to learn from improvements to my writing. I will abuse the tq-template in inserting my remarks directly into your structure. If this intrusion is considered, or turns out to be, inappropriate for some reason, do not hesitate to simply revert my edit. Purgy (talk) 19:50, 10 November 2018 (UTC)

• Edit 1: In everyday experience, people do not routinely measure or think about SQUARED distances or times. A bit verbose.
I was aware of having been wordy, but I am convinced that hammering on the fact that spacetime coordinates have both spatial and temporal components pays the rent for newbie readers. Maybe hammering that the ${\displaystyle \Delta }$ 's (pls, allow for the idiotic apostrophe) in ${\displaystyle x}$ and ${\displaystyle t}$ are squared differences and not differences of squared values, but with ${\displaystyle s}$ it's about the difference of squares, and the "square" has here the meaning of, hmm, what?, is too much. I'm always unsure how to be wordy enough, but not too much.
I definitely want the items ${\displaystyle \Delta t^{2}}$ and ${\displaystyle \Delta x^{2}}$ to appear separately, to contrast these two independent Galilean invariants with the single remaining STR invariant ${\displaystyle \Delta s^{2}.}$
• Edit 2: I believe that a majority of working physicists consider classical physics to mean non-quantum physics, i.e. relativity would be a classical theory.
I give in to "classic" being wrong, but I do want some serious physics in there, to have some contrast to the non-scientific "everyday" experience: Galilean, pre-Einsteinian, non-relativistic, ...
• Edit 3: Counterintuitive to whom? Let's not scare the audience too soon. Also, how new is new? I think 100+ years is not new. :-)
We are acquainted with two Galilean (near)-invariants, and the spacetime interval is in a non-trivial way a new one. I gave two reasons, at least, for counterintuitivity. I thought nowadays they scream for trigger-warnings, mine should scare them away? ;)
• Edit 4: Most students who have learned about Cartesian coordinates are perfectly comfortable with positive and negative distances, times, temperatures etc. Reference to "positive definite", "imaginary time" and "metric signature" should be relegated to a Note, since no attempt is made to provide an inline definition for the reader. They are just "throwaway" terms.
In math, "distances" are non-negative; it takes "pseudo-distances" (like pseudo-Riemannian) to let them be negative. I have no precise notion covering "throwaway term"; I introduced these terms for the possibly rare species that wants to connect their acquaintance with STR with rigorous math, or to satisfy their free associations (i² = -1).
Pushing to a note preserves your thoughts on this while not interrupting the flow for most readers.
• Edit 5: I'll try putting this chunk and the last chunk of text that I deleted into Notes, to see how putting them into Notes works for you. They are obviously matters that are important to you.
I prefer the "Pythagoras" to "The complete form", because it gives a reason (I like to hammer on), and it strengthens the relation to coordinate values. (less wordy?)
How about "an expanded form"? "Pythagorean" is confusing because of the minus sign, and I'm not sure I want to spend the time to explain its significance here. It is certainly an important topic, but I don't want to digress too much. Maybe expand the note?
• Edit 6: I like the overset def. I presume this is a standard form in the math literature?
It is one of the notations I have seen. I prefer it to the \equiv for the latter's many other meanings; I myself used the ":=" which is deprecated in WP, I think for the computer scientists' sake. However, I think it is not appropriate at the second occurrence. It is a def the first time ("along a straight line"!), then it is just a consistent repetition in a detailed section.
Two dimensions versus four dimensions does not seem too much of a repetition.
• Edit 7: Restoring deleted chunk of text as a note.
Note is probably fine.
• Edit 8: Restoring deleted chunk of text as a note.
See #5.
• Edit 9: Minor copyedits, both in my language and in yours. "as such" = "as an invariant": This constitutes the content of the derivation.
I understand what you were trying to say now.
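The contrast discussed in Edits 1 through 3, two independent Galilean (near-)invariants versus the single remaining STR invariant ${\displaystyle \Delta s^{2},}$ can be spot-checked numerically. The sketch below is illustrative only (hypothetical events, speed v = 0.5, units with c = 1, one spatial dimension); the helper names are invented for this sketch:

```python
import math

def galilean(t, x, v):
    """Galilean boost: absolute time, shifted position."""
    return t, x - v * t

def lorentz(t, x, v):
    """Lorentz boost in standard configuration, units with c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def deltas(transform, event1, event2, v):
    """Coordinate differences between two transformed events."""
    (t1, x1), (t2, x2) = transform(*event1, v), transform(*event2, v)
    return t2 - t1, x2 - x1

e1, e2, v = (0.0, 0.0), (3.0, 2.0), 0.5

# Galilean: Delta t is always invariant; Delta x only for simultaneous events.
dt_g, dx_g = deltas(galilean, e1, e2, v)   # 3.0, 0.5 (dx changed from 2.0)

# Lorentz: neither Delta t nor Delta x survives on its own ...
dt_l, dx_l = deltas(lorentz, e1, e2, v)

# ... but the single combination Delta s^2 = Delta t^2 - Delta x^2 does:
print(dt_l ** 2 - dx_l ** 2)  # 5.0 (= 3.0**2 - 2.0**2, up to rounding)
```

The same check with any subluminal v gives the same ${\displaystyle \Delta s^{2},}$ which is the sense in which the two Galilean invariants collapse to one relativistic invariant.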
Annals of Mathematics and Artificial Intelligence. Hybrid journal (it can contain Open Access articles). ISSN (Print) 1573-7470 - ISSN (Online) 1012-2443. Published by Springer-Verlag. • A spatio-temporal framework for managing archeological data • Authors: Alberto Belussi; Sara Migliorini Pages: 175 - 218 Abstract: Space and time are two important characteristics of data in many domains. This is particularly true in the archaeological context where information concerning the discovery location of objects allows one to derive important relations between findings of a specific survey or even of different surveys, and time aspects extend from the excavation time, to the dating of archaeological objects.
In recent years, several attempts have been performed to develop a spatio-temporal information system tailored for archaeological data. The first aim of this paper is to propose a model, called $$\mathcal{S}$$tar, for representing spatio-temporal data in archaeology. In particular, since in this domain dates are often subjective, estimated and imprecise, $$\mathcal{S}$$tar has to incorporate such vague representation by using fuzzy dates and fuzzy relationships among them. Moreover, besides the topological relations, another kind of spatial relations is particularly useful in archeology: the stratigraphic ones. Therefore, this paper defines a set of rules for deriving temporal knowledge from the topological and stratigraphic relations existing between two findings. Finally, considering the process through which objects are usually manually dated by archeologists, some existing automatic reasoning techniques may be successfully applied to guide such process. For this purpose, the last contribution regards the translation of archaeological temporal data into a Fuzzy Temporal Constraint Network for checking the overall data consistency and reducing the vagueness of some dates based on their relationships with other ones. PubDate: 2017-08-01 DOI: 10.1007/s10472-017-9535-0 Issue No: Vol. 80, No. 3-4 (2017) • The RABTree and RAB−Tree: lean index structures for snapshot access in transaction-time databases • Authors: Fabio Grandi Pages: 219 - 245 Abstract: In this work we introduce two lean temporal index structures to efficiently support snapshot access (i.e., timeslice queries) in a transaction-time database. The two proposed structures, the RABTree and its RAB−Tree variant, are conceptually simple, easy to implement and efficient index solutions. In particular, the RABTree index guarantees optimal performances for transaction-time data which are naturally clustered according to their insertion time without redundancy.
A theoretical and experimental evaluation of the two indexes, in comparison with their previously proposed competitors, is also provided. PubDate: 2017-08-01 DOI: 10.1007/s10472-016-9509-7 Issue No: Vol. 80, No. 3-4 (2017) • Erratum to: The RABTree and RAB−Tree: lean index structures for snapshot access in transaction-time databases • Authors: Fabio Grandi Pages: 247 - 247 PubDate: 2017-08-01 DOI: 10.1007/s10472-016-9514-x Issue No: Vol. 80, No. 3-4 (2017) • Parametrized verification diagrams: temporal verification of symmetric parametrized concurrent systems • Authors: Alejandro Sánchez; César Sánchez Pages: 249 - 282 Abstract: This paper studies the problem of verifying temporal properties (including liveness properties) of parametrized concurrent systems executed by an unbounded number of threads. To solve this problem we introduce parametrized verification diagrams (PVDs), that extend the so-called generalized verification diagrams (GVDs) adding support for parametrized verification. Even though GVDs are known to be a sound and complete proof system for non-parametrized systems, the application of GVDs to parametrized systems requires using quantification or finding a potentially different diagram for each instantiation of the parameter (number of threads). As a consequence, the use of GVDs in parametrized verification requires discharging and proving either quantified formulas or an unbounded collection of verification conditions. Parametrized verification diagrams enable the use of a single diagram to represent the proof that all possible instances of the parametrized concurrent system satisfy the given temporal specification. Checking the proof represented by a PVD requires proving only a finite collection of quantifier-free verification conditions. The PVDs we present here assume that the parametrized systems are symmetric, which covers a large class of concurrent and distributed systems, including concurrent data types.
Our second contribution is an implementation of PVDs and its integration into Leap, our prototype theorem prover. Finally, we illustrate empirically, using Leap, the practical applicability of PVDs by building and checking proofs of liveness properties of mutual exclusion protocols and concurrent data structures. To the best of our knowledge, these are the first machine-checkable proofs of liveness properties of these concurrent data types. PubDate: 2017-08-01 DOI: 10.1007/s10472-016-9531-9 Issue No: Vol. 80, No. 3-4 (2017) • Bounded variability of metric temporal logic • Authors: Carlo A. Furia; Paola Spoletini Pages: 283 - 316 Abstract: Deciding validity of Metric Temporal Logic (MTL) formulas is generally very complex and even undecidable over dense time domains; bounded variability is one of the several restrictions that have been proposed to bring decidability back. A temporal model has bounded variability if no more than v events occur over any time interval of length V, for constant parameters v and V. Previous work has shown that MTL validity over models with bounded variability is less complex—and often decidable—than MTL validity over unconstrained models. This paper studies the related problem of deciding whether an MTL formula has intrinsic bounded variability, that is whether it is satisfied only by models with bounded variability. The results of the paper are mainly negative: over dense time domains, the problem is mostly undecidable (even if with an undecidability degree that is typically lower than deciding validity); over discrete time domains, it is decidable with the same complexity as deciding validity. As a partial complement to these negative results, the paper also identifies MTL fragments where deciding bounded variability is simpler than validity, which may provide for a reduction in complexity in some practical cases. PubDate: 2017-08-01 DOI: 10.1007/s10472-016-9532-8 Issue No: Vol. 80, No. 
3-4 (2017) • Computing envelopes in dynamic geometry environments • Authors: Francisco Botana; Tomas Recio Pages: 3 - 20 Abstract: We review the behavior of some popular dynamic geometry software when computing envelopes, relating the diverse methods implemented in these programs with the various definitions of envelope. Special attention is given to the new GeoGebra 5.0 version, that incorporates a mathematically rigorous approach for envelope computations. Furthermore, a discussion on the role, in this context, of the cooperation between GeoGebra and a recent parametric polynomial solving algorithm is detailed. This approach seems to yield accurate results, allowing for the first time sound computations of envelopes of families of plane curves in interactive environments. PubDate: 2017-05-01 DOI: 10.1007/s10472-016-9500-3 Issue No: Vol. 80, No. 1 (2017) • Design and implementation of maple packages for processing offsets and conchoids • Authors: Juana Sendra; David Gómez Sánchez-Pascuala; Valerio Morán Pages: 47 - 64 Abstract: In this paper we present two packages, implemented in the computer algebra system Maple, for dealing with offsets and conchoids to algebraic curves, respectively. Help pages and procedures are described. Also in an annex, we provide a brief atlas, created with these packages, and where the offset and the conchoid of several algebraic plane curves are obtained, their rationality is analyzed, and parametrizations are computed. Practical performance of the implemented algorithms shows that the packages execute in reasonable time; we include time cost tables of the computation of the offset and conchoid curves of two rational families of curves using the implemented packages. PubDate: 2017-05-01 DOI: 10.1007/s10472-016-9504-z Issue No: Vol. 80, No. 1 (2017) • Theory blending: extended algorithmic aspects and examples • Authors: M. Martinez; A. M. H. Abdel-Fattah; U. Krumnack; D. Gómez-Ramírez; A. Smaill; T. R. Besold; A. Pease; M. Schmidt; M. 
Guhe; K.-U. Kühnberger Pages: 65 - 89 Abstract: In Cognitive Science, conceptual blending has been proposed as an important cognitive mechanism that facilitates the creation of new concepts and ideas by constrained combination of available knowledge. It thereby provides a possible theoretical foundation for modeling high-level cognitive faculties such as the ability to understand, learn, and create new concepts and theories. Quite often the development of new mathematical theories and results is based on the combination of previously independent concepts, potentially even originating from distinct subareas of mathematics. Conceptual blending promises to offer a framework for modeling and re-creating this form of mathematical concept invention with computational means. This paper describes a logic-based framework which allows a formal treatment of theory blending (a subform of the general notion of conceptual blending with high relevance for applications in mathematics), discusses an interactive algorithm for blending within the framework, and provides several illustrating worked examples from mathematics. PubDate: 2017-05-01 DOI: 10.1007/s10472-016-9505-y Issue No: Vol. 80, No. 1 (2017) • Foreword to this special issue: conformal and probabilistic prediction with applications • Authors: Alexander Gammerman; Vladimir Vovk PubDate: 2017-07-08 DOI: 10.1007/s10472-017-9557-7 • A symbolic algebra for the computation of expected utilities in multiplicative influence diagrams • Authors: Manuele Leonelli; Eva Riccomagno; Jim Q. Smith Abstract: Influence diagrams provide a compact graphical representation of decision problems. Several algorithms for the quick computation of their associated expected utilities are available in the literature. However, often they rely on a full quantification of both probabilistic uncertainties and utility values. 
For problems where all random variables and decision spaces are finite and discrete, here we develop a symbolic way to calculate the expected utilities of influence diagrams that does not require a full numerical representation. Within this approach expected utilities correspond to families of polynomials. After characterizing their polynomial structure, we develop an efficient symbolic algorithm for the propagation of expected utilities through the diagram and provide an implementation of this algorithm using a computer algebra system. We then characterize many of the standard manipulations of influence diagrams as transformations of polynomials. We also generalize the decision analytic framework of these diagrams by defining asymmetries as operations over the expected utility polynomials.
PubDate: 2017-06-21
DOI: 10.1007/s10472-017-9553-y

• Conformal decision-tree approach to instance transfer
Authors: S. Zhou; E. N. Smirnov; G. Schoenmakers; R. Peeters
Abstract: Instance transfer for classification aims at boosting the generalization performance of classification models for a target domain by exploiting data from a relevant source domain. Most instance-transfer approaches assume that the source data are relevant to the target data for the complete set of features used to represent the data. This assumption fails if the target data and source data are relevant only for strict subsets of the input features, which we call "partially input-feature relevant". In this case these approaches may result in sub-optimal classification models or even in negative transfer. This paper proposes a new decision-tree approach to instance transfer when the source data are partially input-feature relevant to the target data. The approach selects input features for tree nodes using univariate transfer of source instances. The instance transfer is guided by a conformal test for source-relevance estimation. Experimental results on real-world data sets demonstrate that the new decision-tree approach is capable of outperforming existing instance-transfer approaches, especially when the source data are partially input-feature relevant to the target data.
PubDate: 2017-06-17
DOI: 10.1007/s10472-017-9554-x

• Conformal prediction of biological activity of chemical compounds
Authors: Paolo Toccaceli; Ilia Nouretdinov; Alexander Gammerman
Abstract: The paper presents an application of Conformal Predictors to a chemoinformatics problem of predicting the biological activities of chemical compounds. The paper addresses some specific challenges in this domain: a large number of compounds (training examples), high dimensionality of the feature space, sparseness, and a strong class imbalance. A variant of conformal predictors called the Inductive Mondrian Conformal Predictor is applied to deal with these challenges. Results are presented for several non-conformity measures extracted from underlying algorithms and different kernels. A number of performance measures are used in order to demonstrate the flexibility of Inductive Mondrian Conformal Predictors in dealing with such a complex set of data. This approach allowed us to identify the most likely active compounds for a given biological target and present them in a ranking order.
PubDate: 2017-06-16
DOI: 10.1007/s10472-017-9556-8

• Guest Editorial: Temporal representation and reasoning
Authors: Carlo Combi
PubDate: 2017-06-14
DOI: 10.1007/s10472-017-9555-9

• On tree-preserving constraints
Authors: Shufeng Kong; Sanjiang Li; Yongming Li; Zhiguo Long
Abstract: The study of tractable subclasses of constraint satisfaction problems is a central topic in constraint solving. Tree convex constraints are extensions of the well-known row convex constraints. Just like the latter, every path-consistent tree convex constraint network is globally consistent. However, it is NP-complete to decide whether a tree convex constraint network has solutions. This paper studies and compares three subclasses of tree convex constraints, which are called chain-, path-, and tree-preserving constraints respectively. The class of tree-preserving constraints strictly contains the subclasses of path-preserving and arc-consistent chain-preserving constraints. We prove that, when enforcing strong path-consistency on a tree-preserving constraint network, in each step the network remains tree-preserving. This ensures the global consistency of consistent tree-preserving networks after enforcing strong path-consistency, and also guarantees the applicability of partial path-consistency algorithms to tree-preserving constraint networks, which are usually much more efficient than path-consistency algorithms for large sparse constraint networks. As an application, we show that the class of tree-preserving constraints is useful in solving the scene labelling problem.
PubDate: 2017-05-29
DOI: 10.1007/s10472-017-9552-z

• Nonlinear multi-output regression on unknown input manifold
Authors: Alexander Kuleshov; Alexander Bernstein
Abstract: Consider an unknown smooth function which maps high-dimensional inputs to multidimensional outputs and whose domain of definition is an unknown low-dimensional input manifold embedded in an ambient high-dimensional input space. Given a training dataset consisting of 'input-output' pairs, the regression-on-input-manifold problem is to estimate the unknown function and its Jacobian matrix, as well as to estimate the input manifold. By transforming high-dimensional inputs into their low-dimensional features, the initial regression problem is reduced to a certain regression-on-feature-space problem. The paper presents a new geometrically motivated method for solving both interrelated regression problems.
PubDate: 2017-05-16
DOI: 10.1007/s10472-017-9551-0

• Tennis manipulation: can we help Serena Williams win another tournament?
Authors: Lior Aronshtam; Havazelet Cohen; Tammar Shrot
Abstract: This article focuses on the question of whether a certain candidate's (player's) chance to advance further in a tennis tournament can be increased when the ordering of the tournament can be controlled (manipulated by the organizers) according to his own preferences. Is it possible to increase the number of ranking points a player will receive? And most importantly, can it be done in reasonable computational time? The answers to these questions differ for different settings, e.g., the information available on the outcome of each game, or the significance of the number of points gained or of the number of games won. We analyzed five different variations of these tournament questions. First the complexity hardness of trying to control the tournaments is shown. Then the tools of parametrized complexity are used to investigate the source of the problems' hardness. Specifically, we check whether this hardness holds when the size of the problem is bounded. The findings of this analysis show that it is possible under certain circumstances to control the tournament in favor of a specific candidate in order to help him advance further in the tournament.
PubDate: 2017-04-24
DOI: 10.1007/s10472-017-9549-7

• To be fair, use bundles
Authors: John McCabe-Dansted; Mark Reynolds
Abstract: Attempts to manage the reasoning about systems with fairness properties are long running. The popular but restricted Computational Tree Logic (CTL) is amenable to automated reasoning but has difficulty expressing some fairness properties. More expressive languages such as CTL* and CTL+ are computationally complex. The main contribution of this paper is to show the usefulness and practicality of employing the bundled variants of these languages to handle fairness. In particular we present a tableau for a bundled variant of CTL that still has a computational complexity similar to that of the CTL tableau and a simpler implementation. We further show that the decision problem remains in EXPTIME even if a bounded number of CTL* fairness constraints are allowed in the input formulas. By abandoning limit closure, the bundled logics can simultaneously be easier to automate and express many typical fairness constraints.
PubDate: 2017-04-19
DOI: 10.1007/s10472-017-9546-x

• Preface for the special issue devoted to AISC 2014
Authors: Jacques Calmet
PubDate: 2017-04-12
DOI: 10.1007/s10472-017-9548-8

• Robust visual tracking using information theoretical learning
Authors: Weifu Ding; Jiangshe Zhang
Abstract: This paper presents a novel online object tracking algorithm with sparse representation for learning effective appearance models under a particle filtering framework. Compared with the state-of-the-art ℓ1 sparse tracker, which simply assumes that the image pixels are corrupted by independent Gaussian noise, our proposed method is based on information theoretical learning and is much less sensitive to corruptions; it achieves this by assigning small weights to occluded pixels and outliers. The most appealing aspect of this approach is that it can yield robust estimations without using the trivial templates adopted by the previous sparse tracker. By using weighted linear least squares with non-negativity constraints at each iteration, a sparse representation of the target candidate is learned; to further improve the tracking performance, target templates are dynamically updated to capture appearance changes. In our template update mechanism, the similarity between the templates and the target candidates is measured by the earth mover's distance (EMD). Using the largest open benchmark for visual tracking, we empirically compare two ensemble methods constructed from six state-of-the-art trackers against the individual trackers. The proposed tracking algorithm runs in real time, and on challenging sequences performs favorably in terms of efficiency, accuracy and robustness against state-of-the-art algorithms.
PubDate: 2017-03-23
DOI: 10.1007/s10472-017-9543-0

• Hyper-arc consistency of polynomial constraints over finite domains using the modified Bernstein form
Authors: Federico Bergenti; Stefania Monica
Abstract: This paper describes an algorithm to enforce hyper-arc consistency of polynomial constraints defined over finite domains. First, the paper describes the language of so-called polynomial constraints over finite domains, and it introduces a canonical form for such constraints. Then, the canonical form is used to transform the problem of testing the satisfiability of a constraint in a box into the problem of studying the sign of a related polynomial function in the same box, a problem which is effectively solved by using the modified Bernstein form of polynomials. The modified Bernstein form of polynomials is briefly discussed, and the proposed hyper-arc consistency algorithm is finally detailed. The proposed algorithm is a subdivision procedure which, starting from an initial approximation of the domains of variables, removes values from domains to enforce hyper-arc consistency.
PubDate: 2017-03-20
DOI: 10.1007/s10472-017-9544-z
https://dsp.stackexchange.com/questions/47718/what-the-entropy-equations-mean
# What do the entropy equations mean?

In wavelet packet image compression, different types of entropy methods can be used, like Shannon and log-energy.

• Shannon entropy uses the equation $\mathrm{ent}= -\sum (x^2 \times \log(x^2))$, while the log-energy equation is $\mathrm{ent}=\sum(\log x^2)$. What do these equations mean?
• Which technique can achieve better results for images (in terms of PSNR)? And how can I determine the appropriate technique?

• I don't understand what you mean by "different types of entropy methods can be used": entropy is a property of a stochastic signal source. You seem to be referring to a very specific kind of wavelet-decomposition-based algorithm, but don't mention which one that is. A decomposition is a thing that you apply to a signal, a method. Entropy is not a method. – Marcus Müller Mar 9 '18 at 22:16
• @MarcusMüller In image processing, people use "entropy" in a different way than in communications. It's somewhat related to the information content of the image, but I don't understand it very well myself. – MBaz Mar 9 '18 at 23:28
• Huh, OK; still, I feel a bit left out on what wavelet packet decomposition the OP is talking about and would very much like to know where I can read about it. Gimme a buzzword. – Marcus Müller Mar 9 '18 at 23:56
• Entropy is used as a criterion to decide whether a wavelet packet branch should be further decomposed, or to prune a wavelet packet tree. This carries similarities with segmentation techniques like split-and-merge – Laurent Duval Mar 10 '18 at 0:10
• I recommend you plot the expression inside the sum (-x^2 log(x^2)). I assume that the x are probabilities (or weights), so you would generate x in the range between zero and one. In the Shannon case you will see that probabilities around zero and one are assigned very low values, so if you are looking for minimum entropy, these are the cases which are favoured. Probabilities around 1/2, however, are assigned the maximum value and so they are of less interest. – Irreducible Mar 12 '18 at 6:44

Entropy can be used as a criterion to decide whether a wavelet packet branch should be further decomposed, or to prune a wavelet packet tree. This carries similarities with segmentation techniques like split-and-merge.

Let us take one sign convention for entropy. One hopes that the entropy diminishing locally across some levels means that this part of the image is simplified by one more wavelet decomposition, and that when the entropy starts to increase (or vice versa), further decomposition levels become useless for simplification. Following the entropy gradient is a way to find some "optimal" wavelet packet splitting.

But first, optimal for what purpose is unclear from your question so far: denoising, compression? Entropy can provide a nice guess to predict compression performance with an actual coder, but it should be applied to binned/quantized wavelet coefficients, and may be limited unless you use higher-order/cross-entropy to take into account advanced vector coders (as compared to scalar ones). Second, traditional wavelet packets for images are implemented separably, and notoriously, separable schemes and packets lack efficient representations for piecewise regular images and induce increased aliasing, which may yield only mildly efficient decompositions.

Codes implementing such decompositions are available in Matlab, see for instance wpdec2: Wavelet packet decomposition 2-D, and the reference paper Coifman, R.R.; M.V. Wickerhauser (1992), "Entropy-based algorithms for best basis selection," IEEE Trans. on Inf. Theory, vol. 38, no. 2, pp. 713–718.

• I use wavelet packets in image compression; I mean better in terms of image quality (PSNR). Which of Shannon and log-energy entropy can achieve higher PSNR? – user24907 Mar 10 '18 at 16:51
• Honestly, I have no general response. This is a topic I am currently re-investigating. It depends a lot on the nature of the class of images you consider – Laurent Duval Mar 10 '18 at 17:00
• I use the Lena image; I'll be grateful if you could help me with a source on this topic. – user24907 Mar 11 '18 at 9:15
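For concreteness, the two cost functions quoted in the question can be sketched in a few lines (my own illustration, not code from the thread; the convention that zero coefficients contribute nothing is the usual one):

```python
import math

def shannon_entropy(coeffs):
    # ent = -sum(x^2 * log(x^2)) over the coefficients,
    # with the convention 0 * log(0) = 0.
    return -sum(x * x * math.log(x * x) for x in coeffs if x != 0)

def log_energy_entropy(coeffs):
    # ent = sum(log(x^2)), with the convention log(0) = 0.
    return sum(math.log(x * x) for x in coeffs if x != 0)

# A vector with its energy concentrated in one coefficient scores lower
# than a spread-out vector of the same energy, which is why these costs
# favour splits that "simplify" a branch.
concentrated = [1.0, 0.0, 0.0, 0.0]   # shannon_entropy -> 0.0
spread = [0.5, 0.5, 0.5, 0.5]         # shannon_entropy -> 2*ln(2) ~ 1.386
```

In a best-basis search, a node is split whenever the total cost of its children falls below the cost of the parent, so only the relative ordering of these values matters.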
https://cs.stackexchange.com/questions/60579/is-this-solution-to-the-dining-philosophers-problem-entirely-valid/117406
# Is this solution to the dining philosopher's problem entirely valid?

In a question on Stack Overflow, the answer by Patrick Trentin lists the following solution to the dining philosopher's problem:

A possible approach for avoiding deadlock without incurring starvation is to introduce the concept of an altruistic philosopher, that is, a philosopher which drops the fork that it holds when it realises that it cannot eat because the other fork is already used. Having just one altruistic philosopher is sufficient in order to avoid deadlock, and thus all other philosophers can continue being greedy and attempt to grab as many forks as possible. However, if you never change who the altruistic philosopher is during your execution, then you might end up with that guy starving himself to death. Thus, the solution is to change the altruistic philosopher infinitely often. A fair strategy is to change it in a round-robin fashion each time one philosopher acts altruistically. In that way, no philosopher is penalised and you can actually verify that no philosopher starves, under the fairness condition that the scheduler gives every philosopher a chance to execute infinitely often.

I was intrigued by this solution because I hadn't heard it before. However, I can find no references to it anywhere in the existing literature.

• I did a quick check, and saw that it isn't listed on Wikipedia as a standard solution.
• Googling 'altruistic philosopher dining' yields only one result from Google Books that does not discuss the round-robin strategy and only vaguely alludes to the altruistic philosopher to talk about the perils of starvation.
• I tried looking up altruistic philosopher on arXiv, only to return no results.
• Trentin cites a book about nuXmv, a symbolic model checker, as his source but only provides a link to his own lab slides.

So my question: does this naive yet seemingly remarkable solution actually work? Or are there pitfalls that have been overlooked?
I find it hard to believe that, if it works, it wouldn't have at least some mention somewhere - but at the same time I can't see why it wouldn't work, as it's a fairly naive solution. If it does work, can I either a) have a reference to a proof, or b) an actual proof that this solves the dining philosopher's problem? If it doesn't, the same standards apply, except for the negation of a proof. :)

• Coffman in 1971 wrote about four simultaneous conditions for deadlock - if you break at least one of them, it is like a tie-breaker. With an altruist there is no more circular wait and your system becomes preemptive. – Evil Jul 14 '16 at 5:27
• A formal proof can be quite tedious, see for example cs.stackexchange.com/questions/45763/… The techniques are similar here; try proving this yourself. If you are still stuck, I will try to add a proof later, but see the linked post to realize what you are asking (and don't be disappointed if the proof is full of indexes and doesn't give you any special insight; "formal" proofs for the correctness of even supposedly trivial concurrent protocols might surprise you). – Ariel Jul 14 '16 at 15:47
• The word formal appears in quotation marks since treating concurrent algorithms formally is an entire field, and one must define precisely all the basic notions (e.g. how time is represented). – Ariel Jul 14 '16 at 15:50
• @Ariel Then I will settle for knowledge that a proof or falsification of the validity of the solution exists. :) – Akshat Mahajan Jul 14 '16 at 16:25

The description of an altruistic philosopher given in this answer, that you quoted in your question, is slightly imprecise. What is really going on is that:

A possible approach for avoiding deadlock without incurring starvation is to introduce the concept of an altruistic philosopher, that is, a philosopher which drops the fork that it holds when it realizes that no one can eat because there is a deadlock situation.

So the actual condition for altruism is stronger than the one in your quote. Another important condition is that the fork that is put down by the altruistic philosopher is handed over to the philosopher waiting for it, e.g. by waking up the process that is waiting on the resource. (Otherwise the deadlock situation could actually reappear one instant later.) I apologize for the confusion I may have created in this regard. In my Stack Overflow answer I simplified the actual model to convey the basic idea, but this left out some details that are important for "formally proving" the correctness of the solution.

Note: I don't have the time to write down a formal proof at this moment, so I will try to make a simple argument.

The pre-conditions are as follows:
• Ph_i is the altruistic philosopher
• Ph_i holds the left fork and the right fork is held by its successor Ph_{i+1}
• There must be a deadlock: every Ph_k is holding the left fork

The actions performed are:
• Ph_i gives his left fork to its predecessor Ph_{i-1}

The post-conditions are as follows:
• Ph_{i+1} becomes the new altruistic philosopher
• Ph_{i-1} is now holding two forks and can eat
• After eating, Ph_{i-1} puts down both forks.

The important invariant is that the predecessor of the altruistic philosopher is guaranteed to eat after a deadlock situation. Given N philosophers, the round-robin policy guarantees that each philosopher becomes altruistic exactly once in a sequence of N deadlocks. Hence, each philosopher is guaranteed to eat (at least) once every N deadlocks (because it is the predecessor of an altruistic philosopher exactly once every N deadlocks). This is, of course, the analysis of the worst-case possible outcome. Depending on the workload, deadlocks may not arise so frequently, and in that case each philosopher would have the chance to eat pretty much anytime it wants.
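The round-robin invariant can also be checked mechanically. The sketch below (my own illustration, not code from the cited nuXmv material) models only the deadlock-resolution step with its pre- and post-conditions, and confirms that in the worst case of a deadlock after every meal, each of n philosophers eats exactly once per n consecutive deadlocks:

```python
def resolve_deadlock(n, altruist):
    # Pre-condition: all n philosophers hold their left fork (deadlock).
    # Action: the altruist hands its fork to its predecessor.
    # Post-conditions: the predecessor eats (then puts both forks down)
    # and the altruist role moves round-robin to the successor.
    eater = (altruist - 1) % n
    next_altruist = (altruist + 1) % n
    return eater, next_altruist

def eaters_over_n_deadlocks(n):
    # Worst case: the system re-deadlocks immediately after every meal.
    altruist, eaters = 0, []
    for _ in range(n):
        eater, altruist = resolve_deadlock(n, altruist)
        eaters.append(eater)
    return eaters

# sorted(eaters_over_n_deadlocks(5)) gives [0, 1, 2, 3, 4]:
# every philosopher eats exactly once per 5 deadlocks, so none starves.
```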
The naive solution to the dining philosophers presented in this answer is quite effective for the limited purposes of teaching Model Checking / Formal Verification techniques. However, I would not assume that it is the best engineering solution for practical situations because it requires one to detect a deadlock. Also, it may be hard to reduce practical problems to the simple scenario of the dining philosophers (e.g. there could be many more resources involved, the order in which resources are grabbed is not fixed, etc.). • I'm not sure the stated set of actions is admissible. As far as I know, the admissible set of actions is: 1) try to pick up the left fork 2) put it down 3) try to pick up the right fork 4) put it down. A philosopher can do a lot of internal bookkeeping but cannot communicate with another philosopher other than through these four actions. – reinierpost Nov 21 '19 at 12:33 • @reinierpost that is quite a relevant observation about a potential limitation that may or may not apply depending on who you ask (e.g. Wikipedia lists an arbitrator solution which would not be much different from this). Anyway, that is also one of the reasons why I stated that the solution is probably only academically interesting for the purposes of teaching Model Checking / Formal Verification. There rarely are one-fit-all solutions, after all. – Patrick Trentin Nov 21 '19 at 12:37
https://www.physicsforums.com/threads/this-is-clearly-symmetric.270803/
# This is clearly symmetric!

1. Nov 10, 2008

I was reading about the momentum-energy tensor (or stress-energy tensor); at one point the author says, "
$$\theta^{\mu\nu} = (\partial^\mu\phi)(\partial^\nu\phi) - g^{\mu\nu}L$$
This is clearly symmetric in $$\mu$$ and $$\nu$$."

$$\theta^{\mu\nu}$$ is the stress-energy tensor, $$\phi$$ is a scalar field, $$g^{\mu\nu}$$ is the metric (+---), and L is the Lagrangian density.

My question (I'm not an expert in tensors) is: how do you see that it's "clearly" symmetric? Another silly question: when do we need to symmetrize and antisymmetrize tensors? Please tell me, guys, if this isn't the right place for my question.

2. Nov 10, 2008

### cristo Staff Emeritus

It's symmetric because:
- g is symmetric since it's the metric, thus the second term is symmetric.
- The first term is the product of two derivatives, so it is symmetric (i.e. $\partial^0\phi \partial^1\phi \equiv \partial^1\phi\partial^0\phi$, and similarly for other values of mu and nu.)

3. Nov 10, 2008

### astros

Sometimes one has a product of a symmetric tensor with another tensor which is neither symmetric nor antisymmetric; then one can show that the antisymmetric part of the second is killed by the first. The same thing occurs for the antisymmetric case. This is why we need to antisymmetrise and symmetrise tensors: to see which part remains and which part is killed...

4. Oct 25, 2011

### matteo137

Suppose you have a generic tensor T, which isn't symmetric or antisymmetric. You can ALWAYS write it as the sum of its symmetric part and its antisymmetric part, e.g. for a rank-2 tensor
$$T^{\mu\nu}=\frac{1}{2}(T^{\mu\nu}+T^{\nu\mu}) + \frac{1}{2}(T^{\mu\nu}-T^{\nu\mu})$$
On the right-hand side, the first term $\left(\bullet+\bullet\right)$ is a symmetric tensor, while the second $\left(\bullet-\bullet\right)$ is antisymmetric! Hence, in general: $T=T_S+T_A$.
Now...if you have a product between tensors, and you know that one of them is explicitly symmetric $S$ or antisymmetric $A$...you can write: $ST=ST_S+ST_A$ and $AT=AT_S+AT_A$. Of course the full contraction of a symmetric tensor with an antisymmetric one is zero, i.e. $SA=AS=0$! And thus $ST=ST_S+ST_A=ST_S$ and $AT=AT_S+AT_A=AT_A$.
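To make that last claim explicit, the vanishing of the full contraction of a symmetric tensor $S$ with an antisymmetric tensor $A$ follows from relabelling the dummy indices:

```latex
S^{\mu\nu} A_{\mu\nu}
  = S^{\nu\mu} A_{\nu\mu}     % relabel the dummy indices \mu \leftrightarrow \nu
  = -\,S^{\mu\nu} A_{\mu\nu}  % use S^{\nu\mu} = S^{\mu\nu} and A_{\nu\mu} = -A_{\mu\nu}
\quad\Longrightarrow\quad
S^{\mu\nu} A_{\mu\nu} = 0 .
```

This is also why, in the stress-energy tensor above, a contraction with the symmetric metric $g^{\mu\nu}$ only ever picks out the symmetric part of whatever it multiplies.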
https://stats.stackexchange.com/questions/371579/splitting-range-of-independent-variable-to-maximize-prediction-within-the-subran
# Splitting range of independent variable to maximize prediction within the subranges

I have a dataset with two independent variables $$X,Z \in \mathbb{R}$$ and a dependent variable $$Y \in \mathbb{R}$$. This dataset has the following characteristics:

• given some number $$z$$ and a "small" $$\varepsilon > 0$$, if we consider only the observations where $$Z \in [z, z+\varepsilon]$$ we find out that the relation $$Y=f(X)$$ in that subset is estimable quite easily with a polynomial function (it is a physical phenomenon).
• if we consider an adjacent interval in some sense (i.e. $$[z,z+2\varepsilon]$$ or $$[z+\varepsilon,z+2\varepsilon]$$) the previous estimate may either still be a good estimate or not, with respect to some criteria.

I'm asked to "find a clusterization in $$N$$ adjacent intervals with respect to $$Z$$ where each interval is the largest possible interval where there exists a good fit $$Y=f(X)$$".

How should I approach this problem? Is there any literature where problems like this have been studied? What would you recommend? I think this problem may be a "clustering of functionals" or something like that. I hope I made myself clear; it is a bit difficult to explain, so feel free to ask for more details.

• This is a solid question, and it has good application in automotive and complex system engineering. The question behind the question is "where does linearity break down". Your polynomial model is a Taylor series approximation of the actual, and so how do you determine where you need to put your dividing lines? Many folks would say that when you get more than 2% error you need to use an updated model (aka move to another cluster). This problem has been around for decades in control system engineering. – EngrStudent Oct 15 '18 at 18:31
• It would probably help if you could provide a small example dataset. At any rate, what you're describing does not seem to be clustering, in the typical sense of the term.
– gung Oct 15 '18 at 19:40
• Please revise if my edit of the title is all right for you. Your problem does not seem to me a cluster analysis problem at all – ttnphns Oct 19 '18 at 10:17
• You are right, this title suits the problem better, thank you – edoedoedo Oct 21 '18 at 20:48

Here's a go at the (very interesting!) problem. If we can assume some kind of a distribution on $$Y|X, Z, \theta$$, for example a polynomial regression as you suggested, say $$Y|X, Z, \theta \sim N(f_{\theta_Z}(X), \sigma_n^2)$$, we obtain the conditional distribution $$p(Y|X, Z, \theta)$$. To construct a full posterior, one needs the joint distribution $$p(Y, X, Z, \theta)$$. The remaining piece, considering the conditional distribution above, is $$p(X, Z, \theta)$$.

Answering the other part of your question: fitting $$Y=f(X)$$ for different intervals of $$Z$$ is equivalent to finding the set of parameters $$\theta(Z)$$ over the range of $$Z$$, which a Gaussian process (GP) could model beautifully. You enforce a different $$\theta$$ for a prespecified number of intervals by treating the $$\theta$$s and the interval bounds $$[Z_{(i)}]$$ as parameters. On the other hand, you could enforce continuity in the function $$\theta(Z)$$ by using, for example, an RBF kernel. This way, we'd have the distribution $$p(\theta|Z)$$ (a multivariate normal), and the "learning" could be done by using MCMC techniques (using Stan, for example) or by simply finding the set of parameters that maximises the joint likelihood described here (using Tensorflow or any other optimiser). After obtaining a fit, I guess that it'd be possible to use other parametric models to replace the GP once you're more confident about what the function $$\theta(Z)$$ looks like.

Side note: I've used GPs in vaguely similar ways before (to model "latent mappings" I guess I'd call them, which are essentially functionals), and I've had some great results. They're quite powerful when framed well.
• Thank you, I'll try to implement your solution to see if I can get it to work! – edoedoedo Oct 21 '18 at 20:49
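As a non-Bayesian baseline for the original question, one can also grow each $$Z$$-interval greedily until the per-interval fit degrades. This is my own sketch, not part of the GP answer: a degree-1 least-squares fit stands in for the polynomial $$f$$ (higher degrees work the same way), and "good fit" is an RMS-error tolerance in the spirit of the 2%-error comment above:

```python
def rmse_linear(points):
    # RMS residual of an OLS line y = a + b*x through (x, y) points.
    # Assumes at least two distinct x values (else the denominator is 0).
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return (sum((y - (a + b * x)) ** 2 for x, y in points) / n) ** 0.5

def greedy_intervals(data, tol):
    # data: (z, x, y) triples sorted by z. Keep absorbing the next
    # observation while the fit over the current interval stays within
    # tol; when it breaks, close the interval and start a new one.
    intervals, start = [], 0
    for end in range(2, len(data) + 1):
        pts = [(x, y) for _, x, y in data[start:end]]
        if len(pts) >= 3 and rmse_linear(pts) > tol:
            intervals.append((data[start][0], data[end - 2][0]))
            start = end - 1
    intervals.append((data[start][0], data[-1][0]))
    return intervals

# Two clean regimes: Y = X for Z in [0,4], Y = 2X for Z in [5,9].
data = [(z, float(z), float(z)) for z in range(5)] + \
       [(z, float(z), 2.0 * float(z)) for z in range(5, 10)]
intervals = greedy_intervals(data, tol=0.1)  # -> [(0, 4), (5, 9)]
```

Being greedy, this does not guarantee the globally largest intervals; for that, a dynamic program over candidate breakpoints (or the GP formulation above) would be the next step.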
https://fanography.pythonanywhere.com/2-8
Fanography: a tool to visually study the geography of Fano 3-folds.

Identification: Fano variety 2-8
1. double cover of 2-35 with branch locus an anticanonical divisor such that the intersection with the exceptional divisor is smooth
2. double cover of 2-35 with branch locus an anticanonical divisor such that the intersection with the exceptional divisor is singular but reduced

Picard rank: 2 (others)
$-\mathrm{K}_X^3$: 14
$\mathrm{h}^{1,2}(X)$: 9

Hodge diamond:
      1
    0   0
  0   2   0
0   9   9   0
  0   2   0
    0   0
      1

Anticanonical bundle:
index: 1
$\dim\mathrm{H}^0(X,\omega_X^\vee)$: 10
$-\mathrm{K}_X$ very ample? yes
$-\mathrm{K}_X$ basepoint free? yes
hyperelliptic: no
trigonal: no

Birational geometry: This variety is not rational but unirational. This variety is primitive.

Deformation theory:
number of moduli: 1. 18; 2. 17

$\mathrm{Aut}^0(X)$ | $\dim\mathrm{Aut}^0(X)$ | number of moduli
$0$ | 0 | 18

Period sequence: The following period sequences are associated to this Fano 3-fold: GRDB #144, Fanosearch #26
https://proxieslive.com/tag/int_1inftysqrtfraclog/
## $\int_1^{\infty}\sqrt{\frac{\log x}{x^4+1}}dx=$??

Consider the terrible-horrible-not-good-very-bad integral $$I=\int_1^{\infty}\sqrt{\frac{\log x}{x^4+1}}\,dx$$ where of course $$\log x$$ denotes the natural logarithm. I don’t know where to even begin, because I can’t think of any series that would give the integral. I’m sure the integrand doesn’t have an elementary antiderivative, and I have no idea what an appropriate substitution for Feynman integration would be. I thought it might be beneficial to try to simplify it with the substitution $$\log x=t$$, but I can’t see that getting anywhere. Please help.
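Not a closed form, but a quick numerical sanity check is easy. The rewriting below is an editorial addition, not from the question: substituting $x = e^t$ gives $I=\int_0^\infty \sqrt{t}/\sqrt{2\cosh 2t}\,dt$, and then $t=u^2$ removes the square-root singularity at the lower endpoint. Since $e^{2t}\le 2\cosh 2t\le 2e^{2t}$, the value is rigorously bracketed between $\Gamma(3/2)/\sqrt{2}\approx 0.627$ and $\Gamma(3/2)=\sqrt{\pi}/2\approx 0.886$. A sketch using only the standard library (the Simpson helper is ad hoc):

```python
import math

# Substituting x = exp(t):  I = ∫₀^∞ sqrt(t) / sqrt(2*cosh(2t)) dt.
# Substituting t = u**2:    I = ∫₀^∞ 2*u**2 / sqrt(2*cosh(2*u**2)) du,
# which is smooth at u = 0 and decays like u**2 * exp(-u**2).
def g(u):
    return 2.0 * u * u / math.sqrt(2.0 * math.cosh(2.0 * u * u))

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0

# Truncating at u = 7 (i.e., t = 49) leaves a neglected tail of order exp(-49).
I = simpson(g, 0.0, 7.0, 4000)
print(I)  # lies between sqrt(pi)/(2*sqrt(2)) ≈ 0.627 and sqrt(pi)/2 ≈ 0.886
```

This at least gives a target value against which any proposed closed form can be checked.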
https://ask.allseenalliance.org/answers/4713/revisions/
# Revision history

Applications built using the Standard Core library always connect to a routing node running on the same device (or to a routing node bundled with the application). Applications using the Thin Core library discover routing nodes nearby and connect to one based on a selection algorithm. When the TCP transport is used, you can find the routing node to which a Thin Core leaf node has connected via the output of the netstat command. When using the UDP transport, this determination is more involved: you can either print the unique name of the leaf node, which indicates the routing node to which it connected, or analyze the traffic using Wireshark. Given the nature of AllJoyn (message-based communication between various devices on a LAN), Wireshark is a very valuable tool for debugging issues. In your case, you can also try tuning routing node parameters. A helpful guide is here.
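The netstat approach can be sketched as follows. Note that both the sample line and the routing-node port (9955, a commonly cited AllJoyn default) are assumptions for illustration, not facts from this answer; on a real device you would pipe the output of `netstat -tn` instead of echoing a sample.

```shell
# Hypothetical netstat-style output line; 9955 is an assumed default
# AllJoyn routing-node TCP port, used here for illustration only.
sample="tcp        0      0 192.168.1.23:49152    192.168.1.10:9955     ESTABLISHED"

# Column 5 of an ESTABLISHED line is the remote endpoint, i.e., the
# routing node the Thin Core leaf node has connected to.
printf '%s\n' "$sample" | awk '$6 == "ESTABLISHED" {print $5}'
```

On a live system, replace the `printf` with `netstat -tn` and filter on the port your routing node listens on.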
https://scipost.org/submissions/1712.08657v2/
# Two-point boundary correlation functions of dense loop models ### Submission summary As Contributors: Jesper Lykke Jacobsen · Alexi Morin-Duchesne Arxiv Link: http://arxiv.org/abs/1712.08657v2 (pdf) Date accepted: 2018-03-14 Date submitted: 2018-03-05 01:00 Submitted by: Morin-Duchesne, Alexi Submitted to: SciPost Physics Academic field: Physics Specialties: Condensed Matter Physics - Theory Mathematical Physics Approach: Theoretical ### Abstract We investigate six types of two-point boundary correlation functions in the dense loop model. These are defined as ratios $Z/Z^0$ of partition functions on the $m\times n$ square lattice, with the boundary condition for $Z$ depending on two points $x$ and $y$. We consider: the insertion of an isolated defect (a) and a pair of defects (b) in a Dirichlet boundary condition, the transition (c) between Dirichlet and Neumann boundary conditions, and the connectivity of clusters (d), loops (e) and boundary segments (f) in a Neumann boundary condition. For the model of critical dense polymers, corresponding to a vanishing loop weight ($\beta = 0$), we find determinant and pfaffian expressions for these correlators. We extract the conformal weights of the underlying conformal fields and find $\Delta = -\frac18$, $0$, $-\frac3{32}$, $\frac38$, $1$, $\tfrac \theta \pi (1+\tfrac{2\theta}\pi)$, where $\theta$ encodes the weight of one class of loops for the correlator of type f. These results are obtained by analysing the asymptotics of the exact expressions, and by using the Cardy-Peschel formula in the case where $x$ and $y$ are set to the corners. For type b, we find a $\log|x-y|$ dependence from the asymptotics, and a $\ln (\ln n)$ term in the corner free energy. This is consistent with the interpretation of the boundary condition of type b as the insertion of a logarithmic field belonging to a rank two Jordan cell. 
For the other values of $\beta = 2 \cos \lambda$, we use the hypothesis of conformal invariance to predict the conformal weights and find $\Delta = \Delta_{1,2}$, $\Delta_{1,3}$, $\Delta_{0,\frac12}$, $\Delta_{1,0}$, $\Delta_{1,-1}$ and $\Delta_{\frac{2\theta}\lambda+1,\frac{2\theta}\lambda+1}$, extending the results of critical dense polymers. With the results for type f, we reproduce a Coulomb gas prediction for the valence bond entanglement entropy of Jacobsen and Saleur.

Published as SciPost Phys. 4, 034 (2018)

Dear editor,

We thank the referees for their reports on our article. Please find below a list of comments and changes made in response to the second referee’s report.

All the best,

Alexi Morin-Duchesne and Jesper Jacobsen

### List of changes

0. We have adapted the spelling to the best of our knowledge so that British English is used throughout.

1. 2. 3. We have made the changes requested by the referee.

4. Our claim was indeed incorrect. We have changed the text to remove this false statement.

5. The sentence just below (2.22) has been fixed.

6. The parentheses have been removed.

7. As explained above (2.21), for generic values of q, and for v a linear combination of link states, <v| is obtained by taking the dagger of |v>, with q mapped to q and not to 1/q, even if in the end we consider q on the unit circle. In section 3, this applies to the state v_0: <v_0| is then the transpose of |v_0>, with no complex conjugation. We indeed have <v_0|v_0> = 0, but because the entries of |v_0> are complex-valued, this does not imply that |v_0> = 0. No changes were made to the text.

8. 9. We have corrected the two typos.

10. Experience from other lattice models indicates that changes in boundary conditions often correspond to primary fields in CFT. Comparing our lattice result with the correlation functions of primary fields is indeed the first natural thing to do.
But in some cases, such changes in the boundary instead correspond to logarithmic fields or to compositions of primary fields, for instance. To justify the type of correlator that arises from the lattice computation, one must dig deeper into the CFT arguments. This is precisely the goal pursued in Sections 4.1 and 4.2: to give the detailed CFT argument that explains why (3.45) and (3.68) must respectively be compared with (2.4) and (2.5). We have added a sentence at the beginning of Section 4 to make this clearer.

11. 12. 13. These typos have been fixed.

We have also performed these two extra changes: (i) We have added a sentence in Section 4.2 about projective modules, announcing that these will be discussed further in the Conclusion. (ii) We have added two new references in Section 3.1.

### Submission & Refereeing History

Resubmission 1712.08657v2 on 5 March 2018
Submission 1712.08657v1 on 24 January 2018
http://www.nag.com/numeric/cl/nagdoc_cl23/html/E05/e05jbc.html
NAG C Library Manual

# NAG Library Function Document: nag_glopt_bnd_mcs_solve (e05jbc)

Note: this function uses optional arguments to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional arguments, you need only read Sections 1 to 9 of this document. If, however, you wish to reset some or all of the settings, please refer to Section 10 for a detailed description of the algorithm, and to Section 11 for a detailed description of the specification of the optional arguments.

## 1  Purpose

nag_glopt_bnd_mcs_solve (e05jbc) is designed to find the global minimum or maximum of an arbitrary function, subject to simple bound constraints, using a multi-level coordinate search method. Derivatives are not required, but convergence is only guaranteed if the objective function is continuous in a neighbourhood of a global optimum. It is not intended for large problems. The initialization function nag_glopt_bnd_mcs_init (e05jac) must have been called before calling nag_glopt_bnd_mcs_solve (e05jbc).
## 2  Specification

```c
#include <nag.h>
#include <nage05.h>

void nag_glopt_bnd_mcs_solve (Integer n,
    void (*objfun)(Integer n, const double x[], double *f, Integer nstate,
                   Nag_Comm *comm, Integer *inform),
    Nag_BoundType bound, Nag_MCSInitMethod initmethod,
    double bl[], double bu[], Integer sdlist, double list[],
    Integer numpts[], Integer initpt[],
    void (*monit)(Integer n, Integer ncall, const double xbest[],
                  const Integer icount[], Integer ninit, const double list[],
                  const Integer numpts[], const Integer initpt[],
                  Integer nbaskt, const double xbaskt[], const double boxl[],
                  const double boxu[], Integer nstate, Nag_Comm *comm,
                  Integer *inform),
    double x[], double *obj, Nag_E05State *state, Nag_Comm *comm,
    NagError *fail)
```

nag_glopt_bnd_mcs_init (e05jac) must be called before calling nag_glopt_bnd_mcs_solve (e05jbc), or any of the option-setting or option-getting functions. You must not alter the number of non-fixed variables in your problem or the contents of state between calls of the functions.

## 3  Description

nag_glopt_bnd_mcs_solve (e05jbc) is designed to solve modestly sized global optimization problems having simple bound-constraints only; it finds the global optimum of a nonlinear function subject to a set of bound constraints on the variables. Without loss of generality, the problem is assumed to be stated in the following form:
$$\underset{\mathbf{x}\in R^{n}}{\mathrm{minimize}}\ F(\mathbf{x}) \quad \text{subject to} \quad \mathbf{\ell}\le\mathbf{x}\le\mathbf{u} \text{ and } \mathbf{\ell}\le\mathbf{u},$$
where $F(\mathbf{x})$ (the objective function) is a nonlinear scalar function (assumed to be continuous in a neighbourhood of a global minimum), and the bound vectors are elements of ${\bar{R}}^{n}$, where $\bar{R}$ denotes the extended reals $R\cup\{-\infty,\infty\}$. Relational operators between vectors are interpreted elementwise. The optional argument ${\mathbf{Maximize}}$ should be set if you wish to solve maximization, rather than minimization, problems.
If certain bounds are not present, the associated elements of $\mathbf{\ell }$ or $\mathbf{u}$ can be set to special values that will be treated as $-\infty$ or $+\infty$. See the description of the optional argument ${\mathbf{Infinite Bound Size}}$. Phrases in this document containing terms like ‘unbounded values’ should be understood to be taken relative to this optional argument. Fixing variables (that is, setting ${l}_{i}={u}_{i}$ for some $i$) is allowed in nag_glopt_bnd_mcs_solve (e05jbc). A typical excerpt from a function calling nag_glopt_bnd_mcs_solve (e05jbc) is: ```nag_glopt_bnd_mcs_init(n_r, &state, ...); nag_glopt_bnd_mcs_optset_string(optstr, &state, ...); nag_glopt_bnd_mcs_solve(n, objfun, ...); ``` where nag_glopt_bnd_mcs_optset_string (e05jdc) sets the optional argument and value specified in optstr. The initialization function nag_glopt_bnd_mcs_init (e05jac) does not need to be called before each invocation of nag_glopt_bnd_mcs_solve (e05jbc). You should be aware that a call to the initialization function will reset each optional argument to its default value, and, if you are using repeatable randomized initialization lists (see the description of the argument initmethod), the random state stored in state will be destroyed. You must supply a function that evaluates $F\left(\mathbf{x}\right)$; derivatives are not required. The method used by nag_glopt_bnd_mcs_solve (e05jbc) is based on MCS, the Multi-level Coordinate Search method described in Huyer and Neumaier (1999), and the algorithm it uses is described in detail in Section 10. ## 4  References Huyer W and Neumaier A (1999) Global optimization by multi-level coordinate search Journal of Global Optimization 14 331–355 ## 5  Arguments Note: for convenience the subarray notation $a\left(i:j,k:l\right)$, as described in Section 3.2.1.4 in the Essential Introduction, is used. 
Using this notation, the term ‘column index’ refers to the index $j$ in ${\mathbf{LIST}}\left(i,j\right)$, say (see list for the definition of LIST). 1:     nIntegerInput On entry: $n$, the number of variables. Constraint: ${\mathbf{n}}>0$. 2:     objfunfunction, supplied by the userExternal Function objfun must evaluate the objective function $F\left(\mathbf{x}\right)$ for a specified $n$-vector $\mathbf{x}$. The specification of objfun is: void objfun (Integer n, const double x[], double *f, Integer nstate, Nag_Comm *comm, Integer *inform) 1:     nIntegerInput On entry: $n$, the number of variables. 2:     x[n]const doubleInput On entry: $\mathbf{x}$, the vector at which the objective function is to be evaluated. 3:     fdouble *Output On exit: must be set to the value of the objective function at $\mathbf{x}$, unless you have specified termination of the current problem using inform. 4:     nstateIntegerInput On entry: if ${\mathbf{nstate}}=1$ then nag_glopt_bnd_mcs_solve (e05jbc) is calling objfun for the first time. This argument setting allows you to save computation time if certain data must be read or calculated only once. 5:     commNag_Comm * Pointer to structure of type Nag_Comm; the following members are relevant to objfun. userdouble * iuserInteger * pPointer The type Pointer will be void *. Before calling nag_glopt_bnd_mcs_solve (e05jbc) you may allocate memory and initialize these pointers with various quantities for use by objfun when called from nag_glopt_bnd_mcs_solve (e05jbc) (see Section 3.2.1 in the Essential Introduction). 6:     informInteger *Output On exit: must be set to a value describing the action to be taken by the solver on return from objfun. Specifically, if the value is negative the solution of the current problem will terminate immediately; otherwise, computations will continue. 3:     boundNag_BoundTypeInput On entry: indicates whether the facility for dealing with bounds of special forms is to be used. 
bound must be set to one of the following values. ${\mathbf{bound}}=\mathrm{Nag_Bounds}$ You will supply $\mathbf{\ell }$ and $\mathbf{u}$ individually. ${\mathbf{bound}}=\mathrm{Nag_NoBounds}$ There are no bounds on $\mathbf{x}$. ${\mathbf{bound}}=\mathrm{Nag_BoundsZero}$ There are semi-infinite bounds $0\le \mathbf{x}$. ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$ There are constant bounds $\mathbf{\ell }={\ell }_{1}$ and $\mathbf{u}={u}_{1}$. Note that it only makes sense to fix any components of $\mathbf{x}$ when ${\mathbf{bound}}=\mathrm{Nag_Bounds}$. Constraint: ${\mathbf{bound}}=\mathrm{Nag_Bounds}$, $\mathrm{Nag_NoBounds}$, $\mathrm{Nag_BoundsZero}$ or $\mathrm{Nag_BoundsEqual}$. 4:     initmethodNag_MCSInitMethodInput On entry: selects which initialization method to use. ${\mathbf{initmethod}}=\mathrm{Nag_SimpleBdry}$ Simple initialization (boundary and midpoint), with ${\mathbf{numpts}}\left[i-1\right]=3$, ${\mathbf{initpt}}\left[i-1\right]=2$ and ${\mathbf{LIST}}\left(i,j\right)=\left({\mathbf{bl}}\left[i-1\right],\left({\mathbf{bl}}\left[i-1\right]+{\mathbf{bu}}\left[i-1\right]\right)/2,{\mathbf{bu}}\left[i-1\right]\right)$, for $i=1,2,\dots ,{\mathbf{n}}$ and $j=1,2,3$. ${\mathbf{initmethod}}=\mathrm{Nag_SimpleOffBdry}$ Simple initialization (off-boundary and midpoint), with ${\mathbf{numpts}}\left[i-1\right]=3$, ${\mathbf{initpt}}\left[i-1\right]=2$ and ${\mathbf{LIST}}\left(i,j\right)=\phantom{\rule{0ex}{0ex}}\left(\left(5{\mathbf{bl}}\left[i-1\right]+{\mathbf{bu}}\left[i-1\right]\right)/6,\left({\mathbf{bl}}\left[i-1\right]+{\mathbf{bu}}\left[i-1\right]\right)/2,\left({\mathbf{bl}}\left[i-1\right]+5{\mathbf{bu}}\left[i-1\right]\right)/6\right)$, for $i=1,2,\dots ,{\mathbf{n}}$ and $j=1,2,3$. ${\mathbf{initmethod}}=\mathrm{Nag_Linesearch}$ Initialization using linesearches. ${\mathbf{initmethod}}=\mathrm{Nag_UserSet}$ You are providing your own initialization list. ${\mathbf{initmethod}}=\mathrm{Nag_Random}$ Generate a random initialization list. 
See list for the definition of LIST. For more information on methods ${\mathbf{initmethod}}=\mathrm{Nag_Linesearch}$, $\mathrm{Nag_UserSet}$ or $\mathrm{Nag_Random}$ see Section 10.1. If ‘infinite’ values (as determined by the value of the optional argument ${\mathbf{Infinite Bound Size}}$) are detected by nag_glopt_bnd_mcs_solve (e05jbc) when you are using a simple initialization method (${\mathbf{initmethod}}=\mathrm{Nag_SimpleBdry}$ or $\mathrm{Nag_SimpleOffBdry}$), a safeguarded initialization procedure will be attempted, to avoid overflow. Suggested value: ${\mathbf{initmethod}}=\mathrm{Nag_SimpleBdry}$ Constraint: ${\mathbf{initmethod}}=\mathrm{Nag_SimpleBdry}$, $\mathrm{Nag_SimpleOffBdry}$, $\mathrm{Nag_Linesearch}$, $\mathrm{Nag_UserSet}$ or $\mathrm{Nag_Random}$. 5:     bl[n]doubleInput/Output 6:     bu[n]doubleInput/Output On entry: ${\mathbf{bl}}$ is $\mathbf{\ell }$, the array of lower bounds. ${\mathbf{bu}}$ is $\mathbf{u}$, the array of upper bounds. If ${\mathbf{bound}}=\mathrm{Nag_Bounds}$, you must set ${\mathbf{bl}}\left[\mathit{i}-1\right]$ to ${\ell }_{\mathit{i}}$ and ${\mathbf{bu}}\left[\mathit{i}-1\right]$ to ${u}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$. If a particular ${x}_{i}$ is to be unbounded below, the corresponding ${\mathbf{bl}}\left[i-1\right]$ should be set to $-\mathit{infbnd}$, where $\mathit{infbnd}$ is the value of the optional argument ${\mathbf{Infinite Bound Size}}$. Similarly, if a particular ${x}_{i}$ is to be unbounded above, the corresponding ${\mathbf{bu}}\left[i-1\right]$ should be set to $\mathit{infbnd}$. If ${\mathbf{bound}}=\mathrm{Nag_NoBounds}$ or $\mathrm{Nag_BoundsZero}$, arrays bl and bu need not be set on input. If ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$, you must set ${\mathbf{bl}}\left[0\right]$ to ${\ell }_{1}$ and ${\mathbf{bu}}\left[0\right]$ to ${u}_{1}$. The remaining elements of bl and bu will then be populated by these initial values. 
On exit: unless NE_INT, NE_INT_2, NE_NOT_INIT, NE_REAL or NE_REAL_2 on exit, bl and bu are the actual arrays of bounds used by nag_glopt_bnd_mcs_solve (e05jbc). Constraints: • if ${\mathbf{bound}}=\mathrm{Nag_Bounds}$, ${\mathbf{bl}}\left[\mathit{i}-1\right]\le {\mathbf{bu}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$; • if ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$, ${\mathbf{bl}}\left[0\right]<{\mathbf{bu}}\left[0\right]$. 7:     sdlistIntegerInput On entry: must be set to, at least, the maximum over $i$ of the number of points in coordinate $i$ at which to split according to the initialization list list; that is, ${\mathbf{sdlist}}\ge \underset{i}{\mathrm{max}}\phantom{\rule{0.25em}{0ex}}{\mathbf{numpts}}\left[i-1\right]$. Internally, nag_glopt_bnd_mcs_solve (e05jbc) uses list to determine sets of points along each coordinate direction to which it fits quadratic interpolants. Since fitting a quadratic requires at least three distinct points, this puts a lower bound on sdlist. Furthermore, in the case of initialization by linesearches (${\mathbf{initmethod}}=\mathrm{Nag_Linesearch}$) internal storage considerations require that sdlist be at least $192$. Constraints: • if ${\mathbf{initmethod}}\ne \mathrm{Nag_Linesearch}$, ${\mathbf{sdlist}}\ge 3$; • if ${\mathbf{initmethod}}=\mathrm{Nag_Linesearch}$, ${\mathbf{sdlist}}\ge 192$; • if ${\mathbf{initmethod}}=\mathrm{Nag_UserSet}$, ${\mathbf{sdlist}}\ge \underset{i}{\mathrm{max}}\phantom{\rule{0.25em}{0ex}}{\mathbf{numpts}}\left[i-1\right]$. 8:     list[${\mathbf{n}}×{\mathbf{sdlist}}$]doubleInput/Output Note: where ${\mathbf{LIST}}\left(i,j\right)$ appears in this document, it refers to the array element ${\mathbf{list}}\left[\left(i-1\right)×{\mathbf{sdlist}}+j-1\right]$. For convenience the subarray notation ${\mathbf{LIST}}\left(i:j,k:l\right)$, as described in Section 3.2.1.4 in the Essential Introduction, is used. 
Using this notation, the term ‘column index’ refers to the index $j$ in ${\mathbf{LIST}}\left(i,j\right)$, say. On entry: this argument need not be set on entry if you wish to use one of the preset initialization methods (${\mathbf{initmethod}}\ne \mathrm{Nag_UserSet}$). list is the ‘initialization list’: whenever a sub-box in the algorithm is split for the first time (either during the initialization procedure or later), for each non-fixed coordinate $i$ the split is done at the values ${\mathbf{LIST}}\left(i,1:{\mathbf{numpts}}\left[i-1\right]\right)$, as well as at some adaptively chosen intermediate points. The array sections ${\mathbf{LIST}}\left(\mathit{i},1:{\mathbf{numpts}}\left[\mathit{i}-1\right]\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$, must be in ascending order with each entry being distinct. In this context, ‘distinct’ should be taken to mean relative to the safe-range argument (see nag_real_safe_small_number (X02AMC)). On exit: unless NE_ALLOC_FAIL, NE_INT, NE_INT_2, NE_NOT_INIT, NE_REAL or NE_REAL_2 on exit, the actual initialization data used by nag_glopt_bnd_mcs_solve (e05jbc). If you wish to monitor the contents of list you are advised to do so solely through monit, not through the output value here. Constraint: if ${\mathbf{x}}\left[\mathit{i}-1\right]$ is not fixed, ${\mathbf{LIST}}\left(\mathit{i},1:{\mathbf{numpts}}\left[\mathit{i}-1\right]\right)$ is in ascending order with each entry being distinct, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$${\mathbf{bl}}\left[\mathit{i}-1\right]\le {\mathbf{LIST}}\left(\mathit{i},\mathit{j}\right)\le {\mathbf{bu}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{numpts}}\left[\mathit{i}-1\right]$. 9:     numpts[n]IntegerInput/Output On entry: this argument need not be set on entry if you wish to use one of the preset initialization methods (${\mathbf{initmethod}}\ne \mathrm{Nag_UserSet}$). 
numpts encodes the number of splitting points in each non-fixed dimension. On exit: unless NE_ALLOC_FAIL, NE_INT, NE_INT_2, NE_NOT_INIT, NE_REAL or NE_REAL_2 on exit, the actual initialization data used by nag_glopt_bnd_mcs_solve (e05jbc). Constraints: • if ${\mathbf{x}}\left[\mathit{i}-1\right]$ is not fixed, ${\mathbf{numpts}}\left[\mathit{i}-1\right]\le {\mathbf{sdlist}}$; • ${\mathbf{numpts}}\left[\mathit{i}-1\right]\ge 3$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$. 10:   initpt[n]IntegerInput/Output On entry: this argument need not be set on entry if you wish to use one of the preset initialization methods (${\mathbf{initmethod}}\ne \mathrm{Nag_UserSet}$). You must designate a point stored in list that you wish nag_glopt_bnd_mcs_solve (e05jbc) to consider as an ‘initial point’ for the purposes of the splitting procedure. Call this initial point ${\mathbf{x}}^{*}$. The coordinates of ${\mathbf{x}}^{*}$ correspond to a set of indices ${J}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$, such that ${\mathbf{x}}_{\mathit{i}}^{*}$ is stored in ${\mathbf{LIST}}\left(\mathit{i},{J}_{\mathit{i}}\right)$, for $\mathit{i}=1,2,\dots ,n$. You must set ${\mathbf{initpt}}\left[\mathit{i}-1\right]={J}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$. On exit: unless NE_ALLOC_FAIL, NE_INT, NE_INT_2, NE_NOT_INIT, NE_REAL or NE_REAL_2 on exit, the actual initialization data used by nag_glopt_bnd_mcs_solve (e05jbc). Constraint: if ${\mathbf{x}}\left[\mathit{i}-1\right]$ is not fixed, $1\le {\mathbf{initpt}}\left[\mathit{i}-1\right]\le {\mathbf{sdlist}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$. 11:   monitfunction, supplied by the userExternal Function monit may be used to monitor the optimization process. It is invoked upon every successful completion of the procedure in which a sub-box is considered for splitting. It will also be called just before nag_glopt_bnd_mcs_solve (e05jbc) exits if that splitting procedure was not successful. 
If no monitoring is required, monit may be specified as NULL. The specification of monit is: void monit (Integer n, Integer ncall, const double xbest[], const Integer icount[], Integer ninit, const double list[], const Integer numpts[], const Integer initpt[], Integer nbaskt, const double xbaskt[], const double boxl[], const double boxu[], Integer nstate, Nag_Comm *comm, Integer *inform) 1:     nIntegerInput On entry: $n$, the number of variables. 2:     ncallIntegerInput On entry: the cumulative number of calls to objfun. 3:     xbest[n]const doubleInput On entry: the current best point. 4:     icount[$6$]const IntegerInput On entry: an array of counters. ${\mathbf{icount}}\left[0\right]$ $\mathit{nboxes}$, the current number of sub-boxes. ${\mathbf{icount}}\left[1\right]$ $\mathit{ncloc}$, the cumulative number of calls to objfun made in local searches. ${\mathbf{icount}}\left[2\right]$ $\mathit{nloc}$, the cumulative number of points used as start points for local searches. ${\mathbf{icount}}\left[3\right]$ $\mathit{nsweep}$, the cumulative number of sweeps through levels. ${\mathbf{icount}}\left[4\right]$ $\mathit{m}$, the cumulative number of splits by initialization list. ${\mathbf{icount}}\left[5\right]$ $\mathit{s}$, the current lowest level containing non-split boxes. 5:     ninitIntegerInput On entry: the maximum over $i$ of the number of points in coordinate $i$ at which to split according to the initialization list list. See also the description of the argument numpts. 6:     list[${\mathbf{n}}×{\mathbf{ninit}}$]const doubleInput On entry: the initialization list. 7:     numpts[n]const IntegerInput On entry: the number of points in each coordinate at which to split according to the initialization list list. 8:     initpt[n]const IntegerInput On entry: a pointer to the ‘initial point’ in list. Element ${\mathbf{initpt}}\left[i-1\right]$ is the column index in LIST of the $i$th coordinate of the initial point. 
9:     nbasktIntegerInput On entry: the number of candidate minima currently stored in the ‘shopping basket’ xbaskt.
10:   xbaskt[${\mathbf{n}}×{\mathbf{nbaskt}}$]const doubleInput Note: the $j$th candidate minimum has its $i$th coordinate stored in ${\mathbf{xbaskt}}\left[\left(\mathit{j}-1\right)×{\mathbf{nbaskt}}+\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{nbaskt}}$. On entry: the ‘shopping basket’ of candidate minima. 11:   boxl[n]const doubleInput On entry: the array of lower bounds of the current search box. 12:   boxu[n]const doubleInput On entry: the array of upper bounds of the current search box. 13:   nstateIntegerInput On entry: is set by nag_glopt_bnd_mcs_solve (e05jbc) to indicate at what stage of the minimization monit was called. ${\mathbf{nstate}}=1$ This is the first time that monit has been called. ${\mathbf{nstate}}=-1$ This is the last time monit will be called. ${\mathbf{nstate}}=0$ This is the first and last time monit will be called. 14:   commNag_Comm * Pointer to structure of type Nag_Comm; the following members are relevant to monit. userdouble * iuserInteger * pPointer The type Pointer will be void *. Before calling nag_glopt_bnd_mcs_solve (e05jbc) you may allocate memory and initialize these pointers with various quantities for use by monit when called from nag_glopt_bnd_mcs_solve (e05jbc) (see Section 3.2.1 in the Essential Introduction). 15:   informInteger *Output On exit: must be set to a value describing the action to be taken by the solver on return from monit. Specifically, if the value is negative the solution of the current problem will terminate immediately; otherwise, computations will continue. 12:   x[n]doubleOutput On exit: if NE_NOERROR, contains an estimate of the global optimum (see also Section 7). 13:   objdouble *Output On exit: if NE_NOERROR, contains the function value at x. If you request early termination of nag_glopt_bnd_mcs_solve (e05jbc) using inform in objfun or the analogous inform in monit, there is no guarantee that the function value at x equals obj. 
14:   stateNag_E05State *Communication Structure state contains information required by other functions in this suite. You must not modify it directly in any way. 15:   commNag_Comm *Communication Structure The NAG communication argument (see Section 3.2.1.1 in the Essential Introduction). 16:   failNagError *Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction). nag_glopt_bnd_mcs_solve (e05jbc) returns with NE_NOERROR if your termination criterion has been met: either a target value has been found to the required relative error (as determined by the values of the optional arguments ${\mathbf{Target Objective Value}}$, ${\mathbf{Target Objective Error}}$ and ${\mathbf{Target Objective Safeguard}}$), or the best function value was static for the number of sweeps through levels given by the optional argument ${\mathbf{Static Limit}}$. The latter criterion is the default. ## 6  Error Indicators and Warnings NE_ALLOC_FAIL Dynamic memory allocation failed. NE_BAD_PARAM On entry, argument $⟨\mathit{\text{value}}⟩$ had an illegal value. NE_DIV_COMPLETE The division procedure completed but your target value could not be reached. Despite every sub-box being processed ${\mathbf{Splits Limit}}$ times, the target value you provided in ${\mathbf{Target Objective Value}}$ could not be found to the tolerances given in ${\mathbf{Target Objective Error}}$ and ${\mathbf{Target Objective Safeguard}}$. You could try reducing ${\mathbf{Splits Limit}}$ or the objective tolerances. NE_INF_INIT_LIST A finite initialization list could not be computed internally. Consider reformulating the bounds on the problem, try providing your own initialization list, use the randomization option (${\mathbf{initmethod}}=\mathrm{Nag_Random}$) or vary the value of ${\mathbf{Infinite Bound Size}}$. The user-supplied initialization list contained infinite values, as determined by the optional argument ${\mathbf{Infinite Bound Size}}$. NE_INLIST_CLOSE An error occurred during initialization. 
It is likely that points from the initialization list are very close together. Try relaxing the bounds on the variables or use a different initialization method. NE_INT On entry, ${\mathbf{initmethod}}=\mathrm{Nag_Linesearch}$ and ${\mathbf{sdlist}}=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{initmethod}}=\mathrm{Nag_Linesearch}$ then ${\mathbf{sdlist}}\ge 192$. On entry, ${\mathbf{initmethod}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{sdlist}}=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{initmethod}}\ne \mathrm{Nag_Linesearch}$ then ${\mathbf{sdlist}}\ge 3$. On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{n}}>0$. On entry, user-supplied section ${\mathbf{LIST}}\left(i,1:{\mathbf{numpts}}\left[i-1\right]\right)$ contained $\mathit{ndist}$ distinct elements, and $\mathit{ndist}<{\mathbf{numpts}}\left[i-1\right]$: $\mathit{ndist}=⟨\mathit{\text{value}}⟩$, ${\mathbf{numpts}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$. The number of non-fixed variables ${n}_{r}=0$. Constraint: ${n}_{r}>0$. NE_INT_2 A value of ${\mathbf{Splits Limit}}$ ($\mathit{smax}$) smaller than ${n}_{r}+3$ was set: $\mathit{smax}=⟨\mathit{\text{value}}⟩$, ${n}_{r}=⟨\mathit{\text{value}}⟩$. On entry, user-supplied ${\mathbf{initpt}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then ${\mathbf{initpt}}\left[\mathit{i}-1\right]\ge 1$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$. On entry, user-supplied ${\mathbf{initpt}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$ and ${\mathbf{sdlist}}=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then ${\mathbf{initpt}}\left[\mathit{i}-1\right]\le {\mathbf{sdlist}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$. On entry, user-supplied ${\mathbf{numpts}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$. 
Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then ${\mathbf{numpts}}\left[\mathit{i}-1\right]\ge 3$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$. On entry, user-supplied ${\mathbf{numpts}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$ and ${\mathbf{sdlist}}=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then ${\mathbf{numpts}}\left[\mathit{i}-1\right]\le {\mathbf{sdlist}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$. On entry, user-supplied section ${\mathbf{LIST}}\left(i,1:{\mathbf{numpts}}\left[i-1\right]\right)$ was not in ascending order: ${\mathbf{numpts}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$. NE_INTERNAL_ERROR An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. NE_LINESEARCH_ERROR An error occurred during linesearching. It is likely that your objective function is badly scaled: try rescaling it. Also, try relaxing the bounds or use a different initialization method. If the problem persists, please contact NAG quoting error code $⟨\mathit{\text{value}}⟩$. NE_MONIT_TERMIN User-supplied monitoring function requested termination. NE_NOT_INIT Initialization function nag_glopt_bnd_mcs_init (e05jac) has not been called. NE_OBJFUN_TERMIN User-supplied objective function requested termination. NE_REAL On entry, ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$ and ${\mathbf{bl}}\left[0\right]={\mathbf{bu}}\left[0\right]=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$ then ${\mathbf{bl}}\left[0\right]<{\mathbf{bu}}\left[0\right]$. NE_REAL_2 On entry, ${\mathbf{bound}}=\mathrm{Nag_Bounds}$ or $\mathrm{Nag_BoundsEqual}$ and ${\mathbf{bl}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$, ${\mathbf{bu}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$ and $i=⟨\mathit{\text{value}}⟩$. 
Constraint: if ${\mathbf{bound}}=\mathrm{Nag_Bounds}$ then ${\mathbf{bl}}\left[\mathit{i}-1\right]\le {\mathbf{bu}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$; if ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$ then ${\mathbf{bl}}\left[0\right]<{\mathbf{bu}}\left[0\right]$. On entry, user-supplied ${\mathbf{LIST}}\left(i,j\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$, $j=⟨\mathit{\text{value}}⟩$, and ${\mathbf{bl}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then ${\mathbf{LIST}}\left(\mathit{i},\mathit{j}\right)\ge {\mathbf{bl}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{numpts}}\left[\mathit{i}-1\right]$. On entry, user-supplied ${\mathbf{LIST}}\left(i,j\right)=⟨\mathit{\text{value}}⟩$, $i=⟨\mathit{\text{value}}⟩$, $j=⟨\mathit{\text{value}}⟩$, and ${\mathbf{bu}}\left[i-1\right]=⟨\mathit{\text{value}}⟩$. Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then ${\mathbf{LIST}}\left(\mathit{i},\mathit{j}\right)\le {\mathbf{bu}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{numpts}}\left[\mathit{i}-1\right]$. NE_TOO_MANY_FEVALS The function evaluations limit was exceeded. Approximately ${\mathbf{Function Evaluations Limit}}$ function calls have been made without your chosen termination criterion being satisfied. 
## 7  Accuracy

If ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE_NOERROR}}$ on exit, then the vector returned in the array x is an estimate of the solution $\mathbf{x}$ whose function value satisfies your termination criterion: the function value was static for ${\mathbf{Static Limit}}$ sweeps through levels, or
$\left|F\left(\mathbf{x}\right)-\mathit{objval}\right| \le \max\left(\mathit{objerr}\times\left|\mathit{objval}\right|,\mathit{objsfg}\right),$
where $\mathit{objval}$ is the value of the optional argument ${\mathbf{Target Objective Value}}$, $\mathit{objerr}$ is the value of the optional argument ${\mathbf{Target Objective Error}}$, and $\mathit{objsfg}$ is the value of the optional argument ${\mathbf{Target Objective Safeguard}}$.

## 8  Further Comments

For each invocation of nag_glopt_bnd_mcs_solve (e05jbc), local workspace arrays of fixed length are allocated internally. The total size of these arrays amounts to $13{n}_{r}+\mathit{smax}-1$ Integer elements, where $\mathit{smax}$ is the value of the optional argument ${\mathbf{Splits Limit}}$ and ${n}_{r}$ is the number of non-fixed variables, and $\left(2+{n}_{r}\right){\mathbf{sdlist}}+2{\mathbf{n}}+21{n}_{r}+3{n}_{r}^{2}+1$ double elements. In addition, if you are using randomized initialization lists (see the description of the argument initmethod), a further $21$ Integer elements are allocated internally.

In order to keep track of the regions of the search space that have been visited while looking for a global optimum, nag_glopt_bnd_mcs_solve (e05jbc) internally allocates arrays of increasing sizes depending on the difficulty of the problem. Two of the main factors that govern the amount allocated are the number of sub-boxes (call this quantity $\mathit{nboxes}$) and the number of points in the ‘shopping basket’ (the argument nbaskt on entry to monit). Safe, pessimistic upper bounds on these two quantities are so large as to be impractical. In fact, the worst-case number of sub-boxes for even the most simple initialization list (when ${\mathbf{ninit}}=3$ on entry to monit) grows like ${{n}_{r}}^{{n}_{r}}$.
Thus nag_glopt_bnd_mcs_solve (e05jbc) does not attempt to estimate in advance the final values of $\mathit{nboxes}$ or nbaskt for a given problem. There are a total of $5$ Integer arrays and $4+{n}_{r}+{\mathbf{ninit}}$ double arrays whose lengths depend on $\mathit{nboxes}$, and there are a total of $2$ Integer arrays and $3+{\mathbf{n}}+{n}_{r}$ double arrays whose lengths depend on nbaskt. nag_glopt_bnd_mcs_solve (e05jbc) makes a fixed initial guess that the maximum number of sub-boxes required will be $10000$ and that the maximum number of points in the ‘shopping basket’ will be $1000$. If ever a greater amount of sub-boxes or more room in the ‘shopping basket’ is required, nag_glopt_bnd_mcs_solve (e05jbc) performs reallocation, usually doubling the size of the inadequately-sized arrays. Clearly this process requires periods where the original array and its extension exist in memory simultaneously, so that the data within can be copied, which compounds the complexity of nag_glopt_bnd_mcs_solve (e05jbc)'s memory usage. It is possible (although not likely) that if your problem is particularly difficult to solve, or of a large size (hundreds of variables), you may run out of memory. One array that could be dynamically resized by nag_glopt_bnd_mcs_solve (e05jbc) is the ‘shopping basket’ (xbaskt on entry to monit). If the initial attempt to allocate $1000{n}_{r}$ doubles for this array fails, monit will not be called on exit from nag_glopt_bnd_mcs_solve (e05jbc). nag_glopt_bnd_mcs_solve (e05jbc) performs better if your problem is well-scaled. It is worth trying (by guesswork perhaps) to rescale the problem if necessary, as sensible scaling will reduce the difficulty of the optimization problem, so that nag_glopt_bnd_mcs_solve (e05jbc) will take less computer time. 
## 9  Example

This example finds the global minimum of the ‘peaks’ function in two dimensions
$F\left(x,y\right) = 3\left(1-x\right)^2 e^{-x^2-\left(y+1\right)^2} - 10\left(\tfrac{x}{5}-x^3-y^5\right)e^{-x^2-y^2} - \tfrac{1}{3}e^{-\left(x+1\right)^2-y^2}$
on the box $\left[-3,3\right]×\left[-3,3\right]$.

The function $F$ has several local minima and one global minimum in the given box. The global minimum is approximately located at $\left(0.23,-1.63\right)$, where the function value is approximately $-6.55$. We use default values for all the optional arguments, and we instruct nag_glopt_bnd_mcs_solve (e05jbc) to use the simple initialization list corresponding to ${\mathbf{initmethod}}=\mathrm{Nag_SimpleBdry}$. In particular, this will set for us the initial point $\left(0,0\right)$ (see Section 9.3).

### 9.1  Program Text

Program Text (e05jbce.c)

### 9.2  Program Data

Program Data (e05jbce.d)

### 9.3  Program Results

Program Results (e05jbce.r)

Note: the remainder of this document is intended for more advanced users. Section 10 contains a detailed description of the algorithm. This information may be needed in order to understand Section 11, which describes the optional arguments that can be set by calls to nag_glopt_bnd_mcs_optset_file (e05jcc), nag_glopt_bnd_mcs_optset_string (e05jdc), nag_glopt_bnd_mcs_optset_char (e05jec), nag_glopt_bnd_mcs_optset_int (e05jfc) and/or nag_glopt_bnd_mcs_optset_real (e05jgc).

## 10  Algorithmic Details

Here we summarize the main features of the MCS algorithm used in nag_glopt_bnd_mcs_solve (e05jbc), and we introduce some terminology used in the description of the function and its arguments. We assume throughout that we will only do any work in coordinates $i$ in which ${x}_{i}$ is free to vary. The MCS algorithm is fully described in Huyer and Neumaier (1999).

### 10.1  Initialization and Sweeps

Each sub-box is determined by a basepoint $\mathbf{x}$ and an opposite point $\mathbf{y}$. We denote such a sub-box by $B\left[\mathbf{x},\mathbf{y}\right]$.
The basepoint is allowed to belong to more than one sub-box, is usually a boundary point, and is often a vertex.

An initialization procedure produces an initial set of sub-boxes. Whenever a sub-box is split along a coordinate $i$ for the first time (in the initialization procedure or later), the splitting is done at three or more user-defined values ${\left\{{x}_{i}^{j}\right\}}_{j}$ at which the objective function is sampled, and at some adaptively chosen intermediate points. At least four children are generated. More precisely, we assume that we are given
$\ell_i \le x_i^1 < x_i^2 < \cdots < x_i^{L_i} \le u_i, \quad L_i \ge 3, \quad \text{for } i=1,2,\dots,n,$
and a vector $\mathbf{p}$ that, for each $i$, locates within ${\left\{{x}_{i}^{j}\right\}}_{j}$ the $i$th coordinate of an initial point ${\mathbf{x}}^{0}$; that is, if ${x}_{i}^{0}={x}_{i}^{j}$ for some $j=1,2,\dots,{L}_{i}$, then ${p}_{i}=j$. A good guess for the global optimum can be used as ${\mathbf{x}}^{0}$. The initialization points and the vectors $\mathbf{\ell }$ and $\mathbf{p}$ are collectively called the initialization list (and sometimes we will refer to just the initialization points as ‘the initialization list’, whenever this causes no confusion).

The initialization data may be input by you, or they can be set to sensible default values by nag_glopt_bnd_mcs_solve (e05jbc): if you provide them yourself, ${\mathbf{LIST}}\left(i,j\right)$ should contain ${x}_{i}^{j}$, ${\mathbf{numpts}}\left[i-1\right]$ should contain ${L}_{i}$, and ${\mathbf{initpt}}\left[i-1\right]$ should contain ${p}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$ and $\mathit{j}=1,2,\dots ,{L}_{\mathit{i}}$; if you wish nag_glopt_bnd_mcs_solve (e05jbc) to use one of its preset initialization methods, you could choose one of two simple, three-point methods (see Figure 1).
If the list generated by one of these methods contains infinite values, attempts are made to generate a safeguarded list using the function $\mathrm{subint}\left(x,y\right)$ (which is also used during the splitting procedure, and is described in Section 10.2). If infinite values persist, nag_glopt_bnd_mcs_solve (e05jbc) exits with NE_INF_INIT_LIST. There is also the option to generate an initialization list with the aid of linesearches (by setting ${\mathbf{initmethod}}=\mathrm{Nag_Linesearch}$). Starting with the absolutely smallest point in the root box, linesearches are made along each coordinate. For each coordinate, the local minimizers found by the linesearches are put into the initialization list. If there were fewer than three minimizers, they are augmented by close-by values. The final preset initialization option (${\mathbf{initmethod}}=\mathrm{Nag_Random}$) generates a randomized list, so that independent multiple runs may be made if you suspect a global optimum has not been found. Each call to the initialization function nag_glopt_bnd_mcs_init (e05jac) resets the initial-state vector for the Wichmann–Hill base-generator that is used. Depending on whether you set the optional argument ${\mathbf{Repeatability}}$ to ‘ON’ or ‘OFF’, the random state is initialized to give a repeatable or non-repeatable sequence. Then, a random integer between $3$ and sdlist is selected, which is then used to determine the number of points to be generated in each coordinate; that is, numpts becomes a constant vector, set to this value. The components of list are then generated, from a uniform distribution on the root box if the box is finite, or else in a safeguarded fashion if any bound is infinite. The array ${\mathbf{initpt}}$ is set to point to the best point in list. 
Given an initialization list (preset or otherwise), nag_glopt_bnd_mcs_solve (e05jbc) evaluates $F$ at ${\mathbf{x}}^{0}$, and sets the initial estimate of the global minimum, ${\mathbf{x}}^{*}$, to ${\mathbf{x}}^{0}$. Then, for $i=1,2,\dots ,n$, the objective function $F$ is evaluated at ${L}_{i}-1$ points that agree with ${\mathbf{x}}^{*}$ in all but the $i$th coordinate. We obtain pairs $\left({\hat{\mathbf{x}}}^{\mathit{j}},{f}_{i}^{\mathit{j}}\right)$, for $\mathit{j}=1,2,\dots ,{L}_{i}$, with ${\mathbf{x}}^{*}={\hat{\mathbf{x}}}^{{j}_{1}}$, say; with, for $j\ne {j}_{1}$,
$\hat{x}_k^j = \begin{cases} x_k^{*} & \text{if } k \ne i, \\ x_i^j & \text{if } k = i, \end{cases}$
and with
$f_i^j = F\left(\hat{\mathbf{x}}^j\right).$
The point having the smallest function value is renamed ${\mathbf{x}}^{*}$ and the procedure is repeated with the next coordinate.

Once nag_glopt_bnd_mcs_solve (e05jbc) has a full set of initialization points and function values, it can generate an initial set of sub-boxes. Recall that the root box is $B\left[\mathbf{x},\mathbf{y}\right]=\left[\mathbf{\ell },\mathbf{u}\right]$, having basepoint $\mathbf{x}={\mathbf{x}}^{0}$. The opposite point $\mathbf{y}$ is a corner of $\left[\mathbf{\ell },\mathbf{u}\right]$ farthest away from $\mathbf{x}$, in some sense. The point $\mathbf{x}$ need not be a vertex of $\left[\mathbf{\ell },\mathbf{u}\right]$, and $\mathbf{y}$ is entitled to have infinite coordinates. We loop over each coordinate $i$, splitting the current box along coordinate $i$ into $2{L}_{i}-2$, $2{L}_{i}-1$ or $2{L}_{i}$ subintervals with exactly one of the ${\hat{x}}_{i}^{j}$ as endpoints, depending on whether two, one or none of the ${\hat{x}}_{i}^{j}$ are on the boundary. Thus, as well as splitting at ${\hat{x}}_{i}^{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,{L}_{i}$, we split at additional points ${z}_{i}^{\mathit{j}}$, for $\mathit{j}=2,3,\dots ,{L}_{i}$.
These additional ${z}_{i}^{j}$ are such that
$z_i^j = \hat{x}_i^{j-1} + q^m\left(\hat{x}_i^j - \hat{x}_i^{j-1}\right), \quad j=2,\dots,L_i,$
where $q$ is the golden-section ratio $\left(\sqrt{5}-1\right)/2$, and the exponent $m$ takes the value $1$ or $2$, chosen so that the sub-box with the smaller function value gets the larger fraction of the interval. Each child sub-box gets as basepoint the point obtained from ${\mathbf{x}}^{*}$ by changing ${x}_{i}^{*}$ to the ${x}_{i}^{j}$ that is a boundary point of the corresponding $i$th coordinate interval; this new basepoint therefore has function value ${f}_{i}^{j}$. The opposite point is derived from $\mathbf{y}$ by changing ${y}_{i}$ to the other end of that interval.

nag_glopt_bnd_mcs_solve (e05jbc) can now rank the coordinates based on an estimated variability of $F$. For each $i$ we compute the union of the ranges of the quadratic interpolant through any three consecutive ${\hat{x}}_{i}^{j}$, taking the difference between the upper and lower bounds obtained as a measure of the variability of $F$ in coordinate $i$. A vector $\mathbf{\pi }$ is populated in such a way that coordinate $i$ has the ${\pi }_{i}$th highest estimated variability. For tiebreaks, when the ${\mathbf{x}}^{*}$ obtained after splitting coordinate $i$ belongs to two sub-boxes, the one that contains the minimizer of the quadratic models is designated the current sub-box for coordinate $i+1$.

Boxes are assigned levels in the following manner. The root box is given level $1$. When a sub-box of level $s$ is split, the child with the smaller fraction of the golden-section split receives level $s+2$; all other children receive level $s+1$. The box with the better function value is given the larger fraction of the splitting interval and the smaller level because then it is more likely to be split again more quickly.
We see that after the initialization procedure the first level is empty and the non-split boxes have levels $2,\dots ,{n}_{r}+2$, so it is meaningful to choose ${s}_{\mathrm{max}}$ much larger than ${n}_{r}$. Note that the internal structure of nag_glopt_bnd_mcs_solve (e05jbc) demands that ${s}_{\mathrm{max}}$ be at least ${n}_{r}+3$.

Examples of initializations in two dimensions are given in Figure 1. In both cases the initial point is ${\mathbf{x}}^{0}=\left(\mathbf{\ell }+\mathbf{u}\right)/2$; on the left the initialization points are
$\mathbf{x}^1 = \mathbf{\ell}, \quad \mathbf{x}^2 = \left(\mathbf{\ell}+\mathbf{u}\right)/2, \quad \mathbf{x}^3 = \mathbf{u},$
while on the right the points are
$\mathbf{x}^1 = \left(5\mathbf{\ell}+\mathbf{u}\right)/6, \quad \mathbf{x}^2 = \left(\mathbf{\ell}+\mathbf{u}\right)/2, \quad \mathbf{x}^3 = \left(\mathbf{\ell}+5\mathbf{u}\right)/6.$
In Figure 1, basepoints and levels after initialization are displayed. Note that these initialization lists correspond to ${\mathbf{initmethod}}=\mathrm{Nag_SimpleBdry}$ and ${\mathbf{initmethod}}=\mathrm{Nag_SimpleOffBdry}$, respectively.

Figure 1: Examples of the initialization procedure

After initialization, a series of sweeps through levels is begun. A sweep is defined by three steps:

(i) scan the list of non-split sub-boxes. Fill a record list $\mathbf{b}$ according to ${b}_{s}=0$ if there is no box at level $s$, and with ${b}_{s}$ pointing to a sub-box with the lowest function value among all sub-boxes with level $s$ otherwise, for $0<s<{s}_{\mathrm{max}}$;

(ii) the sub-box with label ${b}_{s}$ is a candidate for splitting. If the sub-box is not to be split, according to the rules described in Section 10.2, increase its level by $1$ and update ${b}_{s+1}$ if necessary. If the sub-box is split, mark it so, insert its children into the list of sub-boxes, and update $\mathbf{b}$ if any child with level ${s}^{\prime }$ yields a strict improvement of $F$ over those sub-boxes at level ${s}^{\prime }$;

(iii) increment $s$ by $1$. If $s={s}_{\mathrm{max}}$ then display monitoring information and start a new sweep; else if ${b}_{s}=0$ then repeat this step; else display monitoring information and go to the previous step.
Clearly, each sweep ends after at most ${s}_{\mathrm{max}}-1$ visits of the third step. ### 10.2  Splitting Each sub-box is stored by nag_glopt_bnd_mcs_solve (e05jbc) as a set of information about the history of the sub-box: the label of its parent, a label identifying which child of the parent it is, etc. Whenever a sub-box $B\left[\mathbf{x},\mathbf{y}\right]$ of level $s<{s}_{\mathrm{max}}$ is a candidate for splitting, as described in Section 10.1, we recover $\mathbf{x}$, $\mathbf{y}$, and the number, ${n}_{j}$, of times coordinate $j$ has been split in the history of $B$. Sub-box $B$ could be split in one of two ways. (i) Splitting by rank If $s>2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)$, the box is always split. The splitting index is set to a coordinate $i$ such that ${n}_{i}=\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}$. (ii) Splitting by expected gain If $s\le 2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)$, the sub-box could be split along a coordinate where a maximal gain in function value is expected. This gain is estimated according to a local separable quadratic model obtained by fitting to $2{n}_{r}+1$ function values. If the expected gain is too small the sub-box is not split at all, and its level is increased by $1$. Eventually, a sub-box that is not eligible for splitting by expected gain will reach level $2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)+1$ and then be split by rank, as long as ${s}_{\mathrm{max}}$ is large enough. As ${s}_{\mathrm{max}}\to \infty$, the rule for splitting by rank ensures that each coordinate is split arbitrarily often. Before describing the details of each splitting method, we introduce the procedure for correctly handling splitting at adaptive points and for dealing with unbounded intervals. 
Suppose we want to split the $i$th coordinate interval $▯\left\{{x}_{i},{y}_{i}\right\}$, where we define $▯\left\{{x}_{i},{y}_{i}\right\}=\left[\mathrm{min}\left({x}_{i},{y}_{i}\right),\mathrm{max}\left({x}_{i},{y}_{i}\right)\right]$, for ${x}_{i}\in \mathbb{R}$ and ${y}_{i}\in \overline{\mathbb{R}}$, and where $\mathbf{x}$ is the basepoint of the sub-box being considered. The descendants of the sub-box should shrink sufficiently fast, so we should not split too close to ${x}_{i}$. Moreover, if ${y}_{i}$ is large we want the new splitting value to not be too large, so we force it to belong to some smaller interval $▯\left\{{\xi }^{\prime },{\xi }^{\prime \prime }\right\}$, determined by
$\xi'' = \mathrm{subint}\left(x_i,y_i\right), \quad \xi' = x_i + \left(\xi''-x_i\right)/10,$
where the function $\mathrm{subint}$ is defined by
$\mathrm{subint}\left(x,y\right) = \begin{cases} \operatorname{sign}\left(y\right) & \text{if } 1000\left|x\right|<1 \text{ and } \left|y\right|>1000; \\ 10\operatorname{sign}\left(y\right)\left|x\right| & \text{if } 1000\left|x\right|\ge 1 \text{ and } \left|y\right|>1000\left|x\right|; \\ y & \text{otherwise.} \end{cases}$

#### 10.2.1  Splitting by rank

Consider a sub-box $B$ with level $s>2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)$. Although the sub-box has reached a high level, there is at least one coordinate along which it has not been split very often. Among the $i$ such that ${n}_{i}=\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}$ for $B$, select the splitting index to be the coordinate with the lowest ${\pi }_{i}$ (and hence highest variability rank). ‘Splitting by rank’ refers to the ranking of the coordinates by ${n}_{i}$ and ${\pi }_{i}$. If ${n}_{i}=0$, so that $B$ has never been split along coordinate $i$, the splitting is done according to the initialization list and the adaptively chosen golden-section split points, as described in Section 10.1. Also as covered there, new basepoints and opposite points are generated.
The children having the smaller fraction of the golden-section split (that is, those with larger function values) are given level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left\{s+2,{s}_{\mathrm{max}}\right\}$. All other children are given level $s+1$. Otherwise, $B$ ranges between ${x}_{i}$ and ${y}_{i}$ in the $i$th coordinate direction. The splitting value is selected to be ${z}_{i}={x}_{i}+2\left(\mathrm{subint}\left({x}_{i},{y}_{i}\right)-{x}_{i}\right)/3$; we are not attempting to split based on a large reduction in function value, merely in order to reduce the size of a large interval, so ${z}_{i}$ may not be optimal. Sub-box $B$ is split at ${z}_{i}$ and the golden-section split point, producing three parts and requiring only one additional function evaluation, at the point ${\mathbf{x}}^{\prime }$ obtained from $\mathbf{x}$ by changing the $i$th coordinate to ${z}_{i}$. The child with the smaller fraction of the golden-section split is given level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left\{s+2,{s}_{\mathrm{max}}\right\}$, while the other two parts are given level $s+1$. Basepoints are assigned as follows: the basepoint of the first child is taken to be $\mathbf{x}$, and the basepoint of the second and third children is the point ${\mathbf{x}}^{\prime }$. Opposite points are obtained by changing ${y}_{i}$ to the other end of the $i$th coordinate-interval of the corresponding child. #### 10.2.2  Splitting by expected gain When a sub-box $B$ has level $s\le 2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.25em}{0ex}}{n}_{j}+1\right)$, we compute the optimal splitting index and splitting value from a local separable quadratic used as a simple local approximation of the objective function. To fit this curve, for each coordinate we need two additional points and their function values. 
Such data may be recoverable from the history of $B$: whenever the $i$th coordinate was split in the history of $B$, we obtained values that can be used for the current quadratic interpolation in coordinate $i$. We loop over $i$; for each coordinate we pursue the history of $B$ back to the root box, and we take the first two points and function values we find, since these are expected to be closest to the current basepoint $\mathbf{x}$. If the current coordinate has not yet been split we use the initialization list. Then we generate a local separable model $e\left(\mathbf{\xi }\right)$ for $F\left(\mathbf{\xi }\right)$ by interpolation at $\mathbf{x}$ and the $2{n}_{r}$ additional points just collected:
$e\left(\mathbf{\xi}\right) = F\left(\mathbf{x}\right) + \sum_{i=1}^{n} e_i\left(\xi_i\right).$

We define the expected gain ${\hat{e}}_{i}$ in function value when we evaluate at a new point obtained by changing coordinate $i$ in the basepoint, for each $i$, based on two cases:

(i) ${n}_{i}=0$. We compute the expected gain as
$\hat{e}_i = \min_{1\le j\le L_i} f_i^j - f_i^{p_i}.$
Again, we split according to the initialization list, with the new basepoints and opposite points being as before.

(ii) ${n}_{i}>0$. Now, the $i$th component of our sub-box ranges from ${x}_{i}$ to ${y}_{i}$. Using the quadratic partial correction function
$e_i\left(\xi_i\right) = \alpha_i\left(\xi_i - x_i\right) + \beta_i\left(\xi_i - x_i\right)^2$
we can approximate the maximal gain expected when changing ${x}_{i}$ only. We will choose the splitting value from $▯\left\{{\xi }^{\prime },{\xi }^{\prime \prime }\right\}$. We compute
$\hat{e}_i = \min_{\xi_i \in ▯\left\{\xi',\xi''\right\}} e_i\left(\xi_i\right)$
and call ${z}_{i}$ the minimizer in $▯\left\{{\xi }^{\prime },{\xi }^{\prime \prime }\right\}$.
If the expected best function value ${f}_{\mathrm{exp}}$ satisfies
$f_{\mathrm{exp}} = F\left(\mathbf{x}\right) + \min_{1\le i\le n} \hat{e}_i < f_{\mathrm{best}},$  (1)
where ${f}_{\mathrm{best}}$ is the current best function value (including those function values obtained by local optimization), we expect the sub-box to contain a better point and so we split it, using as splitting index the component with minimal ${\hat{e}}_{i}$. Equation (1) prevents wasting function calls by avoiding splitting sub-boxes whose basepoints have bad function values. These sub-boxes will eventually be split by rank anyway.

We now have a splitting index and a splitting value ${z}_{i}$. The sub-box is split at ${z}_{i}$ as long as ${z}_{i}\ne {y}_{i}$, and at the golden-section split point; two or three children are produced. The larger fraction of the golden-section split receives level $s+1$, while the smaller fraction receives level $\mathrm{min}\left\{s+2,{s}_{\mathrm{max}}\right\}$. If it is the case that ${z}_{i}\ne {y}_{i}$ and the third child is larger than the smaller of the two children from the golden-section split, the third child receives level $s+1$. Otherwise it is given the level $\mathrm{min}\left\{s+2,{s}_{\mathrm{max}}\right\}$. The basepoint of the first child is set to $\mathbf{x}$, and the basepoint of the second (and third if it exists) is obtained by changing the $i$th coordinate of $\mathbf{x}$ to ${z}_{i}$. The opposite points are again derived by changing ${y}_{i}$ to the other end of the $i$th coordinate interval of $B$.

If equation (1) does not hold, we expect no improvement. We do not split, and we increase the level of $B$ by $1$.

### 10.3  Local Search

The local optimization algorithm used by nag_glopt_bnd_mcs_solve (e05jbc) uses linesearches along directions that are determined by minimizing quadratic models, all subject to bound constraints. Triples of vectors are computed using coordinate searches based on linesearches.
These triples are used in triple search procedures to build local quadratic models for $F$. A trust-region-type approach to minimize these models is then carried out, and more information about the coordinate search and the triple search can be found in Huyer and Neumaier (1999).

The local search starts by looking for better points without being too local, by making a triple search using points found by a coordinate search. This yields a new point and function value, an approximation of the gradient of the objective, and an approximation of the Hessian of the objective. Then the quadratic model for $F$ is minimized over a small box, with the solution to that minimization problem then being used as a linesearch direction to minimize the objective. A measure $r$ is computed to quantify the predictive quality of the quadratic model.

The third stage is the checking of termination criteria. The local search will stop if more than $\mathit{loclim}$ visits to this part of the local search have occurred, where $\mathit{loclim}$ is the value of the optional argument ${\mathbf{Local Searches Limit}}$. If that is not the case, it will stop if the limit on function calls has been exceeded (see the description of the optional argument ${\mathbf{Function Evaluations Limit}}$). The final criterion checks if no improvement can be made to the function value, or whether the approximated gradient $\mathbf{g}$ is small, in the sense that
$\mathbf{g}^{\mathrm{T}} \max\left(\left|\mathbf{x}\right|,\left|\mathbf{x}_{\mathrm{old}}\right|\right) < \mathit{loctol}\left(f_0 - f\right).$
The vector ${\mathbf{x}}_{\mathrm{old}}$ is the best point at the start of the current loop in this iterative local-search procedure, the constant $\mathit{loctol}$ is the value of the optional argument ${\mathbf{Local Searches Tolerance}}$, $f$ is the objective value at $\mathbf{x}$, and ${f}_{0}$ is the smallest function value found by the initialization procedure.
Next, nag_glopt_bnd_mcs_solve (e05jbc) attempts to move away from the boundary, if any components of the current point lie there, using linesearches along the offending coordinates. Local searches are terminated if no improvement could be made. The fifth stage carries out another triple search, but this time it does not use points from a coordinate search, rather points lying within the trust-region box are taken. The final stage modifies the trust-region box to be bigger or smaller, depending on the quality of the quadratic model, minimizes the new quadratic model on that box, and does a linesearch in the direction of the minimizer. The value of $r$ is updated using the new data, and then we go back to the third stage (checking of termination criteria). The Hessians of the quadratic models generated by the local search may not be positive definite, so nag_glopt_bnd_mcs_solve (e05jbc) uses the general nonlinear optimizer nag_opt_sparse_nlp_solve (e04vhc) to minimize the models. ## 11  Optional Arguments Several optional arguments in nag_glopt_bnd_mcs_solve (e05jbc) define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of nag_glopt_bnd_mcs_solve (e05jbc) these optional arguments have associated default values that are appropriate for most problems. Therefore, you need only specify those optional arguments whose values are to be different from their default values. The remainder of this section can be skipped if you wish to use the default values for all optional arguments. The following is a list of the optional arguments available and a full description of each optional argument is provided in Section 11.1. 
Optional arguments may be specified by calling one, or more, of the functions nag_glopt_bnd_mcs_optset_file (e05jcc), nag_glopt_bnd_mcs_optset_string (e05jdc), nag_glopt_bnd_mcs_optset_char (e05jec), nag_glopt_bnd_mcs_optset_int (e05jfc) and nag_glopt_bnd_mcs_optset_real (e05jgc) before a call to nag_glopt_bnd_mcs_solve (e05jbc). nag_glopt_bnd_mcs_optset_file (e05jcc) reads options from an external options file, with Begin and End as the first and last lines respectively, and with each intermediate line defining a single optional argument. For example, ``` Begin Static Limit = 50 End ``` The call ``` e05jcc (fileid, &state, &fail); ``` can then be used to read the file on the descriptor fileid as returned by a call of nag_open_file (x04acc). The value NE_NOERROR is returned on successful exit. nag_glopt_bnd_mcs_optset_file (e05jcc) should be consulted for a full description of this method of supplying optional arguments. nag_glopt_bnd_mcs_optset_string (e05jdc), nag_glopt_bnd_mcs_optset_char (e05jec), nag_glopt_bnd_mcs_optset_int (e05jfc) or nag_glopt_bnd_mcs_optset_real (e05jgc) can be called to supply options directly, one call being necessary for each optional argument. nag_glopt_bnd_mcs_optset_string (e05jdc), nag_glopt_bnd_mcs_optset_char (e05jec), nag_glopt_bnd_mcs_optset_int (e05jfc) or nag_glopt_bnd_mcs_optset_real (e05jgc) should be consulted for a full description of this method of supplying optional arguments. All optional arguments not specified by you are set to their default values. Valid values of optional arguments specified by you are unaltered by nag_glopt_bnd_mcs_solve (e05jbc) and so remain in effect for subsequent calls to nag_glopt_bnd_mcs_solve (e05jbc), unless you explicitly change them. ### 11.1  Description of the Optional Arguments For each option, we give a summary line, a description of the optional argument and details of constraints. 
The summary line contains:

• a parameter value, where the letters $i$ and $r$ denote options that take integer and real values respectively, and the letter $a$ denotes an option that takes a character value such as ‘ON’ or ‘OFF’;

• the default value, where the symbol $\epsilon$ is a generic notation for machine precision (see nag_machine_precision (X02AJC)), the symbol ${r}_{\mathrm{max}}$ stands for the largest positive model number (see nag_real_largest_number (X02ALC)), ${n}_{r}$ represents the number of non-fixed variables, and the symbol $d$ stands for the maximum number of decimal digits that can be represented (see nag_decimal_digits (X02BEC)).

Option names are case-insensitive and must be provided in full; abbreviations are not recognized.

Defaults

This special keyword is used to reset all optional arguments to their default values, and any random state stored in state will be destroyed. Any option value given with this keyword will be ignored. This optional argument cannot be queried or got.

Function Evaluations Limit $i$    Default $\text{}=100{n}_{r}^{2}$

This puts an approximate limit on the number of function calls allowed. The total number of calls made is checked at the top of an internal iteration loop, so it is possible that a few calls more than $\mathit{nf}$ may be made. Constraint: $\mathit{nf}>0$.

Infinite Bound Size $r$    Default $\text{}={r}_{\mathrm{max}}^{\frac{1}{4}}$

This defines the ‘infinite’ bound $\mathit{infbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{infbnd}$ will be regarded as $\infty$ (and similarly any lower bound less than or equal to $-\mathit{infbnd}$ will be regarded as $-\infty$). Constraint: ${r}_{\mathrm{max}}^{\frac{1}{4}}\le \mathit{infbnd}\le {r}_{\mathrm{max}}^{\frac{1}{2}}$.
Local Searches $a$    Default $\text{}=\text{'ON'}$

If you want to try to accelerate convergence of nag_glopt_bnd_mcs_solve (e05jbc) by starting local searches from candidate minima, you will require $\mathit{lcsrch}$ to be ‘ON’. Constraint: $\mathit{lcsrch}=\text{'ON'}\text{​ or ​}\text{'OFF'}$.

Local Searches Limit $i$    Default $\text{}=50$

This defines the maximum number of iterations to be used in the trust-region loop of the local-search procedure. Constraint: $\mathit{loclim}>0$.

Local Searches Tolerance $r$    Default $\text{}=2\epsilon$

The value of $\mathit{loctol}$ is the multiplier used during local searches as a stopping criterion for when the approximated gradient is small, in the sense described in Section 10.3. Constraint: $\mathit{loctol}\ge 2\epsilon$.

Minimize    Default
Maximize

These keywords specify the required direction of optimization. Any option value given with these keywords will be ignored.

Nolist    Default
List

These options control the echoing of each optional argument specification as it is supplied. ${\mathbf{List}}$ turns printing on, ${\mathbf{Nolist}}$ turns printing off. The output is sent to stdout. Any option value given with these keywords will be ignored. This optional argument cannot be queried or got.

Repeatability $a$    Default $\text{}=\text{'OFF'}$

For use with random initialization lists (${\mathbf{initmethod}}=\mathrm{Nag_Random}$). When set to ‘ON’, an internally-initialized random state is stored in state for use in subsequent calls to nag_glopt_bnd_mcs_solve (e05jbc). Constraint: $\mathit{repeat}=\text{'ON'}\text{​ or ​}\text{'OFF'}$.

Splits Limit $i$    Default $\text{}=⌊d\left({n}_{r}+2\right)/3⌋$

Along with the initialization list list, this defines a limit on the number of times the root box will be split along any single coordinate direction. If ${\mathbf{Local Searches}}$ is ‘OFF’ you may find the default value to be too small. Constraint: $\mathit{smax}>{n}_{r}+2$.
Static Limit $i$    Default $\text{}=3{n}_{r}$

As the default termination criterion, computation stops when the best function value is static for $\mathit{stclim}$ sweeps through levels. This argument is ignored if you have specified a target value to reach in ${\mathbf{Target Objective Value}}$. Constraint: $\mathit{stclim}>0$.

Target Objective Error $r$    Default $\text{}={\epsilon }^{\frac{1}{4}}$

If you have given a target objective value to reach in $\mathit{objval}$ (the value of the optional argument ${\mathbf{Target Objective Value}}$), $\mathit{objerr}$ sets your desired relative error (from above if ${\mathbf{Minimize}}$ is set, from below if ${\mathbf{Maximize}}$ is set) between obj and $\mathit{objval}$, as described in Section 7. See also the description of the optional argument ${\mathbf{Target Objective Safeguard}}$. Constraint: $\mathit{objerr}\ge 2\epsilon$.

Target Objective Safeguard $r$    Default $\text{}={\epsilon }^{\frac{1}{2}}$

If you have given a target objective value to reach in $\mathit{objval}$ (the value of the optional argument ${\mathbf{Target Objective Value}}$), $\mathit{objsfg}$ sets your desired safeguarded termination tolerance, for when $\mathit{objval}$ is close to zero. Constraint: $\mathit{objsfg}\ge 2\epsilon$.

Target Objective Value $r$

This argument may be set if you wish nag_glopt_bnd_mcs_solve (e05jbc) to use a specific value as the target function value to reach during the optimization. Setting $\mathit{objval}$ overrides the default termination criterion determined by the optional argument ${\mathbf{Static Limit}}$.
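The two target-objective tolerances combine into a stopping test whose exact form is given in Section 7 (not reproduced here). Purely as an illustration of the relative-error-with-safeguard pattern, and not the library's actual criterion, a minimization check might look like:

```python
def reached_target(obj, objval, objerr, objsfg):
    """Illustrative relative-error check with an absolute safeguard for
    targets near zero.  NOT the exact e05jbc criterion (that is defined
    in Section 7 of the routine document).  Minimization is assumed, so
    obj approaches objval from above."""
    return obj - objval <= max(objerr * abs(objval), objsfg)
```

The safeguard term is what keeps the test meaningful when the target value is zero, where a purely relative tolerance would never be satisfied.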
https://math.stackexchange.com/tags/riemann-surfaces/hot
# Tag Info

The answer to 1 and 3 is yes. Furthermore the formula that lies behind this answer shows that there is no "hole in the middle of it" and that the answer to 2 is no. All these answers are obtained using the inversion formula $$f(z) = \begin{cases} \frac{1}{z} & \text{if } z \notin \{0,\infty\} \\ \infty & \text{if } z=0 \\ 0 & \text{if } z=\infty \end{cases}$$ ...

Unfortunately you can't just use $D/\Gamma$ because then the boundary $S^1$ will be very badly behaved (e.g., nonexistence of nontangential limits by choosing the appropriate point on each fundamental $4g$-gon). The usual introductory way is to just use Hilbert space theory with a minimal amount of Sobolev spaces thrown in. $(I+\Delta)^{-1}\colon L^2(M)\to H^2(M)$ ...

What you want is the reason for naming a theorem this way. So the answer has to be some kind of guess and by analogy. It is possible in algebraic topology to find two non-homeomorphic topological spaces having the same fundamental groups (or homology groups). That is, within a "collection of topological spaces" with the same fundamental groups one can find two ...

They are sometimes called cone points, because they look like taking a piece of paper with a corner of angle $2\pi/m_i$ and folding it over to identify opposite sides of the vertex. This produces something which looks like a cone at the singular point. They are also sometimes called pillowcase points as they look like the corners of a pillow. So, for ...
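The extended inversion in the first answer above can be modelled directly with Python complex arithmetic, using a sentinel value for the point at infinity. This is my own illustrative sketch, not code from the answer:

```python
INF = complex("inf")  # stand-in for the point at infinity on the Riemann sphere

def invert(z):
    """f(z) = 1/z extended to the Riemann sphere: 0 and infinity swap,
    and every other point maps to its reciprocal."""
    if z == 0:
        return INF
    if z == INF:
        return 0
    return 1 / z
```

Applying invert twice returns (up to rounding) to the starting point, which is one way to see that the map is a bijection of the sphere with no "hole in the middle".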
https://en.wikibooks.org/wiki/Electronics/Inductors/Special_Cases
# Electronics/Inductors/Special Cases

This section lists formulas for inductances in specific situations. Beware that some of the equations are in Imperial units. The permeability of free space, $\mu_0$, is constant and is defined to be exactly equal to $4\pi\times 10^{-7}\,\mathrm{H\,m^{-1}}$.

## 1. Basic inductance formula for a cylindrical coil

$L=\frac{\mu_0\mu_rN^2A}{l}$

• L = inductance / H
• μr = relative permeability of core material
• N = number of turns
• A = area of cross-section of the coil / m²
• l = length of coil / m

## 2. The self-inductance of a straight, round wire in free space

$L_{self} = \frac{\mu_0 b}{2 \pi} \left[ \ln \left(\frac{b}{a}+\sqrt{1+\frac{b^{2}}{a^{2}}}\right) -\sqrt{1+\frac{a^{2}}{b^{2}}}+\frac{a}{b}+\frac{\mu_{r}}{4} \right]$

• Lself = self-inductance / H
• b = wire length / m
• a = wire radius / m
• $\mu_r$ = relative permeability of wire

If you make the assumption that b >> a and that the wire is nonmagnetic ($\mu_r=1$), then this equation can be approximated to

$L_{self} = \frac{\mu_0 b}{2 \pi} \left[ \ln \left( \frac{2b}{a} \right) - \frac{3}{4} \right]$ (for low frequencies)

$L_{self} = \frac{\mu_0 b}{2 \pi} \left[ \ln \left( \frac{2b}{a} \right) - 1 \right]$ (for high frequencies, due to the skin effect)

• L = inductance / H
• b = wire length / m
• a = wire radius / m

The inductance of a straight wire is usually so small that it is neglected in most practical problems. If the problem deals with very high frequencies (f > 20 GHz), the calculation may become necessary. For the rest of this book, we will assume that this self-inductance is negligible.

## 3. Inductance of a short air-core cylindrical coil in terms of geometric parameters

$L=\frac{r^2N^2}{9r+10l}$

• L = inductance in μH
• r = outer radius of coil in inches
• l = length of coil in inches
• N = number of turns

## 4. Multilayer air-core coil

$L = \frac{0.8r^2N^2}{6r+9l+10d}$

• L = inductance in μH
• r = mean radius of coil in inches
• l = physical length of coil winding in inches
• N = number of turns
• d = depth of coil in inches (i.e., outer radius minus inner radius)

## 5. Flat spiral air-core coil

$L=\frac{r^2N^2}{(2r+2.8d) \times 10^5}$

• L = inductance / H
• r = mean radius of coil / m
• N = number of turns
• d = depth of coil / m (i.e., outer radius minus inner radius)

Hence a spiral coil with 8 turns at a mean radius of 25 mm and a depth of 10 mm would have an inductance of 5.13 µH.

## 6. Winding around a toroidal core (circular cross-section)

$L=\mu_0\mu_r\frac{N^2r^2}{D}$

• L = inductance / H
• μr = relative permeability of core material
• N = number of turns
• r = radius of coil winding / m
• D = overall diameter of toroid / m
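Formulas 1 and 5 are easy to sanity-check numerically. The sketch below reproduces the worked spiral-coil figure above; the helper names are mine, not part of the article:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def solenoid_inductance(mu_r, turns, area_m2, length_m):
    """Formula 1: L = mu0 * mu_r * N^2 * A / l, in henries (SI units)."""
    return MU_0 * mu_r * turns**2 * area_m2 / length_m

def spiral_inductance(mean_radius_m, turns, depth_m):
    """Formula 5: L = r^2 N^2 / ((2r + 2.8d) * 1e5), in henries (SI units)."""
    r, d = mean_radius_m, depth_m
    return (r**2 * turns**2) / ((2 * r + 2.8 * d) * 1e5)

# The worked example above: 8 turns, 25 mm mean radius, 10 mm depth
# gives about 5.13 µH from spiral_inductance(0.025, 8, 0.010).
```

Keeping everything in SI units (metres, henries) and converting to µH only for display avoids mixing these formulas up with the Imperial-unit ones (formulas 3 and 4).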
http://crypto.stackexchange.com/questions/853/google-is-using-rc4-but-isnt-rc4-considered-unsafe?answertab=active
# Google is using RC4, but isn't RC4 considered unsafe?

Why is Google using RC4 for their HTTPS/SSL?

```
$ openssl s_client -connect www.google.com:443 | grep "Cipher is"
New, TLSv1/SSLv3, Cipher is RC4-SHA
```

Isn't RC4 unsafe to use?

- RC4 is safe for use in SSL/TLS. See here. Many banks still use it, including Citibank and Capital One. – David Schwartz Sep 29 '11 at 9:37
- @David Schwartz: things changed. – fgrieu Mar 19 '13 at 8:45
- Microsoft is using MD5. – Qiu Oct 19 '13 at 20:55

## 7 Answers

Wikipedia has a decent writeup on the known attacks on RC4. Most of them stem from biases in the output. To give you an idea of the severity of the attacks, see the following quotes from the Wikipedia page:

> The best such attack is due to Itsik Mantin and Adi Shamir who showed that the second output byte of the cipher was biased toward zero with probability 1/128 (instead of 1/256).

There is also this:

> Souradyuti Paul and Bart Preneel of COSIC showed that the first and the second bytes of the RC4 were also biased. The number of required samples to detect this bias is $2^{25}$ bytes.

The following bias in RC4 was used to attack WEP:

> ...over all possible RC4 keys, the statistics for the first few bytes of output keystream are strongly non-random, leaking information about the key. If the long-term key and nonce are simply concatenated to generate the RC4 key, this long-term key can be discovered by analysing a large number of messages encrypted with this key.

As far as I know, however, SSL/TLS does not use a long-term key with a nonce. It establishes a new key every connection (and even refreshes the key after some period of time). The take-away point is: RC4 has shown some weaknesses that have actually been exploited to attack real-world systems under certain configurations. But no one has shown if/how these weaknesses affect SSL/TLS.
If you are worried about it, however, I believe SSL/TLS has a cipher negotiation phase, so there is probably a way to force connections not to use RC4. This could open you up to other attacks though.

A major problem we still face is phasing out old encryption standards with vulnerabilities, without breaking web access to many sites and services for machines or networks running older software like WinXP and IE8, or that may have a high upgrade cost. I think we should just start forcing the issue, however, by announcing a timeline in which any vulnerable standards should be removed. In the leaked Snowden documents released so far there are references to the NSA already using very powerful supercomputers to try and exploit crypto as large as 2048-bit RSA keys, as well as targeting others like AES. Comments alluded to such enormous breakthroughs that

> only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it

From Bruce Schneier's blog:

> We're already trying to phase out 1024-bit RSA keys in favor of 2048-bit keys. Perhaps we need to jump even further ahead and consider 3072-bit keys.

He later published a new public key of length 4096.

Note that the above discussion may be out of date after the Snowden revelations. For example, see the "Hardening Internet Infrastructure" panel at IETF88, where Schneier speculates that the NSA expects to be able to break RC4 and is recruiting staff to do so.

- Hello and welcome to Crypto.SE. Please consider adding a link to the document you mention, as well as quoting the relevant passages. If the information you mention makes RC4 substantially insecure, do include it here as it's directly relevant. Cheers – rath Nov 15 '13 at 3:15

I do think that in the fullness of time the choice to forcibly migrate people to RC4 will be considered a folly. We recently had a PCI auditor command that we use RC4 to avoid the BEAST attack.
We had no option but to comply or face losing our PCI certification. Across the industry, people are fleeing from AES-CBC in response to this attack. Yet in my opinion using RC4 in TLS is much, much worse than incorrectly using CBC. Why, you ask?

Let's say that Alice and Bob disagree about the security of RC4. Alice thinks RC4 is completely fine for TLS; Bob disagrees. They decide to set up a little game. In each round of the game, Bob will submit two messages to Alice, m0 and m1. Alice will then choose an RC4 key at random, encrypt one of the messages and send it back to Bob. She always chooses the same message to encrypt for all the rounds in a specific game: either she always encrypts m0 or she always encrypts m1. Bob can have as many rounds as he likes, but at the end his goal is to tell which of the two messages he is getting back in each round. Now for any secure cipher, Bob wouldn't be able to do this with a probability greater than 50%. However, for RC4 he can do this trivially by requesting the encryption of pairs of two-byte messages.

Here's how it works. He sets m0 to 0xFFFF and he sets m1 to 0x0000. The second byte of an RC4 keystream has about twice the likelihood of being zero as it should: $\frac{2}{256}$ instead of $\frac{1}{256}$. His strategy is to just submit these messages again and again and again and again. He then records how often he sees 0xFF in the second byte versus how often he sees 0x00. He can use this to distinguish whether he's getting m0s or m1s.

This isn't some academic attack either; you can actually implement it in a few lines of Python and it runs in a handful of seconds. See here for some working code I've knocked together that demonstrates this attack. I've set the default number of trials to 2000. It occasionally gets it wrong on this setting, but most of the time the challenger and attacker agree.

Worse, this game isn't all that academic either. It is often the case with TLS that the same payload is encrypted under multiple keys.
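The bias that drives the game can indeed be reproduced in a few lines. The sketch below is my own re-implementation of the idea, not the answer's linked code; it estimates how often the second keystream byte is zero across random keys:

```python
import random

def rc4_keystream(key, n):
    """RC4: key-scheduling (KSA) followed by n bytes of keystream (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

def second_byte_zero_rate(trials, rng):
    """Fraction of random 16-byte keys whose second keystream byte is zero.
    Mantin-Shamir: about 2/256, double the 1/256 of an ideal cipher."""
    zeros = 0
    for _ in range(trials):
        key = [rng.randrange(256) for _ in range(16)]
        if rc4_keystream(key, 2)[1] == 0:
            zeros += 1
    return zeros / trials
```

With tens of thousands of trials the measured rate sits near $2/256 \approx 0.0078$ rather than $1/256 \approx 0.0039$; that doubling is exactly the signal Bob counts in the game above.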
Suppose we use TLS to secure POP or IMAP sessions. The initial phase of the protocol is identical for all users. Worse, it is often the case that the actual user passwords are transmitted over the TLS. An attacker might be able to use the biases in the first few hundred bytes of RC4 to significantly reduce an exhaustive search of a user password. He just sits there waiting for the user to connect to their POP server and records each session's bytes. People often configure their e-mail client to do this every few minutes. We're not expecting the user to do anything out of the ordinary! He then analyses these bytes with respect to already-known biases to prioritise a brute-force search of the password space.

I don't even think any new cryptanalysis is required to mount this attack. It is simply a question of developing a clever heuristic algorithm that uses the biases already known to significantly reduce the search space. It's just a matter of time until someone does this!

In summary, I think RC4 is as close to broken as you can get without being able to recover arbitrary plaintext. It should be retired urgently and nobody should be using it. Least of all Google.

It's not very practical yet (at least $2^{24}$ ciphertexts), but attacks can only get better, not worse. Remember how it was with WEP cracking.

Academically speaking, RC4 is terrible; it has easy distinguishers ("easy" means "can really be demonstrated in lab conditions"). It is also hard to use properly. However, SSL/TLS uses RC4 correctly, and in practice the shortcomings of RC4 have no real importance. The powers-that-be at Google decided to switch to RC4 by default because of the recent "BEAST" attack, which demonstrates (again, in lab conditions) a compromise of a Paypal cookie. There is no such dramatic demonstration for an attack on RC4 as used in SSL, so it was estimated that using AES-CBC with SSL/TLS 1.0 was "more risky" than using RC4.
The academically "right thing" to do would be to use AES-CBC with TLS 1.1 (or any ulterior version), which has no problem with BEAST and none of the RC4-related weaknesses either. However, Google makes money in the real world, and, as such, they cannot enforce a configuration which would prevent a third of their user base from connecting.

- Google was using RC4 by default before BEAST, simply because it's the lowest CPU burden to implement and they were leading the way in SSL-by-default, so at their scale this really mattered. – Phil P May 23 '12 at 15:37
- @PhilP: actually, with a recent enough x86 CPU (one with the AES-NI instructions), AES is vastly less expensive than RC4. Anyway, encryption speed is non-negligible only when doing bulk data transfer, which is not typical of what Google does. Most of what Google does is CPU-heavy and encryption cost is quite dwarfed by it. – Thomas Pornin May 23 '12 at 21:53
- Google has a lot of machines; replacing all of the CPUs to get a CPU with the latest feature is the sort of event which would turn shareholders white. No doubt at some point they'll have enough front-end boxes with AES-NI, and knowing Google it will be sooner than I might expect. – Phil P May 25 '12 at 13:57
- Indeed, there's new stuff; see this and this. The attack seems to recover part of a message repeated in multiple TLS sessions (for some bytes, it starts working at $2^{24}$ repeats). Attacks only get better; they never get worse. – fgrieu Mar 19 '13 at 7:56
- @WatsonLadd I think it's somewhat unfair to attack an answer that, at this point, is 2 years old and make it appear as if it was "completely and utterly wrong" to begin with, especially when it isn't! Yes, new evidence has come to light about the security of RC4 when used with TLS/SSL and a comment noting that (very important) fact should be made. Yes, the answer could, possibly, be updated to account for the new information. – Nik Bougalis Mar 20 '13 at 1:26
https://www.hpmuseum.org/forum/thread-13771-page-2.html
RPN scientific calculator firmware for $13 calculator kit

10-22-2019, 09:33 AM    Post: #21    toml_12953 (Senior Member, Posts: 1,739, Joined: Dec 2013)
RE: RPN scientific calculator firmware for $13 calculator kit

(10-21-2019 04:43 AM) jklsadf Wrote: The only thing I can think of that could cause LCD issues is either a connection issue or a low battery.

You nailed it! The problem was a low battery. That's the last time I buy a Sosumi (So Sue Me?) product.

Tom L
Cui bono?

10-22-2019, 09:40 AM    Post: #22    toml_12953 (Senior Member, Posts: 1,739, Joined: Dec 2013)
RE: RPN scientific calculator firmware for $13 calculator kit

(10-22-2019 05:46 AM) jklsadf Wrote: (10-22-2019 04:59 AM) toml_12953 Wrote: I'm trying to compile the new firmware with STO, RCL, etc. I use 64-bit Ubuntu with 8GB RAM. When I type make, I get this:

```
?ASlink-Error-Insufficient ROM/EPROM/FLASH memory.
make: *** [Makefile:19: main] Error 1
```

What version of Ubuntu/SDCC are you using to build? On Ubuntu 18.04LTS with SDCC 3.5.0 doing "make clean" and then "make" gives a binary 12,212 bytes in size, which is a bit tight (but still 1K free). I think newer versions of SDCC might be less efficient at code generation. If you do want to build it, SDCC has few dependencies, so I think it is possible to install older versions of the package. Other compilers (e.g. Keil) are probably even more efficient, but I don't have a license for the 8051, and also it's not free.

I'm using Ubuntu 19.10 with SDCC 3.8.0. Code size is 13312. I'll look for an older SDCC. I wonder why it's getting less efficient!

Tom L
Cui bono?

10-22-2019, 11:04 AM (This post was last modified: 10-22-2019 11:05 AM by toml_12953.)    Post: #23    toml_12953 (Senior Member, Posts: 1,739, Joined: Dec 2013)
RE: RPN scientific calculator firmware for $13 calculator kit

(10-22-2019 05:46 AM) jklsadf Wrote: What version of Ubuntu/SDCC are you using to build?
On Ubuntu 18.04LTS with SDCC 3.5.0 doing "make clean" and then "make" gives a binary 12,212 bytes in size, which is a bit tight (but still 1K free). I think newer versions of SDCC might be less efficient at code generation. If you do want to build it, SDCC has few dependencies, so I think it is possible to install older versions of the package. Other compilers (e.g. Keil) are probably even more efficient, but I don't have a license for the 8051, and also it's not free.

I found SDCC 3.5.0 and now make works, but I get the following:

```
Name             Start    End      Size     Max
---------------- -------- -------- -------- --------
ROM/EPROM/FLASH  0x0000   0x2fb7   12216    13312
```

I guess you won't have room for any trigonometric functions like sine, cosine, etc., eh?

Tom L
Cui bono?

10-22-2019, 11:59 AM    Post: #24    toml_12953 (Senior Member, Posts: 1,739, Joined: Dec 2013)
RE: RPN scientific calculator firmware for $13 calculator kit

Success! I was able to compile v1.07 and upload it to the calculator. Now that I have all the software in place, future updates will be easier! One user suggested the universal symbol for roll down (Rv) rather than the loop you have above the 4 now. I'm going to make a similar change to my personal template but with a down arrow instead of the "v". I'd suggest you use that as well but I won't update my public template unless you say OK.

Tom L
Cui bono?

10-22-2019, 04:03 PM (This post was last modified: 10-22-2019 04:07 PM by jklsadf.)    Post: #25    jklsadf (Junior Member, Posts: 38, Joined: Nov 2017)
RE: RPN scientific calculator firmware for $13 calculator kit

SDCC is indeed getting less efficient for the 8051, since it's no longer the focus of development. See for example this thread: https://sourceforge.net/p/sdcc/discussio...589cc8d57/ It is kind of sad, since I think SDCC was originally written for the 8051, but to be fair the 8051 is a pretty horrible architecture for programming in anything but assembly (in my opinion).
The architectures Philipp says the situation is even worse for (the ds390, pic14, and pic16) are similarly fairly horrible for programming in anything but assembly (again in my opinion), so it's no surprise the compiler developers have moved on to "greener pastures". I think fitting the CORDIC trig functions from the HP 35 is probably not easily doable with SDCC 3.5.0 without first freeing up space some other way. I was thinking of using some of Valentin Albillo's algorithms for the trig functions: http://www.hpcc.org/datafile/hp12/12c_Tr...ctions.pdf Those functions use the Taylor series expansion after range reduction/various trig identities, and the series expansion is fairly repetitive, so it should take less code space. The downside is that it's probably slower.

Feel free to change the roll-down symbol on the template. It's one of the symbols from Elektronika calculators I personally like (the other being an up arrow for Enter). After you get used to it, it's a pretty good representation of what the stack does, although maybe a bit sacrilegious on an HP calculator forum. Rv is definitely more standard on HP calculators.

10-22-2019, 08:42 PM (This post was last modified: 10-22-2019 08:44 PM by toml_12953.)    Post: #26    toml_12953 (Senior Member, Posts: 1,739, Joined: Dec 2013)
RE: RPN scientific calculator firmware for $13 calculator kit

OK, now I KNOW the HP 42S doesn't do this!

Press 8
Press Enter
Press 3
Press +
Display shows 0 11
Press Shift CLx
Display shows 0 0
Press -
Display shows 0 -11

Huh? The HP 42S shows 0 as it should. The X and Y registers should have 0 in them after the CLx, and 0 - 0 = 0 (AFAIK).

Tom L
Cui bono?

10-22-2019, 10:56 PM (This post was last modified: 10-23-2019 12:20 AM by jklsadf.)    Post: #27    jklsadf (Junior Member, Posts: 38, Joined: Nov 2017)
RE: RPN scientific calculator firmware for $13 calculator kit

That is indeed a bug.
ClearX only clears the number entry, but doesn't actually clear the X register (which is separate from digit entry to make processing digit entry easier). I think it should be just a one-line fix, but need to test a little.

EDIT: just released a new bugfix version 1.08

10-23-2019, 01:47 PM    Post: #28    toml_12953 (Senior Member, Posts: 1,739, Joined: Dec 2013)
RE: RPN scientific calculator firmware for $13 calculator kit

Is there a way to enter an exponent during number entry? I'd like to be able to enter scientific notation numbers. The only way I see to do it now is this. Example: enter 1.67e14:

1.67 Enter 14 Shift 6 (10^x) *

That wouldn't be too bad if the base-10 antilog returned integers when given an integer argument, but it doesn't always (as in the example above).

Tom L
Cui bono?

10-23-2019, 03:59 PM    Post: #29    jklsadf (Junior Member, Posts: 38, Joined: Nov 2017)
RE: RPN scientific calculator firmware for $13 calculator kit

Yes, the . key also serves as the enter exponent key. The first press enters a decimal point, the second press begins exponent entry, and the third press negates the exponent. This is copied from old Sinclair scientific calculators (which also had a limited number of keys).

10-28-2019, 10:50 AM    Post: #30    toml_12953 (Senior Member, Posts: 1,739, Joined: Dec 2013)
RE: RPN scientific calculator firmware for $13 calculator kit

I was trying to reflash one of my chips and got an error partway through. Now I can't power cycle the calculator to reflash it properly. Is there any hope for the chip? If so, how can I get the flash process to continue? It's not a big deal since $12.00 or so for a whole new kit isn't a big strain on my budget, but still, if there's a way to save the chip, I'd like to know how.

Tom L
Cui bono?
Post: #31
jklsadf, Junior Member, Posts: 38, Joined: Nov 2017
RE: RPN scientific calculator firmware for $13 calculator kit

These chips are very hard to completely "brick" since the bootloader is permanently burned into ROM, and isn't overwritable as far as I know. Even with no code loaded (or corrupted code loaded) you should be able to start the stcgal programmer software, turn off the calculator by removing the batteries, and then turn on the calculator by reinserting the batteries and pressing the ON button. The stcgal programmer software continuously sends a special sequence of bytes. On power-on, the STC microcontroller runs the bootloader code (before any other code) to check for this sync sequence, and enters programming mode if found. This should all work even with no code loaded (or corrupted code loaded), since the bootloader is permanently burned into ROM.

10-28-2019, 07:44 PM (This post was last modified: 10-28-2019 07:54 PM by jklsadf.) Post: #32
jklsadf, Junior Member, Posts: 38, Joined: Nov 2017
RE: RPN scientific calculator firmware for $13 calculator kit

You might be able to turn off the calculator without removing the battery (or whatever power supply you're using) by shorting the base of Q2 to ground (short the middle pin of Q2 to the left pin of Q2), which should turn off the soft-latching power switch. Then press the On button to turn on again.

10-28-2019, 08:17 PM Post: #33
jklsadf, Junior Member, Posts: 38, Joined: Nov 2017
RE: RPN scientific calculator firmware for $13 calculator kit

One last thing I just thought of... in order to fully turn off the calculator so that it can be turned on again to run the bootloader, it might be necessary to prevent the microcontroller from being parasitically powered through the serial port pins if you have the USB-to-serial adapter connected. The easiest way to do that is to unplug the USB-to-serial adapter from your computer while turning off the calculator.
For a more permanent setup, you can add a diode/resistor as shown on page 12 in the datasheet (note the pinout of the microcontroller shown is different, but the basic idea is the same: the diode prevents the calculator from being parasitically powered, and I think the 300 ohm resistor is just for isolation). The diode used is unimportant; just about any diode will work. The stcgal FAQ has other possible ideas for how to prevent parasitic powering, but the datasheet method works well.

10-29-2019, 12:05 PM Post: #34
toml_12953, Senior Member, Posts: 1,739, Joined: Dec 2013
RE: RPN scientific calculator firmware for $13 calculator kit

(10-28-2019 08:17 PM)jklsadf Wrote:  One last thing I just thought of... in order to fully turn off the calculator so that it can be turned on again to run the bootloader, it might be necessary to prevent the microcontroller from being parasitically powered through the serial port pins if you have the USB-to-serial adapter connected. The easiest way to do that is to unplug the USB-to-serial adapter from your computer while turning off the calculator.

That was my problem! I had removed the batteries but the USB serial converter was plugged in. When I removed it from the computer and plugged it back in, the download executed. Now I have two RPN calculators, and I'll have to buy a third DIY kit to have one in original shape unless I can get the hex file for the resistor color code calculator. Thanks for the help.

I've uploaded a revised key layout to Dropbox. It has slightly darker numeric keys, the trig functions are on keys 1, 2, 3, and the arc function is on the minus key. I followed your idea for them.

DIY Keyboard Template

Tom L
Cui bono?

10-30-2019, 02:01 AM Post: #35
jklsadf, Junior Member, Posts: 38, Joined: Nov 2017
RE: RPN scientific calculator firmware for $13 calculator kit

Glad it worked out for you!
I did print out your template in color a while back, and have been meaning to solder up a second kit with better components (FSTN LCD, Omron tactile switches, and shorter screws into 12 mm standoffs instead of the super long screw and nut provided). Someday... also the trig functions when I have time. (The log/exponential functions were a lot more motivating, because with those came the ability to calculate arbitrary powers. Trigs I rarely use, and there are usually at least 3 calculators and sometimes a slide rule on my desk with trigs on the rare occasion I do need them.)

I'd be surprised if you could get the original hex file, since it's not on the website, although luckily they at least included the full schematic. There do seem to be multiple sellers on eBay, so maybe the hex file is out there somewhere. They might all be the same seller though, or from the same original source, since most of the sellers do seem to be in Shenzhen. I would be interested in knowing how much flash space the original firmware used.

10-31-2019, 06:32 PM Post: #36
toml_12953, Senior Member, Posts: 1,739, Joined: Dec 2013
RE: RPN scientific calculator firmware for $13 calculator kit

Could you make it possible to raise a negative number to a positive integer power? Right now numbers like -5 ^ 8 give an error. When raising to a positive integer power, you don't need the log functions. Is there enough room left in firmware memory to test for that?

Tom L
Cui bono?

10-31-2019, 06:49 PM Post: #37
grsbanks, Senior Member, Posts: 1,219, Joined: Jan 2017
RE: RPN scientific calculator firmware for $13 calculator kit

(10-31-2019 06:32 PM)toml_12953 Wrote: Could you make it possible to raise a negative number to a positive integer power? Right now numbers like -5 ^ 8 give an error. When raising to a positive integer power, you don't need the log functions. Is there enough room left in firmware memory to test for that?

The same applies to negative integer exponents.
Simply raise to the positive power and then calculate the reciprocal.

There are only 10 types of people in this world: those who understand binary and those who don't.

11-01-2019, 01:08 AM Post: #38
jklsadf, Junior Member, Posts: 38, Joined: Nov 2017
RE: RPN scientific calculator firmware for $13 calculator kit

It would be possible to do, but definitely fairly low priority given the amount of flash space left. I don't think it comes up enough in terms of everyday usage for the amount of flash it would take. (For what it's worth, I think the HP 35 also gives an error: it's one of the examples given in "The New Accuracy: Making 2^3 = 8" in the HP Journal.)

My current priorities to fill the flash are, approximately in order: trig, "continuous memory" by storing to flash when turning off (all the flash/eeprom is in-application-programmable on the microcontroller used, apparently), auto power-off, resistor color code decoder, hex/dec converter, and then special cases. It's probably possible to free up space elsewhere, but there aren't many easy, major changes I can think of that would magically give a large amount of space back (i.e. similar to switching functions to not be re-entrant and not pass pointers).

01-16-2020, 10:43 PM Post: #39
toml_12953, Senior Member, Posts: 1,739, Joined: Dec 2013
RE: RPN scientific calculator firmware for $13 calculator kit

Any progress on the software front? I have 1.09 but am waiting for further enhancements.

Tom L
Cui bono?

01-28-2020, 08:20 AM Post: #40
jklsadf, Junior Member, Posts: 38, Joined: Nov 2017
RE: RPN scientific calculator firmware for $13 calculator kit

I have not made much progress recently; what's on GitHub is fairly up to date. The last change I made was some prototyping work for replacing the existing square root implementation (using logs/antilogs) with a fast reciprocal square root implementation. I haven't done the implementation for the calculator yet though.
The motivation was to reduce the decimal floating point register usage for square root, to use it as part of trig algorithms like those used by Valentin Albillo for the HP 12C. I do still use this calculator regularly (it has pretty much a permanent place on my desk at home), but haven't spent much time on it recently. I do want to get trigs done, but there's probably not much flash left for much else... even for the trigs I will probably need to rewrite some other parts to free up some space, so it hasn't been that motivating given how rarely I use trigs.
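The fast reciprocal square root idea mentioned in the last post can be illustrated with the classic Newton iteration y ← y(3 − xy²)/2, which converges to 1/√x using only multiplies and subtracts inside the loop (no logs, no division); √x is then recovered as x·(1/√x). This is a generic sketch of the technique in Python, not the poster's decimal floating-point implementation:

```python
import math

def rsqrt(x, iters=8):
    """Approximate 1/sqrt(x) by Newton's method on f(y) = 1/y**2 - x.
    The update y <- y * (3 - x*y*y) / 2 needs no division or logs."""
    if x <= 0.0:
        raise ValueError("x must be positive")
    # Seed from the binary exponent: x = m * 2**e with 0.5 <= m < 1,
    # so y0 = 2**(-(e//2)) guarantees 0 < x*y0**2 < 2 and convergence.
    _, e = math.frexp(x)
    y = 2.0 ** (-(e // 2))
    for _ in range(iters):
        y = y * (3.0 - x * y * y) * 0.5
    return y

def sqrt_via_rsqrt(x):
    """sqrt(x) = x * (1/sqrt(x)); avoids computing exp(0.5*ln(x))."""
    return 0.0 if x == 0.0 else x * rsqrt(x)
```

On a decimal-BCD calculator the seed would come from the decimal exponent instead, but the loop is the same; convergence is quadratic, so a handful of iterations reaches full precision from a one-digit seed.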
https://math.stackexchange.com/questions/1756082/what-is-visual-cryptography
# What is visual cryptography?

Question:

1. What is visual cryptography?
2. How does it work for secret image sharing?

Attempt: I have tried to understand the concept of secret image sharing for black and white pixels from here: http://www.datagenetics.com/blog/november32013/. But I have a doubt about pixel expansion in this case. Suppose I have an image of $212\times 212$ pixels. How can I encode this image into ciphertext using pixel expansion?

• Basically, you make sure that the combined 4-pixel block gives all black if the source pixel is black, and gives half black if the source pixel is white. – Kenny Lau Apr 24 '16 at 1:42
• @KL, thank you for your kind response. Due to a network problem I am late. I shall update my post with an example. – Warrior Apr 24 '16 at 2:10

The rule is already given here: Let $$\begin{matrix}\blacksquare\square\\\square\blacksquare\end{matrix}$$ be pattern 1 and $$\begin{matrix}\square\blacksquare\\\blacksquare\square\end{matrix}$$ be pattern 2. Notice that they add up to a black square, and when added to themselves, they create a half-black square.

If the source pixel is black:

• For half of the time, give pattern 1 to the first encrypted image and pattern 2 to the second encrypted image.
• For the other half of the time, give pattern 2 to the first image and pattern 1 to the second.

If the source pixel is white:

• For half of the time, give pattern 1 to both the first encrypted image and the second encrypted image.
• For the other half of the time, give pattern 2 instead.

### Variations

As stated in the website, they do not have to be a checkerboard pattern. As long as pattern 1 is the complement of pattern 2, i.e. they share no common black pixel and they add up to a large black pixel, then you can use them. You are also encouraged to use different varieties.
### Program:

In Pyth:

```
J.z                      "assign the inputs to J"
=GmmO6lhJlJ              "assign G to an array filled with random
                          numbers from 0 to 5, each corresponding
                          to a pattern"
=H.e.e?q@@JkY"■"-5ZZbG   "assign H to G, then flip the number if
                          the corresponding pixel is black"
jbmsMdG
jbmsMdH                  "print both arrays"
```

Try it online!

Correspondence:

• 0 corresponds to $$\begin{matrix}\blacksquare\blacksquare\\\square\square\end{matrix}$$.
• 1 corresponds to $$\begin{matrix}\blacksquare\square\\\blacksquare\square\end{matrix}$$.
• 2 corresponds to $$\begin{matrix}\blacksquare\square\\\square\blacksquare\end{matrix}$$.
• 3 corresponds to $$\begin{matrix}\square\blacksquare\\\blacksquare\square\end{matrix}$$.
• 4 corresponds to $$\begin{matrix}\square\blacksquare\\\square\blacksquare\end{matrix}$$.
• 5 corresponds to $$\begin{matrix}\square\square\\\blacksquare\blacksquare\end{matrix}$$.

They are deliberately assigned so that if their codes add up to 5, then the patterns add up to a large black pixel.

• @KL, it will be better if you do this. +1 for you. – Warrior Apr 24 '16 at 2:12
• @Warrior Please tell me if you need another input format. I am aware that the current input format is not so useful. – Kenny Lau Apr 24 '16 at 2:19
• If every pixel is divided into, say, 2 subpixels horizontally, then this would increase the width of the superimposed image by a factor of 2? – Upstart Oct 5 '17 at 9:24
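For reference, the same 2-out-of-2 scheme can be written out in Python (an illustrative implementation, not a translation of the Pyth program above). Each source pixel expands into a 2×2 block per share, so the 212×212 image from the question becomes two 424×424 shares; this is exactly the pixel expansion the asker is worried about. Overlaying the shares (pixelwise OR) yields all-black blocks for black pixels and half-black blocks for white ones:

```python
import random

# 2-of-2 visual cryptography: each source pixel (0 = white, 1 = black)
# expands into a 2x2 block of subpixels in each share. Every pattern
# below has exactly 2 black subpixels, so a single share reveals nothing.
PATTERNS = [
    ((1, 1), (0, 0)), ((0, 0), (1, 1)),   # horizontal pairs
    ((1, 0), (1, 0)), ((0, 1), (0, 1)),   # vertical pairs
    ((1, 0), (0, 1)), ((0, 1), (1, 0)),   # diagonal pairs
]

def complement(p):
    """Flip every subpixel, giving the pattern that ORs with p to all-black."""
    return tuple(tuple(1 - v for v in row) for row in p)

def encrypt(image, rng=random):
    """Split a binary image into two shares with 2x pixel expansion."""
    share1, share2 = [], []
    for row in image:
        r1, r2 = [[], []], [[], []]
        for pixel in row:
            p = rng.choice(PATTERNS)
            # black pixel: complementary patterns; white: identical patterns
            q = complement(p) if pixel else p
            for i in range(2):
                r1[i].extend(p[i])
                r2[i].extend(q[i])
        share1.extend(r1)
        share2.extend(r2)
    return share1, share2

def overlay(s1, s2):
    """Stacking transparencies = pixelwise OR of the two shares."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```

Decryption needs no computation at all: printing the two shares on transparencies and stacking them performs the OR optically, and the eye reads black (4/4 subpixels) against grey (2/4 subpixels).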
https://www.semanticscholar.org/paper/Mixture-Models-With-a-Prior-on-the-Number-of-Miller-Harrison/1f18f14f26c365a7cd14d0067870d664e1e7e9bf
# Mixture Models With a Prior on the Number of Components

@article{Miller2015MixtureMW,
  title={Mixture Models With a Prior on the Number of Components},
  author={Jeffrey W. Miller and Matthew T. Harrison},
  journal={Journal of the American Statistical Association},
  year={2015},
  volume={113},
  pages={340 - 356}
}

• Published 22 February 2015 • Computer Science • Journal of the American Statistical Association

ABSTRACT A natural Bayesian approach for mixture models with an unknown number of components is to take the usual finite mixture model with symmetric Dirichlet weights, and put a prior on the number of components—that is, to use a mixture of finite mixtures (MFM). The most commonly used method of inference for MFMs is reversible jump Markov chain Monte Carlo, but it can be nontrivial to design good reversible jump moves, especially in high-dimensional spaces. Meanwhile, there are samplers for…

175 Citations

Rigor is added to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM posterior on the number of components is ultraseverely inconsistent: for any finite $k \in \mathbb{N}$, the posterior probability that the number of components is $k$ converges to 0 in the limit of infinite data.

• Computer Science, Mathematics • 2022

It is shown that a post-processing algorithm introduced by Guha et al. (2021) for the Dirichlet process extends to more general models and provides a consistent method to estimate the number of components, and possible solutions are discussed.

A new class of priors is introduced: the Normalized Independent Point Process, which is based on an auxiliary-variable MCMC that allows handling the otherwise intractable posterior distribution and overcomes the challenges associated with the Reversible Jump algorithm.
• Computer Science, Mathematics • 2021

This work derives a sampler that is straightforward to implement for mixing distributions with tractable size-biased ordered weights, and mitigates the label-switching problem in infinite mixtures.

• Computer Science • Journal of Computational and Graphical Statistics • 2021

A general framework for mixture models is presented for the case when the prior of the "cluster centers" is a finite repulsive point process depending on a hyperparameter, specified by a density which may depend on an intractable normalizing constant.

• Computer Science • 2022

The results show that the choice of prior is critical for deriving reliable posterior inferences in problems of higher dimensionality, and the use of the DPMM in clustering is also applicable to density estimation.

• Mathematics, Computer Science • Biometrika • 2022

This work focuses on consistency for the unknown number of clusters when the observed data are generated from a finite mixture, and considers the situation where a prior is placed on the concentration parameter of the underlying Dirichlet process.

• Mathematics • 2020

Discrete nonparametric priors play a central role in a variety of Bayesian procedures, most notably when used to model latent features as in clustering, mixtures and curve fitting. They are effective

• Computer Science • Bernoulli • 2021

It will be shown that the modeling choice of kernel density functions plays perhaps the most impactful role in determining the posterior contraction rates in the misspecified situations.

This thesis shows how to overcome certain intractabilities in order to obtain analogous compact representations for the class of Poisson-Kingman priors, which includes the Dirichlet and Pitman-Yor processes.

## References

SHOWING 1-10 OF 136 REFERENCES

• Computer Science • J. Mach. Learn. Res.
• 2014

It is shown that the posterior on data from a finite mixture does not concentrate at the true number of components, and this result applies to a large class of nonparametric mixtures, including DPMs and PYMs, over a wide variety of families of component distributions.

• Computer Science, Mathematics • NIPS • 2013

An elementary proof of this inconsistency is given in what is perhaps the simplest possible setting: a DPM with normal components of unit variance, applied to data from a "mixture" with one standard normal component.

• Computer Science • 2006

A variational inference algorithm for DP mixtures is presented, together with experiments that compare the algorithm to Gibbs sampling algorithms for DP mixtures of Gaussians and an application to a large-scale image analysis problem.

• Computer Science, Mathematics • 2001

A weighted Bayes factor method for consistently estimating d that can be implemented by an iid generalized weighted Chinese restaurant (GWCR) Monte Carlo algorithm; the performance of the new GWCR model selection procedure is compared with that of the Akaike information criterion and the Bayes information criterion implemented through an EM algorithm.

• Mathematics • 2005

In recent years the Dirichlet process prior has experienced a great success in the context of Bayesian mixture modeling. The idea of overcoming discreteness of its realizations by exploiting it in

• Mathematics • Stat. Comput. • 2011

A more efficient version of the slice sampler for Dirichlet process mixture models described by Walker allows for the fitting of infinite mixture models with a wide range of prior specifications, and considers priors defined through infinite sequences of independent positive random variables.
• Mathematics • 1997

New methodology for fully Bayesian mixture analysis is developed, making use of reversible jump Markov chain Monte Carlo methods that are capable of jumping between the parameter subspaces

Finite mixture distributions are receiving more and more attention from statisticians in many different fields of research because they are a very flexible class of models. They are typically used

• Mathematics • 2003

The class of species sampling mixture models is introduced as an extension of semiparametric models based on the Dirichlet process to models based on the general class of species sampling priors.

In Bayesian density estimation and prediction using Dirichlet process mixtures of standard, exponential family distributions, the precision or total mass parameter of the mixing Dirichlet process is
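To make the MFM construction in the abstract concrete, here is a minimal generative sketch: draw the number of components K from a prior on the positive integers, draw symmetric Dirichlet weights, draw component parameters, then sample data. The specific choices below (K ~ 1 + Poisson(1), Dirichlet concentration 1, unit-variance normal components with N(0, 5²) means) are illustrative defaults, not the paper's:

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's multiplicative method (fine for small lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_mfm(n, gamma=1.0, rng=None):
    """Draw one dataset from a mixture of finite mixtures (MFM):
    K ~ prior on {1, 2, ...}, weights ~ Dirichlet(gamma, ..., gamma),
    component means ~ N(0, 5^2), observations ~ N(mean_z, 1)."""
    rng = rng or random.Random(0)
    k = 1 + sample_poisson(1.0, rng)                  # prior on K
    raw = [rng.gammavariate(gamma, 1.0) for _ in range(k)]
    weights = [w / sum(raw) for w in raw]             # symmetric Dirichlet
    means = [rng.gauss(0.0, 5.0) for _ in range(k)]   # component parameters
    labels = rng.choices(range(k), weights=weights, k=n)
    data = [rng.gauss(means[z], 1.0) for z in labels]
    return k, weights, means, labels, data
```

The paper's point is about inference for this model: rather than reversible jump moves across dimensions, MFMs admit collapsed samplers analogous to those used for Dirichlet process mixtures.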
https://www.physicsforums.com/threads/description-of-magnetic-field-in-black-holes.236646/
# Description of magnetic field in black holes

1. May 22, 2008

### Srr

Given a Schwarzschild BH, a neutron falls into the BH. The neutron, having a nonzero magnetic moment, will carry a magnetic field B with it. How do I describe the new system? On which parameters will the metric depend?

In terms of classical GR, the Kerr-Newman solution provides a B in terms of the charge Q and angular momentum J (and the no-hair theorem is satisfied). I see that a paradox might emerge.

1) If the B from the added neutron can be described by a Kerr-Newman solution, then I don't know how to explain charge conservation. For both of the initial systems, the Schwarzschild BH and the neutron, Q is zero.

2) If I instead have a solution for the metric with zero Q and J, the metric will have to depend on the intrinsic magnetic moment. This would violate the no-hair theorem.

Another way to state the problem is the following. Consider a macroscopic neutral magnet. Somehow it shrinks to form a BH. Again, which will be the parameters in the metric? If Q and J, then I cannot explain Q conservation. If the magnetic moment, then the no-hair theorem is violated.
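For context (a standard result, added here for reference rather than stated in the original post): the stationary Kerr-Newman exterior does carry a magnetic dipole moment, but it is fixed entirely by the "hair" Q and J, with gyromagnetic ratio g = 2:

```latex
% Asymptotic magnetic dipole moment of the Kerr-Newman solution
% (geometrized units, G = c = 1, with a = J/M):
\mu = Q\,a = \frac{Q J}{M},
\qquad g \equiv \frac{2 M \mu}{Q J} = 2 .
```

So with Q = 0 the stationary end state carries no dipole field at all. The usual resolution of the apparent paradox is that a magnetic moment, unlike Q, is not a conserved charge: the neutron's dipole field is radiated away during infall rather than being encoded in the final metric.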
https://zenodo.org/record/3243857/export/xd
Conference paper | Open Access

# Advanced numerical simulation and modelling for reactor safety – Contributions from the CORTEX, HPMC, McSAFE and NURESAFE projects

Demazière C.; Sanchez-Espinoza V. H.; Chanaron B.

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Demazière C.</dc:creator>
  <dc:creator>Sanchez-Espinoza V. H.</dc:creator>
  <dc:creator>Chanaron B.</dc:creator>
  <dc:date>2019-06-07</dc:date>
  <dc:description>Predictive modelling capabilities have long represented one of the pillars of reactor safety. In this paper, an account of some projects funded by the European Commission within the seventh Framework Program (HPMC and NURESAFE projects) and Horizon2020 Program (CORTEX and McSAFE) is given. Such projects aim at, among others, developing improved solution strategies for the modelling of neutronics, thermal-hydraulics, and/or thermo-mechanics during normal operation, reactor transients and/or situations involving stationary perturbations. Although the different projects have different focus areas, they all capitalize on the most recent advancements in deterministic and probabilistic neutron transport, as well as in DNS, LES, CFD and macroscopic thermal-hydraulics modelling. The goal of the simulation strategies is to model complex multi-physics and multi-scale phenomena specific to nuclear reactors.
The use of machine learning combined with such advanced simulation tools is also demonstrated to be capable of providing useful information for the detection of anomalies during operation.</dc:description>
  <dc:identifier>https://zenodo.org/record/3243857</dc:identifier>
  <dc:identifier>10.5281/zenodo.3243857</dc:identifier>
  <dc:identifier>oai:zenodo.org:3243857</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:relation>info:eu-repo/grantAgreement/EC/H2020/754316/</dc:relation>
  <dc:relation>doi:10.5281/zenodo.3243856</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:subject>simulation, modelling, CORTEX, MCSAFE, NURESAFE, reactor safety</dc:subject>
  <dc:title>Advanced numerical simulation and modelling for reactor safety – Contributions from the CORTEX, HPMC, McSAFE and NURESAFE projects</dc:title>
  <dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
  <dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
http://www.kinberg.net/wordpress/stellan/impulse-allergy/
# Impulse allergy

Introduction

The name "impulse allergy" or "impression allergy" (in Swedish intrycksallergi, in Italian "allergia di impulsi" or "allergia di impressioni") was mentioned on February 11 in the Swedish news program Rapport, in a report about the importance of teachers learning about this phenomenon, which children with ADHD suffer from. https://www.svt.se/nyheter/lokalt/smaland/fyra-av-fem-larare-tycker-inte-de-lart-sig-att-undervisa-barn-med-diagnos

Gustav Fridolin. Image source: Creative Commons Wiki

The Swedish education minister Gustav Fridolin was invited to try to learn something while surrounded by a lot of distracting noises. He said it was difficult after the test. ( https://www.svt.se/nyheter/val2018/har-far-utbildningsministern-testa-adhd-i-fem-minuter )

The focus was on the attention deficit problem (ADD) that many children with ADHD have. Impulse allergy was the name given to ADD. I will focus on this phenomenon in this article.

Attention deficit hyperactivity disorder (ADHD) affects children and teens and can continue into adulthood. ADHD is the most commonly diagnosed mental disorder of children. Children with ADHD may be hyperactive and unable to control their impulses. Or they may have trouble paying attention.

"one third of the adults affected with AADD do not show any hyperactive behavior" ( brainblogger.com/ )

The article continues: "In the brain of patients with AADD, executive function is impaired. This is the function that governs a person's ability to monitor their own behavior by organizing and planning. This disorder affects approximately 2 to 4% of adults…" ( brainblogger.com/ )

Cognitive impulsiveness

"…. AADD patients are often the types seen by others as not thinking before they speak or act….
" ( brainblogger.com/ )

Medication in affected children

"…..Ritalin is the most commonly known medication and is used in the treatment of ADD in children with some success." ( brainblogger.com/ )

Having worked with children with ADD (attention deficit disorder) and ADHD, I find that "impulse allergy" is a very good word to describe the problem these children have in the classroom. I have found that children suffering from ADD have great difficulties listening to and understanding the teacher if there is noise around, like a classmate making sounds of some sort, or if someone is moving around. These children cannot filter out those disturbing impulses. It seems that the brain is normally able to lower the volume of disturbing sounds you are not interested in, so you can better concentrate on what e.g. the teacher is talking about.

https://www.svt.se/nyheter/val2018/detta-ar-intrycksallergi explains impulse allergy well. ( Install the Google Translate extension from the Chrome Web Store to get a discreet translation into your language. )

The ticking clock example

Image source: www.kisspng.com

I usually explain the brain's filtering function with the example of a ticking clock. During the day, when you are relaxed, you may not even hear the clock ticking. But at night, when your brain is tired and its filtering function is not working as expected, the sound of the ticking clock is louder and you cannot sleep. You simply have to stop that clock to be able to relax and sleep.

Sleeplessness

Sites

There are several Swedish sites informing about impulse allergy, among others:

https://www.svt.se/nyheter/val2018/jag-trodde-alla-kande-sa-har

Classrooms for impulse-sensitive children:

https://www.svt.se/nyheter/val2018/har-kan-kepsen-bli-ett-hjalpmedel

Do you want to know more? Contact me.
https://quijote-simulations.readthedocs.io/en/latest/png.html
# Primordial non-Gaussianities

Quijote contains 4,000 N-body simulations with primordial non-Gaussianities: Quijote-PNG. All these simulations contain $$512^3$$ dark matter particles in a periodic volume of $$(1~h^{-1}{\rm Gpc})^3$$ and share the same cosmology as the fiducial model: $$\Omega_{\rm m}=0.3175$$, $$\Omega_{\rm b}=0.049$$, $$h=0.6711$$, $$n_s=0.9624$$, $$\sigma_8=0.834$$, $$w=-1$$, $$M_\nu=0.0$$ eV. These are standard N-body simulations run with initial conditions generated in a particular way.

The simulations in Quijote-PNG can be classified into four different sets: 1) local, 2) equilateral, 3) orthogonal CMB, and 4) orthogonal LSS (see Bispectrum shapes). Each set contains 1,000 simulations: 500 with $$f_{\rm NL}=+100$$ and 500 with $$f_{\rm NL}=-100$$. Quijote-PNG is thus organized into eight different folders, depending on the non-Gaussianity shape and the value of $$f_{\rm NL}$$:

- LC_p: contains data from 500 simulations with local type and $$f_{\rm NL}=+100$$
- LC_m: contains data from 500 simulations with local type and $$f_{\rm NL}=-100$$
- EQ_p: contains data from 500 simulations with equilateral type and $$f_{\rm NL}=+100$$
- EQ_m: contains data from 500 simulations with equilateral type and $$f_{\rm NL}=-100$$
- OR_CMB_p: contains data from 500 simulations with orthogonal CMB type and $$f_{\rm NL}=+100$$
- OR_CMB_m: contains data from 500 simulations with orthogonal CMB type and $$f_{\rm NL}=-100$$
- OR_LSS_p: contains data from 500 simulations with orthogonal LSS type and $$f_{\rm NL}=+100$$
- OR_LSS_m: contains data from 500 simulations with orthogonal LSS type and $$f_{\rm NL}=-100$$

Each of the above folders contains 500 sub-folders, each of them hosting the result of a different simulation. For instance, the folder EQ_p/72/ contains the results of the 72nd simulation run with $$f_{\rm NL}=+100$$ for the equilateral shape. Depending on the location, these folders will contain the snapshots, halo catalogues, or other data products.
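A sketch of how one might enumerate the resulting directory tree; the root path below is a placeholder for wherever your copy of the data lives, not part of the Quijote distribution:

```python
import os

# Hypothetical local root for the Quijote-PNG data; adjust to your copy.
root = "/data/Quijote-PNG"

shapes = ["LC", "EQ", "OR_CMB", "OR_LSS"]   # local, equilateral, orthogonal CMB/LSS
signs = {"p": +100, "m": -100}              # folder suffix -> f_NL value

# Enumerate all 8 top-level folders and their 500 realization sub-folders.
folders = []
for shape in shapes:
    for suffix, fnl in signs.items():
        for i in range(500):
            folders.append(os.path.join(root, f"{shape}_{suffix}", str(i)))

print(len(folders))  # 4000 simulation folders in total
```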
## Bispectrum shapes In Quijote-PNG we only consider models that have a primordial bispectrum, defined as $\langle \Phi(\mathbf{k}_1) \Phi(\mathbf{k}_2) \Phi(\mathbf{k}_3) \rangle = (2\pi)^3 \delta^{(3)}(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)B_{\Phi}(k_1,k_2,k_3)~,$ where $$\Phi(\mathbf{k})$$ is the primordial potential. We consider four different shapes for the primordial bispectrum: 1. Local. The local shape can be characterized by $B^{\mathrm{local}}_{\Phi}(k_1,k_2,k_3) = 2 f_{\mathrm{NL}}^{\mathrm{local}} P_\Phi(k_1)P_\Phi(k_2)+ \text{ 2 perm.}$ 1. Equilateral. The equilaterial shape is described by $\begin{split} B^{\mathrm{equil.}}_{\Phi}(k_1,k_2,k_3) = 6 f_{\mathrm{NL}}^{\mathrm{equil.}}\Big[- P_\Phi(k_1)P_\Phi(k_2)+\text{ 2 perm.} \\ -2 \left( P_\Phi(k_1)P_\Phi(k_2)P_\Phi(k_3) \right)^{\frac{2}{3}} + P_\Phi(k_1)^{\frac{1}{3}}P_\Phi(k_2)^{\frac{2}{3}}P_\Phi(k_3) + \text{5 perm.}\Big]\end{split}$ 1. Orthogonal CMB. The orthogonal CMB template is given by $\begin{split}B^{\mathrm{ortho-CMB}}_\Phi(k_1,k_2,k_3) = 6 f_{\mathrm{NL}}^{\mathrm{ortho-CMB}}\Big[-3 P_\Phi(k_1)P_\Phi(k_2) \\ +\text{ 2 perm.} -8 \left( P_\Phi(k_1)P_\Phi(k_2)P_\Phi(k_3) \right)^{\frac{2}{3}} + 3P_\Phi(k_1)^{\frac{1}{3}}P_\Phi(k_2)^{\frac{2}{3}}P_\Phi(k_3) + \text{5 perm.}\Big]\end{split}$ 1. Orthogonal LSS. 
The orthogonal LSS template is given by

$\begin{split}B^{\mathrm{ortho-LSS}}_\Phi(k_1,k_2,k_3) = \\ 6 f_{\mathrm{NL}}^{\mathrm{ortho-LSS}} \left(P_\Phi(k_1)P_\Phi(k_2)P_\Phi(k_3)\right)^{\frac{2}{3}}\Bigg[ \\ -\left(1+\frac{9p}{27}\right) \frac{k_3^2}{k_1k_2} + \textrm{2 perms} +\left(1+\frac{15p}{27}\right) \frac{k_1}{k_3} \\ + \textrm{5 perms} -\left(2+\frac{60p}{27}\right) \\ +\frac{p}{27}\frac{k_1^4}{k_2^2k_3^2} + \textrm{2 perms} -\frac{20p}{27}\frac{k_1k_2}{k_3^2}+ \textrm{2 perms} \\ -\frac{6p}{27}\frac{k_1^3}{k_2k_3^2} + \textrm{5 perms}+\frac{15p}{27}\frac{k_1^2}{k_3^2} + \textrm{5 perms}\Bigg]\end{split}$

## Initial conditions

The initial conditions of the Quijote-PNG simulations have been generated using a modified version of the code described in Scoccimarro et al. 2012. Our modified version of the code is publicly available here. The initial conditions of a given simulation can be found in a folder called ICs, which contains:

- ics.X. These are the initial conditions that contain the particle positions, velocities, and IDs. These are Gadget format-II snapshots and can be read as described in Snapshots. X can go from 0 to 127.
- 2LPT.params. This is the parameter file used to generate the initial conditions.
- logIC. The output of the initial conditions generator code.

The value of the initial random seed for simulation $$i$$ is $$10\times i+5$$ (this can be found in the 2LPT.params file), independently of the shape and $$f_{\rm NL}$$ value. For instance, the value of the initial random seed for OR_CMB_p/100 and OR_CMB_m/100 is 1005. This choice enables the calculation of partial derivatives, needed for Fisher matrix calculations. For the details about the linear matter power spectrum used for these simulations see Linear power spectra.

## Snapshots

We keep snapshots at redshifts 0, 0.5, 1, 2, and 3. The snapshots are saved as HDF5 files, and they can be read in the standard way (see Snapshots for details).
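The seed rule described above ($$10\times i+5$$) is simple enough to state in code; the function name below is mine, not part of the Quijote tools:

```python
def initial_seed(i: int) -> int:
    """Initial random seed of realization i (recorded in 2LPT.params).

    The seed depends only on i, not on the shape or the sign of f_NL,
    so the +100 and -100 runs of a given realization share initial
    conditions, which is what makes the finite-difference partial
    derivatives used in Fisher-matrix calculations well defined.
    """
    return 10 * i + 5

print(initial_seed(100))  # 1005, as quoted for OR_CMB_p/100 and OR_CMB_m/100
```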
## Halo catalogues

We store Friends-of-Friends (FoF) halo catalogues for each snapshot of each simulation in Quijote-PNG. We refer the user to Halo catalogues for details on how to read these files.

## Density fields

To facilitate the post-processing of the data we also provide 3D grids containing the overdensity, $$\delta(x)=\rho(x)/\bar{\rho}-1$$, for each redshift of all PNG simulations. We refer the user to Density fields for details on how to read these files.

## Team

Quijote-PNG was developed in 2022 by:

- William Coulton (CCA, USA)
- Francisco Villaescusa-Navarro (CCA/Princeton, USA)
- Dionysios Karagiannis (Cape Town, South Africa)
- Drew Jamieson (MPA, Germany)
https://freedommathdance.blogspot.com/2014/01/radon-measures-form-sheaf-for-natural.html
## Thursday, January 9, 2014

### Radon measures form a sheaf for a natural Grothendieck topology on topological spaces

First post of the year, so let me wish all of you a happy new year!

Almost two years ago, Antoine Ducros and I released a preprint about differential forms and currents on Berkovich spaces. We then embarked on revising it thoroughly; unfortunately, we had to correct a lot of inaccuracies, some of them a bit daunting. We made a lot of progress and we now have a much clearer picture in mind. Fortunately, all of the main ideas remain the same. A funny thing emerged, which I want to explain in this blog.

One of our mottos was to define sheaves of differential forms, or of currents. Those differential forms were defined in two steps: by definition, they are locally given by tropical geometry, so we defined a presheaf of tropical forms, and passed at once to the associated sheaf. What we observed recently is that it is worth spending some time studying the presheaf of tropical forms. Also, Grothendieck topologies play such an important role in analytic geometry over non-archimedean fields; this is obvious for classical rigid spaces, but they are also important in Berkovich geometry, in particular if you want to care about possibly non-good spaces whose points may not have a neighborhood isomorphic to an affinoid space. So it was natural to sheafify the presheaf of tropical forms for the G-topology, giving rise to a G-sheaf of G-forms.

Now, every differential form of maximal degree $\omega$ on a Berkovich space $X$ gives rise to a measure on the topological space underlying $X$. Our proof of this is a bit complicated, and was made more complicated by the fact that we first tried to define the integral $\int_X \omega$, then defined $\int_X f\omega$ for every smooth function $f$, and then got $\int_X f\omega$ for every continuous function with compact support $f$ by approximation, using a version of the Stone-Weierstrass theorem in our context.
In the new approach, we directly concentrate on the measure that we want to construct. For G-forms, this requires gluing measures defined locally for the G-topology. As it turns out (we finished writing down the required lemmas today), this is quite nice.

Since Berkovich spaces are locally compact, we may restrict ourselves to classical measure theory on locally compact spaces. However, we may not make any metrizability assumption, nor any countability assumption, since the most basic Berkovich spaces lack those properties. Assume that the ground non-archimedean field $k$ is the field $\mathbf C((t))$ of Laurent series over the field $\mathbf C$ of complex numbers. Then the projective line $\mathrm P^1$ over $k$ is not metrizable, and the complement of its "Gauss point" $\gamma$ has uncountably many connected components (in bijection with the projective line over $\mathbf C$). Similarly, the complement of the Gauss point in the projective plane $\mathrm P^2$ over $k$ is connected, but is not countable at infinity, hence not paracompact.

As always, there are two points of view on measure theory: Borel measures (countably additive set functions on the $\sigma$-algebra of Borel sets) and Radon measures (linear forms on the vector space of continuous compactly supported functions). By the theorem of Riesz, they are basically equivalent: locally finite, compact inner regular Borel measures are in canonical bijection with Radon measures. Unfortunately, the basic literature is not very nice on that topic; for example, Rudin's book constructs an outer regular Borel measure which may not be inner regular, while for us, the behavior on compact sets is really the relevant one.

Secondly, we need to glue Radon measures defined on the members of a G-cover of our Berkovich space $X$. This is possible because Radon measures on a locally compact topological space naturally form a sheaf for a natural Grothendieck topology!
Let $X$ be a locally compact topological space and let us consider the category of locally compact subspaces, with injections as morphisms. Radon measures can be restricted to a locally compact subspace, hence form a presheaf on that category. Let us decree that a family $(A_i)_{i\in I}$ of locally compact subspaces of a locally compact subspace $U$ is a B-cover (B is for Borel) if for every point $x\in U$, there exists a finite subset $J$ of $I$ such that $x\in A_i$ for every $i\in J$ and such that $\bigcup_{i\in J}A_i$ is a neighborhood of $x$. B-covers form a G-topology on the category of locally compact subsets, for which Radon measures form a sheaf! In other words, given Radon measures $\mu_i$ on members $A_i$ of a B-cover of $X$ such that the restrictions to $A_i\cap A_j$ of $\mu_i$ and $\mu_j$ coincide, for all $i,j$, there exists a unique Radon measure on $X$ whose restriction to $A_i$ equals $\mu_i$, for every $i$.

This said, the proof (once written down carefully) is not a big surprise, nor especially difficult, but I found it nice to get a natural instance of a sheaf for a Grothendieck topology within classical analysis.
https://sebspain.co.uk/2014/08/07/graphics-for-scientists-part-4-vector-file-formats.html
# Graphics for Scientists - Part 4 - Vector File Formats

07 Aug 2014 | tags: science, graphics

This is part of a series of posts about producing publication quality graphics. See here for the introduction and links to other parts.

In part three I went through the major raster image formats. Here I'll run through the most common vector ones and some advantages/disadvantages.

## Vector Formats

### EPS (.eps)

Encapsulated PostScript has been a standard "publication/print quality" vector format for many years but it is not well known outside of professional publishing. The EPS format is now very old (it is a derivative of PostScript, released in 1982) so is not very advanced but still maintains strong support from publishers.

- widely accepted industry standard
- poor software support*
- no support for layers
- no support for transparency

### PDF (.pdf)

You'll be familiar with Portable Document Format (PDF), particularly for written documents, but PDF is also a very good vector format.

- perceived to be difficult to edit/create (they aren't!)
- poor software support*

### SVG (.svg)

Scalable Vector Graphics (SVG) are a relative newcomer. Originally designed as an open standard vector format for the web, they are becoming more common for general graphics work, although I don't know of any publishers that accept them directly. In essence they are text files that are interpreted into an image, so they can easily be compressed in a lossless manner and even searched. For example, the brief code below defines the image beneath it. However, in general you create them with a graphics package.
```xml
<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="160px" height="160px">
  <rect x="5" y="5" width="150" height="150" fill="rgb(127, 127, 127)"
        stroke-width="5" stroke="rgb(0, 0, 0)" />
  <circle cx="80" cy="80" r="50" stroke="rgb(0,0,0)" stroke-width="1"
          fill="rgb(127,127,255)" />
</svg>
```

- open standard resulting in high uptake
- can be viewed with most web browsers
- supports transparency etc.
- not commonly accepted by publishers
- some compatibility issues when opening files created in different software
- no "real" layers
- poor software support*

### WMF/EMF (.wmf/.emf)

Windows/Enhanced Metafiles are Microsoft's own vector format.

- good software support
- easily created without specialist software (MS Office will do)
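Since an SVG is just XML, it can be generated or sanity-checked with ordinary XML tooling, no graphics package required; a small Python sketch using the snippet above:

```python
import xml.etree.ElementTree as ET

# The SVG markup from the article: a grey square with a blue circle on top.
svg = """<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="160px" height="160px">
  <rect x="5" y="5" width="150" height="150" fill="rgb(127, 127, 127)"
        stroke-width="5" stroke="rgb(0, 0, 0)" />
  <circle cx="80" cy="80" r="50" stroke="rgb(0,0,0)" stroke-width="1"
          fill="rgb(127,127,255)" />
</svg>"""

root = ET.fromstring(svg)          # parses, so the markup is well-formed XML
ns = "{http://www.w3.org/2000/svg}"
print([child.tag.replace(ns, "") for child in root])  # ['rect', 'circle']
```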
https://www.nber.org/people/geert_ridder
NATIONAL BUREAU OF ECONOMIC RESEARCH

# Geert Ridder

Department of Economics
University of Southern California
Kaprielian Hall
Los Angeles, CA 90089
Tel: 213/740-3511
Fax: 213/740-8543

Institutional Affiliation: University of Southern California

## NBER Working Papers and Publications

March 2016

Identification and Efficiency Bounds for the Average Match Function under Conditionally Exogenous Matching
with Bryan S. Graham, Guido W. Imbens: w22098

Consider two heterogeneous populations of agents who, when matched, jointly produce an output, Y. For example, teachers and classrooms of students together produce achievement, parents raise children, whose life outcomes vary in adulthood, assembly plant managers and workers produce a certain number of cars per month, and lieutenants and their platoons vary in unit effectiveness. Let $W\in\mathbb{W}=\{w_1,\ldots,w_j\}$ and $X\in\mathbb{X}=\{x_1,\ldots,x_k\}$ denote agent types in the two populations. Consider the following matching mechanism: take a random draw from the $W=w_j$ subgroup of the first population and match her with an independent random draw fro...

October 2010

Measuring the Effects of Segregation in the Presence of Social Spillovers: A Nonparametric Approach
with Bryan S. Graham, Guido W. Imbens: w16499

In this paper we nonparametrically analyze the effects of reallocating individuals across social groups in the presence of social spillovers. Individuals are either 'high' or 'low' types. Own outcomes may vary with the fraction of high types in one's social group. We characterize the average outcome and inequality effects of small increases in segregation by type. We also provide a measure of average spillover strength. We generalize the setup used by Benabou (1996) and others to study sorting in the presence of social spillovers by incorporating unobserved individual- and group-level heterogeneity. We relate our reallocation estimands to this theory.
For each estimand we provide conditions for nonparametric identification, propose estimators, and characterize their large sample properties...

April 2009

Complementarity and Aggregate Implications of Assortative Matching: A Nonparametric Analysis
with Bryan S. Graham, Guido W. Imbens: w14860

This paper presents methods for evaluating the effects of reallocating an indivisible input across production units, taking into account resource constraints by keeping the marginal distribution of the input fixed. When the production technology is nonseparable, such reallocations, although leaving the marginal distribution of the reallocated input unchanged by construction, may nonetheless alter average output. Examples include reallocations of teachers across classrooms composed of students of varying mean ability. We focus on the effects of reallocating one input, while holding the assignment of another, potentially complementary, input fixed. We introduce a class of such reallocations -- correlated matching rules -- that includes the status quo allocation, a random allocation, and bot...

Published: Bryan S. Graham & Guido W. Imbens & Geert Ridder, 2014. "Complementarity and aggregate implications of assortative matching: A nonparametric analysis," Quantitative Economics, Econometric Society, vol. 5, pages 29-66, 03.

National Bureau of Economic Research, 1050 Massachusetts Ave., Cambridge, MA 02138; 617-868-3900; email: info@nber.org
http://imi.cas.sc.edu/papers/105/
IMI Interdisciplinary Mathematics Institute, College of Arts and Sciences

Preprint Series by Year

## The valleys of shadow in Schrödinger landscape

A 2003 Preprint by K. Oskolkov

- 2003:11

The probability density function is studied for the one-dimensional quantum particle whose motion is defined by the Schrödinger equation $$\frac{\partial\psi}{\partial{t}}=\frac{1}{2\pi{i}}\frac{\partial^2\psi}{\partial{x^2}},\qquad\psi(f;t,x)\Big|_{t=0}=f(x),$$ with the periodic initial data $f,\ f(x+1)\equiv f(x)$. For $f$ of the type $f_\varepsilon(x):=c(\varepsilon)e^{-\frac{\langle{x}\rangle^2}{\varepsilon}}$, with $\varepsilon$ a small positive parameter and $\langle{x}\rangle$ the distance from $x$ to the nearest integer, Daniel Dix conducted a numerical experiment of 3d-graphing the density $|\psi(f_\varepsilon;t,x)|^2$. Visually, the graph resembled a mountain landscape scarred by a peculiar discrete collection of deep rectilinear canyons, or "the valleys of shadow". We prove that this phenomenon is common for a wide set of families of the initial data $\{f_\varepsilon\}$ such that the initial densities $\{|f_\varepsilon|^2\}$ approximate, as $\varepsilon\to0$, the periodic Dirac delta function: the Radon transforms of $|\psi(f_\varepsilon)|^2$ are indeed small on a definite collection of lines in the plane $(t,x)$. A complete description of such collections is established, and applications to the Helmholtz equation are discussed.

© Interdisciplinary Mathematics Institute | The University of South Carolina Board of Trustees
https://nhigham.com/2015/11/
# Jack Williams (1943–2015) Jack Williams passed away on November 13th, 2015, at the age of 72. Jack obtained his PhD from the University of Oxford Computing Laboratory in 1968 and spent two years as a Lecturer in Mathematics at the University of Western Australia in Perth. He was appointed Lecturer in Numerical Analysis at the University of Manchester in 1971. He was a member of the Numerical Analysis Group (along with Christopher Baker, Ian Gladwell, Len Freeman, George Hall, Will McLewin, and Joan Walsh) that, together with numerical analysis colleagues at UMIST, took the subject forward at Manchester from the 1970s onwards. Jack’s main research area was approximation theory, focusing particularly on Chebyshev approximation of real and complex functions. He also worked on stiff ordinary differential equations (ODEs). His early work on Chebyshev approximation in the complex plane by polynomials and rationals was particularly influential and is among his most-cited. Example contributions are J. Williams (1972). Numerical Chebyshev approximation by interpolating rationals. Math. Comp., 26(117), 199–206. S. Ellacott and J. Williams (1976). Rational Chebyshev approximation in the complex plane. SIAM J. Numer. Anal., 13(3), 310–323. His later work on discrete Chebyshev approximation was of particular interest to me as it involved linear systems with Chebyshev-Vandermonde coefficient matrices, which I, and a number of other people, worked on a few years later: M. Almacany, C. B. Dunham and J. Williams (1984). Discrete Chebyshev approximation by interpolating rationals. IMA J. Numer. Anal. 4, 467–477. On the differential equations side, Jack wrote the opening chapter “Introduction to discrete variable methods” of the proceedings of a summer school organized jointly by the University of Liverpool and the University of Manchester in 1975 and published in G. Hall and J. M. 
Watt, eds, Modern Numerical Methods for Ordinary Differential Equations, Oxford University Press, 1976. This book's timely account of the state of the art, covering stiff and nonstiff problems, boundary value problems, delay-differential equations, and integral equations, was very influential, as indicated by its 549 citations on Google Scholar. Jack contributed articles on ODEs and PDEs to three later Liverpool–Manchester volumes (1979, 1981, 1986).

Jack's interests in approximation theory and differential equations were combined in his later work on parameter estimation in ODEs, where a theory of Chebyshev approximation applied to solutions of parameter-dependent ODEs was established, as exemplified by

J. Williams and Z. Kalogiratou (1993). Least squares and Chebyshev fitting for parameter estimation in ODEs. Adv. Comp. Math., 1(3), 357–366.

More details on Jack's publications can be found at his MathSciNet author profile (subscription required). Some of his later unpublished technical reports from the 1990s can be accessed from the list of Numerical Analysis Reports of the Manchester Centre for Computational Mathematics.

Jack spent a sabbatical year in the Department of Computer Science at the University of Toronto, 1976–1977, at the invitation of Professor Tom Hull. Over a number of years several visits between Manchester and Toronto were made in both directions by numerical analysts in the two departments.

It's a fact of academic life that seminars can be boring and even impenetrable. Jack could always be relied on to ask insightful questions, whatever the topic, thereby improving the experience of everyone in the room.

Jack was an excellent lecturer, who taught at all levels from first year undergraduate through to Masters courses. He was confident, polished, and entertaining, and always took care to emphasize practicalities along with the theory. He had the charisma—and the loud voice!—to keep the attention of any audience, no matter how large it might be.
He studied Spanish at the Instituto Cervantes in Manchester, gaining an A-level in 1989 and a Diploma Basico de Espanol Como Lengua Extranjera from the Spanish Ministerio de Educación y Ciencia in 1992. He subsequently set up a four-year degree in Mathematics with Spanish, linking Manchester with Universidad Complutense de Madrid. Jack was promoted to Senior Lecturer in 1996 and took early retirement in 2000. He continued teaching in the department right up until the end of the 2014/2015 academic year. I benefited greatly from Jack’s advice and support both as a postgraduate student and when I began as a lecturer. My office was next to his, and from time to time I would hear strains of classical guitar, which he studied seriously and sometimes practiced during the day. For many years I shared pots of tea with him in the Senior Common Room at the refectory, where a group of mathematics colleagues met for lunchtime discussions. Jack was gregarious, ever cheerful, and a good friend to many of his colleagues. He will be sadly missed. # Faster SVD via Polar Decomposition The singular value decomposition (SVD) is one of the most important tools in matrix theory and matrix computations. It is described in many textbooks and is provided in all the standard numerical computing packages. I wrote a two-page article about the SVD for The Princeton Companion to Applied Mathematics, which can be downloaded in pre-publication form as an EPrint. The polar decomposition is a close cousin of the SVD. While it is probably less well known it also deserves recognition as a fundamental theoretical and computational tool. The polar decomposition is the natural generalization to matrices of the polar form $z = r \mathrm{e}^{\mathrm{i}\theta}$ for complex numbers, where $r\ge0$, $\mathrm{i}$ is the imaginary unit, and $\theta\in(-\pi,\pi]$. 
The generalization to an $m\times n$ matrix is $A = UH$, where $U$ is $m\times n$ with orthonormal columns and $H$ is $n\times n$ and Hermitian positive semidefinite. Here, $U$ plays the role of $\mathrm{e}^{\mathrm{i}\theta}$ in the scalar case and $H$ the role of $r$. It is easy to prove existence and uniqueness of the polar decomposition when $A$ has full rank. Since $A = UH$ implies $A^*A = HU^*UH = H^2$, we see that $H$ must be the Hermitian positive definite square root of the Hermitian positive definite matrix $A^*A$. Therefore we set $H = (A^*A)^{1/2}$, after which $U = AH^{-1}$ is forced. It just remains to check that this $U$ has orthonormal columns: $U^*U = H^{-1}A^*AH^{-1} = H^{-1}H^2H^{-1} = I$. Many applications of the polar decomposition stem from a best approximation property: for any $m\times n$ matrix $A$ the nearest matrix with orthonormal columns is the polar factor $U$, for distance measured in the 2-norm, the Frobenius norm, or indeed any unitarily invariant norm. This result is useful in applications where a matrix that should be unitary turns out not to be so because of errors of various kinds: one simply replaces the matrix by its unitary polar factor. However, a more trivial property of the polar decomposition is also proving to be important. Suppose we are given $A = UH$ and we compute the eigenvalue decomposition $H = QDQ^*$, where $D$ is diagonal with the eigenvalues of $H$ on its diagonal and $Q$ is a unitary matrix of eigenvectors. Then $A = UH = UQDQ^* = (UQ)DQ^* \equiv P\Sigma Q^*$ is an SVD! My PhD student Pythagoras Papadimitriou and I proposed using this relation to compute the SVD in 1994, and obtained speedups by a factor six over the LAPACK SVD code on the Kendall Square KSR1, a shared memory parallel machine of the time. Yuji Nakatsukasa and I recently revisited this idea. 
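This polar-to-SVD relation is easy to check numerically. The sketch below builds the polar factors naively from an eigendecomposition of $A^*A$ (adequate for a small well-conditioned example, though not how one would compute them in practice) and then assembles the SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))               # full rank with probability 1

# H = (A^T A)^{1/2} from the eigendecomposition of A^T A, then U = A H^{-1}.
d, V = np.linalg.eigh(A.T @ A)
H = V @ np.diag(np.sqrt(d)) @ V.T
U = A @ np.linalg.inv(H)

# Eigendecomposition of the Hermitian polar factor turns A = UH into an SVD:
# H = Q D Q^T  gives  A = (UQ) D Q^T.
lam, Q = np.linalg.eigh(H)
P = U @ Q

print(np.allclose(P @ np.diag(lam) @ Q.T, A))  # True: a valid SVD of A
```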
In a 2013 paper in the SIAM Journal on Scientific Computing we showed that on modern architectures it is possible to compute the SVD via the polar decomposition in a way that is both numerically stable and potentially much faster than the standard Golub–Reinsch SVD algorithm. Our algorithm has two main steps.

1. Compute the polar decomposition by an accelerated Halley algorithm called QDWH devised by Nakatsukasa, Bai, and Gygi (2010), for which the main computational kernel is QR factorization.
2. Compute the eigendecomposition of the Hermitian polar factor by a spectral divide and conquer algorithm. This algorithm repeatedly applies QDWH to the current block to compute an invariant subspace corresponding to the positive or negative eigenvalues and thereby divides the problem into two smaller pieces.

The polar decomposition is fundamental to both steps of the algorithm. While the total number of flops required is greater than for the standard SVD algorithm, the new algorithm has lower communication costs and so should be faster on parallel computing architectures once communication costs are sufficiently greater than the costs of floating point arithmetic. Sukkari, Ltaief, and Keyes have recently shown that on a multicore architecture enhanced with multiple GPUs the new QDWH-based algorithm is indeed faster than the standard approach.

Another interesting feature of the new algorithm is that it has been found experimentally to have better accuracy. The Halley iteration that underlies the QDWH algorithm for the polar decomposition has cubic convergence. A version of QDWH with order of convergence seventeen, which requires just two iterations to converge to double-precision accuracy, has been developed by Nakatsukasa and Freund (2015), and is aimed particularly at parallel architectures. This is a rare example of an iteration with a very high order of convergence actually being of practical use.
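To give a flavor of the iteration involved, here is the basic (unaccelerated) Halley iteration for the unitary polar factor, written naively with explicit inverses; QDWH itself adds dynamically chosen weights and an inverse-free QR-based implementation, so this is only a sketch of the fixed point being computed:

```python
import numpy as np

def polar_halley(A, iters=30):
    """Unitary polar factor of a full-rank square A via the unscaled Halley
    iteration  X <- X (3I + X^T X)(I + 3 X^T X)^{-1},  which converges
    cubically to the polar factor U.  A sketch only: QDWH accelerates this
    with dynamical weights and avoids the explicit inverse."""
    X = A / np.linalg.norm(A, 2)        # scale so all singular values are <= 1
    I = np.eye(A.shape[1])
    for _ in range(iters):
        G = X.T @ X
        X = X @ (3 * I + G) @ np.linalg.inv(I + 3 * G)
    return X

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
U = polar_halley(A)
H = U.T @ A                             # then A = U H with H symmetric PSD
```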
# Numerical Linear Algebra and Matrix Analysis

Matrix analysis and numerical linear algebra are two very active, and closely related, areas of research. Matrix analysis can be defined as the theory of matrices with a focus on aspects relevant to other areas of mathematics, while numerical linear algebra (also called matrix computations) is concerned with the construction and analysis of algorithms for solving matrix problems, as well as related topics such as problem sensitivity and rounding error analysis. My article Numerical Linear Algebra and Matrix Analysis for The Princeton Companion to Applied Mathematics gives a selective overview of these two topics. The table of contents is as follows.

1 Nonsingularity and Conditioning
2 Matrix Factorizations
3 Distance to Singularity and Low-Rank Perturbations
4 Computational Cost
5 Eigenvalue Problems
5.1 Bounds and Localization
5.2 Eigenvalue Sensitivity
5.3 Companion Matrices and the Characteristic Polynomial
5.4 Eigenvalue Inequalities for Hermitian Matrices
5.5 Solving the Non-Hermitian Eigenproblem
5.6 Solving the Hermitian Eigenproblem
5.7 Computing the SVD
5.8 Generalized Eigenproblems
6 Sparse Linear Systems
7 Overdetermined and Underdetermined Systems
7.1 The Linear Least Squares Problem
7.2 Underdetermined Systems
7.3 Pseudoinverse
8 Numerical Considerations
9 Iterative Methods
10 Nonnormality and Pseudospectra
11 Structured Matrices
11.1 Nonnegative Matrices
11.2 M-Matrices
12 Matrix Inequalities
13 Library Software
14 Outlook

# Corless and Fillion’s A Graduate Introduction to Numerical Methods from the Viewpoint of Backward Error Analysis

I acquired this book when it first came out in 2013 and have been dipping into it from time to time ever since. At 868 pages long, the book contains a lot of material and I have only sampled a small part of it. In this post I will not attempt to give a detailed review but rather will explain the distinctive features of the book and why I like it.
As the title suggests, the book is pitched at graduate level, but it will also be useful for advanced undergraduate courses. The book covers all the main topics of an introductory numerical analysis course: floating point arithmetic, interpolation, nonlinear equations, numerical linear algebra, quadrature, numerical solution of differential equations, and more. In order to stand out in the crowded market of numerical analysis textbooks, a book needs to offer something different. This one certainly does. • The concepts of backward error and conditioning are used throughout—not just in the numerical linear algebra chapters. • Complex analysis, and particularly the residue theorem, is exploited throughout the book, with contour integration used as a fundamental tool in deriving interpolation formulas. I was pleased to see section 11.3.2 on the complex step approximation to the derivative of a real-valued function, which provides an interesting alternative to finite differences. Appendix B, titled “Complex Numbers”, provides in just 8 pages excellent advice on the practical usage of complex numbers and functions of a complex variable that would be hard to find in complex analysis texts. For example, it has a clear discussion of branch cuts, making use of Kahan’s counterclockwise continuity principle (eschewing Riemann surfaces, which have “almost no traction in the computer world”), and makes use of the unwinding number introduced by Corless, Hare, and Jeffrey in 1996. • The barycentric formulation of Lagrange interpolation is used extensively, possibly for the first time in a numerical analysis text. This approach was popularized by Berrut and Trefethen in their 2004 SIAM Review paper, and my proof of the numerical stability of the formulas has helped it to gain popularity. Polynomial interpolation and rational interpolation are both covered. • Both numerical and symbolic computation are employed—whichever is the most appropriate tool for the topic or problem at hand. 
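The complex step approximation mentioned above is worth a quick illustration (my example, not one from the book): for an analytic real-valued function $f$, $f'(x) \approx \operatorname{Im} f(x + \mathrm{i}h)/h$, and because the formula involves no subtraction of nearby quantities the step $h$ can be taken absurdly small:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    # No subtractive cancellation, unlike finite differences,
    # so h can be tiny and the result is accurate to machine precision.
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)
x0 = 0.7
approx = complex_step_derivative(f, x0)
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))  # derivative computed by hand
```

With a forward difference, a step of $10^{-20}$ would give garbage; here it gives essentially full double-precision accuracy.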
Corless is well known for his contributions to symbolic computation and to Maple, but he is equally at home in the world of numerics. Chebfun is also used in a number of places. In addition, section 11.7 gives a 2-page treatment of automatic differentiation. This is a book that one can dip into at any page and quickly find something that is interesting and beyond standard textbook content. Not many numerical analysis textbooks include the Lambert W function, a topic on which Corless is uniquely qualified to write. (I note that Corless and Jeffrey wrote an excellent article on the Lambert W function for the Princeton Companion to Applied Mathematics.) And not so many use pseudospectra. I like Notes and References sections and this book has lots of them, with plenty of detail, including references that I was unaware of. As regards the differential equation content, it includes initial and boundary value problems for ODEs, as well as delay differential equations (DDEs) and PDEs. The DDE chapter uses the MATLAB dde23 and ddesd functions for illustration and, like the other differential equation chapters, discusses conditioning. The book would probably have benefited from editing to reduce its length. The index is thorough, but many long entries need breaking up into subentries. Navigation of the book would be easier if algorithms, theorems, definitions, remarks, etc., had been numbered in one sequence instead of as separate sequences. Part of the book’s charm is its sometimes unexpected content. How many numerical analysis textbooks recommend reading a book on the programming language Forth (a small, reverse Polish notation-based language popular on microcomputers when I was a student)? And how many would point out the 1994 “rediscovery” of the trapezoidal rule in an article in the journal Diabetes Care? (Google “Tai’s model” for some interesting responses to that article.)
I bought the book from SpringerLink via the MyCopy feature, whereby any book available electronically via my institution’s subscription can be bought in (monochrome) hard copy for 24.99 euros, dollars, or pounds (the same price in each currency!). I give the last word to John Butcher, who concludes the Foreword with “I love this book.”

# Publication Peculiarities: Sequences of Papers

This is the third post in my sequence on publication peculiarities. It is not unusual to see a sequence of related papers with similar titles, sometimes labelled “Part I”, “Part II”, etc. Here I present two sequences of papers with intriguing titles and interesting stories behind them.

## Computing the Logarithm of a Complex Number

The language Algol 60 did not have complex numbers as a built-in data type, so it was necessary to write routines to implement complex arithmetic. The following sequence of papers appeared in Communications of the ACM in the 1960s and concerns writing an Algol 60 code to evaluate the logarithm of a complex number.

J. R. Herndon (1961). Algorithm 48: Logarithm of a complex number. Comm. ACM, 4(4), 179.

A. P. Relph (1962). Certification of Algorithm 48: Logarithm of a complex number. Comm. ACM, 5(6), 347.

M. L. Johnson and W. Sangren (1962). Remark on Algorithm 48: Logarithm of a complex number. Comm. ACM, 5(7), 391.

D. S. Collens (1964). Remark on remarks on Algorithm 48: Logarithm of a complex number. Comm. ACM, 7(8), 485.

D. S. Collens (1964). Algorithm 243: Logarithm of a complex number: Rewrite of Algorithm 48. Comm. ACM, 7(11), 660.

“Remark on remarks”, “rewrite”—what are the reasons for this sequence of papers? The first paper, by Herndon, gives a short code (7 lines in total) that uses the arctan function to find the argument of a complex number $x+iy$ as $\arctan(y/x)$.
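For comparison, here is how the computation is done robustly today (my own sketch, not any of the published Algol codes): the two-argument arctangent sidesteps both the division by zero at $x=0$ and the quadrant and range problems that, as the subsequent corrections show, plagued Algorithm 48.

```python
import cmath
import math

def complex_log(x, y):
    # log|x + iy| + i*arg(x + iy); atan2 handles x == 0 correctly and
    # returns the argument in the principal range, including pi for the
    # negative real axis.
    return math.log(math.hypot(x, y)) + 1j * math.atan2(y, x)

# Agrees with the library implementation, including on the negative real axis.
for x, y in [(1.0, 1.0), (-1.0, 0.0), (0.0, 2.0), (-3.0, -4.0)]:
    assert cmath.isclose(complex_log(x, y), cmath.log(complex(x, y)))
```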
Relph notes that the code fails when the real part is zero and that, because it adds $\pi$ to the $\arctan$, the imaginary part is in the wrong range: for the principal logarithm it should lie in $(-\pi,\pi]$. Moreover, the original code incorrectly uses log (log to base 10) instead of ln (the natural logarithm). It would appear that at this time codes were not always run and tested before publication, presumably because of the lack of an available compiler. Indeed Herndon’s paper was published in the April 1961 issue of CACM, and the first Algol 60 compilers had only become available the year before, according to this Wikipedia timeline. Johnson and Sangren give more discussion about division by zero and obtaining the correct signs. In his first paper, Collens notes that the Johnson and Sangren code wrongly gives $\log 0 = 0$ and has a missing minus sign in one statement. Finally, Collens gives a rewritten algorithm that addresses the previously noted deficiencies. It appears to have been run, since some output is shown. This sequence of papers from the early days of digital computing emphasizes that even for what might seem to be a trivial problem it is not straightforward to design correct, reliable algorithms and codes. I am working on logarithms and other multivalued functions of matrices, for which many additional complications are present.

## Slow Manifolds

Edward Lorenz is well known for introducing the Lorenz equations, discovering the Lorenz attractor, and describing the “butterfly effect”. His sequence of papers

E. N. Lorenz (1986). On the existence of a slow manifold. J. Atmos. Sci., 43(15), 1547–1557.

E. N. Lorenz and V. Krishnamurthy (1987). On the nonexistence of a slow manifold. J. Atmos. Sci., 44(20), 2940–2950.

E. N. Lorenz (1992). The slow manifold—What is it? J. Atmos. Sci., 49(24), 2449–2451.

seems to suggest a rather confused line of research! However, inspection of the papers reveals the reasoning behind the choice of titles.
The first paper discusses whether or not a slow manifold exists and shows that this question is nontrivial. The second paper shows that a slow manifold does not exist for one particular model. The third paper resolves an apparent contradiction: another author’s 1991 proof of the existence of a slow manifold for the same model does not conflict with the second paper’s result, because the two works use different definitions of slow manifold.
# space/time Things practiced: going on, making a life the only way anyone can, step by step I shouldn’t be but I’m dazed by how much fits into ten years and how little, how much has changed and how everything is just the same. How I get used to it kills me. The way it’s night and then suddenly it’s not and I’m in the full sun of a new day, and I just… stop noticing it. Against all reason, I plan for a future I assume will come, I believe the person I love is always coming back. Like, look how miraculous it is to be here. That each next thing is still the best possible thing, that where I’m going is where I want to go, all over again. That I’m stupidly rebuilding my stupid little dreams. It’s easy to take someone’s life in your hands but not so easy to study, to keep on trying when we mess up, to choose something over and over and believe, truly, that repetition brings transcendence. But the opposite of taking things for granted is also stupid. Like being outside on a very beautiful day in the late summer and stubbornly trying to find meaning it doesn’t need. We don’t have to be good. We can get up anyway, we’re allowed to make the old things new. Throw up a sturdy structure against the swelling dark, throw on a light fleece, and huddle against the cold. And in the morning I wake up thinking this is it, this time, this time, this time. This time I know better, this time I will do it right. # death / desolation Tools used: backpacks, maps, “a Sony point-and-shoot,” a growing obsession with the wilderness # Strange Loop 2021 talk: “Functional distributed systems beyond request/response” Things practiced: trying to give a talk while mostly unable to string sentences together, computers Tools used: the Paper app, a profusion of time + patience from those around me Got a chance to speak at Strange Loop this year, in person; the fact of a tech conference in all its normalcy like a miracle, that we’ve gotten through this year so far at all.
Sometimes survival feels like a great achievement and maybe all the cruelties and losses and things we’ve forgotten are just life. Anyway it’s about event-driven programming. Time washes us all away. # In the canyonlands / from the edge of the world Things practiced: not looking at screens, suicide postponed, photography I guess The Colorado Plateau has somehow, through a series of coincidences, survived as a coherent block since the Precambrian; nowhere else on Earth can we see such a history of the last two billion years. One couldn’t dream a place like this. The sky is open all the way; I only want to walk a little longer. Here’s a world far larger than our own, that stretches into the ancient past and continues into a future beyond human limits; here the landscape is infinite, its bounds beyond the horizon of human knowledge, its possibilities endless. Weird as hell. The desert has no center. It’s a riddle with no answer, a distant and uninhabitable planet. The desert waits outside, waits for us all: desolate and still and strange, grotesque in its shapes and colors and forms: canyon, butte, mesa, reef, fin, slot, dome, escarpment, pinnacle, maze, sand dune. The desert says nothing, it lies there like the bare skeleton of being, spare, sparse, austere, utterly useless, clean, pure, simple, untouchable by the human mind. The land is sand or sandstone, naked, monolithic, austere, unadorned, like the moon, or Mars. Here are canyons a thousand feet deep where you could fall for days on end, hoodoos pinnacled red and pink and beige, white and pink ice cream swirls of slickrock, labial outlines giving way to jumbles of rock, light surreal blurring the bounds of cloud and sky. Clean air to breathe; stillness, solitude, and space; a weight of time enough to let thought and feeling roam from here to the end of the world; finding something intimate in the lonely, primordial, remote, unfamiliar world. 
We’re let to roam; we’re let to breathe; suspended in this space between sunrise and sunset. It’s an inverse wilderness — instead of mountains, the land recedes; instead of scaling peaks, we descend into the Earth. The sky empties us so that our minds match the landscape, our thoughts sparse and simple. Shade and water are key; nothing else matters; consciousness is reduced to its essence. Rocks spreading glorious shade; the whole sky ours to write on; the vanishing point of the sun extinguishing time forever. We are alive here right now; all else is irrelevant; we are pleased as long as we have something to eat, our bodies, light to see by; our vision, touch, hearing, taste, smell are sharpened and heightened; I can’t sleep for being alive. The human heart has the capacity still to remember and say, forever. We learn how to be alone and at the edge of aloneness how to be found by the world. We’re left with a distilled essence of self; we dare a new unknown to find us here. Each stone and grain of sand exists in and for itself with a clarity untarnished by any suspicion of a different dimension. Only the sunlight holds things together, abundant, wasteful. The desert reveals itself nakedly, cruelly, with no meaning but its own existence. Life comes to a standstill, and we wait on the shore of time, free, temporarily, from the requirements of motion and progress, hope and despair. The clocks have gorged themselves on our time. The world is so alive, the strangeness and wonder of existence underlined; life not crowded upon life as in other places but scattered in spareness and simplicity, with space for each so that each living organism stands out bold and brave and vivid against the sand and rock. The moon hangs over the canyon, a tiny radiance in a dark place. Imagine you wake up with a second chance. We expect wilderness to be spectacular — that is, a spectacle outside ourselves, a scenic wonderland, scenery instead of nature.
The highest mountain, the largest geyser, the biggest tree, the deepest canyon. In that first moment in which we wake there is a slanting light, a small opening into the day, which closes the moment we begin our plans. What we can plan is too small for us to live. It’s absence that creates the feeling of wildness — the absence of roads, human intrusions, our preconception of what is sublime. Maybe we fear our capacity to feel, and so annihilate physically and symbolically all that might make us tender, silence the music and reduce beauty to a collection of objects we can curate and control. We homogenize the outdoors to homogenize our innards, erase our inner ugliness, destroy our selves. Maybe wilderness lives only in places without maps. We draw maps so that we don’t lose our way, but it’s only because the way was lost to us already that we did. The world cannot be lost, and the maps we’ve drawn ourselves can’t save us, can’t find the way again for us. We think of maps as pictures of reality, but they twist our minds away from what’s real, and create the situation they describe. Instead of helping us find our way they proscribe the way we can think of ourselves. People paying more attention to what other people tell them than to their own perception is the beginning of civilization. Perhaps in our homogenization of landscape, we’ve eliminated beauty from our lives. We stand at the edge of the fathomless crowd. If we can learn to love space as deeply as we’re obsessed with time now, we might discover a new way to live. The light insists on itself in the world; we want to be loved shamelessly, by the shameless. Sometimes we need to push ourselves into existence; to reach into areas where we might potentially fail. Sometimes it takes a sky to find a first slice of freedom in your own heart, someone has written something new in the ashes of your life. You’re not leaving, even as the light fades quickly. You’re arriving. 
Why not wake at dawn, after all is gone, and go on? Don’t stop; breathe in; breathe out; the world was made to be free in; the rocks framing, only for now, our horizon. In this place we leave everything we know behind. We’re still here, breathing and constellated, small and infinite and quiet. The sky is dimming so fast it seems alive and we have our whole lives ahead of us. # Building: How to build an outdoor climbing cube part 2 — the climbing surface Thing practiced: building the holiest wall (This is the second part of 3 — part 1 covers design, foundation, and framing; and part 3 will cover flooring, finishing touches, and routesetting.) ## So far There’s a frame, but no surface to mount climbing holds to. What next? Tools used for climbing surface (utility rating out of 5): • Stanley yellow panel carry handle (★★★★½) • putty knife (★★★★) • hand sanding block (★★) • rubber mallet (★★★) • 500 golf tees (★★★½) • paint rollers and tray (★★★★) • paint can opener (★★★★★) • paint stirrer (★★★½) • two Wal-Board Tools 8" taping knives (★★★★) • Wal-Board Tools 14" mud pan (★★★★) • a sacrificial sheet of 1" x 4' x 8' foam insulation (★★★★) • Makita 7.25" circular saw with Diablo 40-tooth blade (★★★★½) • tape measure (★★★★½) • regular Sharpies (★★★★) • metallic Sharpies (★★★★) • 2' level (★★★½) • Bosch 12V drill (★★★★½) • Black and Decker drill (★★) • Black and Decker 20V impact driver (★★★★½) • Fisch 7/16" Forstner bit (★★★★★) • standard drill bit and impact bit set (★★★★) • reciprocating saw • hammer (★★★) Other tools used: • whiteboard • 8x11 paper • several cameras • JBL Charge 3 (★★★★★) Parts list (climbing surface): • about 800 2-hole T-nuts from Escape Climbing with accompanying mounting screws • 11 sheets 3/4" x 4' x 8' marine-grade CDX plywood • 3 Simpson Strong-Tie galvanized heavy angles • 6 1/2" bolts/nuts/washers • 400 Deckmate #9 x 3" screws • 300 GRK #9 x 2.5" R4 screws • 40 GRK 5/16" x 4" RSS structural screws • 40 GRK 1/4" x 2.5" RSS structural screws • 2 
gallons KILZ 2 interior/exterior primer • 2 gallons Behr porch and patio paint (silver gray low-lustre enamel) • 7 gallons drywall mud • 15 pounds play sand • 3 lbs DAP Plastic Wood-X wood filler ## Planning ### 1. Deciding on the plywood to use Based mainly on the perceived safety of obtaining sheets mid-pandemic, we decided to go with 3/4"-thick marine-grade, pressure-treated plywood from Economy Lumber Oakland. Each sheet was a standard 4' x 8', and we decided to get 11 sheets to cover all our surfaces and leave some extra for adding volumes later. ### 2. Choosing the T-nut layout Climbing holds are mounted to a climbing surface (whether made of wood or not) with 3/8"-16 bolts threaded into 3/8"-16 T-nuts that are attached firmly to the climbing surface. Before getting started with preparing our climbing surface, we needed to decide how we would lay out our T-nuts. To do this, we looked around at other people’s builds, and drew alternatives out on sheets of 8.5” x 11” paper, and settled on a grid with holes sqrt(32) == 5.66” apart, diagonally (equivalent to rows of holes 8" apart horizontally, spaced 4" apart vertically, and offset horizontally by 4" per row). ### 3. Setting up the workspace Four feet by eight feet of plywood turn out to be way more cumbersome to handle in practice than they are in theory, so we needed to amass a few tools and clean out some space to make it possible to work with our plywood sheets. First, we needed to figure out a way to cut our sheets, without snagging our saw blade (i.e. if the sheets collapse in from two sawhorses supporting each short end) or wasting too much wood cutting into our supports (i.e. if we used long 2x6s to support the sheet along its length). After trying to come up with cleverer options, we decided to just get a cheap piece of 4' x 8' foam insulation from Home Depot to place under an entire sheet of plywood as we cut it, and be okay with shredding the insulation over the course of the project. 
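If you’d rather generate the drilling pattern than measure it out by hand, the grid from step 2 is easy to script. Here’s a hypothetical sketch (we did ours with a tape measure and Sharpie) that emits hole coordinates in inches for one 4' x 8' sheet:

```python
import math

# Hypothetical helper (not something we actually ran): hole positions for
# the diagonal T-nut grid -- rows 4" apart vertically, holes 8" apart
# within a row, with every other row offset horizontally by 4".
def tnut_positions(width_in, height_in):
    holes = []
    y, row = 0.0, 0
    while y <= height_in:
        x = 4.0 if row % 2 else 0.0
        while x <= width_in:
            holes.append((x, y))
            x += 8.0
        y += 4.0
        row += 1
    return holes

holes = tnut_positions(48, 96)          # one 4' x 8' sheet
# nearest holes on adjacent rows sit sqrt(4^2 + 4^2) = sqrt(32) ~ 5.66" apart
diagonal = math.dist((0.0, 0.0), (4.0, 4.0))
```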
This worked pretty well. Then, learning from the failures of the stubborn approach to post digging last time, we decided to purchase a sheet-goods lifter. This is a \$5 plastic handle-type thing that you use to make it possible to get both arms around a wide sheet to carry it safely. This, disappointingly, also worked pretty well. Then we spent some time moving lumber/other stuff around our home until there was a large-enough space to have three flat-sheet-of-plywood-size spaces: one for resting, one for marking and drilling, and one for one of: cutting, T-nut mounting, priming, texturing, or painting. ## Plywood ### 4. Cutting to size and filling in the surface with wood putty We tried to design the shape of our wall to use an intact sheet of plywood where possible, to minimize excess cutting. The whole climbing cube is pretty small, though, so there were only three sections that were 4' x 8' or larger flat — so there were only three sheets of plywood we could keep whole. For the rest of the climbing surface, we tried to close-pack our sections as tightly as possible to minimize material waste: To make the cuts, we placed the foam insulation directly underneath the sheet being cut, marked our cuts, then just ran the circular saw across the plywood — leaving cuts in the foam but a neat and unburred plywood cut, and no cut-up sawhorses. Because we were using CDX plywood (which has faces of quality C and D — not good), there were significant flaws and gouges in most of the sheets. We tried our best to fill these with wood filler, then sand the surface down to be reasonably flat (but not perfect — we knew that we’d still be texturing and painting on top). ### 5. Marking and drilling After each sheet was cut to size and roughly-flat, we marked and drilled holes for our T-nuts. We tried to stick to the 5.66" grid described in 2., with some holes offset to avoid hitting the framing members that would sit behind them. 
To make the holes the T-nuts would fit into, we drilled using a 7/16" Forstner bit, which was the perfect size to have the T-nuts sit snugly. (Forstner bits are expensive but make fantastically precise, crisp-edged holes — if we’d used standard bits, the holes would likely be too ragged to fit our T-nuts perfectly.) We thought it’d be important to make the T-nut holes as perfectly perpendicular to the surface as possible, to make sure the holds would mount securely. To try to achieve this without too much tooling, we used a block of wood with a hole drilled through it as a guide. This was imprecise enough (but I was lazy enough to not make a proper jig) that we ended up abandoning this partway through and just doing this by eye. So far this has turned out to be workable. ### 6. Mounting the T-nuts We chose 2-hole T-nuts from Escape Climbing, which hit a sweet spot in optimizing for barrel length, corrosion-resistance, and availability during the pandemic. (In an ideal world these would be stainless and I would’ve gotten to order them from the McMaster-Carr catalog, but that option was prohibitively more expensive.) To mount the T-nuts into the plywood, we hammered them in with a big mallet, predrilled pilot holes in each screw hole, and attached with two (provided) screws. ### 7. Teeing Now, with the sheets cut to size and full of T-nuts, we were ready to add the surface finish to the plywood. To avoid getting any primer/texture/paint into our arduously-mounted T-nut threads, we stole an idea we saw on YouTube and plugged them with golf tees. ### 8. Priming To improve adhesion of our texture to the plywood, we started with a latex primer layer. We used KILZ 2 interior/exterior primer, which seemed to be reasonably priced for the quality. ### 9. Surface texturing We were originally going to skip this step and just use paint with some sand in it directly on top of the primer. 
We tried this on one sheet, though, and found the wood texture was too pronounced to allow for smearing, and gave us a headache. We thought we’d be consigned to misery forever, but our friend Kris suggested that we look into rock texture, and instead of giving up we figured out how to mix up something nicely-thick that would still stick well to the plywood: about 2:3:8 by volume of sand, primer, and drywall mud. We applied this with drywall taping tools instead of a paint roller, although in retrospect it might’ve been easier to thin it with water and apply with a paint roller. ### 10. Painting By this point we were really bored of doing this. But, painting is fun? We chose Behr porch and patio paint because their display at Home Depot was appealing and because climbing walls, porches, and patios are all walked on with feet. ## Putting it all together ### 11. Mounting the wall G plywood to frame G We had decided (when framing) to use our existing fence posts (which were already concreted down 16") as our main frame, with 2x6s added horizontally to carry load to the posts. Because these 2x6s were placed with their 5.5" face flush to the fence, in order to place the plywood flush against them, we needed to drill out holes where the T-nut backs and hold bolts would intersect with the girts. After preparing the framing members, mounting the sheets was straightforward: we used a handful of large structural screws and several handfuls of wood/decking screws per sheet to attach the plywood both directly into the fenceposts and to the girts. ### 12. Mounting the wall F plywood to frame F Mounting the surface for the right half of wall F was even more straightforward: because the 2x6 framing stringers were mounted in a standard balloon-frame configuration, we had been able to drill our T-nut holes to avoid hitting the stringers. We mounted the sheets using structural screws and wood/decking screws both directly to the posts and to the stringers across the sheet. ### 13.
Mounting the WALL·E plywood to FRAME·E For the cave wall (WALL·E) (and the corner of wall F above), we again begged for help from our friend S, who agreed to help us make this work. This was trickier than the other walls for a few reasons: • the sections were overhung at different angles, so we couldn’t rest them on each other while drilling, and attaching the plywood sheets while holding them in place was physically taxing; • the wall was taller, so we needed to do this work on a ladder; • I had dug an asymmetric and partial hole in the ground to get started on the flooring, so using the ladder was a fun and dangerous activity; and • senioritis. Eventually, with the help of S’s reciprocating saw, we got this up. ### 14. Finishing: covering fastener holes and filling in cracks and edges We wanted the sheets to feel like one continuous surface (and for climbers not to be hurt by protruding edges), so we filled the holes and the sub-millimeter gaps between plywood sheets with wood putty, drywall mud, and paint. Now we have a cube! Can we resist climbing on it until we have proper safety matting in place? ### P.S.: references Thanks to the Vancouver Carpenter for being very handsome and teaching us about drywall. # Building: How to build an outdoor climbing cube part 1 — design, foundation, framing Thing practiced: building the tiniest and most-exposed of houses (This is the first part of 3 — part 2 covers how we built the climbing surface; and part 3 will cover flooring, finishing touches, and routesetting.) As a staying-inside project this pandemic, V, some friends, and I built this miniature climbing gym. 
Its name is minicube: Tools used for foundation and framing (utility rating out of 5): • Makita 7.25" circular saw (★★★★½) • Rockwell 4.5" compact circular saw (★★½) • 2 Swanson Tool speed squares (★★★★★) • Bosch 12V drill (★★★★½) • Black and Decker 20V impact driver (★★★★½) • 16-inch-long 3/8" and 1/2" spade bits • standard drill bit and impact bit set • tape measure (★★★★½) • 2' level (★★★½) • post level (★★½) • shovel (★★★) • hammer (★★★) • regular Sharpies (★★★½) • metallic Sharpies (★★★★) • no. 2 pencils (★★★) • T-bevel (★★) • some string (★★★½) Information tools used: • whiteboard (★★★★½) • 8x11 paper (★★★★) • 4x6 and 5x8 index cards (★★★★) • blue painters/leftover skeleton tape (★★★★) • Jira (★★★) • SketchUp trial (★★★) Other tools used: • several cameras • JBL Charge 3 Parts list (foundation and framing only): • 750 lbs Quikrete quick-setting concrete • 2 pcs 6x6 x 16' pressure-treated lumber for primary posts • 2 pcs 4x4 x 12' pressure-treated lumber for secondary posts • 4 pcs 2x6 x 10' pressure-treated lumber for bottom cross-bracing • 50 pcs 2x6 x 12' douglas fir framing lumber for the rest of the framing • 1/2" carriage bolts/nuts/washers • 3/8" carriage bolts/nuts/washers • Simpson Strong-Tie LUS25Z 2x6 face-mount joist hangers • Simpson Strong-Tie 12 A21, 26 A23Z, 22 A34Z, 4 A35Z angles, 4 H2.5Z hurricane ties, and 1 TP15 tie plate • 600 Simpson Strong-Tie #9 x 1.5" SD connector screws • 300 Simpson Strong-Tie #9 x 2.5" SD connector screws • 12 GRK 5/16" x 4" RSS structural screws • 12 GRK 1/4" x 2.5" RSS structural screws • 20 GRK #9 x 2.5" R4 screws • a handful of nails • Rustoleum gray paint ## Why We saw Stacey’s, and it created a sense of wonder in our hearts. The dream: of a place of refuge a few steps away from one’s usual cares; a place where the outlines of the self could be more distinct. A place that would let us dream in peace; to try on alternate futures and feel how they fit. Something uncomplicated and fun. ## Design ### 1. 
Exploration, goals, and constraints Our goals were (1) to have a climbing area where people can recreate, (2) with a cave, (3) and a few corners for stemmy climbs; (4) without being annoying to the neighbors. Constraints: it all had to fit in a 110" x 110" footprint, without anchoring to the existing building. Figuring out how to preserve the existing sunlight was also important — building the walls too high would block too much light, but too-short walls aren’t climbable. After a few experiments, we decided on a three-sided configuration with one taller (but still short) wall with 10° and 65° overhangs, close to the existing building (marked as WALL·E), and two shorter vertical walls on the other two sides (walls F and G). WALL·E would be about 10' high, while walls F and G would be about 6' high. ## Design ### 2. Engineering design After we had a rough idea of what we wanted to build, it was time to engineer it. This was fun: I got to revisit beam theory, read about/measure West Oakland soil densities, read and reread the Oakland building code, and do a bunch of other desk work. After we finished all the load calculations (love a cantilever), we chose our materials and drew it up in SketchUp: ## Design ### 3. Getting ready for fabrication Fabrication seemed straightforward: we just needed to sink the footings, build the structure, then clad it with a climbing surface. We assembled the materials: Then warmed up and tested our tools by ~~building some~~ watching V build some sawhorses. ## Foundation ### 4. Digging First, we called before we dug. Next we had to kill all the plants that were previously growing in that space (RIP wisteria), and take a few inches of surface soil off to maximize available height. Then, in an excruciatingly tedious process, we dug our post holes 42" deep (note to diggers: please don’t be stubborn, just borrow/rent/buy a post hole digger). ## Foundation ### 5.
Setting posts for WALL·E We used pressure-treated lumber from Economy Lumber Oakland for our posts. For WALL·E proper, we used one 6x6 for the leftmost post, and two 4x4s for the center and right posts. Because the posts would be holding up about 1,500 lbs of dead weight and potentially several times that in (dynamic) live loads, we needed to make sure the foundation was sound. To set each one, we filled the bottom 4" of each post hole with gravel, put a few nails in the post to get extra grip for the concrete, poured concrete for the remaining 3+' to ground level, then smoothed the top of the concrete away from the post to direct any rainwater runoff away from the wood. ## Framing ### 6. Doing the rest of the framing for WALL·E There were two parts to our framing: horizontal girts to tie the posts together and serve as the main load-bearing structure, and a standard-ish balloon frame to carry load from the climbing surfaces back to the posts. We used pressure-treated 2x6s for the parts of the frame that would touch the ground, and standard 2x6 douglas fir, painted with RUST-OLEUM silver gray, for the rest. We framed everything using 2x6s at about 16" on-center, with slight deviations depending on hole placement for the climbing-hold holes (more on that in the next post). First, we put up WALL·E’s horizontal girts. We attached these to the posts with 1/2" and 3/8" carriage bolts because the girts would be part of the critical load path from our climbers to the ground, and these would be critical joints. (This meant that we got to use these absurdly-long drill bits.) For the 10° and 65° overhanging walls, we used joist hangers to carry load from the framing members to the post/girt structure. Joist hangers are the fasteners used to attach floor/ceiling boards to the vertical frames in a standard wood-framed house. In our climbing cube, the overhanging walls carry load more like a ceiling/floor than like a vertical wall in a house.
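As a rough sanity check on those numbers (a sketch only, in Python for convenience; the allowable compressive stress below is an assumed ballpark for pressure-treated lumber, not a figure from our plans or the code tables):

```python
# Rough axial-load sanity check for one 6x6 post. This is a sketch, not
# engineering advice: the ~1,000 psi allowable compression parallel to
# grain is an assumed ballpark for PT lumber, not a value from our plans.
post_area_sqin = 5.5 * 5.5      # a "6x6" actually measures 5.5" x 5.5"
allowable_psi = 1000            # assumed allowable compressive stress

dead_load_lbs = 1500            # estimated structure weight from above
live_factor = 4                 # "several times" the dead load, dynamically
demand_lbs = dead_load_lbs * (1 + live_factor)

capacity_lbs = post_area_sqin * allowable_psi
print(f"demand {demand_lbs} lbs vs {capacity_lbs:.0f} lbs capacity")
# → demand 7500 lbs vs 30250 lbs capacity
```

Even charging the entire load to a single post leaves roughly a 4x margin in pure compression, consistent with bending in the cantilevered sections, not crushing, being the interesting part of the load calculations.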
The first section of wall framing we put up was the 10° section. We cut, painted, and beveled the wood stringers, then attached them to the girts, posts, and themselves with joist hangers, 1/2" bolts, various other Simpson Strong-Ties, and structural and wood screws: After that, we put up the framing for the 65° section. This was basically the same as for the 10° section, except that the forces here would be larger, so we sized up the structural elements and fasteners commensurately: We put up a minimal vertical frame at the bottom of WALL·E for mounting the kicker board: That was as far as we wanted to get with framing before putting in our last two support posts. ## Foundation ### 7. Setting the last two posts (in wall F) The last two posts were one pressure-treated 6x6 (our longest post: 112" above ground, 42" below ground) to support WALL·E from the top and wall F, and one pressure-treated 4x4 for the right side of wall F. We used the same process as for the earlier three posts, but this time with some video: ## Framing ### 8. Doing the framing for walls F and G Framing up walls F and G was much more straightforward than WALL·E — these would be vertical walls, going up only to our fence line (about 6'). For wall G, we planned to use our existing fence posts (which were already concreted down 16") as our posts, adding only horizontal girts for bracing and mounting the climbing surface to. First we had to remeasure and replan, to see if our original plan still made sense. On a second pass, we decided to cut the height of these two walls, to (1) block less sunlight, (2) not be such an eyesore for the neighbors, and (3) avoid digging any more post holes. For wall F (the vertical wall adjoining the cave wall), we put up a straightforward balloon frame in place using 2x6s, Simpson Strong-Ties, and some screws — a relief after WALL·E. 
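To see why the 65° section carries so much more force than the 10° section, split each section's surface length into vertical rise and horizontal reach: the steeper the overhang, the longer the lever arm. A quick trig sketch in Python (the 6' section length is an assumed number, for illustration only):

```python
# Height gained vs. horizontal reach for an overhung wall section.
# The 6 ft section length here is an assumed number for illustration.
import math

def rise_run(length_ft, overhang_deg):
    """Split a section's surface length into vertical rise and horizontal
    run, where overhang_deg is measured from vertical (0 = a plumb wall)."""
    a = math.radians(overhang_deg)
    return length_ft * math.cos(a), length_ft * math.sin(a)

for angle in (10, 65):
    rise, run = rise_run(6, angle)
    print(f"{angle:2d} deg: {rise:.1f} ft of height, {run:.1f} ft of reach")
# → 10 deg: 5.9 ft of height, 1.0 ft of reach
# → 65 deg: 2.5 ft of height, 5.4 ft of reach
```

Most of the 65° section's length goes into sticking out sideways rather than gaining height, which is exactly what drives the larger forces and the beefier fasteners.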
For wall G (a totally-vertical 16'-wide traverse wall), we cross-braced our 4x4 fenceposts with horizontal 2x6 girts attached with structural screws. ### 9. Back to WALL·E: framing the topmost section To put up the framing for the 0° vertical section at the top of WALL·E, we procrastinated for two weeks, decided we might never complete the project, then called in help from pro contractor team S and C. With them on the team it was a breeze and a joy. Phew. Framing complete. Next post: okay but what about the climbing part? (===, what’s the fastest way to mount 800 T-nuts?) ### P.S.: references Special thanks to jennsends and MattBangsWood for information, dispelling our distrust of YouTube (a little), and inspiration. # Violin: Fritz Kreisler, Praeludium in the style of Pugnani Thing practiced: violin Tool used: Yamaha YSV104 silent violin I’ve been trying to pick up the violin again, after 13 years of being pissed off about not being able to play. Sometimes joy is just defiance, a refusal to let the ghosts of loss or failure be the ones to tell the story. There’s only time moving inexorably on, carrying everything away, and all we can do is refuse, shove forward, stab on. # everything is redeemable and no one will ever die Things practiced: writing, being fallible and weak and vulnerable and embarrassing, trusting anything at all Sometimes you get a run of days or even of weeks or months in which what you anticipate and what comes true live up to one another, delicious and impossibly expansive and taken for granted. I say often that I’m lucky and what I mean is that it’s terrifying. Life is always despair juxtaposed with the purest possible light. All we have is what we will inevitably lose, and all we can do is refuse the dark outside together, for a while. No more dying okay let’s just agree. 
Anyway wrote some things here: • Set us free: why I’m an abolitionist rape survivor (→) • Silence will not redeem us: East Asian art, being, and betrayal (→) # Not dwelling on it Things practiced: dumb solipsism, self-indulgence, big ball of regret My first memories of death were of my mother’s mother, then my father’s mother. In Chinese culture there are strictly-defined rites around death and mourning: the son breaks a vessel into a thousand pieces to proclaim his grief, the children compete to see who can weep most deeply. Though juvenile and emotionally untutored I recognized the benefit then: it gives the bereft something to do. I learned grief as a teenager, when a dear friend killed himself — except I learned it improperly, kept my thoughts to myself, and placated my restless brain by letting it systematically dismantle all the relationships I’d had. I convinced myself I was back to normal while growing gradually more mad; time passed, and I left as soon as I could. Which is to say: I’ve mostly dealt with grim things by placing them in a box and running away at full speed. This has gotten me good at inconsequential things like brain dissection and numerical analysis but bad at handling emotional complexity. So maybe I can start by writing something down. Aimee was the one who taught me that even though our earliest experiences teach us who we are, the determined can transcend. I was raped by three men pre-adulthood, people I knew: a cousin, a friend, a housemate. Aimee was raped many more times than that, by a man who was closer and less escapable. She fought back where I did not; she talked about it but I did not, not even to her — not because it was painful but because my experiences seemed pretty normal and I didn’t want to cheapen her experience, which had been abnormally brutal and cruel, by diluting it with mine. When I was raped I discarded certain assumptions I had held about how the world worked and about how safe I was. 
But Aimee saw her trauma as a tiny obstacle to be cleared, and once cleared an affirmation of her strength. She always had a sense of the possible, which was admirable, and incredible. What’s always seemed problematic is that the brave die, and yet we cowardly ones are still here. Driving through Los Angeles, the landmarks I know best are still the ones from Aimee’s convalescence: hospital, hospital, pharmacy, cancer clinic; the readiest memories still those of sitting in freezing waiting rooms producing the insurance cards, putting them away, filling out paper forms, ad infinitum, uselessly, helplessly. Maybe it shouldn’t have been a surprise that she was able to handle imminent expectation of death with grace and love and courage and personal sacrifice – but I was always surprised: at the equanimity with which she lost her long hair which she’d so prized, at her tolerance for pain, at her unflinching will to face hard truths. Had there been a moment when she was afraid to die? I wasn’t willing to ask. The problem is, as much as you’d like to, you can’t actually take someone else’s weakness or pain or fear. There’s a time when you expect your life to always be full of new and shiny things, and there’s a day you realize that’s not how it’ll be at all. That what life really becomes is a thing made of losses, of prized things that were there and aren’t any more. Grief is so uninteresting, I know; I can look at myself and scoff. I want people around me, I dread the moments of solitude; then I bore everyone until they leave, or talk nonsense about the fungibility of time, tradeoffs, undoing. The madness is fading somewhat but clarity doesn’t take its place. I promised to protect her, she told me I could not. In the end she was right. Tools used: d3, <canvas>, hard gay June 26ths: Lawrence v. Texas (2003): same-sex intercourse becomes legal across the United States United States v. 
Windsor (2013): the federal government begins fully recognizing same-sex marriages Obergefell v. Hodges (2015): same-sex marriage becomes legal everywhere in the United States # Seaflailing: Or, how to try (and mostly fail) to draw an ocean in OpenGL Things practiced: math, graphics, computers Tools used: WebGL, benvanik’s WebGL Inspector for Chrome, glMatrix for matrix math in Javascript For whatever we lose(like a you or a me) it's always ourselves we find in the sea (from an e.e. cummings poem that autoplays in my head every time I picture the ocean, i.e. ~10,000× while coding this) Despite being terrible at it, I find graphics programming pretty fun. Building realistic 3D environments in real time blends physics and computer science—graphics programmers have to satisfy two conflicting objectives: to simulate reality as accurately as possible, while using fewer than 16 or 33 milliseconds to process each frame. Modeling the ocean is a good example of a situation where tradeoffs are required. Fluid dynamics is built around the Navier-Stokes equations, a set of nonlinear partial differential equations that describe the flow of viscous fluids. Fully solving these equations numerically would provide an exact model of the ocean, but is computationally infeasible in real time. Instead, I tried to simulate an ocean scene using this approach: 1. Generate realistic water surface height fields to model waves, using empirical knowledge of ocean phenomena. 2. Account for optical processes acting on ocean water: reflection and refraction from a water surface, the color filtering behavior of ocean water, and maybe more-complex effects like caustics and godrays. 3. Render a realistic sky gradient using known properties of atmospheric scattering. 4. Think about computational-resource cost versus quality gained and simplify where possible. Results so far (underwhelming but workable): The code is on GitHub; the simulation is running here.
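Step 1, the height field, is the easiest part to sketch outside a shader. A cheap stand-in for a full Tessendorf-style FFT spectrum is a sum of directional sines; here is a minimal version, in Python rather than GLSL, with made-up wave parameters:

```python
# Minimal sum-of-sines water height field (a sketch of step 1; the wave
# parameters are invented for illustration, not tuned ocean data).
import math

# each wave: (amplitude, direction_x, direction_y, wavelength, speed)
WAVES = [
    (0.6, 1.0, 0.0, 8.0, 1.2),
    (0.3, 0.7, 0.7, 4.0, 2.0),
    (0.1, 0.0, 1.0, 1.5, 3.5),
]

def height(x, y, t):
    """Water-surface height at (x, y) and time t: a sum of directional sines."""
    h = 0.0
    for amp, dx, dy, wavelength, speed in WAVES:
        k = 2 * math.pi / wavelength            # wavenumber
        phase = k * (dx * x + dy * y) - speed * t
        h += amp * math.sin(phase)
    return h

# one frame of a tiny 4x4 height grid (in the real thing this runs
# per-vertex in a GLSL vertex shader, with t advancing each frame)
frame = [[height(x, y, 0.0) for x in range(4)] for y in range(4)]
```

A side benefit of sines: the surface normals needed for lighting come from the analytic partial derivatives of the same sum, so no finite differencing is required.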
# Days Tools used: a crop-sensor DSLR, Lightroom, Premiere # In colour: Audio rainbowscapes Thing practiced: drawing rainbows Tools used: the JS Web Audio API, three.js instead of actual WebGL, Safari mobile web inspector Jamie xx’s new album In Colour is finally out, and it’s good. Do you ever get that feeling, when you’re up at 2 am and almost alone, of being enveloped in something with maybe someone, of melancholy becoming euphoric? This album feels like that. It’s dense and fleshed out, not downbeat and cryptic like the xx past — but still lonely, and lovely. Listen with good headphones. ### And Because I loved this album and its cover so much, I built an album-themed beat-detecting visualizer here: (If the presets are unsatisfactory, drag and drop your own audio file onto the page.) # Remarkitecture NYC: Lower Manhattan Tools used: camera, D3.js, being a touristy dork Thing practiced: documenting neat buildings so I don’t forget I’ve always loved beautiful buildings, and New York as their apotheotic urban motherland. Cities (and humans) have to perpetually reinvent themselves to stay alive, and New York is the best example of this I know. There’s vast beauty in the uniformly-timeworn structures of Renaissance Florence or the Campidoglio, true — but a modern city needs more. Architecture, as Vincent Scully says, is a conversation between generations. What New York’s architecture shows us is that we can both safeguard the past and believe that today can be just-as-good or better. See the clicky-map version, or just the notes; all photos from May 9–13, 2015. # Timeslivered Tools used: camera, tripod, Lightroom, Photoshop I love the idea of capturing time, of places where time is a visible dimension. (Seems so démodé that we can live in space and videostream from Japanese mountaintops but still can’t travel in time.)
Lower Manhattan from Riverside Park, Hoboken (32 shots/34 minutes) Downtown LA from Elysian Park (13 shots/19 minutes) # Food notes: Sushi rolling I’ve always felt appropriative making sushi, being neither Japanese nor highbrow; but I do love fish, the sea, and the idea of paying obeisance to fish and the sea. Notes: ### Tools and things Rolled sushi is forgiving: the ingredients are on the inside, so the pieces don’t need to be as visually perfect as in nigiri. This means you can get away with a non-fish-specific knife, as long as it’s sharp. Also helpful: a colander for rinsing rice, a sushi rolling mat, a rice spatula, and an automatic rice cooker. ### Making the rice Sushi rice is supposed to be a little chewier than plain white rice, so it needs to be cooked with less water than usual. For four sushi rolls, try 2 cups of rice and 2⅓ cups of water. Meanwhile, mix the vinegar: combine 3 T rice vinegar, 3 T sugar, and 1 T salt in a non-metallic bowl, then heat and stir until the sugar dissolves. To make the rice shiny and chewy, you need to cool it as quickly as possible while mixing in the vinegar. Traditionally, this is done in a wide, shallow tub made of Japanese cypress, which absorbs excess moisture and speeds cooling. If you don’t have one (they’re pricey), you can use any wide, shallow, non-metal container. Try to use something that lets you spread the rice over the largest possible surface area to cool. As you spread out the rice, move the spatula horizontally through the grains to separate them while pouring in the vinegar mixture. ### Prepping the other parts Toast the seaweed over a hot surface for a few seconds, just until crisp. To stop the rice from sticking to your hands later, mix up a bowl of “hand vinegar” to keep your fingers moist: 3 T water plus 1 T rice vinegar. If you like, mix up some spicy mayo: 3 parts Kewpie to 1 part Sriracha.
Cut up whatever ingredients you want to roll up (here: tuna, salmon, yellowtail, mango, papaya, cucumber, and avocado). ### Normal rolling: Outside-out Lay your rolling mat down on a clean, flat surface. Lay the sheet of seaweed with its prettier side facing down. Dip your fingers in the hand vinegar to keep the rice from sticking to you. Spread the rice out evenly, leaving an empty strip at the far end. Lay your ingredients across the rice, about a third of the way in. Roll up the sushi, starting from the side closest to you. (I find it easiest to hold the ingredients with my fingertips while rolling upward with my thumbs.) Hold the mat around the formed roll for a few seconds to shape it. Place the sushi on a cutting board with the seam at the bottom, and using a wet knife blade cut the roll in half and each half into fourths. ### Alternative rolling: Inside-out Wrap the mat in plastic wrap before you start, so that the rice doesn’t stick. Use half a sheet of seaweed instead of a full sheet. Spread the rice out the same way, but don’t leave any empty strip. Flip the rice-and-seaweed sheet over, then place your filling on top. Roll the sheet up, add some white or black sesame seeds if desired, and cut into eighths. # Photos: A (over) B Thing practiced: pretend Instagram Tool used: Canon EF-S 24mm f/2.8 pancake lens # Secrets & spies: or, I try to understand RSA Thing practiced: ? Tools used: IntelliJ IDEA community edition, Applied Cryptography Scheming humans have always faced a basic problem: how can we communicate securely in the presence of adversaries? Since ancient times, the art of secret communication, or cryptography, has been crucial for governments and the military. But today cryptography affects us all. As messages are increasingly transmitted through public channels, cryptography prevents malicious interceptors from using our information against us. 
The evolution of cryptography can be split into two eras by the invention of asymmetric ciphers in the 1970s. Historically, encrypting and decrypting a message had been symmetric processes — that is, unscrambling the message required knowing the key that had been used to scramble it. This begat the problem of key distribution: before sending a secret message, the sender would have to take great precautions to transport her encryption key safely to the recipient. In asymmetric cryptography, the keys used to encrypt and to decrypt a message are different. This means that the recipient can make his encryption key public, as long as he keeps his decryption key private. What we need for an asymmetric-cryptography protocol to work is a trapdoor one-way function. This is the mathematical equivalent of a physical lock: easy to process in one direction (snapping the lock closed), but hard to undo (opening the lock), unless we have a special secret (the key). In RSA – the first public-key cryptosystem, and still the most popular – the trapdoor function exploits mathematical features of modular exponentiation, prime numbers, and integer factorization. Let’s throw together a toy implementation in Scala. A number is prime if it has exactly two divisors: 1 and itself. Sometimes it’s useful to have a list of small primes, so we need a Sieve of Eratosthenes:

```scala
// Sieves a stream of Ints
def sieve(s: Stream[Int]): Stream[Int] =
  s.head #:: sieve(s.tail.filter(_ % s.head != 0))

// All primes as a lazy sequence
val primes = sieve(Stream.from(2))
```

What we really want, though, is large primes – ones higher than 2^500. To get a prime that large, we can’t sieve up from 1; instead, we find a prime by taking a random number, checking if it’s probably prime, and repeating if necessary.
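That generate-and-test loop, sketched in Python (a plain Fermat check stands in for the stronger test we'll actually use, and the bit size is scaled way down so the example runs instantly):

```python
# Random search for a probable prime: a Python sketch of the loop described
# above. 64 bits here instead of 500+ so it finishes immediately.
import random

def is_probable_prime(n, rounds=20):
    """Fermat check: a^(n-1) ≡ 1 (mod n) for random bases a. Weaker than
    Solovay-Strassen or Miller-Rabin, but the same overall shape."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return all(pow(random.randrange(2, n - 1), n - 1, n) == 1
               for _ in range(rounds))

def random_prime(bits):
    """Sample odd, full-width candidates until one passes the test."""
    while True:                                      # repeat if necessary
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

p = random_prime(64)
assert p.bit_length() == 64 and p % 2 == 1
```

By the prime number theorem only a few dozen candidates are needed on average, and composites almost always fail on the first Fermat round, so the loop is fast even at real key sizes.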
The primality test of choice in real systems is Miller-Rabin, but we’ll use Solovay-Strassen to keep things simple:

```scala
// Ints won't fit these
import math.BigInt

// Euclid's GCD algorithm
def gcd(a: BigInt, b: BigInt): BigInt =
  if (b == 0) a else gcd(b, a % b)

// Computes the Jacobi symbol (needed for our test)
def jacobi(a: BigInt, n: BigInt): BigInt = {
  if (a == 1) 1
  else if (n == 1) 1
  else if (gcd(a, n) != 1) 0
  else if (a == 2 && (n % 8 == 1 || n % 8 == 7)) 1
  else if (a == 2 && (n % 8 == 3 || n % 8 == 5)) -1
  else if (a > n) jacobi(a % n, n)
  else if (a % 2 == 0) jacobi(a / 2, n) * jacobi(2, n)
  else if (n % 2 == 0) jacobi(a, n / 2) * jacobi(a, 2)
  else if ((((a - 1) * (n - 1)) / 4) % 2 == 0) jacobi(n, a)
  else jacobi(n, a) * -1
}

// Runs the Solovay-Strassen test for i iterations
def isPrime(n: BigInt, i: Int): Boolean = {
  if (i <= 0) true
  else if (n % 2 == 0 && n != 2) false
  else {
    // a random base in [1, n-1]
    val a = (BigInt(n.bitLength, scala.util.Random) % (n - 1)) + 1
    val j = jacobi(a, n)
    val exp = a.modPow((n - 1) / 2, n)
    // composite if the Jacobi symbol is 0, or disagrees with a^((n-1)/2) mod n
    // (mod, not %, so that a Jacobi symbol of -1 compares as n-1)
    if (j == 0 || j.mod(n) != exp) false
    else isPrime(n, i - 1)
  }
}
```

Finally, the main reason we’re interested in primes is so we can do calculations modulo our prime. To do this, we need the Extended Euclidean algorithm:

```scala
// Returns (x, y) such that a*x + b*y = gcd(a, b)
def extendedGcd(a: BigInt, b: BigInt): (BigInt, BigInt) = {
  if (a % b == 0) (0, 1)
  else {
    val (x, y) = extendedGcd(b, a % b)
    (y, x - y * (a / b))
  }
}
```

### The RSA system

An asymmetric encryption system has two parts: a public key and a private key. In theory, encrypting a message with the public key can only be reversed by decrypting with the private key. In RSA, we encrypt a message by computing msg^e mod n, using the publicly-known information n and e. This is the crux of RSA’s security: modular exponentiation is easy to do, but exceedingly hard to undo.
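The "easy" direction really is easy: Python's three-argument pow does square-and-multiply, so the work grows with the bit length of the exponent rather than its value. (The numbers below are random stand-ins, not a real key.)

```python
# Modular exponentiation at cryptographic sizes is instant. Square-and-
# multiply needs only ~bit_length(e) modular squarings; recovering m from
# c without the factorization of n is the direction with no known fast
# method. The modulus here is a random stand-in, not a real RSA modulus.
import random

random.seed(0)                     # deterministic example
n = random.getrandbits(2048) | 1   # pretend 2048-bit modulus (odd)
m = random.getrandbits(1024)       # a "message" smaller than n
e = 65537                          # the usual RSA public exponent

c = pow(m, e, n)                   # ~17 squarings' worth of work
print(e.bit_length())  # → 17
```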
To make it possible to retrieve the message from the ciphertext, we build a trapdoor into our encryption routine by making n the product of two large primes p and q (which we keep private). We can then reconstruct the message using a calculated decryption exponent d. To choose the right decryption exponent, we first need a value φ based on n’s factorization such that x^φ = 1 mod n for every x coprime to n. Luckily, Euler’s totient function gives us just this:

```scala
// Euler's totient phi, where p and q are n's prime factors
def phi(p: BigInt, q: BigInt): BigInt = (p - 1) * (q - 1)
```

As long as we choose our public exponent e so that it doesn’t share a common factor with the totient φ, we can decrypt using the inverse of e mod φ:

```scala
// A legal (public) exponent has to
// (1) be between 1 and the totient
// (2) have no non-1 factors in common with the totient
def isLegalExponent(e: BigInt, p: BigInt, q: BigInt): Boolean =
  e > 1 && e < phi(p, q) && gcd(e, phi(p, q)) == 1

// Returns b such that a*b = 1 mod n
def modularInverse(a: BigInt, n: BigInt): BigInt = {
  val (x, _) = extendedGcd(a, n)
  x.mod(n) // mod, not %, so the inverse comes out nonnegative
}

// Our decryption exponent: the inverse of e mod phi
def d(e: BigInt, p: BigInt, q: BigInt) = modularInverse(e, phi(p, q))
```

Encryption and decryption are now trivial:

```scala
// Takes a message m and public information e and n
def encrypt(m: BigInt, e: BigInt, n: BigInt): BigInt = m.modPow(e, n)

// Takes a ciphertext c, private exponent d, and public information n
def decrypt(c: BigInt, d: BigInt, n: BigInt): BigInt = c.modPow(d, n)
```

(In reality, RSA is never used to encrypt messages — for n to be large enough to encode any reasonable length of message, the computing resources required for even the “easy” process of encryption are prohibitively high. Instead, RSA is used to safely deliver symmetric keys, which can then be used with block or stream ciphers to encrypt and decrypt large messages.)

# Photos: Points, edges

Thing practiced: having a camera on me Tools used: Canon EOS Rebel T3i, Canon EF 40mm f/2.8 pancake
https://cstheory.stackexchange.com/questions/52040/complexity-of-reachability-in-fractal-mazes-with-traps
# Complexity of reachability in fractal mazes with traps

Is reachability in fractal mazes with traps EXPTIME complete?

A fractal maze includes one or more copies of itself. For example, see the question Decidability of Fractal Maze or the Puzzling StackExchange question Alice and the Fractal Hedge Maze. In this question, a trap, when stepped on, destroys a maze passage, i.e. if we reach a given vertex (alt. edge), a given edge (if present) is permanently removed from the graph; allowing multiple traps in one location does not change the problem. In a fractal maze, maze copies also copy traps (with trap activation on a per-copy basis). We allow directed edges, but because of traps, that does not matter for the question.

Posted in Q/A format as I was able to answer it while writing out related results, but I welcome other answers, including additional details, tighter bounds, similar questions/variations, and whether the computational complexity of fractal mazes has been studied before.

Variations: A restriction is to require the maze to be planar. A strengthening of reachability is to allow reachability infinitely often (this has subtle dynamics), and with a further strengthening requiring the path to be reusable; for these strengthenings, a variation is to disallow directed edges.

## Some reachability results

Reachability in undirected graphs is LOGSPACE complete. Reachability in directed graphs is NL complete; it is open (as of 2022) whether this holds for planar directed graphs. Reachability in undirected mazes with nonconsumable keys (that open doors) is P complete. Reachability in mazes with traps, or with consumable keys (I think even if every door works with every key, consuming one key), or with directed paths with nonconsumable keys (or all three) is NP complete. Reachability in mazes with switches (that toggle some edges) is PSPACE complete. Reachability in fractal mazes (directed or not) is P complete.
To see this, let $$C(i,j)$$ hold iff exit $$i$$ is reachable from exit $$j$$. Then the maze gives a monotonic recursive relation for $$C$$, and the true $$C$$ is its least fixed point. (As an aside, taking the greatest fixed point (which is also P complete) corresponds to connectivity when at high enough depth (or in a sense in the limit) we can tunnel through the walls of the submaze.) In the other direction, we can use the reachability to simulate a monotonic circuit.

Also, for fractal mazes with $$O(\sqrt{\log n})$$ (external) exits, reachability is in NL, and LOGSPACE (aka L) for undirected mazes. Despite being in P, reaching an exit in a fractal maze can take an exponential number of moves, but the maximum depth is polynomial, and the directions can be printed by a (deterministic) PDA having polynomially many internal states.

Reachability in fractal mazes with switches is undecidable, even if the maze directly contains only one copy of itself, each copy directly contains only a single switch, and each switch can be toggled only once. This holds because Turing machines with a single write-once binary tape (i.e. 1→0 is disallowed) are Turing complete as we can repeatedly copy the work area.

Reachability in fractal mazes with traps is EXPTIME complete. The maximum shortest path length is $$2^{2^{n^{Θ(1)}}}$$.

Reachability in fractal mazes with traps is in EXPTIME (and in PSPACE if the maze directly contains only one copy of itself) since it can be reduced to the following game. Player 1 draws the path except for portions in the maze copies, but including the ordering of the segments, then player 2 chooses a copy, and the players recurse, with player 1 winning once the paths in the copy do not enter its subcopies (and there are no inconsistencies).
In the other direction, we use a maze based on the configuration graph of a chosen nondeterministic Turing machine with $$O(\log n)$$ internal memory (and thus $$n^{O(1)}$$ maze size) that includes head position(s), and a binary tape of length $$n^{Θ(1)}$$ encoded by traps. Each bit, once written, is encoded by a pair of traps (allowing a trap to have multiple trigger points and affect multiple edges) exactly one of which was activated. We always write bits in new locations. Entering a maze copy corresponds to a 'recurse' ability (and hence PSPACE hardness) while having two direct maze subcopies captures alternating PSPACE, which equals EXPTIME. Also, just one trap (with multiple triggers and affecting multiple edges, and with the maze having directed edges) per copy suffices.

I am not sure about complexity of reachability in fractal mazes with anti-traps (that add rather than remove edges).

It is undecidable whether given a fractal maze with traps and an initial position, it is possible to get stuck (making escape impossible), even if the maze directly includes only one copy of itself and there are no directed edges. This is shown using a trap array copying construction such that the copying must be done in order, and skipping an input or output index permanently leaves an escape hatch.

Reachability in fractal mazes with switches that are linked across copies (thus avoiding per-copy state) is EXPTIME complete. Using such switches, one can emulate a maze of exponential size, and conversely. The presence of exponentially many direct subcopies (if desired) can be emulated using depth $$n$$ subcopies.
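To make the least-fixed-point characterization from the P-completeness argument above concrete, here is a small sketch (my own illustration, in Python): a port is either an exit of the maze or a (copy, exit) pair, and we iterate the exit-to-exit relation until it stops growing.

```python
# Least-fixed-point reachability between the exits of a fractal maze.
# A port is an exit name or a (copy, exit) pair; `edges` holds directed
# edges between ports. (Illustrative sketch, not from the post itself.)
from itertools import product

def fractal_reach(exits, copies, edges):
    """Return all pairs (i, j), i != j, with exit j reachable from exit i."""
    C = set()
    while True:
        # every pair already known reachable becomes a shortcut edge
        # across each copy of the maze
        closure = set(edges) | {((k, i), (k, j))
                                for k in copies for (i, j) in C}
        def reach(src):
            seen, stack = {src}, [src]     # plain DFS over the port graph
            while stack:
                u = stack.pop()
                for a, b in closure:
                    if a == u and b not in seen:
                        seen.add(b)
                        stack.append(b)
            return seen
        new_C = {(i, j) for i, j in product(exits, exits)
                 if i != j and j in reach(i)}
        if new_C == C:
            return C                       # fixed point reached
        C = new_C

# a direct edge is found; a "path" that exists only through ever-deeper
# copies is not (that would be the greatest fixed point instead)
assert fractal_reach(['A', 'B'], ['k'], {('A', 'B')}) == {('A', 'B')}
assert fractal_reach(['A', 'B'], ['k'],
                     {('A', ('k', 'A')), (('k', 'B'), 'B')}) == set()
```

The second assertion shows the least/greatest distinction from the aside above: a route that only ever descends into subcopies never terminates, so the least fixed point excludes it, while the greatest fixed point (tunneling "in the limit") would include it.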
https://math.stackexchange.com/questions/665961/how-to-use-the-generating-function-fx-x-1-x-x2
# How to use the generating function $F(x) = x/(1-x-x^2)$

The generating function for the Fibonacci sequence is $$F(x) = \frac{x}{1-x-x^2}.$$ To work out the 20th value of the sequence, I understand you somehow expand this and look at the coefficient of $x^{20}$. How exactly do you do this?

\begin{eqnarray*} {z\over 1-z-z^2}&=&z\sum_{k=0}^\infty (z+z^2)^k\\ &=&z\sum_{k=0}^\infty z^k(1+z)^k\\ &=&z\sum_{k=0}^\infty z^k \sum_{j=0}^k {k\choose j} z^j\\ &=&\sum_{k=0}^\infty \sum_{j=0}^k {k\choose j} z^{1+j+k} \end{eqnarray*} From this we see that the coefficient of $z^N$ is $$\sum_{j=0}^{N-1}{N-1-j\choose j},$$ in particular the coefficient of $z^{20}$ is $${19\choose 0}+{18\choose 1}+{17\choose 2}+\cdots+{10\choose 9}.$$

• Easily the simplest approach. Thus, 6765. – Did Feb 6 '14 at 17:50

You can do the following:

1. Solve the equation $1-x-x^2=0$. The roots are $\frac{-1-\sqrt{5}}{2}$ and $\frac{-1+\sqrt{5}}{2}$.
2. Then you have $1-x-x^2=-(x-\frac{-1-\sqrt{5}}{2})(x-\frac{-1+\sqrt{5}}{2})=-(x+\frac{1+\sqrt{5}}{2})(x+\frac{1-\sqrt{5}}{2})$.
3. $\frac{-x}{(x+\frac{1+\sqrt{5}}{2})(x+\frac{1-\sqrt{5}}{2})}=\frac{A}{(x+\frac{1+\sqrt{5}}{2})}+\frac{B}{(x+\frac{1-\sqrt{5}}{2})}$
4. Find $A$ and $B$; then you can expand both fractions in geometric series. $\frac{-x}{(x+\frac{1+\sqrt{5}}{2})(x+\frac{1-\sqrt{5}}{2})}=\frac{Ax+Bx+A\frac{1-\sqrt{5}}{2}+B\frac{1+\sqrt{5}}{2}}{(x+\frac{1+\sqrt{5}}{2})(x+\frac{1-\sqrt{5}}{2})}=\frac{(A+B)x+A\frac{1-\sqrt{5}}{2}+B\frac{1+\sqrt{5}}{2}}{(x+\frac{1+\sqrt{5}}{2})(x+\frac{1-\sqrt{5}}{2})}$ Then $A+B=-1$ and $A\frac{1-\sqrt{5}}{2}+B\frac{1+\sqrt{5}}{2}=0$. This system gives $A$ and $B$.
5. Add the two series and you are done.

• Could you expand step 4? I can't quite see what is going on. – marshall Feb 6 '14 at 13:08

If you calculate $F(0.01)$ in Windows calculator you'll get $0.010102030508132134559046368320032$. Look at pairs of digits and you'll see you get the Fibonacci sequence right up until it gets into the 3-digit range.
So a simplistic way to do it is to calculate $F(10^{-5})$ to something like 105 digits of accuracy. $F(10^{-5}) \approx 0.00001 00001 00002 00003 00005 00008 00013 00021 00034 00055 00089 00144 00233 00377 00610 00987 01597 02584 04181 06765 10946$ Looking at this, the twentieth 5-digit group is $06765$, which is indeed $f_{20}$. You can also accomplish it with infinite precision integers. $$F(10^{-a}) = \dfrac{10^{-a}}{1 - 10^{-a} - 10^{-2a}} = \dfrac{10^a}{10^{2a} - 1 - 10^a}$$ Using the fact that $f_n < 10^n$, $$f_n = \lfloor 10^{n^2}F(10^{-n}) \rfloor \bmod 10^n = \left\lfloor \dfrac{10^{n^2+n}}{10^{2n} - 1 - 10^n}\right\rfloor \bmod 10^n$$ But who says we have to use decimals here? $f_n < 2^n$ too, and the following holds for $n > 1$: $$f_n = \left\lfloor \dfrac{2^{n^2+n}}{2^{2n} - 1 - 2^n}\right\rfloor \bmod 2^n$$

First, find the roots of your function's denominator $1-z-z^2$, which are $$z_\pm=-\frac{1\pm\sqrt{5}}{2}$$ Then find the partial fraction decomposition $$\frac{z}{1-z-z^2}=\frac{-z}{(z-z_-)(z-z_+)}=\frac{a}{z-z_+}+\frac{b}{z-z_-},$$ which turns out to give $a=-\tfrac{1}{2}(1+\tfrac{1}{\sqrt{5}})$ and $b=-\tfrac{1}{2}(1-\tfrac{1}{\sqrt{5}})$. Now you can easily compute the $N$th derivative of the function, which at $z=0$ tells you the coefficient $a_N$: $$a_N=\frac{1}{N!}\partial_z^N f(z)\Big|_{z=0}=\frac{1}{N!}\partial_z^N\left(\frac{a}{z-z_+}+\frac{b}{z-z_-}\right)\Big|_{z=0}=-\left( \frac{a}{z_+^{N+1}}+\frac{b}{z_-^{N+1}}\right)$$ Plugging in $a$, $b$ from above and $N=20$ tells you the result.

• Why did you need to find the partial fraction decomposition first? – marshall Feb 6 '14 at 13:11
• @marshall because it makes differentiation easier: if you seek a closed form in $n$ for $\partial_z^n (z-\alpha)^{-1}(z-\beta)^{-1}$ you will run into trouble, because the product rule will make the expression more complicated with each step, but in the decomposed form you can use $\partial_z^n(z-\alpha)^{-1}=(-1)^n n!\,(z-\alpha)^{-n-1}$, which follows from simple induction. – flonk Feb 6 '14 at 14:35
• I know this is a dim question, but if I take the third derivative of $f(x) = x/(1-x-x^2)$ and evaluate it at $x=0$ I get $12$, which isn't right. Why is this? – marshall Feb 6 '14 at 17:12
• @marshall I have to correct myself: multiplying the generator with $x$, i.e. $G(x)\to xG(x)$, corresponds to shifting the sequence $a_n\to a_{n+1}$, so it is still odd that you get $12$, which is not a Fibonacci number. – flonk Feb 10 '14 at 8:25
• @marshall OK, now I resolved it: I stupidly forgot the factorial, i.e. $a_N = f^{(N)}(0)/N!$, not $f^{(N)}(0)$. In your case it gives $12/3!=2$. – flonk Feb 10 '14 at 20:58
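The approaches above (the diagonal binomial sum and the integer digit-extraction trick in base 2) are easy to cross-check against plain iteration; a sketch, assuming Python 3.8+ for `math.comb`:

```python
from math import comb

def fib_iter(n):
    # Reference values: f_1 = f_2 = 1, f_3 = 2, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binomial(n):
    # Coefficient of z^n in z/(1-z-z^2): the diagonal binomial sum derived above
    return sum(comb(n - 1 - j, j) for j in range((n - 1) // 2 + 1))

def fib_base2(n):
    # Digit-extraction trick in base 2 (valid for n > 1, since f_n < 2^n)
    return (2 ** (n * n + n) // (2 ** (2 * n) - 2 ** n - 1)) % 2 ** n

print(fib_iter(20), fib_binomial(20), fib_base2(20))  # 6765 6765 6765
```

All three agree with the answer 6765 quoted in the comments for the coefficient of $z^{20}$.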
https://zeneruhazuriqita.hopebayboatdays.com/write-a-math-equation-in-words-21236kg.html
# Write a math equation in words

When 6 is added to four times a number, the result is … How much time, on average, did he spend on each question? Addition and subtraction happen at the same step. Once the equation is on the page, you can use the dropdown to its right to make tweaks like switching between the Linear and Professional formatting styles. How to Format and Manage Lists in Microsoft Word: no matter how many bulleted or numbered lists you have created with Microsoft Word in your life so far, I bet you will learn something new from this topic. Drawing an Equation With Ink: Office also lets you write out equations by hand, either using your mouse or a touch interface. The number of hours Karen needs to work. But word problems do not have to be the worst part of a math class. My students need help writing equations for word problems. Fortunately, both Microsoft Word and … provide a special set of equation tools that help you write algebraic expressions without much trouble. Microsoft Office has equations that you can quickly insert into your documents. As you type, Word will build up a graphical representation of the equation. She had 84 prescriptions for the two types of drugs. If you plug these unreasonable answers into the equation you used in step 4 and it makes the equation true, then you should re-think the validity of your equation. If you want to see additional examples of linear equations worked out completely, click here. Read the problem carefully and figure out what it is asking you to find. Those are called the Euler-Lagrange equations (plural, because this is actually several equations). The word "times" tells you that you must multiply the variable n by 3, and that the result is equal to … In the same amount of time, Will drove twice as far as Rhonda. Write out your equation in full before you go back to do any edits. So if you really want to see the most accurate equation ever, …
Be very careful with your parentheses here. Stay updated by joining our newsletter. How long should the shorter piece be? The way this is worded indicates that we find the sum first and then multiply. Answer the question in the problem: the problem asks us to find the lowest grade. Example 2: A number n times 3 is equal to … Now we can set up the equation. Step 5: What are we asked to find? By setting up a system and solving it, you can be successful with word problems. Perimeter is the distance all the way around a figure. There are some caveats. There are also Lagrangians for everything from orbiting planets to electromagnetic fields. On touch- and pen-enabled devices you can write equations using a stylus or your finger. Order of Operations Acronyms: the acronyms for order of operations mean you should solve equations in this order, always working left to right in your equation. I loved the thought of this idea. $\mathcal{L}_M$ is the matter Lagrangian, telling you about the sources of gravity. By substituting the Einstein-Hilbert Lagrangian into the principle of least action, one can recover the Einstein Field Equations, which can in principle be solved for the metric. Inline equation in LaTeX with text: Hi, I want to write an inline equation with some text like "Amplitude = * Max_Amp_Of_Signal". How can I do it with LaTeX? (math-mode) Text text text $\some \math \commands$ more text text text. I've got a problem and I should solve it using a differential equation; I don't know how to write the equation and start. A person is trying to fill a bathtub with water. Water is flowing into the bathtub from the tap at a constant rate of k litres/sec. Words Into Math Graphic Organizer and Word Bank. This is the example that I wrote on the board to illustrate how to write an equation and solve it. PDF and Word file of the foldable. Sorry for the upside-down picture! I was trying to post using my iPad. That app needs work!
Write the equation of the line which passes through (2, –1) and is perpendicular to the line with equation 2y – x = … (answered by a verified math tutor or teacher). Write the equation of the line with x-intercept (–10, 0) and undefined slope. Describe in words the graph of each of these curves below; include in your description the … Writing Word Problem Equations – Help! Posted on October 7 by I Speak Math. Write an equation; solve the problem; write the solution in words to see if it makes sense. I had color-coded word problems (by level of difficulty) and used my table-top dry erase frames to put them in.
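The "write an equation, then solve, then check" workflow described above can be sketched in code. The specific numbers below are made-up placeholders (the right-hand side 42, and the "= 0" completing the truncated perpendicular-line problem, which does not affect the slope anyway):

```python
# Word problem (hypothetical numbers): "When 6 is added to four times
# a number, the result is 42. Find the number."  Equation: 4*x + 6 = 42.
result = 42
x = (result - 6) / 4          # undo the +6, then undo the *4
assert 4 * x + 6 == result    # check the solution in the original equation

# Line problem: the line through (2, -1) perpendicular to 2y - x = 0.
# 2y - x = 0 has slope 1/2, so the perpendicular slope is -2
# (the constant term of the given line does not change its slope).
slope = -2
b = -1 - slope * 2            # solve y = m*x + b for b using the point (2, -1)
print(x, f"y = {slope}x + {b}")  # 9.0 y = -2x + 3
```

Checking the answer back in the original equation, as the post recommends, is the `assert` line.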
https://mathematica.stackexchange.com/questions/203904/matex-in-the-labels
# MaTeX in the labels Writing Plot[Sin[2.5 x], {x, 0, 2}, PlotLabel -> Framed["a=1.5" // MaTeX]] I get the expected labeled plot. However, if I just change 1.5 in the label to 2.5: Plot[Sin[2.5 x], {x, 0, 2}, PlotLabel -> Framed["a=2.5" // MaTeX]] I get a broken label instead. What may be the problem here, and how can I solve it? • Quitting and restarting the kernel got rid of this behavior. Yet, what may have caused it in the first place? – Iosif Pinelis Aug 18 '19 at 13:39 • It looks like your system's temporary directory got cleared out for some reason, breaking your running MaTeX session. MaTeX expects its temporary files to persist throughout the session. This has nothing to do with you changing 1.5 to 2.5. You can reset MaTeX's temporary directory by re-loading the package with Get["MaTeX"] (no kernel restart needed, but make sure you use Get and not Needs). – Szabolcs Aug 18 '19 at 13:42 • @Szabolcs: Thank you for your comment. But why was MaTeX OK with 1.5 and not with 2.5 in the labels? – Iosif Pinelis Aug 18 '19 at 13:45 • As I said, it has nothing to do with that. As you yourself said, it happened only once and you can't reproduce it. If you can consistently reproduce it, then something would be wrong (please let me know in that case). – Szabolcs Aug 18 '19 at 13:46 • I still don't get it. If the MaTeX session got broken at some point, why was MaTeX working after that, repeatedly, with 1.5 but not with 2.5? – Iosif Pinelis Aug 18 '19 at 13:50
http://www.ams.org/mathscinet-getitem?mr=1045143
MathSciNet bibliographic data MR1045143 (92d:43003) 43A07. Miao, Tianxuan. Amenability of locally compact groups and subspaces of $L^\infty(G)$. Proc. Amer. Math. Soc. 111 (1991), no. 4, 1075–1084. For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
http://fraggo.it/krax/translation-math.html
### Translation Math Virtual Nerd's patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. But what would happen if our function was changed slightly? Suppose we have the function. Those transformations are linear, that's what 18. Yay Math in Studio returns, with the help of baby daughter, to share some knowledge about parent functions and their transformations. This study is important in improving the quality of living for injured soldiers requiring prosthetic limbs (e. Translation of a figure. Determine how to translate triangle A'B'C' to triangle ABC. Then one day I was daydreaming about music and I realized there is a numeric structure to learning the piano keys that could translate into math. SheLovesMath. Each of these moves is a transformation of the puzzle piece. Every point on the shape is moved the same distance and the shape of the image after translation is the same as the original shape. May 2, 2020 - Explore algebrateacher's board "Math/Transformations", followed by 584 people on Pinterest. Values are A=(-5,-8), B=(-3,-4), C=(-8,-3), D=(-6,1). , Find the new coordinates of (10,12) if you translate the point three units left and 4 units up. Houghton Mifflin Math; Grade 4; eGames; Education Place; Site Index; Copyright © Houghton Mifflin Company. Since linear transformations are represented easily by matrices, the corresponding entity is an augmented matrix, where the. I teach 7th-grade math in a low-income/high poverty school district in which 65% of the students receive free/reduced-price breakfast and lunch. Pick a Transformation and then a factor choice of that transformation. Our study guides are available online and in book form at barnesandnoble. In this section, you will learn different types of geometry transformations. 
In addition to translating the text, a word dictionary lookup is also performed to help you to verify if the translation is correct. Transformations which leave the dimensions of the object and its image unchanged are called isometric transformations. U3D1_S Worksheet Functions Relations D & R. Learn 3D Calculator. Proving Angles Congruent (1) Proving Triangles Similar (1) Proving Triangles Similar (3) Proving Triangles Similar (2) Proving Triangles Similar (4) Proving Angles Congruent (2). In short, a transformation is a copy of a geometric figure, where the copy holds certain properties. 00 / 1 vote). Learning Transformations Transformations Learning Activities: Reflection /Flip Rotation / Turn Translation / Slide Transformations Quiz Translation or Rotation Transformations Worksheets: Translations and Rotations Worksheets. #N#GeoGebra Classic. A worksheet is provided for them to graph both the preimage and image in order to determine the ordered pair(s). I teach 7th-grade math in a low-income/high poverty school district in which 65% of the students receive free/reduced-price breakfast and lunch. ie 18th March 2020; Support for Teaching and Learning 16th March 2020; School Visit Support 4th September 2018. From basic equations to advanced calculus, we explain mathematical concepts and help you ace your next test. Math glossary - definitions with examples. A set of worksheets on making rotations, describing them, finding the order of symmetry and making display work. Rotations in math. AN INTRODUCTION TO TRANSFORMATIONS Ready (Summary) In this lesson, participants are introduced to the concept of transformations through two hands-on activities: a patty paper translation and a rubber band dilation. And (3−1,2+2) → (2,4). trigonometry. The course material is extended to include proof writing, compass and straightedge constructions, coordinate geometry, and transformations. A transformation consisting of a constant offset with no rotation or distortion. 
Interactive math video lesson on Translation: Moving points and shapes in the coordinate plane - and more on geometry. Use the MTP in conjunction with assessment results and gap analyses to download required resources from below. Mar 6, 2017 - Explore c4ss4ndr4c's board "Translation, Rotation, and Reflection Lesson" on Pinterest. Translations of a parabola. JMAP resources include Regents Exams in various formats, Regents Books sorting exam questions by State Standard: Topic, Date, Type and at Random, Regents Worksheets sorting exam questions by State Standard: Topic, Type and at Random, an Algebra I Study Guide, and Algebra I Lesson Plans. Statistics: Anscomb's Quartet example. The act or process of translating, especially from one language into another. • Affine (matrix) transformations: Geometry of computer animation. See also: Flip, Slide, and Turn Worksheets. A geometry translation is an isometric transformation, meaning that the original figure and the image are congruent. The equation of a circle. Reflections, rotations, translations, oh my! Whether you're dealing with points or complete shapes on the coordinate plane, you can spin 'em, flip 'em, or move 'em around to your heart's content. Tracing paper may be used. UCSMP's English translations of foreign books and International Conference Proceedings are a special resource for anyone concerned with teaching or learning mathematics. FISHING A school of fi sh translates from point F to point D. Recent volumes include articles offering timely research by world-class mathematicians. Line segments translate to congruent line segments. Transformations - Logo Project - 17 comments. In this section, you will learn different types of geometry transformations. Geometry Worksheets. They could do these transformations in any order; they just need to document all the transformation rules and label the vertices correctly. 
After any of those transformations (turn, flip or slide), the shape still has the same size, area, angles and line lengths. U3D2_S Warmup D & R. Translation of a Triangle. In module 2, you made a google doc entitled " first name - geometry reflections" and put it in the math folder. Here, their polar and rectangular forms link through vector notation, and the arithmetic of vectors is born. A translation moves a shape from one position to another by sliding. math tutorials > translation review. Geometry Transformation. Learn about quadrilaterals the fun way in this Shapes in Motion Math Game. Virtual Nerd's patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. Translation gives the option to move up,down,left,or right one unit. What is an Isometry? Reflections Explorer. When it comes to my Transformations unit in Geometry, I have a mini-project that I like to use. Complete a number of activities that will test your skills. Translation - MathsPad. U3D1 Worksheet Solutions Functions Relations D & R. Describe the effect of dilations, translations, rotations and reflections on two-dimensional figures using coordinates. If the image moved left and down, the rule will be (x - __, y - __) where the blanks are the distances moved along each axis; for translations left and up: (x. - [Voiceover] Triangle ABC undergoes a translation, and we're using the notation capital T for it, and then we see what the translation has to be. Difference Kinds of Transformations. Learn what a Column Vector is and how to use a Vector to perform a Translation. Japanese Textbook Translations. The chart below is similar to the chart on page 68. Translation worksheets contain a variety of practice pages to translate a point and translate shapes according to the given rules and directions. Translating words into mathematics symbols. Another type of sentence used in algebra is called an inequality. 
A translation operation shifts an image by a specified number of pixels in either the x - or y-direction, or both. Should you actually need assistance with algebra and in particular with translation for math or quiz come visit us at Algebra-calculator. Complete differentiated lesson on translation using vectors. Home Contact About Subject Index. Items such as blacklights, a disco ball, glow sticks, skeeball, and a dartboard will alter the classroom environment to increase student engagement. Types of transformations in math. A transformation is a way of changing the size or position of a shape. Mar 6, 2017 - Explore c4ss4ndr4c's board "Translation, Rotation, and Reflection Lesson" on Pinterest. | Copyright | Feedback |. They are awesome! Easy to follow, lots of practice problems, answer keys that are accurate, and a seller that responds when needed. Hence, one can speak of translates of a function f(x) as functions of the form f(x+c). Learn vocabulary, terms, and more with flashcards, games, and other study tools. What is an Isometry? Reflections Explorer. Math Open Reference. Also, graph the image and find the new coordinates of the vertices of the translated figure in these sheets. In Geometry, "Translation" simply means Moving without rotating, resizing or anything else, just moving. Determine where T' would be if you translated T 3 units to the right and 6 units up. Google's free online language translation service instantly translates text and web pages. To turn "Improve camera input" on or off: On your Android phone or tablet, open the Translate app. Best For: Geometry, Geometry, Math 6, Math 7, Math 8. Animated Gif on Isometry. Learn more in the Cambridge English-French Dictionary. In art, transformations may be created in many ways, but in mathematics, transformations are functions of geometry that involve congruent shapes--shapes that are exactly the same in size and shape. 
Improve your math knowledge with free questions in "Translations: graph the image" and thousands of other math skills. In the accompanying classroom activity. Transformations Free Math Powerpoints for Kids and Teachers. In this educational animated movie about Math learn more about the value of numbers and their distance from zero. Translations of Shapes Date_____ Period____ Graph the image of the figure using the transformation given. As a culminating activity, students will create a classroom quilt. It is equivalent of picking up the shape and putting it down somewhere else. Math for America provides aspiring math teachers a full-tuition scholarship for a master's degree in mathematics education, and a stipend of \$100,000 over five years in addition to a full time teacher's salary. In the animation below, you can see how we actually translate the point by −1 in the x direction and then by +2 in the y direction. Prime Factorization. c++,algorithm,math,recursion. In this tutorial, learn the definition of translation and see some really neat examples. Over 100,000 Italian translations of English words and phrases. R Reflections and Translations Take Home Quiz Graph the image of the figure using the transformation given. U3D1_T Functions Relations D _ R. A translation moves ("slides") an object a fixed distance in a given direction without changing its size or shape, and without turning it or flipping it. Compositions of Transformations - Problem 1. FISHING A school of fi sh translates from point F to point D. Maths revision video and notes on the topic of transforming shapes by rotation, reflection, enlargement and translation; and describing transformations. The teacher can introduce new knowledge about rotations and translations in order to confirm the prompt is true for specific cases of combined reflections. Volumes in this series consist of articles originally published in books and journals in Russia or Japan. 
An interactive tool for demonstrating rotations with matching worksheet for pupils. - First step: Find the new verticals under the transformation which are T3,2. American Mathematical Society Translations: Series 2. Statistics: Linear Regression example. the graph of y = f(x - c) is obtained by translating the graph of y = f(x) c units to the right;. Math (English to English translation). Learn to use notation to describe mapping rules,and graph images given preimage and translation. First, I show different company logos in class and we talk about the different transformations we see in the logos. Play this game to review Pre-algebra. RevisionWorld TV. A translation is a type of transformation that moves each point in a figure the same distance in the same direction. Virtual Nerd's patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. A translation—probably the simplest type of figure transformation in coordinate geometry—is one in which a figure just slides straight to a new location without any tilting or turning. Mathematics (Linear) - 1MA0 TRANSLATION Materials required for examination Items included with question papers Ruler graduated in centimetres and Nil millimetres, protractor, compasses, pen, HB pencil, eraser. Learn high school geometry for free. If we restrict ourselves to mappings within the same space, such as T:Rn→Rn, then T is associated with a square n×n matrix. We show that the full global symmetry groups of all the D-dimensional maximal supergravities can be described in terms of the closure of the internal general coordinate transforma. ie 18th March 2020; Support for Teaching and Learning 16th March 2020; School Visit Support 4th September 2018; Lesson Study Induction Training 4th September 2018; Workshops for Newly Qualified Maths Teachers 1st June 2018. 
A worksheet is provided for them to graph both the preimage and image in order to determine the ordered pair(s). Translating words into mathematics symbols. com, a math practice program for schools and individual families. Transformations Math Lib Activity {Reflections, Translations, Rotations, Dilations} In this activity, students will practice graphing transformations, including reflections, translations, rotations, and dilations. 2 Problem Solving Help 7. Translations & Vectors Worksheets- Includes math lessons, 2 practice sheets, homework sheet, and a quiz!. Rotate an image. Dilation is either from the x-axis or the y-axis. Practice problems here: Note: Use CTRL-F to type in search term. Mathematics has both formal and informal expressions, which we might characterize as “school math” and “street math” (Usiskin, 1996). The interactive tool includes a movable tracing paper overlay. The calculator shows us the following graph for this. Quiz on translations, rotations and reflections. Determine how to translate triangle ABC to triangle A'B'C'. If a figure is moved from one location another location, we say, it is transformation. Improve your math knowledge with free questions in "Translations: find the coordinates" and thousands of other math skills. Create the worksheets you need with Infinite Geometry. For a review of basic features of an exponential graph, click here. If you're behind a web filter, please make sure that the domains *. This page will deal with three rigid transformations known as translations, reflections and rotations. Shape Sequences (Caroline Payne) DOC. Numerous exercises appear throughout the text, many of which have corresponding answers and hints at the back of the book. ® PixiMaths 2017 Proudly created with Wix. Automatic spacing. Learn more in the Cambridge English-French Dictionary. 
AN INTRODUCTION TO TRANSFORMATIONS Ready (Summary) In this lesson, participants are introduced to the concept of transformations through two hands-on activities: a patty paper translation and a rubber band dilation. I’ve been incorporating more art into math this year. Identify line and rotational symmetry. Describe the effect of dilations, translations, rotations and reflections on two-dimensional figures using coordinates. Transformations Math Lib Activity {Reflections, Translations, Rotations, Dilations} In this activity, students will practice graphing transformations, including reflections, translations, rotations, and dilations. Guide TranStar through the perilous voids of space in search of a new home. Study the free resources during your math revision and pass your next math exam. Updated: Jan 4, 2020. Practice problems here: Note: Use CTRL-F to type in search term. The Pee Dee Math, Science and Technology Academy does not discriminate on the basis of race, color, national origin, sex, disability, age, religion, or immigrant status in its programs and activities and provides equal access to the Boy Scouts and other designated youth groups. com, a math practice program for schools and individual families. Translating words into mathematics symbols. Automatic spacing. And (3−1,2+2) → (2,4). Math (English to English translation). A set of worksheets on making rotations, describing them, finding the order of symmetry and making display work. mathematician. The vertex of a parabola. We have a set of notes/practice worksheets from the county office of ed (really great!), an FAL ( Transforming 2d figures ), and a common test. A transformation refers to the four various ways wherein the shape of a point, a line or an object is manipulated. Activity 14. Double Reflection of an object. 
She Loves Math covers Basic Math, Pre-Algebra, Beginning Algebra, Intermediate Algebra, Advanced Algebra, Pre-Calculus, Trigonometry, and Calculus. When it comes to my transformations unit in geometry, I have a mini-project that I like to use. A translation, probably the simplest type of figure transformation in coordinate geometry, is one in which a figure just slides straight to a new location without any tilting or turning. In a dilation the shape becomes bigger or smaller; in a rotation one shape becomes another using only turns. Translating words into mathematics symbols, part 1: translate each of the following into its most likely mathematical meaning and indicate it with the appropriate symbol: addition (+), subtraction (−), multiplication (×), division (÷), power (x^y), unknown (?), equals (=), or parentheses ( ). Translation is an example of a transformation. Standard: given a geometric figure and a rotation, reflection, or translation, draw the transformed figure using, e.g., transparencies and geometry software. When we take a function and tweak its rule so that its graph is moved to another spot on the axis system, yet remains recognizably the same graph, we are said to be "translating" the function. Compare transformations that preserve distance and angle to those that do not (e.g., translation versus horizontal stretch).
In the animation below, you can see how we translate the point by −1 in the x direction and then by +2 in the y direction. The following worksheet lets you practice multiple transformations; you should already know how to do translations (slides), reflections (flips, as with a mirror), and rotations (spins or turns). To translate up, down, left, or right, move each vertex of the figure as required, and then redraw the lines between the vertices. This page includes geometry worksheets on angles, coordinate geometry, triangles, quadrilaterals, transformations, and three-dimensional geometry, plus a set of worksheets teaching the different types of shape movements: translation, rotation, and reflection. A translation is basically sliding a shape, curve, or point in a direction by a certain amount.
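The slide described here, moving a point by −1 in x and then by +2 in y, is easy to express in code. A hedged Python sketch (the helper name is mine, not from any of the worksheets):

```python
def translate(point, vector):
    """Slide a point by a translation vector; no tilting or turning."""
    x, y = point
    dx, dy = vector
    return (x + dx, y + dy)

# T(-1, +2) applied to A(3, 2): (3 - 1, 2 + 2) = (2, 4)
image = translate((3, 2), (-1, 2))
```

Translating every vertex of a figure by the same vector translates the whole figure.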
Geometry is a branch of mathematics that deals with the measurement, properties, and relationships of points, lines, angles, surfaces, and solids; broadly, it is the study of the properties of given elements that remain invariant under specified transformations. In coordinate terms, a translation is a transformation in which the origin of a coordinate system is moved to another position while the directions of the axes stay the same. Square OABC is drawn on a centimetre grid for practice. In these lessons, we learn the different types of transformation in math: translation, reflection, rotation, and dilation; transformation involves moving an object from its original position to a new position. For example, y = x² can be translated along the x-axis. Exercise (a): reflect in the x-axis, then shift upward 4 units. The teacher can introduce new knowledge about rotations and translations in order to confirm that the prompt is true for specific cases of combined reflections. Standard 10(C): explain the effect of translations and reflections.
Mathematical transformations describe how two-dimensional figures move around a plane or coordinate system. Quiz question: where would A' be if the translation were (x, y) → (x + 3, y − 2)? A linear transformation can be associated with an m×n matrix. The image is usually labeled using a prime symbol, such as A'B'C'. To translate a shape, you can move it up or down or from side to side, but you cannot change its appearance in any other way. Under a translation, parallel or perpendicular lines map to parallel or perpendicular lines. A two-dimensional figure is similar to another if the second can be obtained from the first by a sequence of rotations, reflections, translations, and dilations.
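The remark that a linear transformation can be associated with a matrix can be illustrated concretely. A sketch under the assumption of a 2×2 matrix acting on a point written as a column vector (the names are illustrative, not from any lesson above):

```python
def apply_matrix(matrix, point):
    """Multiply a 2x2 transformation matrix by a point treated as a column vector."""
    (a, b), (c, d) = matrix
    x, y = point
    return (a * x + b * y, c * x + d * y)

# Reflection over the y-axis is the linear map (x, y) -> (-x, y).
REFLECT_Y = ((-1, 0), (0, 1))
```

The same helper handles rotations: the matrix ((0, -1), (1, 0)) rotates a point 90° counterclockwise about the origin.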
Play this game to review transformations in geometry. Also, graph the image and find the new coordinates of the vertices of the translated figure in these sheets. In an x-y coordinate plane, if you define a function y = f(x), a translation of this function is y₂ = f(x − a) + b, where a is the amount moved right and b is the amount moved up on the graph. Graph and describe transformations of a shape on a grid. Many artists throughout history have used techniques of transformation, such as symmetry and tessellation, in their pieces. One definition of "to translate" is "to change from one place, state, form, or appearance to another." A math word wall can feature six posters on reflection, translation, and rotation (or slide, flip, and turn). Keywords for mathematical operations: you need to be able to translate words into mathematical symbols, focusing on keywords that indicate the procedures required to solve the problem, both the operation and the order of the expression.
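The rule y₂ = f(x − a) + b can be demonstrated with a small Python example (a sketch; the names f, a, and b follow the paragraph above, the helper name is mine):

```python
def translate_function(f, a, b):
    """Return the function whose graph is f's graph moved a units right and b units up."""
    return lambda x: f(x - a) + b

# Shift y = x^2 right 5 and down 3: the vertex moves from (0, 0) to (5, -3).
g = translate_function(lambda x: x ** 2, 5, -3)
```

Negative a or b shifts left or down, matching the sign conventions in the rule.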
Given a point and a definition of a translation, plot the translation on a coordinate plane or identify the coordinates of the translated point. Translate the geometric shapes and colour them in as per the instructions to see what the picture reveals. The figure shows the parabola y = x² with a translation 5 units up and a translation 7 units down; on the right is its translation to a "new origin" at (3, 4). In the context of geometry, a translation is a type of transformation in which a figure moves along a line; in the figure above, the figure is translated horizontally. Sample vertex values: A = (−5, −8), B = (−3, −4), C = (−8, −3), D = (−6, 1).
Transformations and Invariant Points (Higher), GCSE Maths question of the week: your students may be the kings and queens of reflections, rotations, translations and enlargements, but how will they cope with the new concept of invariant points, the points that map to themselves under a transformation? Example of a transformed function: f(x) = 3|x + 2| − 7. After any of those transformations (turn, flip or slide), the shape still has the same size, area, angles, and line lengths. If the image moved left and down, the rule will be (x − __, y − __), where the blanks are the distances moved along each axis; for translations left and up: (x − __, y + __). A translation α (also known as a "slide") is a bijective map of the plane onto itself.
This was Euler's first major work, running to some 500 pages in the original, and it included many of his innovative ideas on analysis. A complete, differentiated lesson covers translation using vectors. Describe the translation for this notation: (x, y) → (x + 8, y + 4). In deep learning, convolution plus max pooling gives approximate translation invariance. As the animation shows, a translation of T(−1, +2) on the point A with coordinates (3, 2) produces an image at (2, 4): (3 − 1, 2 + 2) → (2, 4). Perform a reflection, translation, or rotation with each traced image. The mathematical way to write a translation is the following: (x, y) → (x + 5, y − 3), because you have moved five units in the positive x direction and three units in the negative y direction. If your translation vector is (200, 0), then the coordinates of B' relative to B are (200, 0).
In the software demo, the Z value is set to 0 in both the original and final line positions. This is the second lesson on translations. For the mini-project, I first show different company logos in class and we talk about the different transformations we see in the logos. Practice tasks: translate +2 units horizontally and −2 units vertically; reflect in the y-axis; rotate 90° clockwise about (0, 2); reflect in the line y = x; rotate 180° clockwise about (0, 0); rotate 90° counterclockwise about (2, 0); rotate 90° clockwise about (0, 0); reflect in the line y = −x.
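The reflections in the practice list (over the axes and the lines y = x and y = −x) all have simple coordinate rules. A hedged Python sketch (the dictionary keys are my own labels):

```python
def reflect(point, line):
    """Reflect a point over one of four standard lines."""
    x, y = point
    rules = {
        "x-axis": (x, -y),   # (x, y) -> (x, -y)
        "y-axis": (-x, y),   # (x, y) -> (-x, y)
        "y=x": (y, x),       # swap coordinates
        "y=-x": (-y, -x),    # swap and negate
    }
    return rules[line]
```

Reflections about points other than the origin, like those in the rotation tasks above, can be built by translating to the origin, reflecting, and translating back.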
The Transformations of Graphs section of MATH 1330 Precalculus includes Example Problem 5: start with the function f(x) = |x|, and write the function which results from the given transformations. For complex numbers, polar and rectangular forms link through vector notation, and the arithmetic of vectors is born. Exercise: write translation, reflection, or rotation for each transformation shown below. When the situation involves subtraction, be sure to put the minus symbol before the value being subtracted. This is a complete translation of one of Euler's most important books. In tessellation activities, students apply knowledge of reflections, rotations, and translations in creating a tessellation. Make a foldable to help you organize the types of transformations. Coordinate geometry is opposed to the classical synthetic approach of Euclidean geometry, which focuses on proving theorems.
The rotation tool gives the option to rotate clockwise or counterclockwise, by 90 or 180 degrees. A geometry translation is an isometric transformation, meaning that the original figure and the image are congruent. The reflection is the same size as the original figure. If W and A are two coordinate frames, the pose of A in W is given by the translation from W's origin to A's origin together with the rotation of A's coordinate axes relative to W. Students will understand the concepts of reflection and translation as transformations of points, lines, and objects. An accompanying game teaches kids the concepts of reflection, rotation, and translation in an innovative way, and further lessons cover graphing and properties of the root function and the reciprocal function.
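The claim that a rotation or translation is an isometry, so the image is congruent to the original, can be spot-checked numerically. A Python sketch (helper names are mine):

```python
import math

def rotate90_ccw(point):
    """Rotate a point 90 degrees counterclockwise about the origin: (x, y) -> (-y, x)."""
    x, y = point
    return (-y, x)

def distance(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# An isometry preserves distances, so segment lengths are unchanged.
a, b = (3, 2), (7, -1)
same_length = distance(a, b) == distance(rotate90_ccw(a), rotate90_ccw(b))
```

The same check applied to a dilation with scale factor k ≠ 1 would fail, which is exactly why dilations are not isometries.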
When you are doing a translation, the primary object is called the pre-image, and the object after the translation is called the image. Find the vertices on the graph and connect the points one by one until you get the shape. Another area where transformational geometry is commonly used is art. The absolute value graphs are shown in Figure 10. Animation requirements: your animation can be of whatever you like, but your main objective must be to show an object being transformed. Here are the most common types: a translation is when we slide a figure in any direction. Exercises: determine where T' would be if you translated T 3 units to the right and 6 units up; find the new coordinates of (10, 12) if you translate the point three units left and 4 units up. To translate an object is to move it with no other change. Attempt every question.
You may need to slide, flip, or turn the robots to fit. Point reflection over a point maps each point through a fixed center. Japanese mathematics textbooks used in grades 7-9 are available in translation from UCSMP. Sal shows how to perform a translation on a triangle using an interactive widget. A mixed review of problems for middle school and high school students on translation, reflection, and rotation, with exercises to identify the type of transformation, transform shapes, and write the coordinates of the transformed shapes, is included in these PDF worksheets. The lesson also considers compositions of transformations. This packet should help a learner seeking to understand transformations of geometric figures.
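Point reflection over a point has the coordinate rule (x, y) → (2a − x, 2b − y) for center (a, b), which is the same as a 180° rotation about the center. A small Python sketch (names are illustrative):

```python
def reflect_through_point(point, center):
    """Point reflection: a half-turn (180 degrees) about a fixed center."""
    x, y = point
    cx, cy = center
    return (2 * cx - x, 2 * cy - y)

# Reflecting (3, 2) through the origin gives (-3, -2).
image = reflect_through_point((3, 2), (0, 0))
```

Applying the map twice returns the original point, so a point reflection is its own inverse.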
Learn how to perform a combination of transformations by watching free maths videos and working example questions. To find the image of a point, we multiply the transformation matrix by a column vector that represents the point's coordinates. In art, transformations may be created in many ways, but in mathematics, transformations are functions of geometry that involve congruent shapes, shapes that are exactly the same in size and shape. Function g(x) is a transformed version of function f(x). Work with the art teacher to design a project involving geometric transformations; I had the students draw a pre-image and then accurately translate, reflect, and rotate it into images. Translation test: you will translate points, segments, and triangles in the coordinate plane and determine the coordinates of the translated figures; the test has ten problems that check how well you can determine the correct coordinates of figures that have been translated.
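Composing transformations corresponds to multiplying their matrices, with the rightmost factor applied first. A hedged Python sketch using plain tuples (no claim that any particular lesson implements it this way):

```python
def matmul2(m, n):
    """Product of two 2x2 matrices; applying the result = apply n first, then m."""
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

# Reflect over the x-axis, then over the y-axis: the composite is a 180-degree rotation.
REFLECT_X = ((1, 0), (0, -1))
REFLECT_Y = ((-1, 0), (0, 1))
HALF_TURN = matmul2(REFLECT_Y, REFLECT_X)
```

This is one concrete instance of the group structure mentioned later: composing two reflections yields another transformation in the same family.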
Represent transformations in the plane using, e.g., transparencies and geometry software; describe transformations as functions that take points in the plane as inputs and give other points as outputs. Dynamically interact with and see the result of a translation transformation. This module discusses the same subject as 2D transformations, but uses a coordinate system with three axes as a basis. Translating a figure can be thought of as "sliding" the original. Materials: graph paper or an individual whiteboard with a coordinate plane. Ask students to bring pictures that show examples of rotations used in designs such as wallpaper, floor tiles, and artwork. A translation of E015, Book I of Euler's Mechanica, has been completed. A polygon in which all angles are congruent is an equiangular polygon. We can quickly identify from the function that the "base" function is g(x) = |x|, and that there has been a vertical stretch by a factor of 3, a shift left of 2 units, and a downward shift of 7 units. A step-by-step tutorial presents the properties of transformations of graphs of functions, such as vertical and horizontal translations (shifts), scaling, and reflections over the x-axis and y-axis.
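The transformation just described (vertical stretch by 3, shift left 2, shift down 7) yields f(x) = 3|x + 2| − 7, assuming the base function is the absolute value g(x) = |x|, which is my reading of the source and consistent with the absolute-value graphs it mentions. A quick numerical check in Python:

```python
def f(x):
    """g(x) = |x| stretched vertically by 3, shifted left 2 units and down 7 units."""
    return 3 * abs(x + 2) - 7

# The vertex of |x| at (0, 0) moves to (-2, -7).
vertex_value = f(-2)
```

The graph stays symmetric about its new axis x = −2, so points equidistant from −2 give equal values.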
Vectors are used to describe translations, and a transformation maps a figure onto its image. Translation tools typically give the option to move up, down, left, or right one unit at a time. In short, a transformation is a copy of a geometric figure, where the copy holds certain properties. Try to predict what will happen before applying each move. The object in the new position is called the image; it may seem a bit surprising, but instead of sliding a figure to a new location, we speak of sending each point of the figure to a corresponding image point, and this process is called mapping. Any image in a plane can be altered by using different operations, or transformations.
In the first lesson, students understand some of the features that are preserved in translations; in this lesson, they extend that understanding and reason about other features such as perimeter, angle measure, and area. In geometry, translation means moving a shape into a different position without changing it in any other way. A video lesson covers parent functions and their transformations. Worksheets on translating verbal phrases give practice converting words into algebraic expressions. A diagonal of a polygon is a segment that connects two non-consecutive vertices. A transformation is a process that manipulates a polygon or other two-dimensional object on a plane or coordinate system. Until now, standards in the geometry domain have been either supporting or additional to the major standards. Being a dancer involves a lot of movement, including turns, flips, and slides. If you add to the y-coordinate, the figure moves up.
"Translation" Classroom Activities Short description: Learn the meaning of the geometric term "translation" and see several examples in this animated "Math Shorts" video. Statistics: Linear Regression example. Then one day I was daydreaming about music and I realized there is a numeric structure to learning the piano keys that could translate into math. Instructions Use black ink or ball-point pen. Look up words and phrases in comprehensive, reliable bilingual dictionaries and search through billions of online translations. Prime Factorization. In Geometry, "Translation" simply means Moving without rotating, resizing or anything else, just moving. The initial point and terminal point of the translation vector are irrelevant. Multi-Digit Addition. BCMS 8th Grade Math. Math See also the Internet Library: graphing equations HIGH SCHOOL About Math Analysis Algebra basic algebra equations/graphs/ translations linear algebra linear equations polynomials Calculus Complex Numbers Calculators/ Computers Definitions Discrete Math permutations/ combinations Exponents Logarithms Fibonacci. Math (English to English translation). Multi-Digit Multiplication Pt. Call upon the power of exotic space transformation phenomena to reflect, rotate, translate and enlarge TranStar into the safety of the StarGate. It may seem a bit surprising, but instead of sliding a figure to a new location, […]. Good luck and have fun!. 2 : Jun 20, 2013, 1:44 PM: Brian Pike. Congruent Angles: Definition. They learn that a transformation is a mapping of the plane onto itself, and some of the basic properties of isometries and dilations. A transformation is a way of changing the size or position of a shape. Statistics: 4th Order Polynomial example. If you subtract from the y-coordinate, the figure will go down. grade 7 transformations worksheet new dilation math worksheets. This mathematical topic can also be explored in other subjects. Rotational and Reflection Symmetry. 
The way transformations can be classified and combined is an example of a very important mathematical structure known as a group. Rotate a shape with and without a centre of rotation. A preimage or inverse image is the two-dimensional shape before any transformation. Geometry: the part of mathematics concerned with the properties and relationships between points, lines, surfaces, and solids. Certain topological dynamical systems are considered that arise from actions of σ-compact locally compact Abelian groups on compact spaces of translation bounded measures. A Prezi explaining the different types of transformations!
https://www.zbmath.org/authors/?q=ai%3Aingram.patrick
## Ingram, Patrick Author ID: ingram.patrick Published as: Ingram, Patrick External Links: MGP Documents Indexed: 35 Publications since 2005 Reviewing Activity: 20 Reviews Co-Authors: 18 Co-Authors with 11 Joint Publications 369 Co-Co-Authors ### Co-Authors 24 single-authored 5 Silverman, Joseph Hillel 4 Jones, Rafe 3 Manes, Michelle 2 Benedetto, Robert L. 2 Everest, Graham Robert 2 Hutz, Benjamin 2 Levy, Alon 2 Mahé, Valéry 2 Stevens, Shaun 2 Tucker, Thomas John 1 Bennett, Michael A. 1 Bridy, Andrew 1 Faber, Xander 1 Juul, Jamie 1 Rubinstein-Salzedo, Simon 1 Stange, Katherine E. 1 Streng, Marco 1 Zieve, Michael E. ### Serials 4 IMRN. International Mathematics Research Notices 3 Canadian Mathematical Bulletin 2 Acta Arithmetica 2 Canadian Journal of Mathematics 2 Duke Mathematical Journal 2 Journal of Number Theory 2 Proceedings of the London Mathematical Society. Third Series 2 Transactions of the American Mathematical Society 2 Mathematical Research Letters 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 Rocky Mountain Journal of Mathematics 1 Bulletin of the London Mathematical Society 1 Journal of the London Mathematical Society. Second Series 1 Journal für die Reine und Angewandte Mathematik 1 Mathematische Zeitschrift 1 Monatshefte für Mathematik 1 Comptes Rendus Mathématiques de l’Académie des Sciences 1 Bulletin of the American Mathematical Society. 
New Series 1 Journal de Théorie des Nombres de Bordeaux 1 LMS Journal of Computation and Mathematics 1 Journal of the Australian Mathematical Society 1 Algebra & Number Theory all top 5 ### Fields 26 Number theory (11-XX) 21 Dynamical systems and ergodic theory (37-XX) 8 Algebraic geometry (14-XX) 1 Functions of a complex variable (30-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Ordinary differential equations (34-XX) 1 Difference and functional equations (39-XX) ### Citations contained in zbMATH Open 30 Publications have been cited 177 times in 122 Documents Cited by Year Primitive divisors in arithmetic dynamics. Zbl 1242.11012 Ingram, Patrick; Silverman, Joseph H. 2009 Elliptic divisibility sequences over certain curves. Zbl 1170.11010 Ingram, Patrick 2007 On Poonen’s conjecture concerning rational preperiodic points of quadratic maps. Zbl 1316.37042 Hutz, Benjamin; Ingram, Patrick 2013 Uniform estimates for primitive divisors in elliptic divisibility sequences. Zbl 1276.11092 Ingram, Patrick; Silverman, Joseph H. 2012 Lower bounds on the canonical height associated to the morphism $$\phi(z)= z^d+c$$. Zbl 1239.11071 Ingram, Patrick 2009 Current trends and open problems in arithmetic dynamics. Zbl 1468.37001 Benedetto, Robert; Ingram, Patrick; Jones, Rafe; Manes, Michelle; Silverman, Joseph H.; Tucker, Thomas J. 2019 A finiteness result for post-critically finite polynomials. Zbl 1333.37029 Ingram, Patrick 2012 Attracting cycles in $$p$$-adic dynamics and height bounds for postcritically finite maps. Zbl 1323.37058 Benedetto, Robert; Ingram, Patrick; Jones, Rafe; Levy, Alon 2014 Canonical heights for Hénon maps. Zbl 1323.37059 Ingram, Patrick 2014 Arboreal Galois representations and uniformization of polynomial dynamics. Zbl 1280.37058 Ingram, Patrick 2013 Multiples of integral points on elliptic curves. Zbl 1233.11062 Ingram, Patrick 2009 Algebraic divisibility sequences over function fields. 
Zbl 1251.11008 Ingram, Patrick; Mahé, Valéry; Silverman, Joseph H.; Stange, Katherine E.; Streng, Marco 2012 Variation of the canonical height for a family of polynomials. Zbl 1297.14028 Ingram, Patrick 2013 On $$k$$-th power numerical centres. Zbl 1138.11315 Ingram, Patrick 2005 A quantitative primitive divisor result for points on elliptic curves. Zbl 1208.11073 Ingram, Patrick 2009 Uniform bounds on pre-images under quadratic dynamical systems. Zbl 1222.11086 Faber, Xander; Hutz, Benjamin; Ingram, Patrick; Jones, Rafe; Manes, Michelle; Tucker, Thomas J.; Zieve, Michael E. 2009 A lower bound for the canonical height associated to a Drinfeld module. Zbl 1386.11077 Ingram, Patrick 2014 Finite ramification for preimage fields of post-critically finite morphisms. Zbl 1391.14004 Bridy, Andrew; Ingram, Patrick; Jones, Rafe; Juul, Jamie; Levy, Alon; Manes, Michelle; Rubinstein-Salzedo, Simon; Silverman, Joseph H. 2017 Diophantine analysis and torsion on elliptic curves. Zbl 1117.11033 Ingram, Patrick 2007 Torsion subgroups of elliptic curves in short Weierstrass form. Zbl 1136.11311 Bennett, Michael A.; Ingram, Patrick 2005 Primitive divisors on twists of Fermat’s cubic. Zbl 1252.11049 Everest, Graham; Ingram, Patrick; Stevens, Shaun 2009 The uniform primality conjecture for elliptic curves. Zbl 1246.11117 Everest, Graham; Ingram, Patrick; Mahé, Valéry; Stevens, Shaun 2008 The critical height is a moduli height. Zbl 1395.37068 Ingram, Patrick 2018 $$p$$-adic uniformization and the action of Galois on certain affine correspondences. Zbl 1435.37109 Ingram, Patrick 2018 Critical orbits of polynomials with a periodic point of specified multiplier. Zbl 1415.37114 Ingram, Patrick 2019 Canonical heights for correspondences. Zbl 1410.37094 Ingram, Patrick 2019 Specializations of elliptic surfaces, and divisibility in the Mordell-Weil group. Zbl 1244.11061 Ingram, Patrick 2011 Rigidity and height bounds for certain post-critically finite endomorphisms of $$\mathbb{P}^N$$. 
Zbl 1391.37077 Ingram, Patrick 2016 Variation of the canonical height for polynomials in several variables. Zbl 1333.14024 Ingram, Patrick 2015 Canonical heights and preperiodic points for certain weighted homogeneous families of polynomials. Zbl 1456.37108 Ingram, Patrick 2019 Current trends and open problems in arithmetic dynamics. Zbl 1468.37001 Benedetto, Robert; Ingram, Patrick; Jones, Rafe; Manes, Michelle; Silverman, Joseph H.; Tucker, Thomas J. 2019 Critical orbits of polynomials with a periodic point of specified multiplier. Zbl 1415.37114 Ingram, Patrick 2019 Canonical heights for correspondences. Zbl 1410.37094 Ingram, Patrick 2019 Canonical heights and preperiodic points for certain weighted homogeneous families of polynomials. Zbl 1456.37108 Ingram, Patrick 2019 The critical height is a moduli height. Zbl 1395.37068 Ingram, Patrick 2018 $$p$$-adic uniformization and the action of Galois on certain affine correspondences. Zbl 1435.37109 Ingram, Patrick 2018 Finite ramification for preimage fields of post-critically finite morphisms. Zbl 1391.14004 Bridy, Andrew; Ingram, Patrick; Jones, Rafe; Juul, Jamie; Levy, Alon; Manes, Michelle; Rubinstein-Salzedo, Simon; Silverman, Joseph H. 2017 Rigidity and height bounds for certain post-critically finite endomorphisms of $$\mathbb{P}^N$$. Zbl 1391.37077 Ingram, Patrick 2016 Variation of the canonical height for polynomials in several variables. Zbl 1333.14024 Ingram, Patrick 2015 Attracting cycles in $$p$$-adic dynamics and height bounds for postcritically finite maps. Zbl 1323.37058 Benedetto, Robert; Ingram, Patrick; Jones, Rafe; Levy, Alon 2014 Canonical heights for Hénon maps. Zbl 1323.37059 Ingram, Patrick 2014 A lower bound for the canonical height associated to a Drinfeld module. Zbl 1386.11077 Ingram, Patrick 2014 On Poonen’s conjecture concerning rational preperiodic points of quadratic maps. 
Zbl 1316.37042 Hutz, Benjamin; Ingram, Patrick 2013 Arboreal Galois representations and uniformization of polynomial dynamics. Zbl 1280.37058 Ingram, Patrick 2013 Variation of the canonical height for a family of polynomials. Zbl 1297.14028 Ingram, Patrick 2013 Uniform estimates for primitive divisors in elliptic divisibility sequences. Zbl 1276.11092 Ingram, Patrick; Silverman, Joseph H. 2012 A finiteness result for post-critically finite polynomials. Zbl 1333.37029 Ingram, Patrick 2012 Algebraic divisibility sequences over function fields. Zbl 1251.11008 Ingram, Patrick; Mahé, Valéry; Silverman, Joseph H.; Stange, Katherine E.; Streng, Marco 2012 Specializations of elliptic surfaces, and divisibility in the Mordell-Weil group. Zbl 1244.11061 Ingram, Patrick 2011 Primitive divisors in arithmetic dynamics. Zbl 1242.11012 Ingram, Patrick; Silverman, Joseph H. 2009 Lower bounds on the canonical height associated to the morphism $$\phi(z)= z^d+c$$. Zbl 1239.11071 Ingram, Patrick 2009 Multiples of integral points on elliptic curves. Zbl 1233.11062 Ingram, Patrick 2009 A quantitative primitive divisor result for points on elliptic curves. Zbl 1208.11073 Ingram, Patrick 2009 Uniform bounds on pre-images under quadratic dynamical systems. Zbl 1222.11086 Faber, Xander; Hutz, Benjamin; Ingram, Patrick; Jones, Rafe; Manes, Michelle; Tucker, Thomas J.; Zieve, Michael E. 2009 Primitive divisors on twists of Fermat’s cubic. Zbl 1252.11049 Everest, Graham; Ingram, Patrick; Stevens, Shaun 2009 The uniform primality conjecture for elliptic curves. Zbl 1246.11117 Everest, Graham; Ingram, Patrick; Mahé, Valéry; Stevens, Shaun 2008 Elliptic divisibility sequences over certain curves. Zbl 1170.11010 Ingram, Patrick 2007 Diophantine analysis and torsion on elliptic curves. Zbl 1117.11033 Ingram, Patrick 2007 On $$k$$-th power numerical centres. Zbl 1138.11315 Ingram, Patrick 2005 Torsion subgroups of elliptic curves in short Weierstrass form. 
Zbl 1136.11311 Bennett, Michael A.; Ingram, Patrick 2005 all top 5 ### Cited by 134 Authors 13 Ingram, Patrick 9 Hindes, Wade 8 Silverman, Joseph Hillel 7 Tucker, Thomas John 6 Doyle, John R. 6 Ghioca, Dragos 5 Jones, Rafe 4 Bridy, Andrew 4 Faber, Xander 4 Hutz, Benjamin 4 Looper, Nicole R. 4 Ostafe, Alina 4 Tornero, José María 4 Yabuta, Minoru 3 Benedetto, Robert L. 3 Canci, Jung Kyu 3 Gauthier, Thomas 3 González-Jiménez, Enrique 3 Manes, Michelle 3 Rout, Sudhansu Sekhar 3 Shparlinski, Igor E. 3 Stoll, Michael 3 Vishkautsan, Solomon 3 Voutier, Paul M. 2 DeMark, David 2 Favre, Charles 2 García-Selfa, Irene 2 Granville, Andrew James 2 Hsia, Liang-Chung 2 Lee, Chong Gyu 2 Levy, Alon 2 Mahé, Valéry 2 Nguyen, Khoa Dang 2 Reynolds, Jonathan 2 Rumely, Robert S. 2 Stange, Katherine E. 2 Streng, Marco 2 Troncoso, Sebastian 2 Vigny, Gabriel 2 Yasufuku, Yu 1 Akiyama, Shigeki 1 Allen, Kenneth R. 1 Bandeira, Luís 1 Bennett, Michael A. 1 Bérczes, Attila 1 Binegar, Skye 1 Bouw, Irene I. 1 Carter, Annie 1 Chang, Mei-Chu 1 Chen, Ruqian 1 Chern, Shane 1 Cho, Ilwoo 1 Correia Ramos, Carlos 1 D’Andrea, Carlos 1 DeMarco, Laura Grace 1 Dominick, Randy 1 Dujardin, Romain 1 Ďuriš, Viliam 1 Ejder, Özlem 1 Everest, Graham Robert 1 Faber, X. W. C. 1 Ferraguti, Andrea 1 Flatters, Anthony 1 Fujita, Yasutsugu Fujita 1 Galateau, Aurélien 1 Gassert, Thomas Alden 1 Ghadermarzi, Amir 1 Goksel, Vefa 1 Gomez-Perez, Domingo 1 Gregor, Aryeh 1 Gužvić, Tomislav 1 Han, Minsik 1 Huguin, Valentin 1 Hyde, Trevor 1 Jacobs, Kenneth S. 1 Jedrzejak, Tomasz 1 Ji, Qingzhong 1 Jones, Gareth A. 1 Jørgensen, Palle E. T. 
1 Juul, Jamie 1 Kalaycı, Tekgül 1 Karemaker, Valentijn 1 Kenney, Meagan 1 Kovacheva, Yordanka 1 Krieger, Holly 1 Kwon, Hyejin 1 Lalín, Matilde Noemí 1 Levin, Aaron 1 Liptai, Kálmán 1 Luca, Florian 1 Mérai, László 1 Micheli, Giacomo 1 Miller, Alison Beth 1 Misplon, Moses 1 Mocz, Lucia 1 Najman, Filip 1 Nara, Tadahisa 1 Naskrȩcki, Bartosz 1 Okuyama, Yûsuke 1 Olsiewski Healey, Vivian ...and 34 more Authors all top 5 ### Cited in 56 Serials 17 Journal of Number Theory 10 Transactions of the American Mathematical Society 8 Acta Arithmetica 6 International Journal of Number Theory 5 Proceedings of the American Mathematical Society 5 Journal de Théorie des Nombres de Bordeaux 4 Mathematical Proceedings of the Cambridge Philosophical Society 3 Mathematische Zeitschrift 2 Rocky Mountain Journal of Mathematics 2 Mathematics of Computation 2 Canadian Mathematical Bulletin 2 Duke Mathematical Journal 2 Mathematische Annalen 2 Ergodic Theory and Dynamical Systems 2 IMRN. International Mathematics Research Notices 2 The New York Journal of Mathematics 2 LMS Journal of Computation and Mathematics 2 Journal of the European Mathematical Society (JEMS) 2 Integers 2 Comptes Rendus. Mathématique. Académie des Sciences, Paris 2 Algebra & Number Theory 2 Research in Number Theory 1 American Mathematical Monthly 1 Israel Journal of Mathematics 1 Periodica Mathematica Hungarica 1 Advances in Mathematics 1 Archiv der Mathematik 1 Functiones et Approximatio. Commentarii Mathematici 1 Illinois Journal of Mathematics 1 Inventiones Mathematicae 1 Journal of Algebra 1 Journal of Pure and Applied Algebra 1 Journal für die Reine und Angewandte Mathematik 1 Manuscripta Mathematica 1 Monatshefte für Mathematik 1 Proceedings of the Japan Academy. Series A 1 Proceedings of the London Mathematical Society. Third Series 1 Revista Matemática Iberoamericana 1 Forum Mathematicum 1 Bulletin of the American Mathematical Society. New Series 1 Indagationes Mathematicae. 
New Series 1 Experimental Mathematics 1 Finite Fields and their Applications 1 Selecta Mathematica. New Series 1 Opuscula Mathematica 1 Discrete and Continuous Dynamical Systems 1 Conformal Geometry and Dynamics 1 Journal of the Australian Mathematical Society 1 Journal of Fixed Point Theory and Applications 1 Journal of Modern Dynamics 1 Involve 1 Asian-European Journal of Mathematics 1 Cryptography and Communications 1 S$$\vec{\text{e}}$$MA Journal 1 Arnold Mathematical Journal 1 Journal of Numbers all top 5 ### Cited in 14 Fields 98 Number theory (11-XX) 80 Dynamical systems and ergodic theory (37-XX) 32 Algebraic geometry (14-XX) 8 Field theory and polynomials (12-XX) 3 Several complex variables and analytic spaces (32-XX) 2 Commutative algebra (13-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Group theory and generalizations (20-XX) 1 Category theory; homological algebra (18-XX) 1 Real functions (26-XX) 1 Functions of a complex variable (30-XX) 1 Functional analysis (46-XX) 1 Operator theory (47-XX) 1 Probability theory and stochastic processes (60-XX)
https://ggforce.data-imaginist.com/reference/geom_arc_bar.html
This set of stats and geoms makes it possible to draw arcs and wedges as known from pie and donut charts as well as more specialized plot types such as sunburst plots. stat_arc_bar( mapping = NULL, data = NULL, geom = "arc_bar", position = "identity", n = 360, na.rm = FALSE, show.legend = NA, inherit.aes = TRUE, ... ) stat_pie( mapping = NULL, data = NULL, geom = "arc_bar", position = "identity", n = 360, sep = 0, na.rm = FALSE, show.legend = NA, inherit.aes = TRUE, ... ) geom_arc_bar( mapping = NULL, data = NULL, stat = "arc_bar", position = "identity", n = 360, expand = 0, na.rm = FALSE, show.legend = NA, inherit.aes = TRUE, ... ) ## Arguments mapping Set of aesthetic mappings created by aes(). If specified and inherit.aes = TRUE (the default), it is combined with the default mapping at the top level of the plot. You must supply mapping if there is no plot mapping. data The data to be displayed in this layer. There are three options: If NULL, the default, the data is inherited from the plot data as specified in the call to ggplot(). A data.frame, or other object, will override the plot data. All objects will be fortified to produce a data frame. See fortify() for which variables will be created. A function will be called with a single argument, the plot data. The return value must be a data.frame, and will be used as the layer data. A function can be created from a formula (e.g. ~ head(.x, 10)). geom The geometric object to use to display the data, either as a ggproto Geom subclass or as a string naming the geom stripped of the geom_ prefix (e.g. "point" rather than "geom_point") position Position adjustment, either as a string naming the adjustment (e.g. "jitter" to use position_jitter), or the result of a call to a position adjustment function. Use the latter if you need to change the settings of the adjustment. n The number of points used to draw a full circle. 
The number of points on each arc will then be calculated as n / span-of-arc na.rm If FALSE, the default, missing values are removed with a warning. If TRUE, missing values are silently removed. show.legend logical. Should this layer be included in the legends? NA, the default, includes if any aesthetics are mapped. FALSE never includes, and TRUE always includes. It can also be a named logical vector to finely select the aesthetics to display. inherit.aes If FALSE, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions that define both data and aesthetics and shouldn't inherit behaviour from the default plot specification, e.g. borders(). ... Other arguments passed on to layer(). These are often aesthetics, used to set an aesthetic to a fixed value, like colour = "red" or size = 3. They may also be parameters to the paired geom/stat. sep The separation between arcs in pie/donut charts stat The statistical transformation to use on the data for this layer, either as a ggproto Stat subclass or as a string naming the stat stripped of the stat_ prefix (e.g. "count" rather than "stat_count") expand A numeric or unit vector of length one, specifying the expansion amount. Negative values will result in contraction instead. If the value is given as a numeric it will be understood as a proportion of the plot area width. radius As expand but specifying the corner radius. ## Details An arc bar is the thick version of an arc; that is, a circle segment drawn as a polygon in the same way as a rectangle is a thick version of a line. A wedge is a special case of an arc bar where the inner radius is 0. As opposed to applying coord_polar to a stacked bar chart, these layers are drawn in Cartesian space, which allows for transformations not possible with the native ggplot2 approach. Most notable of these is the option to explode arcs and wedges away from their center point, thus detaching them from the main pie/donut. 
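The "circle segment drawn as a polygon" idea from the Details section can be sketched outside of R. The snippet below is a language-agnostic illustration, not the package's actual implementation, and it assumes ggforce's angle convention (radians, measured clockwise from 12 o'clock):

```python
import math

# Sketch: an arc bar is a circle segment triangulated as a polygon.
# The outer edge is sampled from the start to the end angle, then the
# inner edge is traced back, giving a closed ring.

def arc_bar_polygon(x0, y0, r0, r, start, end, n=360):
    # points on this arc ~ n / span-of-arc, mirroring the `n` argument above
    m = max(2, round(n * (end - start) / (2 * math.pi)))
    angles = [start + (end - start) * i / (m - 1) for i in range(m)]
    # clockwise from 12 o'clock: x uses sin, y uses cos (assumed convention)
    outer = [(x0 + r * math.sin(a), y0 + r * math.cos(a)) for a in angles]
    inner = [(x0 + r0 * math.sin(a), y0 + r0 * math.cos(a)) for a in reversed(angles)]
    return outer + inner

# A quarter ring (inner radius 1, outer radius 2) from 12 o'clock to 3 o'clock
poly = arc_bar_polygon(0, 0, 1, 2, 0, math.pi / 2)
```

Note how the number of sampled points scales with the angular span, matching the n / span-of-arc rule described for the n argument.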
## Aesthetics geom_arc_bar understands the following aesthetics (required aesthetics are in bold): • x0 • y0 • r0 • r • start - when using stat_arc_bar • end - when using stat_arc_bar • amount - when using stat_pie • explode • color • fill • linewidth • linetype • alpha ## Computed variables x, y x and y coordinates for the polygon ## See also geom_arc() for drawing arcs as lines ## Examples # If you know the angle spans to plot it is easy arcs <- data.frame( start = seq(0, 2 * pi, length.out = 11)[-11], end = seq(0, 2 * pi, length.out = 11)[-1], r = rep(1:2, 5) ) # Behold the arcs ggplot(arcs) + geom_arc_bar(aes(x0 = 0, y0 = 0, r0 = r - 1, r = r, start = start, end = end, fill = r)) # geom_arc_bar uses geom_shape to draw the arcs, so you have all the # possibilities of that as well, e.g. rounding of corners ggplot(arcs) + geom_arc_bar(aes(x0 = 0, y0 = 0, r0 = r - 1, r = r, start = start, end = end, fill = r), radius = unit(4, 'mm')) # If you got values for a pie chart, use stat_pie states <- c( 'eaten', "eaten but said you didn\'t", 'cat took it', 'for tonight', 'will decompose slowly' ) pie <- data.frame( state = factor(rep(states, 2), levels = states), type = rep(c('Pie', 'Donut'), each = 5), r0 = rep(c(0, 0.8), each = 5), focus = rep(c(0.2, 0, 0, 0, 0), 2), amount = c(4, 3, 1, 1.5, 6, 6, 1, 2, 3, 2) ) # Look at the cakes ggplot() + geom_arc_bar(aes( x0 = 0, y0 = 0, r0 = r0, r = 1, amount = amount, fill = state, explode = focus ), data = pie, stat = 'pie' ) + facet_wrap(~type, ncol = 1) + coord_fixed() + theme_no_axes() + scale_fill_brewer('', type = 'qual')
http://math.stackexchange.com/questions/141946/can-a-function-grow-too-fast-to-be-real-analytic
# Can a function “grow too fast” to be real analytic? Does there exist a continuous function $\: f : \mathbf{R} \to \mathbf{R} \:$ such that for all real analytic functions $\: g : \mathbf{R} \to \mathbf{R} \:$, for all real numbers $x$, there exists a real number $y$ such that $\: x < y \:$ and $\: g(y) < f(y) \:$? - This question has been asked several times over on mathoverflow, and I believe also on this site. Someone who is better than I at searching should be able to find the past answers. – Pete L. Clark May 6 '12 at 22:15 Has the version with "real analytic" replaced with "real entire" also been asked? – Ricky Demer May 6 '12 at 22:16 If by "real entire" you mean "entire, and real on $\mathbb R$", then the interpolation proof provides such a function. – Robert Israel May 6 '12 at 22:23 Well, I would tend to define "real entire" as "expressible (on $\mathbf{R}$) as a globally convergent power series with real coefficients", but I do know that that is equivalent to what you gave. – Ricky Demer May 6 '12 at 22:27 fixed ${}{}{}\:$ – Ricky Demer May 8 '12 at 3:46 No. Only if you require $g$ or its coefficients to be computable. Suppose there is such an $f$, then we could just pick the points $(n,\,1+\sup\{ f(z) : n-1<z<n+1\})$, for $n=1,2,3,\ldots$ and interpolate. - How is it proven that real analytic interpolation is possible? (I'm about to edit to impose continuity on $f$.) – Ricky Demer May 6 '12 at 22:08 It's usually done as a corollary of Mittag-Leffler's theorem and the Weierstrass factorization theorem. But see also jstor.org/stable/2370666 – Robert Israel May 6 '12 at 22:21 Recently, I was reading Hardy's Orders of Infinity (available here or here): Godfrey Harold Hardy. Orders of infinity. The Infinitärcalcül of Paul du Bois-Reymond. Reprint of the 1910 edition. Cambridge Tracts in Mathematics and Mathematical Physics, No. 12. Hafner Publishing Co., New York, 1971. MR0349922 (50 #2415). 
The book discusses this result, so I figured it may be worth adding some comments: Theorem (Poincaré). For any continuous increasing $\phi:\mathbb R\to\mathbb R$ we can always find a real analytic function $f:\mathbb R\to\mathbb R$ such that $\displaystyle \lim_{x\to\infty}\frac{f(x)}{|\phi(x)|}=+\infty$. This was published in the American Journal of Mathematics, vol. 14, p. 214. Hardy presents a proof due to Borel, in Leçons sur les séries à termes positifs, p. 27: We may replace $\phi$ with an increasing function $\Phi$ that is always positive, is pointwise larger than $\phi$, and tends to infinity, and proceed to define $f$ and show that $f/\Phi\to\infty$. Take an increasing sequence of numbers $a_n\to\infty$, and another sequence $b_n$ with $$a_1<b_2<a_2<b_3<a_3<\dots,$$ and define $$f(x)=\sum_{n\ge 1}\left(\frac x{b_n}\right)^{\nu_n},$$ where the positive integers $\nu_n$ are strictly increasing, and satisfy $\displaystyle \left(\frac{a_n}{b_n}\right)^{\nu_n}>\Phi^2(a_{n+1})$. Then $f$ is entire and satisfies the required property. In detail: The series converges because, given any positive $x$, the $n$-th root of the $n$-th term is at most $x/b_n\to 0$. If $x\in[a_n,a_{n+1})$, then $f(x)>(a_n/b_n)^{\nu_n}$, so $$f(x)>\Phi^2(a_{n+1})>\Phi^2(x).$$ It follows that $f/\Phi^2\ge 1$ for $x\ge a_1$, and since $\Phi(x)\to\infty$, then also $f/\Phi\to\infty$, as wanted. Hardy mentions this while discussing a result of du Bois-Reymond: Given functions $f,g\to\infty$, positive, and increasing, write $f\succ g$ iff $f/g\to\infty$. Theorem (du Bois-Reymond). Given any "ascending scale" $(f_n)_{n\in\mathbb N}$, that is, a sequence of functions $f_n:\mathbb R\to\mathbb R$, all positive and increasing to infinity, and such that $f_1\prec f_2\prec f_3\dots$, there is a function $f$ that increases faster than any function in the scale, that is, such that $f\succ f_n$ for all $n$. 
This result was generalized by several authors, beginning with Hadamard, and eventually led to Hausdorff's work on what we now call Hausdorff gaps. - Just take $f(x) = \tan(x)$ (defining $f(x) = 0$, say, when $x$ is an integer multiple of $\pi/2$). But this has nothing to do with "growing too fast". - (This answer was posted before I added continuity to the question.) – Ricky Demer May 6 '12 at 22:11
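Borel's construction is concrete enough to check numerically. The sketch below makes specific choices that are not in the answer: Φ(x) = eˣ, a_n = n, b_1 = 1/2, b_n = n − 1/2 for n ≥ 2, and exponents ν_n just large enough that (a_n/b_n)^{ν_n} > Φ(a_{n+1})², so that f dominates Φ² on each interval [a_n, a_{n+1}):

```python
import math

# Numeric sketch of Borel's construction with assumed concrete choices:
# Phi(x) = e^x, a_n = n, b_1 = 1/2, b_n = n - 1/2 for n >= 2.

def b(n):
    return 0.5 if n == 1 else n - 0.5

def nu(n):
    # smallest exponent with (a_n/b_n)^nu_n > Phi(a_{n+1})^2 = e^{2(n+1)}
    return math.ceil(2 * (n + 1) / math.log(n / b(n))) + 1

def f(x, terms=40):
    # truncating the entire series is harmless here: once b_n > x, the
    # terms (x/b_n)^{nu_n} underflow toward 0 very quickly
    return sum((x / b(n)) ** nu(n) for n in range(1, terms + 1))

# On [a_n, a_{n+1}) the n-th term alone exceeds e^{2(n+1)} >= Phi(x)^2,
# so f/Phi^2 >= 1 (hence f/Phi -> infinity) from a_1 = 1 onwards.
for x in [1.0, 1.5, 2.0, 3.7, 5.0]:
    assert f(x) > math.exp(2 * x)
```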
https://cs.stackexchange.com/questions/6521/reduce-the-following-problem-to-sat/6522
# Reduce the following problem to SAT Here is the problem. Given $k, n, T_1, \ldots, T_m$, where each $T_i \subseteq \{1, \ldots, n\}$. Is there a subset $S \subseteq \{1, \ldots, n\}$ with size at most $k$ such that $S \cap T_i \neq \emptyset$ for all $i$? I am trying to reduce this problem to SAT. My idea of a solution would be to have a variable $x_i$ for each of 1 to $n$. For each $T_i$, create a clause $(x_{i_1} \vee \cdots \vee x_{i_k})$ if $T_i = \{i_1, \ldots, i_k\}$. Then and all these clauses together. But this clearly isn't a complete solution as it does not represent the constraint that $S$ must have at most $k$ elements. I know that I must create more variables, but I'm simply not sure how. So I have two questions: 1. Is my idea of solution on the right track? 2. How should the new variables be created so that they can be used to represent the cardinality $k$ constraint? • Just a remark: Your problem is known as HITTING SET, which is an equivalent formulation of the SET COVER problem. – A.Schulz Nov 7 '12 at 8:22 It looks like you are trying to compute a hypergraph transversal of size $k$. That is, $\{T_1,\dots,T_m\}$ is your hypergraph, and $S$ is your transversal. A standard translation is to express the clauses as you have, and then translate the length restriction into a cardinality constraint. So use your existing encoding, i.e., $\bigwedge_{1 \le j \le m} \bigvee_{i \in T_j} x_i$ and then add clauses encoding $\sum_{1 \le i \le n} x_i \le k$. $\sum_{1 \le i \le n} x_i \le k$ is a cardinality constraint. There are various different cardinality constraint translations into SAT. The simplest but rather large cardinality constraint translation is just $\bigwedge_{X \subseteq \{1,\dots,n\}, |X| = k+1} \bigvee_{i \in X} \neg x_i$. In this way each disjunction represents the constraint $\neg \bigwedge_{i \in X} x_i$ - for all subsets $X$ of $\{1,\dots,n\}$ of size k+1. That is, we ensure that there is no way that more than k variables can be set. 
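The reduction as described (covering clauses plus the naive at-most-k clauses) can be sketched end to end; the brute-force satisfiability check below stands in for a real SAT solver and is only meant for tiny instances:

```python
from itertools import combinations, product

# Sketch of the hitting-set-to-SAT reduction: one covering clause per T_i,
# plus one all-negative clause per subset of k+1 variables (naive encoding).
# Literals: +i stands for x_i, -i for its negation.

def hitting_set_cnf(n, k, sets):
    clauses = [set(T) for T in sets]                 # S must intersect every T_i
    for X in combinations(range(1, n + 1), k + 1):   # forbid k+1 true variables
        clauses.append({-i for i in X})
    return clauses

def brute_force_sat(n, clauses):
    # exhaustive check over all 2^n assignments; illustration only
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n)
    )

# {2, 3} hits {1,2}, {2,3} and {3,4}, but no single element hits all three
sets = [{1, 2}, {2, 3}, {3, 4}]
assert brute_force_sat(4, hitting_set_cnf(4, 2, sets))
assert not brute_force_sat(4, hitting_set_cnf(4, 1, sets))
```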
Note that this naive translation is not polynomial size in $k$. Some links to papers on more space-efficient cardinality constraint translations, which are polynomial size in $k$:

If you are actually interested in solving such problems, perhaps it is better to formulate them as pseudo-boolean problems (see the wiki article on pseudo-boolean problems) and use pseudo-boolean solvers (see the pseudo-boolean competition). That way the cardinality constraints are just pseudo-boolean constraints and are part of the language; hopefully the pseudo-boolean solver then handles them directly and therefore more efficiently.

• Please describe all links shortly (at least author and title) so people can find the documents should the links break. It's probably best to use DOI if available. – Raphael Nov 7 '12 at 10:29
• @Raphael Good point! Apologies, I should have done that to begin with. I've now updated all the links; I'm not sure if Springer provides DOIs, but there should be enough information now to find them if the links break. Note: I don't link to the official PDFs from Springer to avoid access problems. – MGwynne Nov 7 '12 at 12:53
• But it seems that the reduction you gave is not in polynomial time, right? – Aden Dong Nov 7 '12 at 15:58
• @AdenDong You said nothing about polynomial ;). The simple cardinality constraint translation I mention is not polynomial in $k$ (but is for fixed $k$). The cardinality constraint translations given in the papers I list are polynomial in $k$ - using new variables. I've updated my answer to make this clearer. – MGwynne Nov 7 '12 at 16:46
• MGwynne, I tend to always link the official DOI even if it is paywalled in order to be future-proof, and free versions additionally. But as it is now, anybody should be able to find the papers, so it's completely fine. – Raphael Nov 7 '12 at 17:28

If you're not absolutely set on the normal SAT, your idea is already a reduction to MIN-ONES (over positive CNF formulae), which is basically SAT, but where you can set at most $k$ variables to true (strictly it's the optimization version, where we minimize the number of true variables). Similarly, if you head in a Parameterized Complexity direction, then you've already basically got WSAT($\Gamma_{2,1}^{+}$), where $\Gamma_{2,1}^{+}$ is the class of all positive CNF formulae (same as before, the notation might help your investigations though). In this case you'd have to start looking at what parameterization would be useful in your case. I assume you're looking for an explicit reduction, but if not, you can always just fall back to the Cook-Levin Theorem.
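To make the naive translation concrete, here is a small Python sketch (my own addition, not from the thread; the helper names `hitting_set_to_cnf` and `brute_force_sat` are invented for illustration). It builds the coverage clauses plus the binomial at-most-$k$ clauses and checks a tiny instance by brute force:

```python
from itertools import combinations, product

def hitting_set_to_cnf(n, k, sets):
    """CNF for: is there S within {1..n}, |S| <= k, intersecting every T_i?
    Variable i (positive literal) means "i is in S"."""
    clauses = [sorted(t) for t in sets]  # coverage: each T_i must be hit
    # naive cardinality constraint: no (k+1)-subset may be entirely true
    clauses += [[-i for i in c] for c in combinations(range(1, n + 1), k + 1)]
    return clauses

def brute_force_sat(n, clauses):
    """Tiny satisfiability check by enumerating all assignments."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses):
            return bits
    return None

# {1,2},{2,3},{3,4} has the hitting set {2,3} of size 2 but none of size 1
sets = [{1, 2}, {2, 3}, {3, 4}]
assert brute_force_sat(4, hitting_set_to_cnf(4, 2, sets)) is not None
assert brute_force_sat(4, hitting_set_to_cnf(4, 1, sets)) is None
```

For realistic instances one would of course hand the clauses to a proper SAT solver rather than brute-force them.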
https://math.stackexchange.com/questions/2961819/when-do-i-apply-the-distributive-property
# When do I apply the distributive property?

I'm a bit lost here.... Equation 1: $(5p − 6) + (1 − p)$. Shouldn't I apply the distributive property here, by distributing the '$+$' sign into $(1 - p)$ to give $(1 + p)$? If that is the case, then the expression reworked is $(5p - 6) + 1 + p = 6p - 5$, right? But the book has a different answer: $4p - 5$ instead.... Deductively examining where I went wrong, it seems the '$+$' sign isn't distributed, and thus the $p$ in $(1 - p)$ didn't change into a positive.

If the book has the right answer, then this prompts my title question: when do we use the distributive property? Consider the following expression: $−10 − 4(n − 5)$; the $-4$ is distributed into $n$ and $-5$.... If I'm seeing how the formula is worded, what's the difference between this and the case above? Don't they both prompt a distributive-property cycle? The above case just has an invisible $+1$, right? I got all the wrong answers on this part of my math test lol, but I'm determined to know why.

• Why do you think that $(1-p)$ will become $(1+p)$? $(5p − 6) + (1 − p) = 5p-6+1-p$. – Mauro ALLEGRANZA Oct 19 '18 at 9:26
• I thought the + has an invisible 1 that it can distribute... is this not true? But then again, whenever a positive times a negative it'll still be a negative anyway, so p won't turn positive, I believe; just figured this out now lol – Moorease Oct 19 '18 at 9:32
• $1(1 - p) = 1 \cdot 1 + 1 \cdot (-p) = 1 - p$. – N. F. Taussig Oct 19 '18 at 9:45
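Both expressions can be checked mechanically. The following sketch (my own addition, assuming the sympy library is available) confirms that distributing the invisible $+1$ changes nothing, while a coefficient of $-4$ really does flip signs:

```python
import sympy as sp

p, n = sp.symbols('p n')

# (5p - 6) + (1 - p): the "+" carries an invisible coefficient 1,
# and 1*(1 - p) = 1 - p, so no sign flips; the book's 4p - 5 is right.
assert sp.expand((5*p - 6) + (1 - p)) == 4*p - 5

# -10 - 4(n - 5): here the coefficient is -4, so both n and -5 flip sign.
assert sp.expand(-10 - 4*(n - 5)) == 10 - 4*n
```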
http://clay6.com/qa/26807/find-the-equation-of-parabola-whose-focus-4-0-vertex-0-0-
Find the equation of the parabola whose focus is $(-4,0)$ and vertex is $(0,0)$.

$\begin{array}{1 1}(a)\;y_1^2=-15x_1\\(b)\;y_1^2=-16x_1\\(c)\;y_1=-15x_1^2\\(d)\;y_1^3=-15x_1^2\end{array}$

Let $Z=(x,y)$ be the point where the directrix meets the axis of the parabola. The vertex is the midpoint of the focus $S=(-4,0)$ and $Z$:

$0=\frac{-4+x}{2}$, so $x=4$; and $0=\frac{0+y}{2}$, so $y=0$.

Since the axis of the parabola is horizontal, the directrix is the vertical line $x=4$.

Let $P(x_1,y_1)$ be a point on the parabola, and let $M$ be the foot of the perpendicular from $P$ to the directrix. Then $\frac{SP}{PM}=1$, i.e. $SP=PM$:

$\sqrt{(x_1+4)^2+(y_1-0)^2}=|x_1-4|$

$SP^2=PM^2$

$(x_1+4)^2+y_1^2=(x_1-4)^2$

$y_1^2=-16x_1$

Hence (b) is the correct answer.
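As a quick sanity check on answer (b), the defining property (distance to the focus equals distance to the directrix) can be verified numerically. This small Python snippet is my own addition, not part of the original solution:

```python
import math

# Parabola y^2 = -16x, focus S = (-4, 0), directrix x = 4.
for y1 in [0.0, 2.0, -6.0, 10.0]:
    x1 = -y1**2 / 16                    # point P(x1, y1) on the parabola
    sp_dist = math.hypot(x1 + 4, y1)    # SP: distance to the focus
    pm_dist = abs(x1 - 4)               # PM: distance to the directrix
    assert math.isclose(sp_dist, pm_dist)
```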
https://www.physicsforums.com/threads/pickens-plan-alternative-energy.244102/
# Pickens Plan - alternative energy

## Should the US government provide Pickens with the money and resources they need?

47.4% 21.1% 15.8% 31.6%

1. Jul 8, 2008

### taylaron

"The Pickens Plan"

I've started this thread because I recently heard about a billionaire's plan to utilize the alternative energy resources which the United States can provide, and I am interested in what other people think about it. This effort is to help solve or drastically reduce the United States' dependency on foreign oil, mainly by utilizing wind and natural gas sources. Pretty much all the information you need is on their website (below).

The main website's link is here: http://www.pickensplan.org/

A general-information YouTube video is here; there is also a pretty good one on their site (above): http://youtube.com/user/pickensplan

Input from some professionals regarding their opinion on alternative energy and/or solutions to the world energy crisis would be greatly appreciated. Thanks

Last edited by a moderator: May 3, 2017

2. Jul 9, 2008

### OmCheeto

Beat ya to it by almost two hours. https://www.physicsforums.com/showpost.php?p=1795573&postcount=106

For me it seems a no-brainer. But then again, I'm not a professional anything. I've only been "schooled" in thermodynamics, nuclear engineering, economics, electrical engineering, computer science, materials sciences, foreign language, electrical power transmission, physics, 7 terms of calculus, 1 class of philosophy, read my sister's college-level psychology text when I was 14, and have an IQ of 160. As I mentioned, energy independence is a no-brainer. Modifying people's behavior to achieve such a thing is the greatest challenge, IMHO. Aspects of this question have been discussed from many points of view over the last few months: Green Homes. There may be a lot more. I've not been here long.

3. Jul 9, 2008

### taylaron

Thanks, it's hard to find these threads... even using PF's search engine.
Then when you do find it you're like... durr, why didn't I think of that.

4. Jul 9, 2008

### taylaron

It's nice to see some people with bucks willing to spend some......... hopefully more will turn out like Pickens...

5. Jul 9, 2008

### OmCheeto

I would ignore the PF search engine and use either Google or Yahoo. Their spiders are fighting for world domination. I've found that if you type something unique in the forum, sometimes it shows up just minutes later on the two search engines. Try "Lesbian auto mechanics repair Schwarzenegger's Noggin". In quotes, of course.

I doubt he will lose a penny on the venture. Wind and solar may seem expensive, but the long-term payoff is almost a sure thing. But you never know, someone might invent something like cold fusion in a couple of years. Then all the naysayers can say "See! Told ya it was a stupid idea!" But I doubt it.

6. Jul 9, 2008

### taylaron

I'm praying, Chetto, I'm praying.....

7. Jul 10, 2008

### OmCheeto

Did you try the google bot test? It worked.

Oh, and by the way, the Pickens Plan isn't all that innovative. It's just that now it's getting to be more than just an environmental issue. My electrical utility has had a "clean wind" option available for years. I pay an extra $3.50 a month and they take the money and buy their fancy windmills. I'm not sure if you saw my post last week where I mentioned that one of the wind farms was producing so much energy, they had to flip the switch as the power lines were at maximum capacity.

"So, for the first time, BPA power managers began calling wind-farm operators with orders to curtail power generation."

I thought I was going to cry. TOO MUCH ENERGY! How serendipitous that our measly pittance to save the salmon would one day be a piece in an energy independence puzzle.

Last edited by a moderator: May 3, 2017

8. Jul 10, 2008

### vanesch

Staff Emeritus

But that's exactly the problem: too much energy one hour, not enough the next...

9.
Jul 10, 2008

### Astronuc

### Staff: Mentor

BPA should send the excess energy to California, and displace some of the generation from gas turbines, which cycle more rapidly than hydropower. If one believes in the market place, the demand is there, so the federal government does not need to be subsidizing energy generators.

10. Jul 10, 2008

### FredGarvin

I'd like to see someone divert the obscene amount of energy used for the big lift to get water to southern California. Put some energy into making that area self-sufficient in water and the country could save a very large amount of energy.

11. Jul 10, 2008

### OmCheeto

How many liquid-solid-gas hydrocarbon, hydroelectric, nuclear, wind, and solar plants are there? One simply turns down the output of the dirty plants when the clean outputs are operating. It's called load shifting. I used to do it all the time. And it's not like it's a square wave or something.

When I heard the news on the radio, that was the situation they stated. The power lines to California were maxed out. Well...... maybe not the generators. But the deep pockets of the Feds might get the transmission lines up for California a bit faster. Something like the works projects they had during the depression. Although I'm not a commy or a socialist, the market place hasn't always struck me as having the national interest at heart. If we'd waited for the market place to get us into space, we'd never have gone. Pickens' plan is fine, but it is just one of a number of mega-projects that should have been started years ago.

12. Jul 10, 2008

### Astronuc

### Staff: Mentor

I have yet to see a truly free market. I do notice that prices seem to be the same, and that there is little competition. And certainly Enron and others manipulated the market by withholding supply until the California market was desperate to pay many times the normal price. In NY, there was a move to deregulate with the idea that electricity would become less expensive through competition.
The local utilities sold their generation and became strictly T&D. In theory, I could buy electricity from any provider and then pay a T&D fee to the local utility. However, the cheap electricity is far away and there were essentially no savings. The financial companies and lawyers made millions of dollars doing deals, but the consumers did not save anything. Some people who switched ended up paying more, and when the grid went down, we were without power for a couple of days, even though the local utility's grid was attached to several power plants. They should have been able to isolate the local area and provide power, but thanks to deregulation and restructuring, that wasn't possible.

13. Jul 10, 2008

### Staff: Mentor

What I don't understand is why the mid-west gets horrific floods yet the aquifers and California are dry. Why can't the flood waters be collected and diverted to the aquifer or to California? In S. California, they ought to use solar thermal desalination plants way down south from LA down to San Diego. Every time I fly to SD, I see aqueducts going through the desert. That makes absolutely no sense to me.

14. Jul 10, 2008

### taylaron

A big issue here is being able to store and/or transmit that energy to where it is needed... I'm no expert, Astronuc, but my guess is that there is too much water coming down from the mountains too fast to either store or divert, resulting in floods. Fixing this is not a small undertaking and would cost tens of millions. And I also agree with vanesch about supply and demand.

Oh boy, if we came up with a brilliant way of mass-producing effective energy storage, we wouldn't have many of these problems we have today. It's just an issue of someone willing to spend a lot of money to fund the research. I think we should be putting more and more into this, knowing that it is a blockade for technology in a big way.

Last edited by a moderator: May 3, 2017

15. Jul 12, 2008

### Ivan Seeking

Staff Emeritus
http://www.mcs.st-and.ac.uk/~miket/PureColl/pure_coll_index.html
### Programme of the Pure Mathematics Colloquium

The colloquium takes place on Thursdays at 4pm in Theatre C of the Mathematical Institute (unless indicated otherwise).

• 28th Sept, 2017: Dalia Terhesiu (Exeter) Title: Renewal sequences in Markov chains and dynamical systems Abstract: In the first part of the talk I recall the notion of renewal sequences associated with Markov chains and explain the connection with mixing. In the second part of the talk I discuss how renewal sequences can be understood in the context of (deterministic) dynamical systems, including dynamical systems with infinite measure, and summarise some recent results on mixing.
• 5th Oct, 2017: Sophie Huczynska (St Andrews) Title: TBA Abstract: TBA
• 12th Oct, 2017: Vaibhav Gadre (Glasgow) Title: Pseudo-Anosov maps with small entropy and the curve complex Abstract: The mapping class group of an orientable surface (of finite type) is the group of orientation-preserving diffeomorphisms of the surface modulo isotopy. There are three types of mapping classes (Thurston classification): finite order, reducible and pseudo-Anosov, generalising the classification for the modular group $SL(2,\mathbb{Z})$. From multiple perspectives, pseudo-Anosov maps are the most interesting type. This talk will survey the theory of pseudo-Anosov maps with small entropy. It will subsequently focus on deriving bounds in terms of genus for a particular notion of entropy: "translation distance in the curve complex". The main result is joint work with Chia-yen Tsai.
• 19th Oct, 2017: DOUBLE BILL: Tara Brough (Nova de Lisboa) Title: TBA Abstract: TBA; Michael Giudici (Western Australia) Title: TBA Abstract: TBA
• 2nd Nov, 2017: Daniel Meyer (Liverpool) Title: TBA Abstract: TBA
• 9th Nov, 2017: Sarah Hart (Birkbeck) Title: TBA Abstract: TBA
• 14th Nov, 2017: Joint Analysis Seminar and Pure Mathematics Colloquium, 3-4pm, Room 1A: Christian Berg (Copenhagen) Title: TBA Abstract: TBA
• 16th Nov, 2017: Viveka Erlandsson (Bristol) Title: TBA
• 23rd Nov, 2017: Anitha Thillaisundaram (Lincoln) Title: TBA Abstract: TBA

Past colloquia can be found here.
https://xyz5261941.wordpress.com/2015/02/05/conformal-killing-fields-on-minkowski-spaces/
# Conformal Killing fields on Minkowski spaces

A vector field $X$ on a manifold $M$ (Riemannian or Lorentzian) corresponds to a process of deformation of this manifold. As in fluid mechanics, there is a corresponding deformation tensor $D_iX_j+D_jX_i$. If the connection on this manifold is the Levi-Civita connection, then we can show that this tensor is just $L_Xg$, the Lie derivative of the metric tensor with respect to the vector field $X$: $(D_iX_j+D_jX_i)e^ie^j=L_Xg$. And we call $X$ a Killing field if this tensor is identically zero, $D_iX_j+D_jX_i=0$. In other words, the deformation caused by a Killing vector field has no strain: no stretching, no shearing, no dilatation (the interpretation in fluid mechanics), or simply is an isometry. We can, for some nice spaces, even calculate the dimension of the vector space spanned by all the Killing vector fields on this manifold. A general fact is that this dimension is no greater than $n(n+1)/2$, where $n$ is the dimension of the space. In general, Killing fields are very hard to find, so sometimes we only require that the deformation be a conformal diffeomorphism, which means that the pull-back of the metric is the metric itself multiplied by a positive scalar function. Suppose $\phi_t$ is the local group of deformations determined by the vector field $X$; then we say that $X$ is a conformal Killing field if $\phi^*_t(g)=F(t)^2g$, where $F:M\times (-\epsilon,\epsilon)\rightarrow \mathbb{R}$ is a smooth positive function. Taking the derivative with respect to $t$ on both sides at $t=0$, we get that $L_Xg=2F(0)F'(0)g$. This means that if $X$ is only a conformal Killing field, then the deformation is a dilatation. The reverse is also true: if $L_Xg=ag$ ($a\in\mathbb{R}$), then $\frac{d}{dt}\phi^*_tg=\phi_t^*(L_Xg)=\phi^*_t(a)\phi^*_tg$, and we can solve this differential equation, $\phi_t^*g=e^{\int_0^t\phi^*_s(a)ds}\phi^*_0g=e^{\int^t_0\phi^*_s(a)ds}g$, which is obviously a conformal diffeomorphism.
In this post, we will work out all the conformal Killing fields on $\mathbb{R}^{1+n}$, the Minkowski space with the metric $m$ ($m_{00}=-1$). Suppose that $X$ is a conformal Killing field, that is, $D_{\alpha}X_{\beta}+D_{\beta}X_{\alpha}=L_Xm_{\alpha\beta}=Fm_{\alpha\beta}$. So we get that $D_{\gamma}D_{\alpha}X_{\beta}+D_{\gamma}D_{\beta}X_{\alpha}=D_{\gamma}Fm_{\alpha\beta}$. After the permutations of the indices $(\gamma,\alpha,\beta)\rightarrow(\alpha,\beta,\gamma)\rightarrow(\beta,\gamma,\alpha)$, add the first two and subtract the third one; using the very important, essential fact that in $\mathbb{R}^{1+n}$, $D_{\alpha}D_{\beta}=D_{\beta}D_{\alpha}$, the result is $D_{\gamma}D_{\alpha}X_{\beta}=\frac{1}{2}(D_{\gamma}Fm_{\alpha\beta}+D_{\alpha}Fm_{\beta\gamma}-D_{\beta}Fm_{\gamma\alpha})$. Contracting the indices $\gamma$ and $\alpha$ (and relabelling $\beta$ as $\gamma$), we get that $\Box X_{\gamma}=\frac{1}{2}(1+1-(n+1))D_{\gamma}F=-\frac{n-1}{2}D_{\gamma}F$. Contracting the indices in $D_{\alpha}X_{\beta}+D_{\beta}X_{\alpha}=Fm_{\alpha\beta}$, we get that $D^{\alpha}X_{\alpha}=\frac{n+1}{2}F$. Then using the last two formulae, we get that $\Box F=0$. From this, we see that $-\frac{n-1}{2}D_{\alpha}D_{\beta}F=\Box D_{\beta}X_{\alpha}=\frac{1}{2}\Box (D_{\alpha}X_{\beta}+D_{\beta}X_{\alpha})=\frac{1}{2}\Box Fm_{\alpha\beta}=0$. In other words, the second derivatives of $F$ are identically zero, which means that $F$ is an affine function of the $x^{\alpha}$: $F=a+b_{\alpha}x^{\alpha}$. But $D_{\alpha}X_{\beta}+D_{\beta}X_{\alpha}=Fm_{\alpha\beta}$, which means that each $X_{\alpha}$ is at most a quadratic function of the $x^{\beta}$. Intuitively, what are the possible conformal transformations of $\mathbb{R}^{1+n}$? The translations, the orthogonal transformations, the dilatations, and the inversions. Note that an inversion itself is not a one-parameter group of conformal transformations, say parametrized by $t\in(-\epsilon,+\epsilon)$, yet it can act by conjugation on the other one-parameter conformal groups.
We can show that, for example, the Killing field corresponding to the translations is $T=a^{\alpha}\frac{\partial}{\partial x^{\alpha}}$, the Killing field corresponding to the orthogonal rotations is $R=x_{\alpha}\frac{\partial}{\partial x^{\beta}}-x_{\beta}\frac{\partial}{\partial x^{\alpha}}$ (with lowered indices $x_{\alpha}=m_{\alpha\gamma}x^{\gamma}$), the conformal Killing field corresponding to the dilatation is $D=x^{\alpha}\frac{\partial}{\partial x^{\alpha}}$, and the conformal Killing field corresponding to a translation conjugated by the inversion is $I_{\alpha}=2x_{\alpha}x^{\beta}\frac{\partial}{\partial x^{\beta}}-x^{\beta}x_{\beta}\frac{\partial}{\partial x^{\alpha}}$. We can show that these are all the conformal Killing fields on $\mathbb{R}^{1+n}$. Indeed, writing $F=a+b_{\alpha}x^{\alpha}$ as above, we consider $X'=X-\frac{a}{2}D-\frac{1}{4}b^{\alpha}I_{\alpha}$ (this works because $L_Dm=2m$ and $L_{I_{\alpha}}m=4x_{\alpha}m$). After some calculations, we get that $D_{\alpha}X'_{\beta}+D_{\beta}X'_{\alpha}=0$. So $X'$ is itself a Killing field. Yet we know that the dimension of the vector space of the Killing fields is no more than $(n+1)(n+2)/2$ for $\mathbb{R}^{1+n}$, and there are already $n+1$ translations and $(n+1)(n+1-1)/2$ orthogonal rotations, which sum up to $n+1+(n+1)n/2=(n+1)(n+2)/2$! So this means that $X'$ is a combination of translations and orthogonal rotations, and thus $X$ itself is a combination of these four kinds of conformal Killing fields. Let's count the dimension of the vector space of conformal Killing fields: it is $(n+1)(n+2)/2+1+(n+1)=(n+2)(n+3)/2$.
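The Lie-derivative computations above are easy to machine-check in low dimension. The sketch below is my own addition (assuming the sympy library is available); it verifies in $\mathbb{R}^{1+2}$ that the dilatation field satisfies $L_Dm=2m$ and that the special conformal field $I_1$ satisfies $L_{I_1}m=4x_1m$:

```python
import sympy as sp

# Minkowski R^{1+2}: metric m = diag(-1, 1, 1), coordinates x^0, x^1, x^2.
coords = sp.symbols('x0 x1 x2')
dim = 3
m = sp.diag(-1, 1, 1)
x_up = sp.Matrix(coords)
x_dn = m * x_up                # lowered coordinates x_alpha = m_{alpha beta} x^beta
xx = (x_up.T * x_dn)[0]        # the invariant x^beta x_beta

def deformation(X_dn):
    """L_X m in lowered components: D_a X_b + D_b X_a (flat space, so
    covariant derivatives are plain partial derivatives)."""
    return sp.Matrix(dim, dim, lambda a, b:
                     sp.diff(X_dn[b], coords[a]) + sp.diff(X_dn[a], coords[b]))

# Dilatation D = x^alpha d_alpha has lowered components x_alpha: L_D m = 2 m.
assert (deformation(x_dn) - 2 * m).expand() == sp.zeros(dim, dim)

# Special conformal I_1 = 2 x_1 x^b d_b - (x.x) d_1 has lowered components
# (I_1)_g = 2 x_1 x_g - (x.x) m_{1g}: L_{I_1} m = 4 x_1 m.
I1_dn = sp.Matrix([2 * x_dn[1] * x_dn[g] - xx * m[1, g] for g in range(dim)])
assert (deformation(I1_dn) - 4 * x_dn[1] * m).expand() == sp.zeros(dim, dim)
```

The same helper can be pointed at any candidate field to test whether its deformation tensor is a multiple of the metric.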
https://nathaneastwood.github.io/poorman/reference/summarise.html
Create one or more scalar variables summarising the variables of an existing data.frame. Grouped data.frames will result in one row in the output for each group.

summarise(.data, ..., .groups = NULL)

summarize(.data, ..., .groups = NULL)

## Arguments

.data: A data.frame.

...: Name-value pairs of summary functions. The name will be the name of the variable in the result. The value can be: a vector of length 1, e.g. min(x), n(), or sum(is.na(y)); or a vector of length n, e.g. quantile().

.groups: character(1). Grouping structure of the result. "drop_last": drops the last level of grouping. "drop": all levels of grouping are dropped. "keep": keeps the same grouping structure as .data. When .groups is not specified, it is chosen based on the number of rows of the results: if all the results have 1 row, you get "drop_last"; if the number of rows varies, you get "keep". In addition, a message informs you of that choice, unless the result is ungrouped or the option "poorman.summarise.inform" is set to FALSE.

## Details

summarise() and summarize() are synonyms.

## Examples

# A summary applied to an ungrouped data.frame returns a single row
mtcars %>% summarise(mean = mean(disp), n = n())
#>       mean  n
#> 1 230.7219 32

# Usually, you'll want to group first
mtcars %>% group_by(cyl) %>% summarise(mean = mean(disp), n = n())
#>   cyl     mean  n
#> 1   4 105.1364 11
#> 2   6 183.3143  7
#> 3   8 353.1000 14

# You can summarise to more than one value:
mtcars %>% group_by(cyl) %>% summarise(qs = quantile(disp, c(0.25, 0.75)), prob = c(0.25, 0.75))
#> summarise() has grouped output by 'cyl'. You can override using the .groups argument.
#>   cyl      x prob
#> 1   4  78.85 0.25
#> 2   4 120.65 0.75
#> 3   6 160.00 0.25
#> 4   6 196.30 0.75
#> 5   8 301.75 0.25
#> 6   8 390.00 0.75

# You can use a data frame to create multiple columns, so you can wrap
# this up into a function:
my_quantile <- function(x, probs) {
  data.frame(x = quantile(x, probs), probs = probs)
}
mtcars %>% group_by(cyl) %>% summarise(my_quantile(disp, c(0.25, 0.75)))
#> Error in my_quantile(disp, c(0.25, 0.75)): could not find function "my_quantile"

# Each summary call removes one grouping level (since that group
# is now just a single row)
mtcars %>% group_by(cyl, vs) %>% summarise(cyl_n = n()) %>% group_vars()
#> summarise() has grouped output by 'cyl'. You can override using the .groups argument.
#> [1] "cyl"
https://gigaom.com/2007/04/30/quick-tip-showhide-hidden-files/
# Quick Tip: Show/Hide Hidden Files

A few months ago I started to mess around with a .htaccess file in connection with one of my websites. When I transferred the file from my web server to my desktop via FTP, the file never showed up. I tried again and again, but that dang file would never show up. After a little searching, I realized that .htaccess is one of the files that OS X hides by default so that you don't accidentally delete and/or alter it. However, there are times that you need access to those files. Unfortunately, Apple hasn't made it as simple as toggling a menu item in Finder. Instead, you're going to have to write out a line or two of code. But if you follow the next few steps, you'll be able to use Automator to create a plug-in that you can use to toggle the view of hidden files from within Finder.

### Step 1: Automator Actions

After opening Automator, select Automator from within the Applications Library on the left-hand side. You'll now see a number of different built-in actions that are available to the Automator application. Select Run Shell Script from the list of available actions, and drag it into your workflow. Type (or paste in) the following code into the Run Shell Script text box:

defaults write com.apple.finder AppleShowAllFiles TRUE
killall Finder

### Step 2: Save As Plugin

Now that your Automator workflow is finished (yes, that's it), choose File > Save As Plugin… and choose Finder as the Application. Save your plug-in as ShowHiddenFiles or something else descriptive. Now, from the Finder or desktop, simply right-click (or cmd-click) and the contextual menu will appear. Choose Automator > ShowHiddenFiles and the Finder will restart showing all your hidden files.

### Step 3: Repeat

Seeing all those hidden files can start to be annoying and can lead to some unfortunate accidents if you happen to delete something you shouldn't.
So as soon as you're done with the hidden files, simply edit the above workflow by substituting "FALSE" for "TRUE" and save the new plug-in as HideHiddenFiles. Now showing and hiding system files is just a click away.
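If you would rather flip the setting from one script instead of maintaining two separate plug-ins, the same pair of commands can be driven programmatically. A minimal Python sketch (the function names here are mine, not part of the article, and it assumes the macOS `defaults` and `killall` command-line tools):

```python
import subprocess

def finder_commands(show):
    """Build the two commands the Automator workflow runs.
    (Hypothetical helper; the plugin itself just contains the shell lines.)"""
    flag = "TRUE" if show else "FALSE"
    return [
        ["defaults", "write", "com.apple.finder", "AppleShowAllFiles", flag],
        ["killall", "Finder"],
    ]

def set_hidden_files(show):
    # Only meaningful on macOS; `defaults` does not exist elsewhere.
    for cmd in finder_commands(show):
        subprocess.run(cmd, check=False)
```

Calling `set_hidden_files(True)` or `set_hidden_files(False)` then replaces the Show/Hide pair of plug-ins.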
https://hsm.stackexchange.com/questions/7253/when-was-the-square-of-negative-numbers-specified
# When was the square of negative numbers specified?

We know that the rules for relative numbers were laid down in India (a product of two debts is a fortune), and in Europe they were spread by Bombelli, who, again, only mentions the product of two minuses. When, and by whom, was it first specified that the square of a negative is a positive?

Edit: the question has been misinterpreted and it is not a duplicate: someone is taking for granted that the rule of minus times minus automatically implies the realization that negative squares are impossible (besides the subtle fact that another conclusion was theoretically possible). So, Bombelli mentioned the rules of multiplication, but did he explicitly state that the square of minus one (or any other negative) is plus one? When was the general public of scientists fully aware that the roots of negatives are missing on the number line to the left of zero? Surely not in the Middle Ages before Bombelli, even though (as the answers here and there imply) negatives were known from the 6th or 3rd century or even earlier. Is this clear now?

• But the square of $(-a)$ is $(-a) \times (-a)$. – Mauro ALLEGRANZA Apr 15 '18 at 18:32
• @user157860 This is a fundamental question that must be investigated fully in depth, otherwise it seems like a plain belief that would fall somewhere else, good question – Bassam Karzeddin Apr 16 '18 at 7:18
• Are you asking when someone realized that, if (-1)* X yields a negative number, then (-1)*(negative number) must be positive? – Carl Witthoft Apr 16 '18 at 12:00
• Well, user157860, that's a radically (pun intended) different question from the one you posted. – Carl Witthoft Apr 16 '18 at 15:36
• Possible duplicate of Historically, how did people define multiplication for negative numbers? – Conifold Apr 16 '18 at 20:51

The oldest surviving book on algebra is "Arithmetic" by Diophantus of Alexandria. He defines negative numbers and arithmetic operations on them, including multiplication.
So he knew that the product of negative numbers is positive. Earlier sources on algebra did not survive, but there is little doubt that they existed: it is hard to imagine that such advanced mathematics appeared suddenly out of the blue. Unfortunately, it is not known precisely when Diophantus lived. The accepted date is around 250 AD, but he certainly could not have written after Theon (mid 300s), because Theon mentions his book.

Now, in the comment you ask a different question: when and why did people start to discuss square ROOTS of negative numbers. This happened in Italy in the 16th century, when a formula for solving the cubic equation was discovered. When you apply this formula to a cubic equation which has 3 real roots, some intermediate term which you obtain is a square root of a negative number. So the question was how to interpret this root and make the formula work. For details, see http://www.math.purdue.edu/~eremenko/dvi/cardano.pdf

EDIT. I was asked for the exact citation of Diophantus. The copy that I have is in Russian, so I translate from the Russian to English as literally as I can:

Arithmetic, book I, section IX (p. 40 of my Russian edition): "Deficiency multiplied by deficiency gives an asset; deficiency multiplied by an asset gives a deficiency; we use the following sign for a deficiency" (sorry, I have no font to reproduce his sign for the minus). The words which I translated as "deficiency" and "asset" can also be translated as "lack" and "availability".

Section X: "After this explanation of multiplication the division must be clear; it is recommended to the beginner to exercise in addition, subtraction and multiplication of these kinds..."

• The problem of $\sqrt{-1}$ is a different problem. It occurred when those Italians discovered the cubic formula and found that this formula contains square roots of negative numbers, even when the final result is real.
– Alexandre Eremenko Apr 16 '18 at 18:22
• Yes, I have access to his books and he says this in it explicitly. – Alexandre Eremenko Apr 19 '18 at 11:57
• I added my literal translation from Diophantus. On your second question, I just don't understand it. The rules of multiplication of complex numbers are no more "ad hoc" than the rules of multiplication of real numbers. And complex numbers "exist" in the so-called "real world" in the same sense as real numbers, or rational numbers for that matter. But I am not going to discuss philosophy on this site. – Alexandre Eremenko Apr 19 '18 at 18:15
• @user157860: $a^2 = a \times a$, so a square is a special case of multiplication, is it not? – Alexandre Eremenko Apr 20 '18 at 12:23
• It surely is for natural numbers, but we are concerned about numeri (ab)"surdi" here – user157860 Apr 21 '18 at 5:22

In his 1784 work Algebra, Colin MacLaurin presents the following argument for why a negative number multiplied by a negative number is (or rather, must be) positive (see Chapter III case IV, here; it's page 35 of the PDF).

$-n(a-a)$ must equal $0$ (since $a - a = 0$). Using the distributive property, the first term $-n \times a$ is equal to $-na$. The only way for the distributive property to still hold, and the statement to be true, is if $-n \times -a = na$.

If we let $-n = -a$ in the example above, then $-a \times -a$ will of course be a positive number, $a^2$. From this it is clear that there is no way to square a real number and end up with a negative result, hence Cardano's (and everyone else's) confusion over what to do when confronted with something like $\sqrt{-n}$: such an operation was undefined, because there wasn't any number at the time that could be squared to get a negative result.
In Book I of L'Algebra (1572), Bombelli specifies that "minus times minus makes plus", and even offers an example that is fairly close to MacLaurin's:

Multiply $(6-4) \times (5-2)$: $-2 \times -4 = 8$, and $-2 \times 6 = -12$, and $5 \times -4 = -20$, and $5 \times 6 = 30$, so $(6-4) \times (5-2) = 30 - 20 - 12 + 8$.

Bombelli does not take the extra step of explaining that a negative times a negative must be positive for the calculation to work out properly; however, from this example and his multiplication rules he would have realized that (a) the square of a negative number is positive, and (b) there was thus no way to square a number and get a negative result, rendering the square roots of negative numbers perplexing at best.

(A full version of L'Algebra in Italian can be found here. The above excerpts are from Book I {Libro Primo}, pages 70 and 71 {127 and 128 of the PDF}. Without knowing you at all, I bet your Italian is better than mine...)

I will say that I am not 100% certain that MacLaurin was the first one to actually demonstrate this "minus times minus is plus" rule (versus just stating it). Bombelli gave an example, but MacLaurin's Treatise is the earliest publication I have found that offers something like a proof. I whole-heartedly invite the pros here to fact-check me. I do hope that I have addressed the general spirit of your question, though.

• Cardano, Bombelli, and others were certainly aware of the problem in the geometrical sense. The side of a square was always positive, so the square's area was always positive - hence it made no sense to start with a negative area (and then determine that a square had a negative side by taking the square root). Unfortunately I don't have an English translation of Bombelli's L'Algebra, but I know he discussed negative numbers to some extent. I'll scan the various other history books I have and see if any of them provide any enlightening detail. I'm curious too.
– Brant Apr 21 '18 at 11:50
• I meant in a geometrical sense and in Cardano's time. What is the meaning of a silo capable of storing -125 bushels of grain? No such silo exists in the physical world. Things are admittedly different in the algebraic world. Cubes in real life - physical, structural cubes - don't have negative volumes or negative side lengths. Yet the rules for multiplying negative numbers dictate that -5 x -5 x -5 = -125. So, algebraically, the cube root of a negative number has a real solution, but the square root of a negative number does not. – Brant Apr 22 '18 at 18:58
• Historical roots of the justification for the rule for multiplication of negative numbers has some good info that is related to this discussion. – Brant Apr 22 '18 at 21:34
https://byjus.com/questions/find-the-antiderivative-of-tan-2-x-dx/
# Find the antiderivative of tan²(x) dx

We need to find the antiderivative of $\tan^2 x$.

### Solution

We know that tan can be expressed in terms of sin and cos as $\tan x = \sin x / \cos x$. Hence

$\tan^2 x = \frac{\sin^2 x}{\cos^2 x}$

$\int \tan^2 x \, dx = \int \frac{\sin^2 x}{\cos^2 x} \, dx \qquad \text{(i)}$

We know from the trigonometric identity that $\sin^2 x + \cos^2 x = 1$, or $\sin^2 x = 1 - \cos^2 x$. Substituting $\sin^2 x = 1 - \cos^2 x$ in equation (i) we get

= $\int \frac{1 - \cos^2 x}{\cos^2 x} \, dx$

= $\int \left( \frac{1}{\cos^2 x} - 1 \right) dx$

= $\int \frac{1}{\cos^2 x} \, dx - \int 1 \, dx$

= $\tan x - x + c$

Antiderivative of $\tan^2 x$ = $\tan x - x + c$
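The result is easy to sanity-check numerically: differentiating F(x) = tan x − x should give back tan²x. A quick Python sketch (plain central differences; nothing is assumed beyond the final answer above):

```python
from math import tan

def F(x):
    # Claimed antiderivative of tan(x)**2, with the constant c taken as 0
    return tan(x) - x

def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# F'(x) should equal tan(x)**2 at any point where tan is defined
for x in (0.3, 0.7, 1.0):
    assert abs(numeric_derivative(F, x) - tan(x) ** 2) < 1e-5
```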
https://www.nag.com/numeric/mb/manual64_25_1/html/g08/g08aaf.html
# NAG Toolbox: nag_nonpar_test_sign (g08aa)

## Purpose

nag_nonpar_test_sign (g08aa) performs the Sign test on two related samples of size $n$.

## Syntax

[isgn, n1, p, ifail] = g08aa(x, y, 'n', n)

[isgn, n1, p, ifail] = nag_nonpar_test_sign(x, y, 'n', n)

## Description

The Sign test investigates the median difference between pairs of scores from two matched samples of size $n$, denoted by $\{x_i, y_i\}$, for $i = 1, 2, \dots, n$. The hypothesis under test, $H_0$, often called the null hypothesis, is that the medians are the same, and this is to be tested against a one- or two-sided alternative $H_1$ (see below).

nag_nonpar_test_sign (g08aa) computes:

(a) the test statistic $S$, which is the number of pairs for which $x_i < y_i$;

(b) the number $n_1$ of non-tied pairs ($x_i \ne y_i$);

(c) the lower tail probability $p$ corresponding to $S$ (adjusted to allow the complement $(1-p)$ to be used in an upper one-tailed or a two-tailed test). $p$ is the probability of observing a value $\le S$ if $S < \frac{1}{2} n_1$, or of observing a value $< S$ if $S > \frac{1}{2} n_1$, given that $H_0$ is true. If $S = \frac{1}{2} n_1$, $p$ is set to $0.5$.

Suppose that a significance test of a chosen size $\alpha$ is to be performed (i.e., $\alpha$ is the probability of rejecting $H_0$ when $H_0$ is true; typically $\alpha$ is a small quantity such as $0.05$ or $0.01$). The returned value of $p$ can be used to perform a significance test on the median difference, against various alternative hypotheses $H_1$, as follows:

(i) $H_1$: median of $x \ne$ median of $y$.
$H_0$ is rejected if $2 \times \min(p, 1-p) < \alpha$.

(ii) $H_1$: median of $x >$ median of $y$. $H_0$ is rejected if $p < \alpha$.

(iii) $H_1$: median of $x <$ median of $y$. $H_0$ is rejected if $1 - p < \alpha$.

## References

Siegel S (1956) Non-parametric Statistics for the Behavioral Sciences McGraw–Hill

## Parameters

### Compulsory Input Parameters

1: x(n) – double array
2: y(n) – double array

x(i) and y(i) must be set to the $i$th pair of data values, $\{x_i, y_i\}$, for $i = 1, 2, \dots, n$.

### Optional Input Parameters

1: n – int64/int32/nag_int scalar

Default: the dimension of the arrays x, y. (An error is raised if these dimensions are not equal.)
$n$, the size of each sample.
Constraint: $n \ge 1$.

### Output Parameters

1: isgn – int64/int32/nag_int scalar
The Sign test statistic, $S$.

2: n1 – int64/int32/nag_int scalar
The number of non-tied pairs, $n_1$.

3: p – double scalar
The lower tail probability, $p$, corresponding to $S$.

4: ifail – int64/int32/nag_int scalar
ifail = 0 unless the function detects an error (see Error Indicators and Warnings).

## Error Indicators and Warnings

Errors or warnings detected by the function:

ifail = 1: On entry, n < 1.
ifail = 2: n1 = 0, i.e., the samples are identical.
ifail = −99: An unexpected error has been triggered by this routine. Please contact NAG.
ifail = −399: Your licence key may have expired or may not have been installed correctly.
ifail = −999: Dynamic memory allocation failed.
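The three decision rules above can be written down directly. A short sketch (the function name and the scipy-style `alternative` labels are my choices, not NAG's):

```python
def reject_h0(p, alpha, alternative="two-sided"):
    """Apply the rejection rules (i)-(iii) to the returned tail probability p."""
    if alternative == "two-sided":   # (i) median of x != median of y
        return 2 * min(p, 1 - p) < alpha
    if alternative == "greater":     # (ii) median of x > median of y
        return p < alpha
    if alternative == "less":        # (iii) median of x < median of y
        return 1 - p < alpha
    raise ValueError(alternative)
```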
## Accuracy

The tail probability, $p$, is computed using the relationship between the binomial and beta distributions. For $n_1 < 120$, $p$ should be accurate to at least $4$ significant figures, assuming that the machine has a precision of $7$ or more digits. For $n_1 \ge 120$, $p$ should be computed with an absolute error of less than $0.005$. For further details see nag_stat_prob_beta (g01ee).

The time taken by nag_nonpar_test_sign (g08aa) is small, and increases with $n$.

## Example

This example is taken from page 69 of Siegel (1956). The data relates to ratings of 'insight into paternal discipline' for $17$ sets of parents, recorded on a scale from $1$ to $5$.

```
function g08aa_example
fprintf('g08aa example results\n\n');
x = [4; 4; 5; 5; 3; 2; 5; 3; 1; 5; 5; 5; 4; 5; 5; 5; 5];
y = [2; 3; 3; 3; 3; 3; 3; 3; 2; 3; 2; 2; 5; 2; 5; 3; 1];
fprintf('Sign test\n\n')
fprintf('Data values\n\n');
fprintf('%3.0f',x);
fprintf('\n')
fprintf('%3.0f',y);
fprintf('\n\n')
[isgn, n1, p, ifail] = g08aa( ...
                              x, y);
fprintf('Test statistic   %5d\n', isgn);
fprintf('Observations     %5d\n', n1);
fprintf('Lower tail prob. %5.3f\n', p);
```

```
g08aa example results

Sign test

Data values

  4  4  5  5  3  2  5  3  1  5  5  5  4  5  5  5  5
  2  3  3  3  3  3  3  3  2  3  2  2  5  2  5  3  1

Test statistic       3
Observations        14
Lower tail prob. 0.029
```

© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015
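For readers without the NAG Toolbox, the quantities are straightforward to reproduce: $S$ and $n_1$ are simple counts, and $p$ comes from the Binomial($n_1$, ½) distribution. A hedged Python sketch (not the NAG implementation, which uses the beta-distribution relationship for accuracy):

```python
from math import comb

def sign_test(x, y):
    """Sign test sketch: S = #{x_i < y_i}, n1 = #{x_i != y_i}.
    p = P(X <= S) if S < n1/2, P(X < S) if S > n1/2, X ~ Binomial(n1, 1/2)."""
    s = sum(xi < yi for xi, yi in zip(x, y))
    n1 = sum(xi != yi for xi, yi in zip(x, y))
    if 2 * s == n1:
        return s, n1, 0.5
    top = s if 2 * s < n1 else s - 1
    p = sum(comb(n1, i) for i in range(top + 1)) / 2.0 ** n1
    return s, n1, p

# Siegel (1956) data from the example above
x = [4, 4, 5, 5, 3, 2, 5, 3, 1, 5, 5, 5, 4, 5, 5, 5, 5]
y = [2, 3, 3, 3, 3, 3, 3, 3, 2, 3, 2, 2, 5, 2, 5, 3, 1]
s, n1, p = sign_test(x, y)  # matches the g08aa output: 3, 14, 0.029
```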
https://cs.stackexchange.com/questions/76122/how-many-bits-to-represent-a-quantity-omega-bounded-in-a-particular-way
# How many bits to represent a quantity $\omega$ bounded in a particular way?

I'm working out some details to implement a division algorithm; I'm following the explanation given in this book (chapter 5), for those who are interested. Anyway, I need to work out how many bits are necessary to represent a value $\omega$ bounded by

$$| \omega | \leq \rho r^{k+1} y$$

where

$$\frac{1}{2} < \rho \leq 1, \qquad r = 2^l \text{ with } l \geq 1 \text{ an integer}, \qquad 0 \leq y \leq 2^{k} - 1 \text{ with } k \text{ a positive integer.}$$

My approach is to find the number of bits to represent $|\omega|$ and then add one bit to represent the sign of $\omega$, in two's complement. Therefore

$$\left\lceil \log_2 |\omega| \right\rceil \leq \left\lceil \log_2(\rho r^{k+1} y) \right\rceil = \left\lceil \log_2 \rho + (k+1)\log_2 r + \log_2 y \right\rceil \leq \left\lceil (k+1)l + \log_2(2^k - 1) \right\rceil \leq \left\lceil (k+1)l + k \right\rceil = (k+1)(l+1) - 1.$$

Therefore in two's complement I would need a total of $(k+1)(l+1)$ bits to represent my value $\omega$. Is this correct?

Yes, it's correct, but I would not introduce the rounding until the end. Let $w = |\omega|$; then in the worst case:

$$w \leq \rho r^{k+1} y$$
$$w \leq 1 \cdot r^{k+1} y$$
$$w \leq (2^l)^{k+1} y$$
$$w \leq (2^l)^{k+1}(2^k - 1)$$
$$\log_2(w) \leq \log_2\left((2^l)^{k+1}\right) + \log_2(2^k - 1)$$
$$\log_2(w) \leq \log_2\left(2^{l(k+1)}\right) + \log_2(2^k)$$
$$\log_2(w) \leq l(k+1) + k$$
$$\log_2(w) \leq (l+1)(k+1) - 1$$
$$\lceil \log_2(w) \rceil \leq (l+1)(k+1) - 1$$
$$\lceil \log_2(w) \rceil < (l+1)(k+1)$$
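The bound is cheap to check exhaustively for small $l$ and $k$. A Python sketch (the helper name is mine; `int.bit_length` returns exactly $\lfloor \log_2 w \rfloor + 1$ for positive $w$):

```python
def bits_needed(l, k):
    """Bits for the worst-case magnitude plus one sign bit (two's complement),
    using the bound |omega| <= 2^(l(k+1)) * (2^k - 1), i.e. rho = 1."""
    w_max = 2 ** (l * (k + 1)) * (2 ** k - 1)
    return w_max.bit_length() + 1  # bit_length(w) = floor(log2 w) + 1

# the claimed budget (k+1)(l+1) suffices in every small case
for l in range(1, 8):
    for k in range(1, 8):
        assert bits_needed(l, k) <= (k + 1) * (l + 1)
```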
http://utmost-sage-cell.org/sage:polar-coordinates
Polar Coordinates

## Description

Polar coordinates are a coordinate system similar to our standard Cartesian coordinates; however, instead we have the radius from the origin $r$ and an angle $\theta$ in radians. We define $\theta$ to be zero when the points are on the x-axis and to the right of the origin, and positive angles are in the counter-clockwise direction. To translate our standard Cartesian coordinates into polar coordinates we can use the functions

(1)
\begin{align}
r = \sqrt{ x^2 + y^2 } \\
\theta = \tan^{-1} \left(\frac{ y }{ x }\right).
\end{align}

The following Sage interact converts Cartesian coordinates into polar coordinates. The default values are $(x,y) = (1,1)$.

#### Code

@interact
def _(x=input_box(default=1), y=input_box(default=1)):
    r = sqrt(x^2 + y^2)
    t = arctan(y/x)
    pretty_print(html(r"$x = %s$" % x))
    pretty_print(html(r"$y = %s$" % y))
    pretty_print(html(r"$r = %s$" % r))
    pretty_print(html(r"$t = %s$" % t))

## Sage Cell

#### Option

We can also convert polar coordinates back into Cartesian coordinates using the formulas

(2)
\begin{align}
x = r \cos \theta \\
y = r \sin \theta.
\end{align}

The Sage interact below converts polar coordinates into Cartesian coordinates. The default values are $(r,\theta) = (2, \frac{\pi}{2})$.

#### Code

@interact
def _(r=input_box(default=2), t=input_box(default=pi/2)):
    x = r*cos(t)
    y = r*sin(t)
    pretty_print(html(r"$r = %s$" % r))
    pretty_print(html(r"$t = %s$" % t))
    pretty_print(html(r"$x = %s$" % x))
    pretty_print(html(r"$y = %s$" % y))

Date: 30 Oct 2018 17:31
Submitted by: James A Phillips
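Outside Sage, the same conversions are a couple of lines of plain Python. Note that `math.atan2` avoids the quadrant ambiguity of the arctan(y/x) formula above, which implicitly assumes x > 0:

```python
from math import atan2, cos, hypot, pi, sin

def to_polar(x, y):
    """Cartesian -> polar. atan2 picks the correct quadrant, unlike
    the bare arctan(y/x) formula (which assumes x > 0)."""
    return hypot(x, y), atan2(y, x)

def to_cartesian(r, t):
    """Polar -> Cartesian, using x = r cos(t), y = r sin(t)."""
    return r * cos(t), r * sin(t)

r, t = to_polar(1, 1)            # r = sqrt(2), t = pi/4
x, y = to_cartesian(2, pi / 2)   # x ~ 0, y = 2
```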
http://bsdupdates.com/error-propagation/propagation-error-approach.php
# Propagation Error Approach

## Contents

By contrast, cross terms may cancel each other out, due to the possibility that each term may be positive or negative. For a linear combination $f = \sum_i^n a_i x_i$ (with the simple case $f = a x$), the propagated variance is $\sigma_f^2 = \sum_i a_i^2 \sigma_i^2$ when the variables are uncorrelated. Uncertainties can also be defined by the relative error $(\Delta x)/x$, which is usually written as a percentage.

Generally, reported values of test items from calibration designs have non-zero covariances that must be taken into account if $b$ is a summation such as the mass of two weights.

## Propagation Of Error Division

Uncertainty never decreases with calculations, only with better measurements.

Caveats and Warnings: error propagation assumes that the relative uncertainty in each quantity is small. Error propagation is not advised if the uncertainty can be measured directly (as variation among repeated measurements). In the next section, derivations for common calculations are given, with an example of how the derivation was obtained. Define $f(x) = \arctan(x)$, where $\sigma_x$ is the absolute uncertainty on our measurement of $x$.

Examples of propagation of error analyses shown in this chapter: a case study of propagation of error for resistivity measurements, and a comparison of check standard values. For example, repeated multiplication, assuming no correlation, gives $f = ABC$ with

$\left(\frac{\sigma_f}{f}\right)^2 \approx \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 + \left(\frac{\sigma_C}{C}\right)^2.$

It is important to note that this formula is based on the linear characteristics of the gradient of $f$, and therefore it is a good estimation for the standard deviation of $f$ only when the uncertainties are small.

Advantages of the top-down approach: proper treatment of covariances between measurements of length and width, and proper treatment of unsuspected sources of error.

This is what I would set up for a real working example: resistivity, which is a function of resistance, length and cross-sectional area.

Purpose of Error Propagation: it quantifies the precision of results (Example: $V = 1131 \pm 39$ cm³; thus, the expected uncertainty in $V$ is 39 cm³) and identifies the principal source of error. Accounting for significant figures, the final answer would be: $\varepsilon = 0.013 \pm 0.001$ L moles⁻¹ cm⁻¹.

Example 2: if you are given an equation that relates two different variables, and the variables are the values of experimental measurements, they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate to the combination of variables in the function. The idea is to create a function that returns a float, when everything is given as a float.

Comparison of Error Propagation to Significant Figures: use of significant figures in calculations is a rough estimate of error propagation. In matrix notation, $\Sigma^{\mathrm{f}} = \mathrm{J}\,\Sigma^{\mathrm{x}}\,\mathrm{J}^{\top}$.

Derivation of Exact Formula: suppose a certain experiment requires multiple instruments to carry out.

## Table 1: Arithmetic Calculations of Error Propagation

Type | Example | Standard Deviation ($\sigma_x$)
Addition or Subtraction | $x = a + b - c$ | $\sigma_x = \sqrt{\sigma_a^2 + \sigma_b^2 + \sigma_c^2}$ (10)
Multiplication or Division | $x = a \times b / c$ | $\frac{\sigma_x}{x} = \sqrt{\left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2 + \left(\frac{\sigma_c}{c}\right)^2}$

A simple random-number generator for a rectangular distribution function is shown to provide an economical and fairly efficient means of simulating the effects of using a normal distribution function.

Let's say we measure the radius of a very small object. The idea is to wrap the "external" fsolve function using the uncertainties.wrap function, which handles the units. Unfortunately, it does not work, and it is not clear why; this may be a limitation of the uncertainties package, as not all functions in arbitrary modules can be covered.

In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. The value of a quantity and its error are then expressed as an interval $x \pm u$. In effect, the sum of the cross terms should approach zero, especially as $N$ increases.

References:
- Harry Ku (1966). Journal of Research of the National Bureau of Standards.
- Lecomte, Christophe (May 2013). "Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems". Journal of Sound and Vibration 332 (11): 2750–2776. doi:10.1016/j.jsv.2012.12.009.
- Joint Committee for Guides in Metrology (JCGM) (2011).
- Peralta, M (2012). Propagation Of Errors: How To Mathematically Predict Measurement Errors. CreateSpace.
- Multivariate error analysis: a handbook of error propagation and calculation in many-parameter systems.
- Propagation of Error, http://webche.ent.ohiou.edu/che408/S...lculations.ppt (accessed Nov 20, 2009).
- http://www.chem.hope.edu/~polik/Chem345-2000/errorpropagation.htm
- Berkeley Seismology Laboratory.
- © Copyright 2016 Chemistry LibreTexts.
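The multiplicative rule quoted above ($f = ABC$, relative errors adding in quadrature) is easy to apply by hand. A minimal Python sketch (the helper name and the measurement numbers are made up for illustration):

```python
from math import sqrt

def quadrature_relative_error(parts):
    """Relative error of a product/quotient: the square root of the sum of
    squared relative errors (assumes independent errors, so cross terms vanish)."""
    return sqrt(sum((dx / x) ** 2 for x, dx in parts))

# Hypothetical box volume V = l*w*h, with l = 10.0+-0.1, w = 5.0+-0.1, h = 4.0+-0.2
parts = [(10.0, 0.1), (5.0, 0.1), (4.0, 0.2)]
V = 10.0 * 5.0 * 4.0
dV = V * quadrature_relative_error(parts)  # absolute uncertainty in V
```

The same quadrature idea is what packages like `uncertainties` automate, with correlations tracked for you.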
https://caml.inria.fr/pub/old_caml_site/FAQ/format-eng.html
# How to pretty-print (how to use ``format'')?

Contact the author Pierre.Weis@inria.fr Created in November 1996.

The pretty-printing facility provided by the format module is used to get a fancy display for printing routines. This module provides a ``pretty-printing engine'' that is intended to break lines in a nice way (let's say ``automatically when it is necessary'').

## Principles

Breaking of lines is based on 2 concepts:

• boxes: a box is a logical pretty-printing unit, which defines a behavior of the pretty-printing engine.
• breaks: a break is a hint given to the pretty-printing engine, to tell it where to break lines. Otherwise the pretty-printing engine never breaks lines (except ``in case of emergency'' to avoid very bad output).

(In addition, when the pretty-printing engine starts a new line, there are rules, depending on the currently opened boxes, that fix the indentation of the new line (the leading spaces at the beginning of the line).)

## Boxes

There are 4 types of boxes. (The most often used is the ``hov'' box type, so skip the rest at first reading.)

• horizontal box (h box, as obtained by the open_hbox procedure): in this box, breaks do not lead to line breaks.
• vertical box (v box, as obtained by the open_vbox procedure): every break hint leads to a new line.
• vertical/horizontal box (hv box, as obtained by the open_hvbox procedure): if it is possible, the entire box is written on a single line; otherwise every break hint leads to a new line.
• vertical or horizontal box (hov box, as obtained by the open_box or open_hovbox procedures): break hints are used to cut the line when there is no more room on the line. There are two kinds of ``hov'' boxes; you can find details below. In a first approximation, let me consider these two kinds of ``hov'' boxes as equivalent, as obtained by calling the open_box procedure.

Let me give an example. Suppose we can write 10 chars before the right margin (that indicates no more room).
We represent any char as -, [ and ] indicate the opening and closing of a box, and b stands for a break hint found in the output by the pretty-printing engine. The output "--b--b--" is displayed like this (the b symbol stands for the value of the break that is explained below):

• within a ``h'' box:

--b--b--

• within a ``v'' box:

--b
--b
--

• within a ``hv'' box: if there is enough room to print the box on the line:

--b--b--

but "---b---b---" that cannot fit on the line is written

---b
---b
---

• within a ``hov'' box: if there is enough room to print the box on the line:

--b--b--

but if "---b---b---" cannot fit on the line, it is written as

---b---b
---

The first break does not lead to a new line, since there is enough room on the line. The second one leads to a new line since there is no more room to print the material following it. If the room left on the line were even shorter, the first break hint might also lead to a new line, and "---b---b---" would be written as:

---b
---b
---

## Printing spaces

Break hints are also used to output spaces (if the line is not split when the break is encountered; otherwise the new line properly indicates the separation between printing items). You output a break hint using print_break sp indent, and this sp integer is used to print ``sp'' spaces. Thus print_break sp ... may be thought of as: print sp spaces or output a new line. For instance, if b is break 1 0 in the output "--b--b--", we get

• within a ``h'' box:

-- -- --

• within a ``v'' box:

--
--
--

• within a ``hv'' box:

-- -- --

or (according to the remaining room on the line)

--
--
--

• and similarly for ``hov'' boxes.

Generally speaking, a printing routine using "format" should not directly output white spaces: the routine should use break hints instead. (For instance print_space (), a convenient abbreviation for print_break 1 0, outputs a single space or breaks the line.)
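The "---b---b---" behaviour above can be tried out directly at the toplevel. This is only a sketch in the style of the examples below; it assumes set_margin is available in the format module (as in today's OCaml Format) to fix the right margin at 10 characters, so that the second break hint of the ``hov'' box turns into a new line:

```ocaml
(* Print "---b---b---" in a ``hov'' box, where b is print_break 1 0
   and the right margin is set to 10 characters. *)
#open "format";;
set_margin 10;;
open_hovbox 0;
print_string "---"; print_break 1 0;
print_string "---"; print_break 1 0;
print_string "---";
close_box ();
print_newline ();;
```

The first break fits on the line ("--- ---" is 7 characters), while taking the second would exceed the 10-character margin, so the expected display is "--- ---" followed by "---" on a new line.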
## Indentation of new lines

The user has 2 ways to fix the indentation of new lines:

• when defining the box where it occurs: when opening a box, you may fix the indentation added to each new line opened within the box. For instance: open_hovbox 1 opens a ``hov'' box with new lines indented 1 more than the initial indentation of the box. With output "---[--b--b--b--", we get:

---[--b--b
    --b--

with open_hovbox 2, we get

---[--b--b
     --b--

Note: the [ sign in the display is actually not visible on the screen, it is just there to materialize the aperture of the pretty-printing box. The last ``screen'' thus stands for:

-----b--b
     --b--

• when defining the break that makes the new line. As said above, you output a break hint using print_break sp indent. The indent integer is used to fix the indentation of the new line. Namely, it is added to the default indentation offset of the box where the break occurs. For instance, if [ stands for the opening of a ``hov'' box with 1 as extra indentation (as obtained by open_hovbox 1), and b is print_break 1 2, then from output "---[--b--b--b--", we get:

---[--
      --
      --
      --

• Other operations: print_cut, print_as, force_newline, print_flush, printf.
• For more details, see the module format.

## Refinement on ``hov'' boxes

### Packing and structural ``hov'' boxes

The ``hov'' box type is refined into two categories.

• the vertical or horizontal packing box (as obtained by the open_hovbox procedure): break hints are used to cut the line when there is no more room on the line; no new line occurs if there is enough room on the line.
• the vertical or horizontal structural box (as obtained by the open_box procedure): similar to the ``hov'' packing box, the break hints are used to cut the line when there is no more room on the line; but break hints that can show the box structure lead to new lines even if there is enough room on the current line.
### Differences between a packing and a structural ``hov'' box

The difference between a packing and a structural ``hov'' box is shown by a routine that closes boxes and parens at the end of printing: with packing boxes, the closure of boxes and parens does not lead to new lines if there is enough room on the line, whereas with structural boxes each break hint will lead to a new line. For instance, consider printing "[(---[(----[(---b)]b)]b)]", where "b" is a break hint without extra indentation (print_cut ()). If "[" means the opening of a packing ``hov'' box (open_hovbox), "[(---[(----[(---b)]b)]b)]" is printed as follows:

(---
 (----
  (---)))

If we replace the packing boxes by structural boxes (open_box), each break hint that precedes a closing paren can show the box structure, if it leads to a new line; hence "[(---[(----[(---b)]b)]b)]" is printed like this:

(---
 (----
  (---
  )
 )
)

## Practice

When writing a pretty-printing routine, follow these simple rules:

1. Boxes must be opened and closed consistently (open_* and close_box must be nested like parens).
2. Never hesitate to open a box.
3. Output many break hints, otherwise the pretty-printer is in a bad situation where it tries to do its best, which is always worse than your ``bad''.
4. Don't try to force spacing using explicit spaces in the character string. For each space you want in the output emit a break hint (print_space ()), unless you explicitly don't want the line to be broken here. For instance, imagine you want to pretty-print a definition in a Caml-like language, for instance let rec ident = expression. You will probably treat the first three spaces as ``unbreakable spaces'' and write them directly in the string constants for keywords, "let rec " before the identifier and " =" after it. However, the space preceding the expression will certainly be a break hint, since breaking the line after the = sign is acceptable. Don't try to force line breaking, let the pretty-printer do it for you: that's its only job.
5. Never put newline characters directly in the strings to be printed: the pretty-printing engine will consider this newline character as any other character written on the current line, and this will completely mess up the output. Instead of newline characters, use line breaking hints: if those breaking hints must always result in new lines, it just means that the surrounding box must be a vertical box!
6. End your main program with a print_newline () call, which flushes the pretty-printer tables (hence the output). (Note that the toplevel loop of the interactive system does it as well, just before a new input.)

## Using the printf function

The format module provides a general printing facility ``à la'' printf. In addition to the usual conversion facility provided by printf, you can write pretty-printing indications directly into the format string (opening and closing boxes, indicating breaking hints, etc.). Pretty-printing annotations are introduced by the @ symbol, directly in the format string. Almost any function of the format module can be called from within a printf format. For instance:

• ``@['' opens a box (open_box 0). You may specify the box type as an extra argument. For instance @[<hov n> is equivalent to open_hovbox n.
• ``@]'' closes the last open box (close_box ()).
• ``@ '' outputs a breakable space (print_space ()).
• ``@,'' outputs a simple break hint (print_cut ()).
• ``@.'' ends the pretty-printing, closing all the boxes still open (print_newline ()).
• ``@;<n m>'' emits a ``full'' break hint (print_break n m). If the ``<n m>'' part is omitted, the full break defaults to a simple break hint (@,).
• ``@?'' outputs pending material in the pretty-printer queue (print_flush ()).

For instance:

printf "@[<1>%s@ =@ %d@ %s@]@." "Prix TTC" 100 "Euros";;
Prix TTC = 100 Euros
- : unit = ()

A more realistic example is given below.
## Working example

Let me give a full example: the shortest non-trivial example you could imagine, that is the $\lambda$-calculus :) Thus the problem is to pretty-print the values of a concrete data type that implements a model of a language of expressions that defines functions and their applications to arguments.

First, I give the abstract syntax of lambda-terms, then a lexical analyzer and a parser for this language:

type lambda =
  | Lambda of string * lambda
  | Var of string
  | Apply of lambda * lambda;;

(* The lexer using the genlex module from standard library *)
#open "genlex";;
let lexer = make_lexer ["."; "\\"; "("; ")"];;

(* The syntax analyzer, using streams *)
let rec exp0 = function
  | [< 'Ident s >] -> Var s
  | [< 'Kwd "("; lambda lam; 'Kwd ")" >] -> lam
and app = function
  | [< exp0 e; (other_applications e) lam >] -> lam
and other_applications f = function
  | [< exp0 arg; stream >] -> other_applications (Apply (f, arg)) stream
  | [<>] -> f
and lambda = function
  | [< 'Kwd "\\"; 'Ident s; 'Kwd "."; lambda lam >] -> Lambda (s, lam)
  | [< app e >] -> e;;

Let's try this parser with the interactive toplevel loop:

#let parse_lambda s = lambda (lexer (stream_of_string s));;
parse_lambda : string -> lambda = <fun>
#parse_lambda "(\x.x)";;
- : lambda = Lambda ("x", Var "x")

Now, I use the format library to print the lambda-terms: I follow the recursive shape of the preceding parser to write the pretty-printer, inserting here and there the desired break hints and opening (and closing) boxes:

#open "format";;
let ident = print_string;;
let kwd = print_string;;

let rec print_exp0 = function
  | Var s -> ident s
  | lam -> open_hovbox 1; kwd "("; print_lambda lam; kwd ")"; close_box ()
and print_app = function
  | e -> open_hovbox 2; print_other_applications e; close_box ()
and print_other_applications f =
  match f with
  | Apply (f, arg) -> print_app f; print_space (); print_exp0 arg
  | f -> print_exp0 f
and print_lambda = function
  | Lambda (s, lam) ->
      open_hovbox 1; kwd "\\";
      ident s; kwd "."; print_space (); print_lambda lam; close_box ()
  | e -> print_app e;;

Now we get:

print_lambda (parse_lambda "(\x.x)");;
\x. x- : unit = ()

(Note that parens are handled properly by the pretty-printer, which prints the minimum number of parens compatible with a proper input back to the parser.)

print_lambda (parse_lambda "(x y) z");;
x y z- : unit = ()
print_lambda (parse_lambda "x y z");;
x y z- : unit = ()

If you use this pretty-printer for debugging purposes with the toplevel, declare it with install_printer, so that the Caml toplevel loop will use it to print values of type lambda:

install_printer "print_lambda";;
- : unit = ()
parse_lambda "(\x. (\y. x y))";;
- : lambda = \x. \y. x y
parse_lambda "((\x. (\y. x y)) (\z.z))";;
- : lambda = (\x. \y. x y) (\z. z)

This works very fine in conjunction with the trace facility of the interactive system (in fact, as soon as the values manipulated by a program are a bit complex, I consider the definition of a pretty-printer using format as mandatory to get a readable trace output):

trace "lambda";;
La fonction lambda est dorénavant tracée.
- : unit = ()
parse_lambda "((\ident. (\other_ident. ident other_ident)) \
 (\Bar.Bar Bar)) (\foobar. (foobar foobar) foobar)";;
lambda <-- <abstr>
lambda <-- <abstr>
lambda <-- <abstr>
lambda <-- <abstr>
lambda <-- <abstr>
lambda <-- <abstr>
lambda --> ident other_ident
lambda --> \other_ident. ident other_ident
lambda --> \other_ident. ident other_ident
lambda --> \ident. \other_ident. ident other_ident
lambda <-- <abstr>
lambda <-- <abstr>
lambda --> Bar Bar
lambda --> \Bar. Bar Bar
lambda --> (\ident. \other_ident. ident other_ident) (\Bar. Bar Bar)
lambda <-- <abstr>
lambda <-- <abstr>
lambda <-- <abstr>
lambda --> foobar foobar
lambda --> foobar foobar foobar
lambda --> \foobar. foobar foobar foobar
lambda --> (\ident. \other_ident. ident other_ident) (\Bar. Bar Bar) (\foobar. foobar foobar foobar)
- : lambda = (\ident. \other_ident. ident other_ident) (\Bar.
Bar Bar) (\foobar. foobar foobar foobar)

## Using the fprintf function

We use the fprintf function, and the pretty-printing functions get an extra argument, namely a pretty-printing formatter (the ppf argument) where printing will occur. This way the printing routines are a bit more general, since they may print on any formatter defined in the program, and furthermore they may be used in conjunction with the special %a format, which prints a printf argument with a user-supplied function (these functions must take a formatter as first argument). For instance fprintf ppf "(%a)" pr_lambda lam prints the lam argument, using the pr_lambda function (and we must have pr_lambda : formatter -> lambda -> unit). Using printf formats, the lambda-terms printing routines can be written as follows:

#open "format";;
let ident ppf s = fprintf ppf "%s" s;;
let kwd ppf s = fprintf ppf "%s" s;;

let rec pr_exp0 ppf = function
  | Var s -> ident ppf s
  | lam -> fprintf ppf "@[<1>(%a)@]" pr_lambda lam
and pr_app ppf = function
  | e -> fprintf ppf "@[<2>%a@]" pr_other_applications e
and pr_other_applications ppf f =
  match f with
  | Apply (f, arg) -> fprintf ppf "%a@ %a" pr_app f pr_exp0 arg
  | f -> pr_exp0 ppf f
and pr_lambda ppf = function
  | Lambda (s, lam) ->
      fprintf ppf "@[<1>%a%a%a@ %a@]" kwd "\\" ident s kwd "." pr_lambda lam
  | e -> pr_app ppf e;;

let print_lambda = pr_lambda std_formatter;;

We get:

print_lambda (parse_lambda "(\x.x)");;
\x. x- : unit = ()
https://wikimili.com/en/Self-focusing
# Self-focusing

Self-focusing is a non-linear optical process induced by the change in refractive index of materials exposed to intense electromagnetic radiation. [1] [2] A medium whose refractive index increases with the electric field intensity acts as a focusing lens for an electromagnetic wave characterised by an initial transverse intensity gradient, as in a laser beam. [3] The peak intensity of the self-focused region keeps increasing as the wave travels through the medium, until defocusing effects or medium damage interrupt this process. Self-focusing of light was discovered by Gurgen Askaryan.

Self-focusing is often observed when radiation generated by femtosecond lasers propagates through many solids, liquids and gases. Depending on the type of material and on the intensity of the radiation, several mechanisms produce variations in the refractive index which result in self-focusing: the main cases are Kerr-induced self-focusing and plasma self-focusing.
## Kerr-induced self-focusing

Kerr-induced self-focusing was first predicted in the 1960s [4] [5] [6] and experimentally verified by studying the interaction of ruby lasers with glasses and liquids. [7] [8] Its origin lies in the optical Kerr effect, a non-linear process which arises in media exposed to intense electromagnetic radiation, and which produces a variation of the refractive index ${\displaystyle n}$ as described by the formula ${\displaystyle n=n_{0}+n_{2}I}$, where n0 and n2 are the linear and non-linear components of the refractive index, and I is the intensity of the radiation. Since n2 is positive in most materials, the refractive index becomes larger in the areas where the intensity is higher, usually at the centre of a beam, creating a focusing density profile which potentially leads to the collapse of a beam on itself. [9] [10] Self-focusing beams have been found to naturally evolve into a Townes profile [5] regardless of their initial shape. [11]

Self-focusing occurs if the radiation power is greater than the critical power [12]
${\displaystyle P_{cr}=\alpha {\frac {\lambda ^{2}}{4\pi n_{0}n_{2}}}}$,

where λ is the radiation wavelength in vacuum and α is a constant which depends on the initial spatial distribution of the beam. Although there is no general analytical expression for α, its value has been derived numerically for many beam profiles. [12] The lower limit is α ≈ 1.86225, which corresponds to Townes beams, whereas for a Gaussian beam α ≈ 1.8962.
For air, n0 ≈ 1, n2 ≈ 4×10−23 m2/W for λ = 800 nm, [13] and the critical power is Pcr ≈ 2.4 GW, corresponding to an energy of about 0.3 mJ for a pulse duration of 100 fs. For silica, n0 ≈ 1.453, n2 ≈ 2.4×10−20 m2/W, [14] and the critical power is Pcr ≈ 2.8 MW.

Kerr-induced self-focusing is crucial for many applications in laser physics, both as a key ingredient and as a limiting factor. For example, the technique of chirped pulse amplification was developed to overcome the nonlinearities and damage of optical components that self-focusing would produce in the amplification of femtosecond laser pulses. On the other hand, self-focusing is a major mechanism behind Kerr-lens modelocking, laser filamentation in transparent media, [15] [16] self-compression of ultrashort laser pulses, [17] parametric generation, [18] and many areas of laser-matter interaction in general.
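The quoted numbers can be checked with a short sketch (assuming the Gaussian-beam value α ≈ 1.8962 for both media, and, for silica, the same 800 nm wavelength as for air):

```python
import math

def critical_power(alpha, wavelength, n0, n2):
    """Kerr self-focusing critical power P_cr = alpha * lambda^2 / (4 pi n0 n2), in watts."""
    return alpha * wavelength**2 / (4 * math.pi * n0 * n2)

ALPHA_GAUSSIAN = 1.8962   # alpha for a Gaussian beam profile
wl = 800e-9               # 800 nm, in metres

p_air = critical_power(ALPHA_GAUSSIAN, wl, 1.0, 4e-23)         # about 2.4e9 W (2.4 GW)
p_silica = critical_power(ALPHA_GAUSSIAN, wl, 1.453, 2.4e-20)  # about 2.8e6 W (2.8 MW)
```

Both results reproduce the orders of magnitude quoted in the text: gigawatts for air, megawatts for silica.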
## Self-focusing and defocusing in gain medium

Kelley [6] predicted that homogeneously broadened two-level atoms may focus or defocus light when the carrier frequency ${\displaystyle \omega }$ is detuned downward or upward from the centre of the gain line ${\displaystyle \omega _{0}}$. Laser pulse propagation with a slowly varying envelope ${\displaystyle E({\vec {\mathbf {r} }},t)}$ in a gain medium is governed by the nonlinear Schrödinger–Frantz–Nodvik equation. [19] When ${\displaystyle \omega }$ is detuned downward or upward from ${\displaystyle \omega _{0}}$, the refractive index is changed. Notably, red detuning leads to an increase of the index during saturation of the resonant transition, i.e. to self-focusing, while for blue detuning the radiation is defocused during saturation:

${\displaystyle {\frac {\partial {{E}({\vec {\mathbf {r} }},t)}}{\partial z}}+{\frac {1}{c}}{\frac {\partial {{E}({\vec {\mathbf {r} }},t)}}{\partial t}}+{\frac {i}{2k}}\nabla _{\bot }^{2}E({\vec {\mathbf {r} }},t)=+ikn_{2}|E({\vec {\mathbf {r} }},t)|^{2}{{E}({\vec {\mathbf {r} }},t)}+}$ ${\displaystyle {\frac {\sigma N({\vec {\mathbf {r} }},t)}{2}}[1+i(\omega _{0}-\omega )T_{2}]{{E}({\vec {\mathbf {r} }},t)},\nabla _{\bot }^{2}={\frac {\partial ^{2}}{{\partial x}^{2}}}+{\frac {\partial ^{2}}{{\partial y}^{2}}},}$

${\displaystyle {\frac {\partial {{N}({\vec {\mathbf {r} }},t)}}{\partial t}}=-{\frac {{N_{0}}({\vec {\mathbf {r} }})}{T_{1}}}-\sigma (\omega )N({\vec {\mathbf {r} }},t)|E({\vec {\mathbf {r} }},t)|^{2},}$

where ${\displaystyle \sigma (\omega )={\frac {\sigma _{0}}{1+T_{2}^{2}(\omega _{0}-\omega )^{2}}}}$ is the stimulated emission cross section, ${\displaystyle {N_{0}}({\vec {\mathbf {r} }})}$ is the population inversion density before pulse arrival, and ${\displaystyle T_{1}}$ and ${\displaystyle T_{2}}$ are the longitudinal and transverse lifetimes
of the two-level medium, and ${\displaystyle z}$ is the propagation axis.

## Filamentation

A laser beam with a smooth spatial profile ${\displaystyle {E}({\vec {\mathbf {r} }},t)}$ is affected by modulational instability: small perturbations caused by roughness and medium defects are amplified during propagation. This effect is referred to as the Bespalov–Talanov instability. [20] In the framework of the nonlinear Schrödinger equation:

${\displaystyle {\frac {\partial {{E}({\vec {\mathbf {r} }},t)}}{\partial z}}+{\frac {1}{c}}{\frac {\partial {{E}({\vec {\mathbf {r} }},t)}}{\partial t}}+{\frac {i}{2k}}\nabla _{\bot }^{2}E({\vec {\mathbf {r} }},t)=+ikn_{2}|E({\vec {\mathbf {r} }},t)|^{2}{{E}({\vec {\mathbf {r} }},t)}}$,

the rate of the perturbation growth, or instability increment ${\displaystyle h}$, is linked with the filament size ${\displaystyle \kappa ^{-1}}$ via the simple equation:

${\displaystyle h^{2}=\kappa ^{2}(n_{2}|E({\vec {\mathbf {r} }},t)|^{2}-\kappa ^{2}/4k^{2})}$.

A generalization of this link between the Bespalov–Talanov increment and the filament size in a gain medium, as a function of the linear gain ${\displaystyle {\sigma N({\vec {\mathbf {r} }},t)}}$ and the detuning ${\displaystyle \delta \omega =\omega _{0}-\omega }$, was given in [19].

## Plasma self-focusing

Advances in laser technology have recently enabled the observation of self-focusing in the interaction of intense laser pulses with plasmas. [21] [22] Self-focusing in plasma can occur through thermal, relativistic and ponderomotive effects. [23] Thermal self-focusing is due to collisional heating of a plasma exposed to electromagnetic radiation: the rise in temperature induces a hydrodynamic expansion which leads to an increase of the index of refraction and further heating.
[24] Relativistic self-focusing is caused by the mass increase of electrons travelling at speeds approaching the speed of light, which modifies the plasma refractive index nrel according to the equation

${\displaystyle n_{rel}={\sqrt {1-{\frac {\omega _{p}^{2}}{\omega ^{2}}}}}}$,

where ω is the radiation angular frequency and ωp the relativistically corrected plasma frequency

${\displaystyle \omega _{p}={\sqrt {\frac {ne^{2}}{\gamma m\epsilon _{0}}}}}$.

Ponderomotive self-focusing is caused by the ponderomotive force, which pushes electrons away from the region where the laser beam is more intense, therefore increasing the refractive index and inducing a focusing effect. [27] [28] [29]

The evaluation of the contribution and interplay of these processes is a complex task, [30] but a reference threshold for plasma self-focusing is the relativistic critical power [2] [31]

${\displaystyle P_{cr}={\frac {m_{e}^{2}c^{5}\omega ^{2}}{e^{2}\omega _{p}^{2}}}\simeq 17{\bigg (}{\frac {\omega }{\omega _{p}}}{\bigg )}^{2}\ {\textrm {GW}}}$,

where me is the electron mass, c the speed of light, ω the radiation angular frequency, e the electron charge and ωp the plasma frequency. For an electron density of 1019 cm−3 and radiation at the wavelength of 800 nm, the critical power is about 3 TW. Such values are realisable with modern lasers, which can exceed PW powers. For example, a laser delivering 50 fs pulses with an energy of 1 J has a peak power of 20 TW.

Self-focusing in a plasma can balance natural diffraction and channel a laser beam. Such an effect is beneficial for many applications, since it helps increase the length of the interaction between laser and medium. This is crucial, for example, in laser-driven particle acceleration, [32] laser-fusion schemes [33] and high harmonic generation. [34]

## Accumulated self-focusing

Self-focusing can be induced by a permanent refractive index change resulting from a multi-pulse exposure.
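The ~3 TW estimate above can likewise be reproduced numerically. This is a sketch with SI constants hard-coded, the Lorentz factor γ taken as 1 in the plasma frequency, and the electron density of 1019 cm−3 converted to 1025 m−3:

```python
import math

# Physical constants (SI, rounded)
M_E = 9.109e-31    # electron mass, kg
E_CH = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12   # vacuum permittivity, F/m
C = 2.998e8        # speed of light, m/s

def plasma_frequency(n_e):
    """Plasma frequency (gamma = 1), rad/s, for electron density n_e in m^-3."""
    return math.sqrt(n_e * E_CH**2 / (EPS0 * M_E))

def relativistic_critical_power(wavelength, n_e):
    """Relativistic self-focusing threshold P_cr ~ 17 (omega/omega_p)^2 GW, in watts."""
    omega = 2 * math.pi * C / wavelength
    return 17e9 * (omega / plasma_frequency(n_e)) ** 2

p_rel = relativistic_critical_power(800e-9, 1e25)  # about 3e12 W (3 TW)
```

For this density, ω/ωp ≈ 13, so the threshold sits roughly 170 times above the 17 GW scale, in agreement with the 3 TW figure in the text.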
This effect has been observed in glasses which increase the refractive index during an exposure to ultraviolet laser radiation. [35] Accumulated self-focusing develops as wave guiding, rather than a lensing effect. The scale of the actively forming beam filaments is a function of the exposure dose. Evolution of each beam filament towards a singularity is limited by the maximum induced refractive index change or by the laser damage resistance of the glass.

## Self-focusing in soft matter and polymer systems

Self-focusing can also be observed in a number of soft matter systems, such as solutions of polymers and particles as well as photo-polymers. [36] Self-focusing was observed in photo-polymer systems with microscale laser beams of either UV [37] or visible light. [38] The self-trapping of incoherent light was also later observed. [39] Self-focusing can also be observed in wide-area beams, wherein the beam undergoes filamentation, or modulation instability, spontaneously dividing into a multitude of microscale self-focused beams, or filaments. [40] [41] [39] [42] [43] The balance of self-focusing and natural beam divergence results in the beams propagating divergence-free. Self-focusing in photopolymerizable media is possible owing to a photoreaction-dependent refractive index, [37] and the fact that refractive index in polymers is proportional to molecular weight and crosslinking degree, [44] which increase over the duration of photo-polymerization.
The moving particle loses kinetic energy, which is converted into radiation, thus satisfying the law of conservation of energy. The term is also used to refer to the process of producing the radiation. Bremsstrahlung has a continuous spectrum, which becomes more intense, and whose peak intensity shifts toward higher frequencies, as the change of the energy of the decelerated particles increases.

Ionization, or ionisation, is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules and ions, or through the interaction with electromagnetic radiation. Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons, causing it to be ejected.

Zero-point energy (ZPE) is the difference between the lowest possible energy that a quantum mechanical system may have and the classical minimum energy of the system. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state due to the Heisenberg uncertainty principle. As well as atoms and molecules, the empty space of the vacuum has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but as continuous fluctuating fields: matter fields, whose quanta are fermions, and force fields, whose quanta are bosons. All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics, since some systems can detect the existence of this energy.
However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant, such that there is no contradiction with Einstein's theory of special relativity.

Synchrotron radiation is the electromagnetic radiation emitted when charged particles are accelerated radially, i.e., when they are subject to an acceleration perpendicular to their velocity. It is produced, for example, in synchrotrons using bending magnets, undulators and/or wigglers. If the particle is non-relativistic, the emission is called cyclotron emission. If, on the other hand, the particles are relativistic, sometimes referred to as ultrarelativistic, the emission is called synchrotron emission. Synchrotron radiation may be achieved artificially in synchrotrons or storage rings, or naturally by fast electrons moving through magnetic fields. The radiation produced in this way has a characteristic polarization, and the frequencies generated can range over the entire electromagnetic spectrum, which is also called continuum radiation.

Optical tweezers are scientific instruments that use a highly focused laser beam to provide an attractive or repulsive force, depending on the relative refractive index between particle and surrounding medium, to physically hold and move microscopic objects, similar to tweezers. They are able to trap and manipulate small particles, typically on the order of a micron in size, including dielectric and absorbing particles. Optical tweezers have been particularly successful in studying a variety of biological systems in recent years.

The Drude model of electrical conduction was proposed in 1900 by Paul Drude to explain the transport properties of electrons in materials. The model, which is an application of kinetic theory, assumes that the microscopic behavior of electrons in a solid may be treated classically and looks much like a pinball machine, with a sea of constantly jittering electrons bouncing and re-bouncing off heavier, relatively immobile positive ions.
The Kerr effect, also called the quadratic electro-optic (QEO) effect, is a change in the refractive index of a material in response to an applied electric field. The Kerr effect is distinct from the Pockels effect in that the induced index change is directly proportional to the square of the electric field instead of varying linearly with it. All materials show a Kerr effect, but certain liquids display it more strongly than others. The Kerr effect was discovered in 1875 by John Kerr, a Scottish physicist.

In theoretical physics, the (one-dimensional) nonlinear Schrödinger equation (NLSE) is a nonlinear variation of the Schrödinger equation. It is a classical field equation whose principal applications are to the propagation of light in nonlinear optical fibers and planar waveguides and to Bose–Einstein condensates confined to highly anisotropic, cigar-shaped traps, in the mean-field regime. Additionally, the equation appears in the studies of small-amplitude gravity waves on the surface of deep inviscid (zero-viscosity) water; the Langmuir waves in hot plasmas; the propagation of plane-diffracted wave beams in the focusing regions of the ionosphere; the propagation of Davydov's alpha-helix solitons, which are responsible for energy transport along molecular chains; and many others. More generally, the NLSE appears as one of the universal equations that describe the evolution of slowly varying packets of quasi-monochromatic waves in weakly nonlinear media that have dispersion. Unlike the linear Schrödinger equation, the NLSE never describes the time evolution of a quantum state. The 1D NLSE is an example of an integrable model.

In physics, a ponderomotive force is a nonlinear force that a charged particle experiences in an inhomogeneous oscillating electromagnetic field.

Self-phase modulation (SPM) is a nonlinear optical effect of light-matter interaction.
An ultrashort pulse of light, when travelling in a medium, will induce a varying refractive index of the medium due to the optical Kerr effect. This variation in refractive index will produce a phase shift in the pulse, leading to a change of the pulse's frequency spectrum.

Photoacoustic imaging is a biomedical imaging modality based on the photoacoustic effect. In photoacoustic imaging, non-ionizing laser pulses are delivered into biological tissues. Some of the delivered energy will be absorbed and converted into heat, leading to transient thermoelastic expansion and thus wideband ultrasonic emission. The generated ultrasonic waves are detected by ultrasonic transducers and then analyzed to produce images. It is known that optical absorption is closely associated with physiological properties, such as hemoglobin concentration and oxygen saturation. As a result, the magnitude of the ultrasonic emission, which is proportional to the local energy deposition, reveals physiologically specific optical absorption contrast. 2D or 3D images of the targeted areas can then be formed.

In nonlinear optics, filament propagation is propagation of a beam of light through a medium without diffraction. This is possible because the Kerr effect causes an index of refraction change in the medium, resulting in self-focusing of the beam.

A pinch is the compression of an electrically conducting filament by magnetic forces. The conductor is usually a plasma, but could also be a solid or liquid metal. Pinches were the first type of device used for controlled nuclear fusion.

The Jaynes–Cummings model is a theoretical model in quantum optics. It describes the system of a two-level atom interacting with a quantized mode of an optical cavity, with or without the presence of light. It was originally developed to study the interaction of atoms with the quantized electromagnetic field in order to investigate the phenomena of spontaneous emission and absorption of photons in a cavity.
The Frank–Tamm formula yields the amount of Cherenkov radiation emitted at a given frequency as a charged particle moves through a medium at superluminal velocity. It is named for the Russian physicists Ilya Frank and Igor Tamm, who developed the theory of the Cherenkov effect in 1937, for which they were awarded a Nobel Prize in Physics in 1958.

In optics, the term soliton is used to refer to any optical field that does not change during propagation because of a delicate balance between nonlinear and linear effects in the medium. There are two main kinds of solitons:

In relativistic laser-plasma physics, the relativistic similarity parameter S is a dimensionless parameter defined as

The semiconductor luminescence equations (SLEs) describe luminescence of semiconductors resulting from spontaneous recombination of electronic excitations, producing a flux of spontaneously emitted light. This description established the first step toward semiconductor quantum optics because the SLEs simultaneously include the quantized light–matter interaction and the Coulomb-interaction coupling among electronic excitations within a semiconductor. The SLEs are one of the most accurate methods to describe light emission in semiconductors and they are suited for a systematic modeling of semiconductor emission ranging from excitonic luminescence to lasers.

## References

1. Cumberbatch, E. (1970). "Self-focusing in Non-linear Optics". IMA Journal of Applied Mathematics. 6 (3): 250–62. doi:10.1093/imamat/6.3.250. 2. Mourou, Gerard A.; Tajima, Toshiki; Bulanov, Sergei V. (2006). "Optics in the relativistic regime". Reviews of Modern Physics. 78 (2): 309. Bibcode:2006RvMP...78..309M. doi:10.1103/RevModPhys.78.309. 3. Rashidian Vaziri, M.R. (2015). "Comment on 'Nonlinear refraction measurements of materials using the moiré deflectometry'". Optics Communications. 357: 200–1. Bibcode:2015OptCo.357..200R. doi:10.1016/j.optcom.2014.09.017. 4. Askar'yan, G. A. (1962).
"Cerenkov Radiation and Transition Radiation from Electromagnetic Waves". Journal of Experimental and Theoretical Physics. 15 (5): 943–6. 5. Chiao, R. Y.; Garmire, E.; Townes, C. H. (1964). "Self-Trapping of Optical Beams". Physical Review Letters. 13 (15): 479. Bibcode:1964PhRvL..13..479C. doi:10.1103/PhysRevLett.13.479. 6. Kelley, P. L. (1965). "Self-Focusing of Optical Beams". Physical Review Letters. 15 (26): 1005–1008. Bibcode:1965PhRvL..15.1005K. doi:10.1103/PhysRevLett.15.1005. 7. Lallemand, P.; Bloembergen, N. (1965). "Self-Focusing of Laser Beams and Stimulated Raman Gain in Liquids". Physical Review Letters. 15 (26): 1010. Bibcode:1965PhRvL..15.1010L. doi:10.1103/PhysRevLett.15.1010. 8. Garmire, E.; Chiao, R. Y.; Townes, C. H. (1966). "Dynamics and Characteristics of the Self-Trapping of Intense Light Beams". Physical Review Letters. 16 (9): 347. Bibcode:1966PhRvL..16..347G. doi:10.1103/PhysRevLett.16.347. 9. Gaeta, Alexander L. (2000). "Catastrophic Collapse of Ultrashort Pulses". Physical Review Letters. 84 (16): 3582–5. Bibcode:2000PhRvL..84.3582G. doi:10.1103/PhysRevLett.84.3582. PMID   11019151. 10. Rashidian Vaziri, M R (2013). "Describing the propagation of intense laser pulses in nonlinear Kerr media using the ducting model". Laser Physics. 23 (10): 105401. Bibcode:2013LaPhy..23j5401R. doi:10.1088/1054-660X/23/10/105401. 11. Moll, K. D.; Gaeta, Alexander L.; Fibich, Gadi (2003). "Self-Similar Optical Wave Collapse: Observation of the Townes Profile". Physical Review Letters. 90 (20): 203902. Bibcode:2003PhRvL..90t3902M. doi:10.1103/PhysRevLett.90.203902. PMID   12785895. 12. Fibich, Gadi; Gaeta, Alexander L. (2000). "Critical power for self-focusing in bulk media and in hollow waveguides". Optics Letters. 25 (5): 335–7. Bibcode:2000OptL...25..335F. doi:10.1364/OL.25.000335. PMID   18059872. 13. Nibbering, E. T. J.; Grillon, G.; Franco, M. A.; Prade, B. S.; Mysyrowicz, A. (1997). 
"Determination of the inertial contribution to the nonlinear refractive index of air, N2, and O2 by use of unfocused high-intensity femtosecond laser pulses". Journal of the Optical Society of America B. 14 (3): 650–60. Bibcode:1997JOSAB..14..650N. doi:10.1364/JOSAB.14.000650. 14. Garcia, Hernando; Johnson, Anthony M.; Oguama, Ferdinand A.; Trivedi, Sudhir (2003). "New approach to the measurement of the nonlinear refractive index of short (< 25 m) lengths of silica and erbium-doped fibers". Optics Letters. 28 (19): 1796–8. Bibcode:2003OptL...28.1796G. doi:10.1364/OL.28.001796. PMID   14514104. 15. Kasparian, J.; Rodriguez, M.; Méjean, G.; Yu, J.; Salmon, E.; Wille, H.; Bourayou, R.; Frey, S.; André, Y.-B.; Mysyrowicz, A.; Sauerbrey, R.; Wolf, J.-P.; Wöste, L. (2003). "White-Light Filaments for Atmospheric Analysis". Science. 301 (5629): 61–4. Bibcode:2003Sci...301...61K. doi:10.1126/science.1085020. PMID   12843384. 16. Couairon, A; Mysyrowicz, A (2007). "Femtosecond filamentation in transparent media". Physics Reports. 441 (2–4): 47–189. Bibcode:2007PhR...441...47C. doi:10.1016/j.physrep.2006.12.005. 17. Stibenz, Gero; Zhavoronkov, Nickolai; Steinmeyer, Günter (2006). "Self-compression of millijoule pulses to 78 fs duration in a white-light filament". Optics Letters. 31 (2): 274–6. Bibcode:2006OptL...31..274S. doi:10.1364/OL.31.000274. PMID   16441054. 18. Cerullo, Giulio; De Silvestri, Sandro (2003). "Ultrafast optical parametric amplifiers". Review of Scientific Instruments. 74 (1): 1. Bibcode:2003RScI...74....1C. doi:10.1063/1.1523642. 19. Okulov, A Yu; Oraevskiĭ, A N (1988). "Compensation of self-focusing distortions in quasiresonant amplification of a light pulse". Soviet Journal of Quantum Electronics. 18 (2): 233–7. Bibcode:1988QuEle..18..233O. doi:10.1070/QE1988v018n02ABEH011482. 20. Bespalov, VI; Talanov, VI (1966). "Filamentary Structure of Light Beams in Nonlinear Liquids". JETP Letters. 3 (12): 307–310. 21. Borisov, A. B.; Borovskiy, A.
V.; Korobkin, V. V.; Prokhorov, A. M.; Shiryaev, O. B.; Shi, X. M.; Luk, T. S.; McPherson, A.; Solem, J. C.; Boyer, K.; Rhodes, C. K. (1992). "Observation of relativistic and charge-displacement self-channeling of intense subpicosecond ultraviolet (248 nm) radiation in plasmas". Physical Review Letters. 68 (15): 2309–2312. Bibcode:1992PhRvL..68.2309B. doi:10.1103/PhysRevLett.68.2309. PMID   10045362. 22. Monot, P.; Auguste, T.; Gibbon, P.; Jakober, F.; Mainfray, G.; Dulieu, A.; Louis-Jacquet, M.; Malka, G.; Miquel, J. L. (1995). "Experimental Demonstration of Relativistic Self-Channeling of a Multiterawatt Laser Pulse in an Underdense Plasma". Physical Review Letters. 74 (15): 2953–2956. Bibcode:1995PhRvL..74.2953M. doi:10.1103/PhysRevLett.74.2953. PMID   10058066. 23. Mori, W. B.; Joshi, C.; Dawson, J. M.; Forslund, D. W.; Kindel, J. M. (1988). "Evolution of self-focusing of intense electromagnetic waves in plasma" (Submitted manuscript). Physical Review Letters. 60 (13): 1298–1301. Bibcode:1988PhRvL..60.1298M. doi:10.1103/PhysRevLett.60.1298. PMID   10037999. 24. Perkins, F. W.; Valeo, E. J. (1974). "Thermal Self-Focusing of Electromagnetic Waves in Plasmas". Physical Review Letters. 32 (22): 1234. Bibcode:1974PhRvL..32.1234P. doi:10.1103/PhysRevLett.32.1234. 25. Max, Claire Ellen; Arons, Jonathan; Langdon, A. Bruce (1974). "Self-Modulation and Self-Focusing of Electromagnetic Waves in Plasmas". Physical Review Letters. 33 (4): 209. Bibcode:1974PhRvL..33..209M. doi:10.1103/PhysRevLett.33.209. 26. Pukhov, Alexander (2003). "Strong field interaction of laser radiation". Reports on Progress in Physics. 66 (1): 47–101. Bibcode:2003RPPh...66...47P. doi:10.1088/0034-4885/66/1/202. 27. Kaw, P.; Schmidt, G.; Wilcox, T. (1973). "Filamentation and trapping of electromagnetic radiation in plasmas". Physics of Fluids. 16 (9): 1522. Bibcode:1973PhFl...16.1522K. doi:10.1063/1.1694552. 28. Pizzo, V Del; Luther-Davies, B (1979). 
"Evidence of filamentation (self-focusing) of a laser beam propagating in a laser-produced aluminium plasma". Journal of Physics D: Applied Physics. 12 (8): 1261–73. Bibcode:1979JPhD...12.1261D. doi:10.1088/0022-3727/12/8/005. 29. Del Pizzo, V.; Luther-Davies, B.; Siegrist, M. R. (1979). "Self-focussing of a laser beam in a multiply ionized, absorbing plasma". Applied Physics. 18 (2): 199–204. Bibcode:1979ApPhy..18..199D. doi:10.1007/BF00934416. 30. Faure, J.; Malka, V.; Marquès, J.-R.; David, P.-G.; Amiranoff, F.; Ta Phuoc, K.; Rousse, A. (2002). "Effects of pulse duration on self-focusing of ultra-short lasers in underdense plasmas". Physics of Plasmas. 9 (3): 756. Bibcode:2002PhPl....9..756F. doi:10.1063/1.1447556. 31. Sun, Guo-Zheng; Ott, Edward; Lee, Y. C.; Guzdar, Parvez (1987). "Self-focusing of short intense pulses in plasmas". Physics of Fluids. 30 (2): 526. Bibcode:1987PhFl...30..526S. doi:10.1063/1.866349. 32. Malka, V; Faure, J; Glinec, Y; Lifschitz, A.F (2006). "Laser-plasma accelerator: Status and perspectives". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 364 (1840): 601–10. Bibcode:2006RSPTA.364..601M. doi:10.1098/rsta.2005.1725. PMID   16483951. 33. Tabak, M.; Clark, D. S.; Hatchett, S. P.; Key, M. H.; Lasinski, B. F.; Snavely, R. A.; Wilks, S. C.; Town, R. P. J.; Stephens, R.; Campbell, E. M.; Kodama, R.; Mima, K.; Tanaka, K. A.; Atzeni, S.; Freeman, R. (2005). "Review of progress in Fast Ignition". Physics of Plasmas. 12 (5): 057305. Bibcode:2005PhPl...12e7305T. doi:10.1063/1.1871246. 34. Umstadter, Donald (2003). "Relativistic laser plasma interactions". Journal of Physics D: Applied Physics. 36 (8): R151–65. doi:10.1088/0022-3727/36/8/202. 35. Khrapko, Rostislav; Lai, Changyi; Casey, Julie; Wood, William A.; Borrelli, Nicholas F. (2014). "Accumulated self-focusing of ultraviolet light in silica glass". Applied Physics Letters. 105 (24): 244110. Bibcode:2014ApPhL.105x4110K. 
doi:10.1063/1.4904098. 36. Biria, Saeid (2017). "Coupling nonlinear optical waves to photoreactive and phase-separating soft matter: Current status and perspectives". Chaos. 27 (10): 104611. doi:10.1063/1.5001821. PMID   29092420. 37. Kewitsch, Anthony S.; Yariv, Amnon (1996). "Self-focusing and self-trapping of optical beams upon photopolymerization". Optics Letters. 21 (1): 24–6. Bibcode:1996OptL...21...24K. doi:10.1364/ol.21.000024. PMID   19865292. 38. Yamashita, T.; Kagami, M. (2005). "Fabrication of light-induced self-written waveguides with a W-shaped refractive index profile". Journal of Lightwave Technology. 23 (8): 2542–8. Bibcode:2005JLwT...23.2542Y. doi:10.1109/JLT.2005.850783. 39. Biria, Saeid; Malley, Philip P. A.; Kahan, Tara F.; Hosein, Ian D. (2016). "Tunable Nonlinear Optical Pattern Formation and Microstructure in Cross-Linking Acrylate Systems during Free-Radical Polymerization". The Journal of Physical Chemistry C. 120 (8): 4517–28. doi:10.1021/acs.jpcc.5b11377. 40. Burgess, Ian B.; Shimmell, Whitney E.; Saravanamuttu, Kalaichelvi (2007). "Spontaneous Pattern Formation Due to Modulation Instability of Incoherent White Light in a Photopolymerizable Medium". Journal of the American Chemical Society. 129 (15): 4738–46. doi:10.1021/ja068967b. PMID   17378567. 41. Basker, Dinesh K.; Brook, Michael A.; Saravanamuttu, Kalaichelvi (2015). "Spontaneous Emergence of Nonlinear Light Waves and Self-Inscribed Waveguide Microstructure during the Cationic Polymerization of Epoxides". The Journal of Physical Chemistry C. 119 (35): 20606. doi:10.1021/acs.jpcc.5b07117. 42. Biria, Saeid; Malley, Phillip P. A.; Kahan, Tara F.; Hosein, Ian D. (2016). "Optical Autocatalysis Establishes Novel Spatial Dynamics in Phase Separation of Polymer Blends during Photocuring". ACS Macro Letters. 5 (11): 1237–41. doi:10.1021/acsmacrolett.6b00659. 43. Biria, Saeid; Hosein, Ian D. (2017-05-09).
"Control of Morphology in Polymer Blends through Light Self-Trapping: An in Situ Study of Structure Evolution, Reaction Kinetics, and Phase Separation". Macromolecules. 50 (9): 3617–3626. Bibcode:2017MaMol..50.3617B. doi:10.1021/acs.macromol.7b00484. ISSN   0024-9297. 44. Askadskii, A.A (1990). "Influence of crosslinking density on the properties of polymer networks". Polymer Science U.S.S.R. 32 (10): 2061–9. doi:10.1016/0032-3950(90)90361-9.
https://proteustoolkit.org/models/body_dynamics.html
# Body Dynamics

## Implementation

```python
from proteus.mbd import CouplingFSI as fsi
```

Proteus uses wrappers and modified/derived classes from the Chrono engine, an open-source multibody dynamics library available at https://github.com/projectchrono/chrono. The body and cable classes described below can interact with Proteus models such as the Navier-Stokes model (to retrieve forces and moments), the moving (ALE) mesh model (to move the domain with the structure), and the added-mass model.

## Classes

### ProtChSystem

The ProtChSystem class has a pointer to a Chrono ChSystem and holds the general options for the Chrono simulation, such as time step size, gravity, etc. All the physical bodies described below must be associated with a ProtChSystem instance.

```python
import numpy as np
import pychrono
from proteus.mbd import CouplingFSI as fsi

my_system = fsi.ProtChSystem()
g = np.array([0., 0., -9.81])
my_system.setGravitationalAcceleration(g)
my_system.setTimeStep(0.001)  # the time step for Chrono calculations
my_chsystem = my_system.getChronoObject()  # access chrono object
```

Important

The ProtChSystem instance itself must be added to the auxiliaryVariables list of the Navier-Stokes model in order to calculate and retrieve the fluid forces from the fluid pressure field provided by Proteus at the boundaries of the different bodies.

### ProtChBody

Class for creating a rigid body. It has a Chrono ChBody body variable (ProtChBody.ChBody) accessible within Python with some of the functionalities/functions of the Chrono ChBody. It must be associated with a ProtChSystem instance in order to be included in the multibody dynamics simulation. This can be done by passing the system argument as the ProtChBody instance is created (see example below); otherwise, the function ProtChSystem.addProtChBody(my_body) can be called separately.
```python
my_body = fsi.ProtChBody(system=my_system)
my_body.attachShape(my_shape)  # sets everything automatically
my_body.setRecordValues(all_values=True)  # record everything
my_chbody = my_body.getChronoObject()  # access chrono object
```

When set up properly and running with a Proteus Navier-Stokes simulation, the fluid pressure will be applied on the boundaries of the rigid body. The ChBody will be moved accordingly, as well as its boundaries (assuming that a moving mesh or immersed boundaries are used).

Attention

The ProtChBody.ChBody variable accessible with ProtChBody.getChronoObject() actually uses a class derived from the base Chrono ChBody in order to add the possibility of using an added-mass matrix (see ChBodyAddedMass in proteus.mbd.ChRigidBody.h).

### ProtChMesh

This class creates a ChMesh, which is needed to create moorings.

```python
my_mesh = fsi.ProtChMesh(system=my_system)
my_chmesh = my_mesh.getChronoObject()
```

### ProtChMoorings

This class is for easily creating cables. The following properties must be known in order to instantiate a ProtChMoorings: a ProtChSystem instance, a mesh instance, length for the length of the cable/segment, nb_elems for the number of elements along the cable/segment, d for the diameter of the cable/segment, rho for the density of the cable/segment, and E for the Young modulus of the cable/segment.
```python
my_mooring = fsi.ProtChMoorings(system=my_system,
                                mesh=my_mesh,
                                length=np.array([10.]),
                                nb_elems=np.array([10], dtype=np.int32),
                                d=np.array([0.01]),
                                rho=np.array([300.2]),
                                E=np.array([1e9]))
# set functions to place the nodes along the cable ('s' is the position along the 1D cable)
fpos = lambda s: np.array([s, 1., 0.])  # position along cable
ftan = lambda s: np.array([1., 0., 0.])  # tangent of cable along cable
my_mooring.setNodesPositionFunction(fpos, ftan)
# set the nodes position from the functions
my_mooring.setNodesPosition()
# build nodes (automatic with fpos/ftan); nodes are equally spaced
# according to the number of elements (nb_elems)
my_mooring.buildNodes()
my_mooring.attachBackNodeToBody(my_body)
# fix front node as anchor
my_mooring.fixFrontNode(True)
```

Setting the position function is useful when a relatively complex layout of the cable is desired, such as a catenary shape.

Note

The reason for the array structure of the length, nb_elems, d, rho, and E parameters is that a cable can be multi-segmented (different sections of the same cable having different material properties).

### ProtChAddedMass

A class to deal with the added-mass model from proteus.mprans.AddedMass. This class should not be instantiated manually; it is automatically instantiated as a variable of ProtChSystem (accessible as my_system.ProtChAddedMass). It is used to build the added-mass matrix for the rigid bodies.

Important

This class instance must be passed to the AddedMass model's auxiliaryVariables to have any effect (auxiliaryVariables.append(my_system.ProtChAddedMass)).

## Postprocessing Tools

### ProtChBody

The data related to the rigid body is saved in a CSV file, usually [my_body.name].csv. Additionally, if the added-mass model was used, the values of the added-mass matrix are available in [my_body.name]_Aij_.csv.

### ProtChMoorings

The data related to mooring cables is saved in an HDF5 file, usually [my_mooring.name].h5, which can be read directly with h5py.
Another way to read and visualise the data is to use the associated [my_mooring.name].xmf. The following script must first be run (note that there is no extension for the file name):

```
{PROTEUS_DIR}/scripts/gatherTimes.py -f [my_mooring.name]
```

where {PROTEUS_DIR} is the root directory of the Proteus installation. This will create [my_mooring.name]_complete.xmf, which can be opened in ParaView to navigate the time steps that have been recorded.
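Since the body record output mentioned above is a plain CSV file, it can also be loaded with nothing but the standard library. A minimal sketch (the file name and column names here are purely illustrative; check the header row of your own [my_body.name].csv):

```python
import csv

def read_record_file(path):
    """Load a record CSV (e.g. one written by setRecordValues) into
    a list of dicts keyed by the column names in the header row."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Each row comes back as strings keyed by column name; convert fields to float as needed.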
https://ask.wireshark.org/question/13552/how-to-read-mentioned-packet-logs/?answer=13553
# How to read mentioned packet logs?

Log:

MPGD68_Layer 2 Service Board:4 Port:1 Packet Capture Direction:0
Statistics reported the total number of packet header: 1000
Chip captureed the total number of packet header: 527890

3C DA 2A 81 B9 0D D4 E3 3F EF 46 30 81 00 C0 86 08 00 45 88 00 30 6C 40 00 00 7D 11 F9 70 0A 06 81 E6 0A 87 41 11 68 48 FD 5F 00 1C 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
3C DA 2A 9B 62 A5 D4 E3 3F EF 46 30 81 00 80 D8 08 00 45 68 00 58 00 00 00 00 FB 11 3C 78 0A CE 50 2F 0A 87 1D 31 08 68 08 68 00 44 9D 5F 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
3C DA 2A 9B 62 A5 D4 E3 3F EF 46 2F 81 00 01 3C 08 00 45 68 05 60 29 C5 00 00 F8 11 E8 DF 0A 4B D4 84 0A 18 C1 98 08 68 08 68 05 4C 9F 89 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
3C DA 2A 9B 62 A5 D4 E3 3F EF 46 30 81 00 C0 D8 08 00 45 88 00 70 10 7F 00 00 3B 11 2B 13 0A 28 11 84 0A 87 1D 31 08 68 08 68 00 5C 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
3C DA 2A 9B 62 A5 D4 E3 3F EF 46 2F 81 00 01 3C 08 00 45 68 05 60 29 C6 00 00 F8 11 E8 DE 0A 4B D4 84 0A 18 C1 98 08 68 08 68 05 4C 9F 89 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
3C DA 2A 9B 62 A5 D4 E3 3F EF 46 2F 81 00 01 3C 08 00 45 68 05 60 29 C7 00 00 F8 11 E8 DD 0A 4B D4 84 0A 18 C1 98 08 68 08 68 05 4C 9F 89 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
3C DA 2A 9B 62 A5 D4 E3 3F EF 46 2F 81 00 01 3C 08 00 45 68 05 60 29 C8 00 00 F8 11 E8 DC 0A 4B D4 84 0A 18 C1 98 08 68 08 68 05 4C 9F 89 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
3C DA 2A 9B 62 A5 D4 E3 3F EF 46 ...

Those appear to be Ethernet VLAN packets, as they begin with: 1. 6 octets that could be an Ethernet destination address; 2. 6 octets that could be an Ethernet source address; 3. 2 octets of 81 00, which is the Ethernet type for an 802.1Q VLAN header; 4. 2 octets of VLAN tag; 5.
2 octets of 08 00, which is the Ethernet type for IPv4; 6. an octet of 45, which would be the first octet of an IPv4 header with no options.

Unfortunately, the text2pcap program that comes with Wireshark expects each line to begin with an offset number, so, if you were to use it to try to translate that text file to a pcap, you'd have to stick something such as six 0's, followed by a space, in front of every line, so the first packet line would become

000000 3C DA 2A 81 B9 0D D4 E3 3F EF 46 30 81 00 C0 86 08 00 45 88 00 30 6C 40 00 00 7D 11 F9 70 0A 06 81 E6 0A 87 41 11 68 48 FD 5F 00 1C 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

You might also have to remove the lines before the first packet line, and put a space after each line.
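The steps above can be sketched in plain Python (a hypothetical helper, not part of Wireshark; the byte offsets simply follow the field breakdown given in this answer):

```python
def parse_outer_headers(hexline):
    """Decode the outer Ethernet/VLAN fields of one hex-dump line,
    following the byte offsets listed in the answer."""
    b = bytes.fromhex(hexline.replace(" ", ""))
    return {
        "dst": b[0:6].hex(":"),                        # destination MAC
        "src": b[6:12].hex(":"),                       # source MAC
        "tpid": int.from_bytes(b[12:14], "big"),       # 0x8100 = 802.1Q tag follows
        "vlan_tag": int.from_bytes(b[14:16], "big"),   # VLAN tag (PCP/DEI/VID)
        "ethertype": int.from_bytes(b[16:18], "big"),  # 0x0800 = IPv4
        "ip_first_octet": b[18],                       # 0x45 = IPv4, no options
    }

def to_text2pcap(lines):
    """Prefix each hex-dump line with a zero offset so text2pcap
    starts a new packet at every line; skip non-hex header lines."""
    hexdigits = set("0123456789abcdefABCDEF")
    out = []
    for line in lines:
        toks = line.split()
        if toks and all(len(t) == 2 and set(t) <= hexdigits for t in toks):
            out.append("000000 " + " ".join(toks) + " ")  # trailing space, as noted above
    return out
```

The filter keeps only lines made entirely of two-character hex tokens, which drops the "Log:" and statistics lines automatically.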
http://motls.blogspot.com.au/2017/08/police-finally-asks-lawmakers-to-enable.html?m=1
## Thursday, August 10, 2017

### Police finally asks lawmakers to enable prosecution of Babiš

Will he win the October elections while arrested?

Andrej Babiš, a Slovak billionaire, a former communist cadre, an ex-agent of the communist secret police, and the Czech finance minister up to Spring, was fired by the social democratic prime minister Sobotka a few months ago, in a series of events that looked like a farce, mainly because Sobotka repeatedly changed his opinions on how to deal with the problem named Babiš. The reasons were numerous. At least at the level of economic morality that Babiš – and Sobotka – loudly demand from everyone else, there was no doubt that Babiš had done too many immoral things. He has done lots of strange things to fool his former business partners, evaded taxation of CZK 1 bonds by tricks with the rounding to the nearest integer, etc., but the most famous wrongdoing is the subsidies for his Stork's Nest, a luxurious farm and rural tourist resort I saw a year ago. He temporarily moved the company owning the project to his relatives, they secured $2 million of EU subsidies that were meant to help small and medium businesses, and then Babiš – who owns a $3 billion company – restored the ownership of his Stork's Nest company. At the moral level, it's obviously a theft or fraud. At the legal level, the laws may have stupid loopholes that actually make such things legal. But I think it's unlikely. This particular $2 million scam is being investigated by EU investigators. Finally, the Czech police began to do the same. An hour ago, the police asked the Parliament to vote and allow the prosecution of Andrej Babiš and his #2 both in the Agrofert company and the ANO political movement (these hierarchies are largely copied in both entities, in order to make the conflict of interest more striking), Mr Jaroslav Faltýnek. Based on their previous pledges, not only should the Parliament vote to "extradite": ANO's deputies should actually vote "extradite", too.
They have always pledged that this is the only correct way to act in such situations. Just months ago, Babiš's movement had a rule in its statutes that whoever starts to be prosecuted by the police has to leave politics – because they're such great warriors against economic crime, aren't they? Cleverly enough, they removed that rule shortly before Babiš, their very Führer himself, became the subject of a prosecution by the Czech police, so he doesn't need to resign. Nevertheless, many Czech parties instantly urged both Babiš and Faltýnek to resign from the Parliament.

Now, I believe that Czechs – the most atheist nation in Europe by far – are generally reasonable in lots of things. Even more so than some other post-communist nations in Central Europe, we understand the problems of the co-existence of very different cultures and the counterproductiveness of the bending of the asylum and immigration laws by the European Union authorities. On top of that, I actually believe that Czechia has the greatest freedom of speech in the world – much safer than countries in the West as well as those in the East.

While Poland and Hungary are doing many great things, I am also somewhat anxious about the increased control held by their key political parties – although I don't really agree that something "unacceptable" has already taken place in these countries. Those changes are clearly a consequence of the fact that some parties could get towards or above the 40% territory, and such strength allows parties to do unusual things in systems that normally require coalitions of many smaller parties.
Czechia has been "lucky" so far – the strongest party was generally below 30% for quite some time (the political spectrum has been sufficiently fragmented in recent decades), so coalitions were always needed and we didn't have the opportunity to become or look "totalitarian" in this sense – the conflicts between the coalition parties have been a part of our politics pretty much every day since the Velvet Revolution. In some sense, it's an advantage relative to Poland and Hungary. And all these Central European countries are obviously much freer than Western Europe or the U.S. when it comes to politically correct speech codes.

But some things about the Czechs' political attitudes are just shockingly wrong. In particular, Andrej Babiš is still expected to win the October elections with some 30% of the votes – it could be a bit less or a bit more, nobody knows for sure now. His political movement of the Führer type will almost certainly be the strongest player in Czech politics (so far it's #2 according to the composition of the Parliament, but for many years, his ANO has been at the top in the surveys). It's likely but not quite certain that he would still need coalition partners, and it's unknown which ones he actually prefers. An ANO-communist coalition looks rather terrifying to me.

But I simply can't have any sympathy or empathy for the processes inside the brains of these 30% or so voters of Andrej Babiš. He's done some of the ethically worst things that people were doing before 1989. His path to wealth, both in Slovakia and in Czechia where he escaped at some point, is suspicious and full of bizarre tricks on how to beat his former business partners. He's been getting billions of crowns in subsidies for rapeseed – used for biofuels – and other things. And so on. More seriously, he wants to dismantle the Parliamentary system as we know it. He wants to abolish the Senate. He wants to reduce the number of lawmakers in the House.
He has said that he basically wants to ban the lawmakers from talking too much in the Parliament. He wants to abolish the municipal "Parliaments" and make sure that the mayors take all the power. Discussions and negotiations are just "babbling" for him, not work. And he only wants the accountable people to "work" – i.e. mindlessly act without discussions. He is convinced that the government – most likely himself – must have the right to monitor every financial transaction that occurs in the country. He is enthusiastic about imposing $20,000 fines on small businesses every time they make a small financial or bureaucratic sin.

He bought top newspapers, apparently convinced that they would provide him with a monopoly over the information in Czechia. It hasn't worked for him so far – but some of these failures are due to his having won only 22% in the latest elections – that may change when he's given 30% or more. I think that Babiš is intrinsically far more authoritarian than Orbán, let alone Kaczynski.

Some of Babiš's opinions are still sane – he is critical of mass migration, he is sometimes against the adoption of the Euro, etc. – but all these things are just a result of the consensus in the public. When it comes to questions that don't directly affect his power, he simply copies the opinions of the majorities of Czech voters. He only has sensible opinions about things in which an overwhelming majority of the Czechs has the same opinion. What is the added value? There is none. Babiš constantly whines that entering politics has destroyed his life, he doesn't really want to be there, and so on – but he is apparently unable to figure out that he may leave politics if he doesn't like it.
He constantly repeats that everyone else is a thief, he spreads all the unsubstantiated and nasty accusations you can hear in the cheapest pubs of Czechia, and so on, and lots of primitive enough people love it because they don't give a damn about substantiation, evidence, or the truth value of the accusations, after all. They just sound good, and the more stuff someone says against democracy or against the neighbors who are wealthier than they are, the better. All this whining and all these accusations are sort of analogous to the feminists' behavior in the U.S., except that the preferred class over here is still some kind of a proletariat, although this proletariat is vastly richer than it used to be. But Babiš simply still finds 30% of the Czech citizens who are rather enthusiastic about him. It's perverse. If he's arrested, I am pretty sure that almost all of these 30% of voters will remain faithful to him and he may even increase his support.

Czechs literally love another criminal, Mr Jiří Kajínek. Two decades ago, he almost certainly killed Mr Štefan Janda, a 26-year-old businessman, near the Bory prison here in Pilsen. Janda wasn't a saint himself etc., but there exist numerous witnesses who confirm that Kajínek was the shooter – a hired gun. He received a life sentence. Even if he weren't the killer, he had done lots of criminal things in his life that are known. And he has repeatedly escaped from prisons. Try to estimate the probability that the killer was someone else if someone so perfectly suited for that job, with these amazing extra abilities, was apparently seen by several independent witnesses.

Well, it doesn't matter. Kajínek became a sex symbol for a huge number of Czech women. The tales about his innocence have become so powerful that the Czech president Zeman – who previously promised not to use pardons at all – decided to pardon Kajínek.
If you need to increase your approval rating, which dropped in recent months, why don't you just pardon the most famous killer in the country? Well, if Zeman decided to avoid pardons, it's bizarre to pick as the only exception a man who may very well be the most potent killer in the country. You know, I've heard about very specific witnesses here in Pilsen. I can still imagine that something was different, that the police did it for some mysterious reasons, that the witnesses are lying for some reasons I can't see. It's possible. But why would a sane, impartial person start to work on these rather unlikely assumptions?

There simply exist some types of villains – who are really bad – who become much more popular in Czechia than any hero ever could. It's sick but it's true. Babiš – who got married at his Stork's Nest some weeks ago (his wife has used his surname for a decade anyway, and they have kids together) – became another villain that Czechs find irresistible, and I am afraid that it will continue even if he were placed in a prison. He will obviously scream that he is innocent or something like that, and most of his stupid sheep will buy it.

This guy has grown into a big enough fish that he and his supporters basically want to dismantle the whole post-Velvet-Revolution system, and his rhetoric often confirms that. I hope that we won't see some new Bastille-style beginning of a new communist revolution in which this jerk is violently liberated from a prison. Babiš already told Reuters that the prosecution was the "last desperate attempt of the corrupt system to remove him from power". I guess it implies that once he gets to power, he wants to be unremovable. If this is his plan, I sincerely hope that some of the owners of the 800,000 legally held weapons in Czechia will beg to differ.

A joke of the day: The wealthiest Czech, Mr Petr Kellner ($10 billion), bragged in China that now he is so rich that he could buy Czechia, including the people.
A man in the cluster of people surrounding Kellner, Mr Babiš, said: "But I am not selling, Peter."

Czech and Czechoslovak presidents and prisons

While you must have understood that I am really not a fan of Babiš, it seems to me that Western readers will think that the association of a top politician with prisons is something absolutely insane that just can't happen in an otherwise civilized country. Well, it's not really the case in our country. Lots of our presidents have spent years in prisons, and it doesn't mean anything too wrong, or anything that would place us in Central Asia – these arrests of the presidents are mostly testimonies of our rather vibrant 20th-century history.

The founder of Czechoslovakia, Prof Thomas Garrigue Masaryk, was never successfully arrested, but that's only because he traveled a lot while creating Czechoslovakia. In 1915, an Austrian-Hungarian arrest warrant against him was issued, so he stayed abroad (Geneva etc.; he went to the U.S. in 1918), but his American wife was in Czechia, so she was imprisoned. So it's questionable whether you could count Masaryk as a prisoner. His successor and big ally, Dr Edvard Beneš, who was forced to oversee both the Munich treaty and the communist coup, was never arrested. In Beneš's case, the Austrian-Hungarian authorities probably didn't even issue an arrest warrant because it was hopeless, but Beneš's wife was arrested because of the secessionist activities of her husband.

Now, the State President of the Protectorate of Bohemia and Moravia, the lawyer and translator Dr Emil Hácha (he translated Three Men in a Boat into Czech, among other things), whose poor health and modest physique symbolized the Czech submission to the Germans, was arrested for collaboration with the Nazis in May 1945. He died the following month in the prison's hospital.
Our first working-class president, Gottwald, never lived in a prison, but his successor, Mr Antonín Zápotocký, was arrested for a strike around 1920, and for an attempted emigration to the Soviet Union through Poland around 1940. He spent the years up to 1945 in the Sachsenhausen concentration camp. His successor as a Czechoslovak president, Mr Antonín Novotný, was imprisoned between 1941 and 1945 – in the Mauthausen-Gusen concentration camp – for his communist activities. His successor, General Ludvík Svoboda – the president during the Prague Spring and then up to 1975 – was fighting hard during the Second World War, so he didn't have any time for prisons, but about 20 of his relatives were in prison during the Second World War.

His successor, the last communist president Gustáv Husák, whom I remember rather well and whose incentives for newborns encouraged me to be born in the first place, was arrested by his communist comrades in 1951 because, despite his being a commie, he was also charged as a Slovak "bourgeois nationalist". He got life in prison in 1954 – but even that was only good luck, because both Stalin and Gottwald had died shortly before the verdict, in 1953; otherwise he would have gotten the death penalty. In 1960, he was freed by Novotný's amnesty as soon as the intense de-Stalinization of the 1960s was getting started.

Well, his successor was the first modern democratic president, Václav Havel. As you may guess, the sequence of jailed presidents doesn't really stop here. In total, Havel spent 5 years in Czechoslovak prisons for writing politically incorrect things – and for his capitalist ancestors. He became a president just some two months after his latest prison term. After a long time, Václav Klaus was a president who had had nothing to do with prisons, although most of the Czech people may want to send him into one.
One of his latest decisions as a president was a big amnesty – many people hate it, even though it was a rather standard expression of a leader's mercy, and many other presidents and kings before him have issued comparable amnesties. The current president, Miloš Zeman, hasn't been jailed either, but many of his comments about prisoners, and his pardon for Kajínek, are related to prisons.

If you do the statistics, you will see that "a Czechoslovak or Czech president has been arrested at some point" is roughly a 50-50 proposition. A clear majority of the Czechoslovak or Czech presidents were either arrested or had their immediate relatives arrested. Most of the arrests were politically driven – and they may be understood as events that help someone's political star rise if he manages to survive.

The problem with Babiš is that his prosecution isn't political in any way. He is basically as apolitical a criminal as you can get. After all these leaders who fought to change the world – break the monarchy and establish Czechoslovakia, undermine Nazism, rebuild Czechoslovakia, try to avoid the communist coup, try to accelerate the communist coup, soften the communism and make it more independent of the Soviet and international forces, initiate the Prague Spring, normalize Czechoslovakia as a territory occupied by Soviet and other fraternal troops, establish Charter 77 and help to abolish communism, liberalize and privatize and democratize Czechoslovakia, peacefully divide Czechoslovakia, etc. – we are waiting for a top leader who has made his fortune mainly by trading šit, literally. After he entered politics in 2011, he began to trade šit figuratively (mostly fabricated dirt against all other politicians). He is expected to be prosecuted for a theft of money equal to about 5 lifetimes of the Czech average salary, but it's still less than 0.1% of his wealth. Some people view Babiš as a semi-God. Sorry jako, he is a piece of cheap filth from my point of view.
https://mathoverflow.net/questions/238744/the-square-root-of-laplacian-with-nonconstant-coefficent
# The square root of the Laplacian with a nonconstant coefficient

I am still a newbie to $\Psi$DO operators. As far as I understand, one can easily compute the square root of the Laplace operator $\Delta$ by $$(-\Delta)^{1/2} \ u=\mathcal{F}^{-1}(\|\xi\| \widehat{u}).$$ However, if I want to compute the square root of $(-c(x) \ \Delta)$, things get complicated (let's assume $c(x)$ is sufficiently nice: positive, bounded, smooth, etc.). I cannot use the simple Fourier transform, since the nonconstant coefficient leads to a convolution: $$\mathcal{F}(-c(x) \Delta u(x)) = \widehat{c}(\xi) * \|\xi\|^2 \widehat u (\xi).$$ Of course I can use symbol calculus to get the symbol of $\sqrt{-c(x)\Delta}$, but in that case I will only get the operator modulo a smoothing error. Is there any method to get the operator exactly? I appreciate your help. Best, Martin

## 2 Answers

Note that your operator is not positive for the $L^2$ product. A better starting point might be $-Au=\nabla\cdot(c^2(x)\nabla u)$, which satisfies at least $(Au,u)\ge0$. That said, it depends how explicit you want your square root. For a selfadjoint operator $A$ with spectral measure $dE_\lambda$ you have simply $A^{1/2}=\int_0^\infty \lambda^{1/2}dE_\lambda$, an abstract formula which is sometimes surprisingly effective in explicit computations, since the spectral measure has some nice representations. Of course, you also have the cheap factorization $A=B^*B$ where the operator $B=c(x)\nabla$ goes from $S$ to $S^n$, $S$ being the Schwartz space, but I guess this is not what you need.

• If $c$ and $1/c$ are bounded, then $-c\Delta$ is conjugate to $-c^{1/2}\Delta(\cdot c^{1/2})$, which is self-adjoint and positive. So the question reduces to the square root of a positive operator. – Denis Serre May 15 '16 at 17:29
• My question arises from "Thermoacoustic tomography arising in brain imaging" (with G. Uhlmann), Inverse Problems, 27(4):045004, 26, 2011 (<math.purdue.edu/~stefanov/publications/…) (page 9, 4.2). Here $c$ is smooth, strictly greater than zero, and bounded. They use a weighted $L^2$ with $c^{-2}$, with respect to which the operator is formally positive and self-adjoint. – Martin May 17 '16 at 13:54
• Well, the definition of the "square root" of an operator $A$ is: the unique positive definite operator $B$ such that $A=B^2$. This is certainly not satisfied for $B=c\nabla$. – Delio Mugnolo May 17 '16 at 15:42
• Is this a comment to the second part of my answer? I did not speak of a square root when talking of $B$. – Piero D'Ancona May 17 '16 at 15:59

The central question in this area was Kato's conjecture. From Wikipedia: Tosio Kato asked whether the square roots of certain elliptic operators, defined via functional calculus, are analytic. The problem remained unresolved for nearly a half-century, until it was jointly solved in 2001 by Pascal Auscher, Steve Hofmann, Michael Lacey, Alan McIntosh, and Philippe Tchamitchian. See their paper "The solution of the Kato square root problem for second order elliptic operators on ${\mathbb R}^n$", Annals of Mathematics 156 (2002), pp 633–654.

• Hey Denis, first, thanks for your comment. My question is: is there any explicit computation possible? In order to get the square root, I need to know $(c(x)\Delta (\lambda +c(x) \Delta)^{-1})$, right? – Martin May 13 '16 at 15:50
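The abstract spectral-measure formula $A^{1/2}=\int_0^\infty \lambda^{1/2}\,dE_\lambda$ from the first answer is easy to illustrate in finite dimensions. The sketch below is only a numerical analogy, not the exact $\Psi$DO construction asked about: it discretizes the positive variant $-\nabla\cdot(c^2\nabla u)$ on a periodic grid, diagonalizes the resulting symmetric matrix, and takes square roots of the eigenvalues. The grid size and the coefficient $c(x)=2+\sin x$ are arbitrary choices for illustration.

```python
import numpy as np

# Finite-dimensional illustration of A^{1/2} = \int lambda^{1/2} dE_lambda:
# discretize A u = -(c(x)^2 u')' on a periodic grid, then take eigenvalue
# square roots in the spectral decomposition.
n = 64
h = 2 * np.pi / n
x = np.arange(n) * h
c = 2.0 + np.sin(x)                      # smooth, strictly positive, bounded

D = (np.eye(n, k=1) - np.eye(n)) / h     # periodic forward difference
D[-1, 0] = 1.0 / h                       # wrap-around entry
A = D.T @ np.diag(c**2) @ D              # symmetric positive semi-definite

lam, Q = np.linalg.eigh(A)               # spectral decomposition A = Q diag(lam) Q^T
lam = np.clip(lam, 0.0, None)            # remove tiny negative round-off
sqrtA = Q @ np.diag(np.sqrt(lam)) @ Q.T  # the operator square root

assert np.allclose(sqrtA @ sqrtA, A, atol=1e-8)
```

In this finite setting the square root is exact up to round-off; the analytic difficulty the question is about — controlling the smoothing error of the symbol calculus — has no finite-dimensional counterpart.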
https://www.math.ias.edu/seminars/abstract?event=47513
# Virtual neighbourhood technique and its applications

Princeton/IAS Symplectic Geometry Seminar
Topic: Virtual neighbourhood technique and its applications
Speaker: Bai-Ling Wang
Affiliation: Australian National University
Date: Friday, March 28
Time/Room: 1:30pm - 2:30pm / Fine 322, Princeton University

In this talk, I will explain the recent joint work with Bohui Chen and Anmin Li on virtual neighborhood techniques for a Fredholm system $(B, E, S)$, where $E$ is a Banach vector bundle over a Banach manifold $B$ with a Fredholm section. Using the notion of virtual manifold theory developed by Chen and Tian, we associate a virtual system to the moduli space $M=S^{-1}(0)$. As an application, we will show that the moduli space of stable maps in a closed symplectic manifold admits an orbifold virtual system with a canonical orientation in cohomology or in K-theory.
http://nrich.maths.org/5636/clue
Euler's Squares
Euler found four whole numbers such that the sum of any two of the numbers is a perfect square...

Odd Differences
The diagram illustrates the formula: 1 + 3 + 5 + ... + (2n - 1) = n². Use the diagram to show that any odd number is the difference of two squares.

Substitution Cipher
Find the frequency distribution for ordinary English, and use it to help you crack the code.

What would you need to multiply by to do the same job as calculating $10\%$ and then adding it on?
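Two of the hints above can be checked mechanically. A small sketch (the loop ranges and sample values are arbitrary):

```python
# Odd Differences: every odd number 2n - 1 equals n^2 - (n - 1)^2,
# so any odd number is a difference of two squares.
for n in range(1, 1000):
    assert 2 * n - 1 == n**2 - (n - 1)**2

# Percentages: adding 10% is the same as multiplying by 1.1.
for value in (50.0, 123.4, 7.0):
    assert abs((value + 0.10 * value) - 1.1 * value) < 1e-9
```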
http://math.stackexchange.com/questions?page=689&sort=unanswered
# All Questions

### Ellipticity of an operator in Gunther's proof of the isometric embedding
In Deane Yang's notes about Gunther's proof of the celebrated isometric embedding theorem, at the end it is stated that $v$ inherits the regularity of $h$ because the operator $I-Q_0(v,\cdot)$ is ...

### If there exists a sequence $(\phi_n)$ of step functions such that $\phi_n \longrightarrow f$ almost everywhere on $[a,b]$, can we prove that $f\in L$?
I know, if $f\in L$ (the set of all Lebesgue integrable functions), then there exists a sequence $(\phi_n)$ of step functions such that $\phi_n \longrightarrow f$ almost everywhere on $[a,b]$ and ...

### Minimum vertex cover of vertex disjoint odd holes and antiholes
I am interested in knowing whether the minimum vertex cover of a graph that can be written as the union of vertex-disjoint odd holes and odd antiholes can be found exactly, in polynomial time. I could ...

### Algebraic Multiplicity and Geometric Multiplicity Problem
Let $V$ be a finite dimensional vector space over $\mathbb{C}$ and suppose the linear transformation $T:V\rightarrow V$ has eigenvalue $\lambda_0$ with algebraic multiplicity $n_0=3$. (a) ...

### What is the probability to pass through $1\le m\le n$ vertices of an $n$-sided polygon after $t$ seconds?
Suppose a flea is on a vertex of an $n$-sided polygon. It stays still for exactly one second, and then jumps instantly to an adjacent vertex. Let us assume it has no memory of its previous jumps and ...
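The last question above is truncated, but the setup it describes is easy to simulate. A rough Monte Carlo sketch under stated assumptions (the function name, the seed, and the parameters n=6, t=4 are illustrative choices, not part of the problem):

```python
import random

def estimate_prob(n, t, m, trials=20000, seed=1):
    """Monte Carlo estimate of P(the flea has visited exactly m distinct
    vertices of the n-gon after t one-second jumps). Illustrative only."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pos, seen = 0, {0}
        for _ in range(t):
            pos = (pos + rng.choice((-1, 1))) % n  # jump to a random neighbour
            seen.add(pos)
        hits += (len(seen) == m)
    return hits / trials

# Sanity check on a hexagon after 4 jumps: the estimated probabilities over
# all possible values of m must sum to 1.
assert abs(sum(estimate_prob(6, 4, m) for m in range(1, 7)) - 1.0) < 1e-12
```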
https://math.stackexchange.com/questions/2726290/markov-chain-long-run-proportion
# Markov chain long run proportion

You are a coach of a good football team, but the owner is very unforgiving. If you lose three games in a row, you are automatically fired. So, whenever you lose two games in a row, you bribe the referees in the next game, ensuring that your team wins it. Otherwise, your team wins any game independently with probability p = 0.8. The season is about to begin.

a) Determine the transition probability matrix of the Markov chain whose state is the number of consecutive games you have lost before the coming game.

b) Write down an expression for the probability that your team wins game 5 and loses game 10. Do not evaluate.

c) Calculate the proportion of games (in a very long season) your team wins.

d) Calculate the proportion of games (in a very long season) your team wins honestly (i.e., without bribing the referees).

e) A neutral observer, who does not usually follow your team, attends one of your games and sees you win. What is the probability that you have bribed the referees? Assume the season started a long time ago.

Hi, my main question is part e. I put up my solutions for the first few parts. Can you also check whether they are correct? If more detail is required for a) to d), I will add it.

a) Markov chain for the number of consecutive losses, with states 0, 1 & 2 and transition matrix P

b) P($X_5 = 0$, ($X_{10} = 1$ or $X_{10} = 2$)) = $(P_{01}^5+P_{02}^5)P_{00}^5$

c) $\pi = \pi P$, $\pi_2 = 1/31$, $\pi_1 = (0.2 - 2/100)\cdot 1/1.2$, $\pi_0 = 1 - \pi_1 - \pi_2$

d) New Markov chain but with 4 states {0, 1, 2, 3} of consecutive losses. Since state 3 is recurrent and the rest are transient states, $\pi_3 = 1$ and the rest equal zero.

e) $P(X_n =0| X_{n-1} = 2)$?

• "I put up my solution for first few parts" Sorry but how are these supposed to be solutions to the first questions? – Did Apr 7 '18 at 23:31

Let $X_n$ be the number of consecutive games lost before game $n$.
Then $\{X_n:n=0,1,\ldots\}$ is a Markov chain on $\{0,1,2\}$ with transition matrix given by $$P=\pmatrix{\frac45&\frac15&0\\\frac45&0&\frac15\\1&0&0}.$$ Note that game $n$ is won when $X_n=0$ and lost when $X_n=1$ or $X_n=2$. Hence \begin{align} \mathbb P(X_5 = 0, X_{10}\ne 0\mid X_0=0) &= \mathbb P(X_{10}\ne 0\mid X_5=0,X_0=0)\cdot \mathbb P(X_5=0\mid X_0=0)\\ &= \left(\mathbb P(X_{10} = 1\mid X_5=0) + \mathbb P(X_{10}=2\mid X_5=0) \right)\cdot \mathbb P(X_5=0\mid X_0=0)\\ &= \left(P_{01}^5 + P_{02}^5\right)\cdot P^5_{00}\\ &= \left( 1-P_{00}^5\right)\cdot P^5_{00}\\ &= \left(1-\frac{504}{625}\right)\cdot\frac{504}{625}\\ &= \frac{121}{625}\cdot\frac{504}{625} = \frac{60984}{390625}. \end{align} Since all entries of $P^3$ are positive, $X$ is ergodic, so there exists a unique stationary distribution $\pi$ (which necessarily sums to $1$) satisfying $\pi P = \pi$. Hence \begin{align} \pi_1 &=\frac15\pi_0\\ \pi_2 &=\frac15\pi_1\\ 1&=\pi_0+\pi_1+\pi_2, \end{align} whence $\pi_2=\frac1{25}\pi_0$ and thus $$\pi = \left(\frac{25}{31}, \frac5{31},\frac1{31}\right).$$ The limiting proportion of games your team wins is given by $\pi_0=\frac{25}{31}$. The limiting proportion of games your team wins without bribing the referees is $$\pi_0 - \frac15\pi_1 = \frac{25}{31}-\frac15\left(\frac5{31}\right) =\frac{24}{31}.$$ The limiting probability that, conditioned on a game being won, the game was won by bribing the referees is given by $$\frac{\pi_2 }{\frac45\pi_0 + \frac45\pi_1+\pi_2} = \frac{\frac1{31}}{\frac45\cdot\frac{25}{31}+\frac45\cdot\frac5{31}+\frac1{31}}= \frac1{25}.$$ • Hi, sorry to bump this up, but for the part "The limiting proportion of games your team wins without bribing the referees is": why is it equal to $\pi_0 - \frac{1}{5}\pi_1$? Thanks – Mr. Bromwich I Apr 15 '18 at 3:56
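The numbers in this answer are easy to sanity-check numerically. A small sketch using power iteration to reach the stationary distribution (the 200-step cutoff is arbitrary and more than sufficient, since the chain mixes very quickly):

```python
import numpy as np

# Transition matrix from part (a): state = number of consecutive losses (0, 1, 2).
# From state 0 or 1 you win with probability 4/5; from state 2 you bribe and win.
P = np.array([[4/5, 1/5, 0.0],
              [4/5, 0.0, 1/5],
              [1.0, 0.0, 0.0]])

# Stationary distribution by power iteration: the chain is ergodic, so the
# iterates of any initial distribution converge to pi with pi = pi P.
pi = np.full(3, 1/3)
for _ in range(200):
    pi = pi @ P

assert np.allclose(pi, [25/31, 5/31, 1/31])  # stationary distribution
assert np.isclose(pi[0] - pi[2], 24/31)      # part (d): honest win proportion
assert np.isclose(pi[2] / pi[0], 1/25)       # part (e): P(bribed | win)
```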
https://stacks.math.columbia.edu/tag/0C6D
## 114.24 Dualizing modules on regular proper models In Semistable Reduction, Situation 55.9.3 we let $\omega _{X/R}^\bullet = f^!\mathcal{O}_{\mathop{\mathrm{Spec}}(R)}$ be the relative dualizing complex of $f : X \to \mathop{\mathrm{Spec}}(R)$ as introduced in Duality for Schemes, Remark 48.12.5. Since $f$ is Gorenstein of relative dimension $1$ by Semistable Reduction, Lemma 55.9.2 we can use Duality for Schemes, Lemmas 48.25.10, 48.21.7, and 48.25.4 to see that $\omega _{X/R}^\bullet = \omega _ X[1]$ for some invertible $\mathcal{O}_ X$-module $\omega _ X$. This invertible module is often called the relative dualizing module of $X$ over $R$. Since $R$ is regular (hence Gorenstein) of dimension $1$ we see that $\omega _ R^\bullet = R[1]$ is a normalized dualizing complex for $R$. Hence $\omega _ X = H^{-2}(f^!\omega _ R^\bullet )$ and we see that $\omega _ X$ is not just a relative dualizing module but also a dualizing module, see Duality for Schemes, Example 48.22.1. Thus $\omega _ X$ represents the functor $\textit{Coh}(\mathcal{O}_ X) \to \textit{Sets},\quad \mathcal{F} \mapsto \mathop{\mathrm{Hom}}\nolimits _ R(H^1(X, \mathcal{F}), R)$ by Duality for Schemes, Lemma 48.22.5. This gives an alternative definition of the relative dualizing module in Semistable Reduction, Situation 55.9.3. The formation of $\omega _ X$ commutes with arbitrary base change (for any proper Gorenstein morphism of given relative dimension); this follows from the corresponding fact for the relative dualizing complex discussed in Duality for Schemes, Remark 48.12.5 which goes back to Duality for Schemes, Lemma 48.12.4. Thus $\omega _ X$ pulls back to the dualizing module $\omega _ C$ of $C$ over $K$ discussed in Algebraic Curves, Lemma 53.4.2. Note that $\omega _ C$ is isomorphic to $\Omega _{C/K}$ by Algebraic Curves, Lemma 53.4.1. Similarly $\omega _ X|_{X_ k}$ is the dualizing module $\omega _{X_ k}$ of $X_ k$ over $k$. Lemma 114.24.1. 
In Semistable Reduction, Situation 55.9.3 the dualizing module of $C_ i$ over $k$ is $\omega _{C_ i} = \omega _ X(C_ i)|_{C_ i}$ where $\omega _ X$ is as above.

Proof. Let $t : C_ i \to X$ be the closed immersion. Since $t$ is the inclusion of an effective Cartier divisor we conclude from Duality for Schemes, Lemmas 48.9.7 and 48.14.2 that we have $t^!(\mathcal{L}) = \mathcal{L}(C_ i)|_{C_ i}$ for every invertible $\mathcal{O}_ X$-module $\mathcal{L}$. Consider the commutative diagram

$\xymatrix{ C_ i \ar[r]_ t \ar[d]_ g & X \ar[d]^ f \\ \mathop{\mathrm{Spec}}(k) \ar[r]^ s & \mathop{\mathrm{Spec}}(R) }$

Observe that $C_ i$ is a Gorenstein curve (Semistable Reduction, Lemma 55.9.2) with invertible dualizing module $\omega _{C_ i}$ characterized by the property $\omega _{C_ i}[0] = g^!\mathcal{O}_{\mathop{\mathrm{Spec}}(k)}$. See Algebraic Curves, Lemma 53.4.1, its proof, and Algebraic Curves, Lemmas 53.4.2 and 53.5.2. On the other hand, $s^!(R[1]) = k$ and hence

$\omega _{C_ i}[0] = g^! s^!(R[1]) = t^!f^!(R[1]) = t^!\omega _ X$

Combining the above we obtain the statement of the lemma. $\square$
https://www.nature.com/articles/s41598-018-33860-7
# Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing

## Abstract

Automatic liver tumor segmentation would have a big impact on liver therapy planning procedures and follow-up assessment, thanks to standardization and incorporation of full volumetric information. In this work, we develop a fully automatic method for liver tumor segmentation in CT images based on a 2D fully convolutional neural network with an object-based postprocessing step. We describe our experiments on the LiTS challenge training data set and evaluate segmentation and detection performance. Our proposed design cascading two models working on voxel- and object-level allowed for a significant reduction of false positive findings by 85% when compared with the raw neural network output. In comparison with the human performance, our approach achieves a similar segmentation quality for detected tumors (mean Dice 0.69 vs. 0.72), but is inferior in the detection performance (recall 63% vs. 92%). Finally, we describe how we participated in the LiTS challenge and achieved state-of-the-art performance.

## Introduction

According to the World Health Organization, liver cancer was the second most common cause of cancer-induced deaths in 2015. Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer which is the sixth most prevalent cancer1. In addition, the liver is also a common site for secondary tumors. Liver therapy planning procedures would profit from an accurate and fast lesion segmentation that allows for subsequent determination of volume- and texture-based information. Moreover, having a standardized and automatic segmentation method would facilitate a more reliable therapy response classification2. Liver tumors show a high variability in their shape, appearance and localization.
They can be either hypodense (appearing darker than the surrounding healthy liver parenchyma) or hyperdense (appearing brighter), and can additionally have a rim due to contrast agent accumulation, calcification or necrosis3. The individual appearance depends on lesion type, state, imaging (equipment, settings, contrast method and timing), and can vary substantially from patient to patient. This high variability makes liver lesion segmentation a challenging task in practice.

The problem of liver tumor segmentation has received great interest in the medical image computing community. In 2008, the MICCAI 3D Liver Tumor Segmentation Challenge4 was organized, where both manual and automatic methods were accepted. Among the automatic ones, the best method applied an ensemble segmentation algorithm using AdaBoost5. Other submitted methods employed adaptive thresholding, region growing or level set methods6,7,8,9. In more recent years, methods using Grassmannian manifolds10 and shape parameterization11 were proposed. Given the variability of liver lesions, a manual design of powerful features is not trivial. Fully convolutional neural networks (FCNs) have gained rapidly growing attention in the computer vision community in recent years because of their ability to learn features automatically from the data. Christ et al.12 applied two cascaded U-net models13 to the problem of liver and liver tumor segmentation. The approach employed one model solely for the liver segmentation and a separate one for the tumor segmentation within a liver bounding box. The final output was refined using a 3D conditional random field. More recently, the Liver Tumor Segmentation (LiTS) challenge was organized14. All top-scoring automatic methods submitted to the two rounds organized in 2017 used FCNs.
Han15, the winner of the first round, used two U-net-like models with long and short skip connections, where the first model was used only for coarse liver segmentation, allowing the second network to focus on the liver region. The second model was trained to segment both liver and tumors in one step. The two models worked in 2.5D, i.e., they received five adjacent slices to segment the middle one, which provided the network with 3D context information. The best method in the second LiTS round was developed by a group from Lenovo Research, China. Their approach employed two neural network ensembles for the liver and tumor segmentation, respectively. The ensembles consisted of 2D and 2.5D U-net models trained with different hyperparameter settings. Other successful methods proposed to train jointly two networks for liver and tumor segmentation16 and to exploit 3D information by training a 3D H-DenseUNet architecture using original image data as well as features coming from a 2D network17. This paper focuses on the tumor segmentation task, which follows a separate liver segmentation step that is briefly sketched in the description of the challenge submission. Our contribution on the tumor segmentation task is twofold. First, we show that cascading a 2D FCN working on a voxel level with a model trained using hand-crafted features extracted on an object level leads to a significant reduction of false positive findings and improves the segmentation quality for detected tumors. We provide a detailed description and evaluation of our method, which achieved state-of-the-art results in the LiTS challenge. Second, we report human performance on a subset of the LiTS training data set to put the segmentation quality of automatic methods into perspective.

## Materials and Methods

### Data

We ran the experiments using the training dataset from the LiTS challenge, containing 131 contrast-enhanced abdominal CT scans coming from 7 clinical institutions.
The CT scans come with reference annotations of the liver and tumors done by trained radiologists. The in-plane resolution ranges from 0.5 to 1.0 mm and the slice thickness ranges from 0.7 to 5.0 mm. The dataset contains 908 lesions (63% with the longest axial diameter ≥10 mm). We divided the cases randomly into 3 non-overlapping groups for training, validation and testing containing 93, 6 and 30 cases, respectively. We removed 2 flawed cases due to missing reference tumor segmentations.

### Neural network

#### Architecture

We employed a U-net13 like fully convolutional network architecture (Fig. 1). Our model works on four resolution levels allowing for learning of local and global features. In the contracting (expanding) path convolutions (transposed convolutions) are used to decrease (increase) the spatial resolution and the feature map count is doubled (halved) with each transition. The network contains long skip connections passing feature maps from the contracting path to the expanding path, allowing the recovery of fine details which are lost in the spatial downsampling. We also added short skip connections to have well-distributed parameter updates and to speed up the training18. Each convolutional layer uses a 3 × 3 filter size and is followed by a batch normalization and a ReLU activation function. We used dropout (p = 0.5) before each convolution in the upscaling path to prevent the network from overfitting.

#### Training

We trained the network using whole axial image slices in the original resolution (size 512 × 512 voxels) and their corresponding labels. Since our architecture is fully convolutional13, this is mathematically equivalent to training with many overlapping patches of the receptive field size (here, 92 × 92 voxels), but much more efficient. We used the soft dice coefficient as the loss function computed on the pixelwise softmax of the network final feature map19.
The loss computation is restrained to a LiTS reference liver mask dilated by 10 mm in order to focus the model on the liver region. To deal with the high class imbalance, we ensured that each mini-batch contains patches where both classes (tumor and background) are present. We computed the parameter updates using the Adam optimizer with a 5e-5 learning rate. The model was trained for 10 epochs (approx. 50 k iterations, mini-batch size 6). We reflectively padded the input images with 44 pixels on each side, because we used no zero-padding in the convolutions.

#### Output

The output of the neural network was limited to a liver mask in order to remove false positives found outside of the organ. For the LiTS training dataset we used liver masks provided by the challenge organizers in order to avoid the dependency of the tumor segmentation on the liver segmentation quality. For cases where a liver mask is not given, the tumor segmentation is preceded by a liver segmentation step (see the subsection describing the challenge submission).

### Object-based postprocessing

Based on the training data we observed that some neural network outputs corresponded to false positives, which could easily be identified by their shape and location (e.g. liver/gallbladder boundary). Therefore, we added a postprocessing step, which employs a model classifying tumor objects (computed as 3D connected components of the FCN output) into true (TP) and false positives (FP). For that, we trained a conventional random forest classifier (RF) with 256 trees using 36 hand-crafted features carrying information about underlying image statistics, tumor shape and its distance to the liver boundary (the full list of features can be found in the supplementary material). Random forests were chosen for this task because they work well with moderate numbers of training samples and varying feature value distributions.
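The soft dice loss used for training above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the optional `mask` argument mirrors the restriction to the dilated liver region, and the smoothing constant `eps` is an assumption not stated in the paper.

```python
import numpy as np

def soft_dice_loss(probs, target, mask=None, eps=1e-6):
    """Soft Dice loss on foreground probabilities.

    probs  -- predicted tumor probabilities (pixelwise softmax output)
    target -- binary reference mask of the same shape
    mask   -- optional boolean region restricting the loss
              (e.g. the dilated liver mask); an assumption here
    eps    -- small smoothing constant (assumed, not from the paper)
    """
    if mask is not None:
        probs, target = probs[mask], target[mask]
    intersection = np.sum(probs * target)
    dice = (2.0 * intersection + eps) / (np.sum(probs) + np.sum(target) + eps)
    return 1.0 - dice

# A perfect prediction drives the loss to ~0
t = np.array([0.0, 1.0, 1.0, 0.0])
print(soft_dice_loss(t, t) < 1e-6)  # -> True
```

Because the loss is computed on probabilities rather than on a thresholded mask, it stays differentiable, which is what makes it usable as a training objective.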
This approach does not allow end-to-end training, because we designed the second model to work on higher-level entities (tumor objects instead of voxels) and features that are extracted by an image analysis pipeline from the neural network output20. We see this as an advantage, as the employment of two separate steps for tumor candidate detection and false positive filtering increases the explainability of the whole system. Whether a tumor is a TP or an FP was determined using the evaluation code described in Sec. Evaluation.

### Expert performance

In order to put the performance of our automatic method into perspective, we asked a medical-technical radiology assistant (MTRA) with over 10 years of segmentation experience to manually segment tumors in the cases used for the algorithm evaluation. This means that we have two reference annotation sets, which we refer to in the following as “MTRA” and “LiTS”.

### Evaluation

#### Detection

We evaluate the detection performance using metrics based on the Free-Response ROC analysis, which is suitable for experiments involving zero or more decisions per image21:

- Recall: Ratio of TP detections to the count of positives in the reference.
- FPs/case: Average count of FPs per case.

Additionally, we compute the ratio of detected tumors with the longest axial diameter ≥10 mm to all such lesions in the reference (Recall ≥ 10 mm). The threshold value was derived from the RECIST 1.1 guidelines, where it is used to classify tumor lesions into measurable and non-measurable types22. We define a hit as a situation when the overlap (measured with the Dice index) between output and reference is above a threshold θ:

$$\mathrm{DICE}(M^{\mathrm{out}}[T^{\mathrm{out}}], M^{\mathrm{ref}}[T^{\mathrm{ref}}]) > \theta$$

$M^{\mathrm{out}}$ and $M^{\mathrm{ref}}$ denote output and reference label images where each tumor has a unique label; $T^{\mathrm{out}}$ and $T^{\mathrm{ref}}$ are sets of output and reference tumor labels corresponding to each other.
The notation $M[T]$ selects tumors with labels $T$ from $M$. The parameter θ enables a trade-off between high recall (low θ) and a high Dice for corresponding tumors (high θ). We set θ = 0.2 in order to require a significant, but not exact, overlap. Determining output/reference tumor correspondence is not trivial, since situations as in Fig. 2 can occur. In Fig. 2a, two output tumors $T^{\mathrm{out}} = \{l_1^{\mathrm{out}}, l_2^{\mathrm{out}}\}$ correspond to one reference tumor $T^{\mathrm{ref}} = \{l_1^{\mathrm{ref}}\}$ and if their Dice index > θ, such a situation is counted as one TP. In Fig. 2b, one output tumor $T^{\mathrm{out}} = \{l_1^{\mathrm{out}}\}$ corresponds to three reference tumors $T^{\mathrm{ref}} = \{l_1^{\mathrm{ref}}, l_2^{\mathrm{ref}}, l_3^{\mathrm{ref}}\}$ and if their overlap is above θ, such a situation counts as three TPs. An algorithm for establishing correspondences between output and reference lesions should aim at maximizing the output/reference overlap. For example, consider Fig. 2c, where the output tumor should correspond only to the smaller reference tumor, since the overlap would decrease if both reference tumors were considered. To account for n : m correspondence situations where $n \ne m$, we count merge and split errors for each correspondence. The merge error is defined as $|T^{\mathrm{ref}}| - 1$, the split error as $\max(0, |T^{\mathrm{out}}| - |T^{\mathrm{ref}}|)$.

#### Segmentation

The segmentation quality was evaluated using the following measures:

- Dice/case: Computed by taking into account the whole output and reference tumor masks. When both masks are empty, a score of 1 is assigned.
- Dice/correspondence: Computed for each output/reference correspondence.
- Merge error: Sum of per-correspondence merge errors.
- Split error: Sum of per-correspondence split errors.

Algorithm 1 sketches the code we employed for establishing correspondences between output and reference tumors.
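The hit criterion and the merge/split error counts defined above can be sketched in NumPy. The toy 1D "label images" and θ = 0.2 follow the definitions in the text; everything else (function names, the empty/empty Dice convention) is a hypothetical illustration, not the challenge's evaluation code.

```python
import numpy as np

def dice(a, b):
    """Dice index between two binary masks; empty vs. empty counts as 1."""
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total > 0 else 1.0

def is_hit(m_out, m_ref, t_out, t_ref, theta=0.2):
    """Hit test: Dice between the selected output tumors M_out[T_out]
    and reference tumors M_ref[T_ref] must exceed theta."""
    return dice(np.isin(m_out, list(t_out)), np.isin(m_ref, list(t_ref))) > theta

def merge_error(t_ref):
    """|T_ref| - 1 for one correspondence."""
    return len(t_ref) - 1

def split_error(t_out, t_ref):
    """max(0, |T_out| - |T_ref|) for one correspondence."""
    return max(0, len(t_out) - len(t_ref))

# One output tumor (label 1) overlapping two reference tumors (labels 2, 3)
m_out = np.array([0, 1, 1, 1, 1, 0])
m_ref = np.array([0, 2, 2, 0, 3, 3])
print(is_hit(m_out, m_ref, {1}, {2, 3}))            # Dice = 0.75 > 0.2 -> True
print(merge_error({2, 3}), split_error({1}, {2, 3}))  # -> 1 0
```

The example corresponds to the Fig. 2b situation: one output object covering several reference tumors yields a merge error of |T_ref| − 1 = 1 and no split error.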
## Results and Discussion

### Expert performance

The MTRA needed 30–45 min. per case (the segmentation was done without time constraints). The comparison of the MTRA with the LiTS annotations and vice versa is shown in Table 1. The MTRA missed 11 of the LiTS lesions and found 78 additional ones, which accounts for 0.92 recall and 2.6 FPs/case. The LiTS annotations identified correctly only 62% of the tumors found by the MTRA. A smaller recall difference was observed for tumors ≥10 mm, meaning that most of the lesions not included in the LiTS reference were small. The segmentation quality was 0.72 dice/correspondence and 0.7 dice/case. Figure 3 shows example cases with major differences between the MTRA and LiTS segmentations. There were two cases where the MTRA segmentation got 0 dice/case when compared with the LiTS reference: (i) a tumor was found in a case with no tumors (Fig. 3c), (ii) none of the reference tumors were found.

### Neural network

The neural network was able to detect 47% and 72% of all tumors present in the MTRA and LiTS annotations, respectively. Tumors with the longest diameter ≥10 mm were detected more reliably than smaller ones. Potentially measurable tumor lesions according to RECIST 1.1 had a recall of 75% and 86%, respectively. The false positive count was similar when comparing with the MTRA and LiTS annotations (142 and 138, respectively). The dice/case and dice/correspondence were 0.53 and 0.72 for the MTRA reference and 0.51 and 0.65 for LiTS (see Table 1 and Fig. 4 for details). 7 cases received a 0 dice/case score (3 with no reference lesions and 4 where none of the small reference lesions was found). Interestingly, the neural network, similar to the MTRA, found a lesion in the case with no tumors in the LiTS reference (Fig. 3c). Figure 5 presents one example of a good segmentation produced by the neural network, as well as examples of different kinds of deviations from the reference.
### Object-based postprocessing

We trained a random forest classifier on features computed for each tumor produced by the neural network from training and validation cases, where only LiTS annotations were available. Therefore, Table 1 reports results only for the LiTS reference. The classifier allowed for an 85% reduction of false positives and had 87% accuracy on test cases: 117 FPs were identified correctly, whereas 13 TPs (9 of which were ≥10 mm) were wrongly rejected. This led to a significant change in FPs, TPs and FNs (all significance tests were done using the Wilcoxon signed-rank test at the 0.05 level). The improvement for Dice per correspondence was significant, as opposed to Dice per case, whose increase was achieved by removing all FPs in two cases with no reference tumors. Among the five most discriminative features, four were shape-based (first eigenvalue, eccentricity, extent along the z axis, voxel count). The remaining one described the standard deviation of the distance to the liver boundary (a plot showing features sorted according to their importance can be found in the supplementary material). The main motivation for choosing the random forest classifier was the moderate number of training samples. Assuming that a bigger dataset were available, other strategies for object-based postprocessing could be investigated. One possible alternative approach for false positive reduction would be a multi-view neural network, which learns discriminative features directly from the found tumor candidates23.

### Challenge submission and results

Before submission to the LiTS challenge, we trained the neural network further using all cases from the LiTS training dataset. Since the tumor segmentation makes use of liver masks, which were not given for the challenge test cases, we used our own liver segmentation method.
For automatic liver segmentation, we trained 3 orthogonal (axial, sagittal, coronal) U-net models with 4 resolution levels on our in-house liver dataset from liver surgery planning containing 179 CTs24. We computed segmentations for the 70 challenge test cases, ranking third at the MICCAI 2017 LiTS round (leaderboard user name hans.meine). Our submission scored 0.68 and 0.96 dice/case for tumor and liver segmentation, respectively. The tumor dice/case difference between our approach and the best submissions from the MICCAI 2017 (IeHealth) and Open leaderboards (xjqi to date) is 0.02 and 0.04, respectively. Our method needs on average 67 s for one case: 43, 16 and 8 s for liver segmentation, tumor segmentation and FP filtering, respectively (Intel Core i7-4770K, 32 GB RAM, GeForce GTX 1080).

## Conclusions

In this work, we described our method for automatic liver tumor segmentation in abdominal CT scans employing a 2D deep neural network with an object-based postprocessing, which ranked third in the second LiTS round at MICCAI 2017. Our tumor segmentation employs a preceding liver segmentation step in order to constrain operation to the liver region and to be able to compute distances from the liver boundary. The object-based analysis step using hand-crafted features allowed for a significant reduction of false positive findings. The fact that the most discriminative features in the postprocessing step were shape-based indicates the importance of 3D information in distinguishing true from false positives. Our method achieves a segmentation quality for detected tumors comparable to a human expert and is able to detect 77% of potentially measurable tumor lesions in the LiTS reference according to the RECIST 1.1 guidelines. We observed that the neural network is capable of detecting bigger lesions (longest axial diameter ≥10 mm) more reliably than smaller ones (<10 mm).
We presume, based on the performed comparison of the LiTS annotations with those done by an experienced MTRA, that this can be attributed to a bigger inter-observer variability with respect to the detection of smaller lesions. We think that the LiTS challenge data collection from multiple sites is a great initiative that shows not only the variability in imaging, but also some variability in the annotations. This is probably due to the fact that liver tumor segmentation is not part of the daily routine, and that there are no universally agreed-upon clinical guidelines for this task. We see the method described in this paper as promising, but it is clear that more work needs to be done to match the human detection performance. Moreover, an evaluation in a clinical setting will be required to assess the clinical utility of automatic liver tumor segmentation methods. Future research directions include the evaluation of 3D networks and the automation of reporting schemes for the liver.

## References

1. Forner, A., Llovet, J. M. & Bruix, J. Hepatocellular carcinoma. The Lancet 379, 1245–1255 (2012).
2. Cornelis, F. et al. Precision of manual two-dimensional segmentations of lung and liver metastases and its impact on tumour response assessment using RECIST 1.1. Eur. Radiol. Exp. 1, 16 (2017).
3. Oliver, J. H. & Baron, R. L. Helical biphasic contrast-enhanced CT of the liver: technique, indications, interpretation, and pitfalls. Radiol. 201, 1–14 (1996).
4. Niessen, W. et al. 3D liver tumor segmentation challenge. https://web.archive.org/web/20140606121659/http://lts08.bigr.nl:80/index.php Accessed: 2017-11-23 (2008).
5. Shimizu, A. et al. Ensemble segmentation using AdaBoost with application to liver lesion extraction from a CT volume. In Proc. MICCAI Workshop on 3D Segmentation in the Clinic: A Grand Challenge II., NY, USA (2008).
6. Häme, Y. Liver tumor segmentation using implicit surface evolution. The Midas J (2008).
7.
Smeets, D., Stijnen, B., Loeckx, D., De Dobbelaer, B. & Suetens, P. Segmentation of liver metastases using a level set method with spiral-scanning technique and supervised fuzzy pixel classification. In MICCAI workshop, vol. 42, 43 (2008).
8. Choudhary, A., Moretto, N., Ferrarese, F. P. & Zamboni, G. A. An entropy based multi-thresholding method for semi-automatic segmentation of liver tumors. In MICCAI workshop, vol. 41, 43–49 (2008).
9. Moltz, J. H., Bornemann, L., Dicken, V. & Peitgen, H. Segmentation of liver metastases in CT scans by adaptive thresholding and morphological processing. In MICCAI workshop, vol. 41, 195 (2008).
10. Kadoury, S., Vorontsov, E. & Tang, A. Metastatic liver tumour segmentation from discriminant Grassmannian manifolds. Phys. Medicine Biol. 60, 6459–6478, https://doi.org/10.1088/0031-9155/60/16/6459 (2015).
11. Linguraru, M. G. et al. Tumor burden analysis on computed tomography by automated liver and tumor segmentation. IEEE Transactions on Medical Imaging 31, 1965–1976 (2012).
12. Christ, P. F. et al. Automatic liver and tumor segmentation of CT and MRI volumes using cascaded fully convolutional neural networks. arXiv preprint arXiv:1702.05970 (2017).
13. Ronneberger, O., Fischer, P. & Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241 (Springer, 2015).
14. Christ, P., Ettlinger, F., Grün, F., Lipkova, J. & Kaissis, G. LiTS - liver tumor segmentation challenge. http://www.lits-challenge.com Accessed: 2017-11-23 (2017).
15. Han, X. Automatic liver lesion segmentation using a deep convolutional neural network method. arXiv preprint arXiv:1704.07239 (2017).
16. Vorontsov, E., Chartrand, G., Tang, A., Pal, C. & Kadoury, S. Liver lesion segmentation informed by joint liver segmentation. arXiv preprint arXiv:1707.07734 (2017).
17. Li, X. et al.
H-DenseUNet: Hybrid densely connected UNet for liver and liver tumor segmentation from CT volumes. arXiv preprint arXiv:1709.07330 (2017).
18. Drozdzal, M., Vorontsov, E., Chartrand, G., Kadoury, S. & Pal, C. The importance of skip connections in biomedical image segmentation. In Deep Learning and Data Labeling for Medical Applications, 179–187 (Springer, 2016).
19. Milletari, F., Navab, N. & Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 3D Vision (3DV), 2016 Fourth International Conference on, 565–571 (IEEE, 2016).
20. Schwier, M., Chitiboi, T., Hülnhagen, T. & Hahn, H. K. Automated spine and vertebrae detection in CT images using object-based image analysis. Int. J. for Numer. Methods Biomed. Eng. 29, 938–963 (2013).
21. Chakraborty, D. P. Recent developments in imaging system assessment methodology, FROC analysis and the search model. Nucl. Instruments Methods Phys. Res. Sect. A: Accel. Spectrometers, Detect. Assoc. Equip. 648, S297–S301 (2011).
22. Eisenhauer, E. A. et al. New response evaluation criteria in solid tumours: Revised RECIST guideline (version 1.1). Eur. J. Cancer 45, 228–247, https://doi.org/10.1016/j.ejca.2008.10.026 (2009).
23. Setio, A. A. A. et al. Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks. IEEE Transactions on Medical Imaging 35, 1160–1169 (2016).
24. Endo, I. et al. Imaging and surgical planning for perihilar cholangiocarcinoma. J. Hepato-Biliary-Pancreatic Sciences 21, 525–532 (2014).

## Acknowledgements

We gratefully thank Christiane Engel for annotating tumor lesions on the 30 test cases used in this work.

## Author information

### Contributions

G.C. designed and performed the experiments, analyzed the results and wrote the manuscript. G.C., H.M. and J.H.M. implemented the methodology and prepared the LiTS challenge submission. A.S.
supervised the project within which the work was conducted. B.v.G. and H.K.H. provided comments on the manuscript draft. All authors reviewed and accepted the manuscript.

### Corresponding author

Correspondence to Grzegorz Chlebus.

## Ethics declarations

### Competing Interests

The authors declare no competing interests.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Chlebus, G., Schenk, A., Moltz, J.H. et al. Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing. Sci Rep 8, 15497 (2018). https://doi.org/10.1038/s41598-018-33860-7
http://libhlr.org/algorithms/cluster.html
# Clustering

HLR implements different clustering methods, which permits direct comparison between the different arithmetic implementations. By default, all clustering is cardinality balanced. If not defined otherwise, e.g., for HODLR, standard admissibility is used.

## TLR/BLR

Tile Low-Rank (TLR) or block low-rank (BLR) clustering directly decomposes a given set of coordinates into clusters without any hierarchy. The matrix partitioning then consists of a simple $$p \times p$$ block structure of either dense or low-rank blocks, thereby drastically simplifying algorithm design. However, the runtime and storage complexity increase to $$\mathcal{O}(n^3)$$ and $$\mathcal{O}(n^2)$$, respectively.

## MBLR

MBLR extends TLR by using a fixed, predefined number of hierarchy levels with an even reduction of the block size per level. With a growing number of levels, MBLR converges to the standard H clustering.

## TileH

TileH (also called Lattice-H in the literature) splits the top layer of the cluster tree into $$p$$ sub-blocks, resulting in a $$p \times p$$ block layout. However, in contrast to TLR, H-matrices are used for the corresponding sub-blocks in the matrix. As with TLR, simple algorithms may be used on the first level of the matrix hierarchy, e.g., for distributed-memory versions of the arithmetic, while maintaining log-linear complexity for the remaining H-matrix. This is also identical to various domain-decomposition approaches using H-matrices.

## HODLR

Hierarchical Off-Diagonal Low-Rank (HODLR) clustering uses standard cluster tree construction but simplifies admissibility such that all off-diagonal matrix blocks are considered admissible. The resulting H-matrix has a $$2 \times 2$$ block structure with upper-right and lower-left low-rank blocks. H-arithmetic again is significantly simpler to implement. However, the rank of the low-rank blocks is typically dependent on the problem dimension and may be very large.
## H

This is the standard, general H-matrix format with binary space partitioning and standard admissibility.
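To make the TLR/BLR idea concrete, the flat $$p \times p$$ block structure over a cardinality-balanced split of the index set can be sketched as follows. This is an illustration only, not libHLR's actual API; the function name and the dense-diagonal/low-rank-off-diagonal convention are assumptions for the sketch.

```python
def tlr_partition(n, p):
    """Cardinality-balanced split of indices 0..n-1 into p clusters,
    plus the flat p x p TLR block structure: diagonal blocks are kept
    dense, all off-diagonal blocks are treated as low-rank."""
    base, rem = divmod(n, p)
    clusters, start = [], 0
    for i in range(p):
        size = base + (1 if i < rem else 0)   # spread the remainder evenly
        clusters.append(range(start, start + size))
        start += size
    blocks = [["dense" if i == j else "lowrank" for j in range(p)]
              for i in range(p)]
    return clusters, blocks
```

Since there is no hierarchy, the whole partition is described by this single block grid, which is what makes TLR algorithms so simple to design.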
http://stats.stackexchange.com/questions?pagesize=30&sort=newest
# All Questions

### How exactly is sparse PCA better than PCA? (9 views)

I learnt about PCA a few lectures ago in class and by digging more about this fascinating concept, I got to know about sparse PCA. I wanted to ask, if I'm not wrong this is what sparse PCA is: In ...

### Statistical Problem (9 views)

A psychiatrist hires you to study whether her clients self-disclosed more while sitting in an easy chair or lying down on a couch. All clients had previously agreed to allow the sessions to be ...

### Statistical problem / explanation (12 views)

A researcher studied the effect of television violence on concentration of a particular blood chemical. In this study, 40 participants were measured over a period of an hour prior to watching a ...

### Calculating Second-Order Tikhonov Regularization Parameter in Mathematica (7 views)

I am trying to map slowness of underwater sound velocity in a river using some tomographic device. The location of each acoustic receiver/transmitter is shown in the picture below. To find the ...

### Predicting data based on two data sets (10 views)

I'm brand new to SAS programming, and I'm having difficulty figuring out this issue: I have two sets of data. 1st set has 4 columns (income is a continous range of numbers, sex is 0 for male - 1 for ...

### Time Series Forecasts of Appointments with Pre-Reqistration (6 views)

Looking for some tips and ideas. I get a list every day of the number of appointments for each day for the next two weeks for a clinic. I have quite good history of these list, and the actual number ...

### Comparing linear regression models created from different data sets (13 views)

I have one linear regression model [Mold] created from 12 points where I can calculate a single value of RMSE between the predicted values and the actual observed values. This model is then used to ...

### On the application of an algorithm for classification (14 views)

Given, a set of training pattern p_1 = [1 -1 -1 -1 -1 1 -1 1 1 1 1 -1 -1 1 1 1 1 -1 -1 1 1 1 1 -1 1 -1 -1 -1 -1 1]'; I can calculate the weight by $W = p_1*p_1'$ ...

### How many different random initializations should I perform with Lloyd's algorithm to obtain the optimal clustering with X% of confidence? (7 views)

I use Lloyd's algorithm for clustering. Since it relies on a random initialization and Lloyd's algorithm can get stuck in local optima of the k-means objective function, I have to run it several ...

### How to determine if % selecting Option A is statistically different from selecting Option B in a survey? (21 views)

I ran a survey on pricing and asked respondents to choose between option A or Option B. 61% chose option A and 39% chose Option B. The sample size is 90. How do I determine if the % selection Option A ...

### How much do self organising maps suffer from local minima problems? (8 views)

How much do self organising maps suffer from local minima problems? I assume that the original orientation of the grid can have major orientation on the resulting shape of the grid, but does this have ...

### Term is significant linearly, quadratically and in interaction? (26 views)

I used deletion tests to identify ecological factors that relate to the number of parasites on rodents. There is one factor that is significant linearly, quadratically and in interaction. However, the ...

### Finding subsets with Branch-and-bound techniques (9 views)

Hello, I am trying to find a Matlab version of a function which performs best subset selection for least squares regression using the leaps and bounds (branch and bound) approach. Like in the paper by ...

### Using caret train function to perform lm with 10 repeated 10-fold cross validation (16 views)

I'm using the caret package at the moment to perform different forms of analysis on my dataset. I did the following to do 10-fold cross validation, 10 times: ...

### Comparison of Kaplan-Meier curves across ordered groups (14 views)

I am familiar with the log-rank test for comparing multiple Kaplan-Meier curves, but I am looking for a test that will compare across ordered groups (an ordinal variable). A significant result from ...

### What is the actual equation of the standard 68/95/99 bell curve graph? [duplicate] (31 views)

I want to connect Calc with Stats via cumulative density and taking the area under the curve. However, what is the equation of the curve that models the standard 68/95/99 bell curve?

### Coefficients from a penalized Cox PH (10 views)

I'm using the R package Penalized (0.9-42) on a Cox PH model. I'm using L2 (Ridge) on the grounds that I don't want to shrink my coefficients to 0. I don't understand why when I ask for: ...

### computing confidence interval for ranges, confidence level 0.95 (13 views)

I have some ranges of values and frequencies of each range. Participants choosed an entire range when asked, and not a particular value. Example: Range [0,20] : frequency = 20; Range (20,60], ...

### Best modelling packages in R? (56 views)

In R I use a lot the packages plyr, stringr, Hmisc and ggplot2. Each of these packages take the base code and make functions that are more intuitive and easier to work with. Each of these packages ...

### Absolute vs. Relative Difference in Survival Time - Is this possible? (10 views)

So there's a fairly well characterized difference between relative risk and absolute risk for conventional cohort studies, and for many questions, the absolute risk is arguably more appropriate. Is ...

### Why is the scatter diagram always symmetric around the SD line? (15 views)

This may seem like a dumb/vague question. When you draw a line of slope $\frac{\sigma_y}{\sigma_x}$, why is this line the major axis of the scatter-diagram ellipse? Is this some property of the ...

### Calculating confidence intervall for quantiles first by hand (than in R) (26 views)

It would be great if someone can check whether my approach is correct or not. Question in short will be, if the error calculation is the correct way. Lets assume i have the following data. ...

### Meta-analysis with a binary response variable (23 views)

I am attempting to fit a model to my data for meta-analysis I'm working on. The response is binary (0 or 1), and I want to specify a binomial distribution with logit link function. The 2-3 (depending ...

### Poorness of Kernel methods on visual pattern recegnition? (27 views)

I am currently reading the recent papers mainly written by Y. Bengio [1],[2],[3]. There are very strong claims about poorness of Kernel methods on recognizing handwritings in many general cases but ...

### Repeated measures mixed multinomial model in R - extract coefficients (13 views)

I would like to perform repeated measures mixed multinomial model using R. To be exact I would like to do MaxDiff analysis similar to SawTooth Hierarchical Bayes method. I want to use mixed models ...

### Estimating distribution from censored data (20 views)

$X$ is a positive variable with known support (assume discrete support, if that simplifies solution). $Y$ is another variable with the same support. $X$ and $Y$ are independent. $Z$ is equal to $X$ ...

### I have a problem in bayesian networks get p(E|A) (15 views)

I'm doing this book "Modeling and reasoning with Bayesian Networks" and I have this problem: ...

### Guess several distribution's parameters from a list of interarrivals (9 views)

Just a small introduction to the setting: I have traffic that is generated using several layers. Layer 3 consists of the basic packets, while Layer 2 is a more high level grouping of packets, and ...

### Expectation of correlated variables (20 views)

I'm looking to compare effects $\delta = \frac{\mu_T - \mu_C}{\sigma}$ for two studies, compared with the same control group. In order to find the covariance of effects for treatment A and treatment ...

### Conditional Logistic Regression, Rare Events and R (22 views)

I have a panel data set organized around 35 provincial units, with 110 binary "positive" outcomes in the dependent variable. This is for about 2600 observations in that data set. I have a couple ...

### review papers on P-values, adjusted P-values and common statistical tests in bioinformatics (26 views)

I'm looking for review papers which give comprehensive reviews of P-values, adjusted P-values and common statistical tests in bioinformatics.

### Goodness of fit test for a normal distribution (23 views)

In the example from the web site I was trying out this problem in page 107 about Goodness of fit test for a normal distribution. Question is about analysis of fat content of hambergers. I understand ...

### Bootstrapping in R using the boot {boot} and Boot {car} (37 views)

I'm trying my hand at resampling techniques with a dataset I have, and I think either I'm missing a conceptual point with bootstrapping, or I'm doing something incorrectly in R. Basically, I'm trying ...

### How to Reduce Error Term (27 views)

My question is "What could you do if you wanted to reduce the error term (e)?" I know the error term is basically the distance between the line and the point but I don't know how you would reduce it. ...

### Hamiltonian Monte Carlo: why is reparameterizing needed? (31 views)

In the Stan user's manual (Version 2.0.1, page 157), it says: A hierarchical model such as the above will suffer from the same kind of inefficiencies... [for a Hamiltonian Monte Carlo method] ...

### Goodness of Fit Test for Logistic Regression with small n_i (15 views)

I would like to test how well my model fits the data. The response is binary and the Chi-Squared Test cannot be applied for the residual deviance because the $n_i$ are $1$. To use the Chi-Squared GOF ...

### Help fitting a poisson glmer (getting an error message) [migrated] (12 views)

I am trying to fit a Poisson glmer model in R, to determine if 4 experimental treatments affected the rate at which plants developed new branches over time. New branches were counted after 35, 70 ...

### Validity Index Pseudo F for K-Means Clustering (17 views)

The Validity Index "Pseudo F" is described as: (between-cluster-sum-of-squares / (c-1)) / (within-cluster-sum-of-squares / (n-c)), with c being the number of clusters and n being the number of ...

### What are some examples of real-world processes that are well-described by AR, MA, ARMA, or ARIMA? (24 views)

Subject says it all - AR/MA/ARMA/ARIMA are often described as workhorses of time series analysis. But what are some real-world examples where these methods gave great results, and another more modern ...

### Model selection between exponential and gamma distribution using cross validation (13 views)

I have fitted the exponential and gamma model to my univariate data and obtained the MLE estimates using the R package "fitdistr", now I'm trying to do model selection based on leave-one-out cross ...

### Different outputs for weka and Excel - linear regression (21 views)

I was double-checking my regression outputs from weka and Excel. All coefficients are the same, however I get a considerably different intercept. How come? Also, is it possible to run a t- or p-test ...

### AIC and BIC for Support Vector Machine inside e1071 (14 views)

After training a support using the e1071 package of R, how can I calculate an information criterion such as AIC or BIC?

### Missing input value during prediction of a generalized linear model (8 views)

If I'm doing prediction with a generalized linear model and a new batch of inputs comes with some missing values, what strategies can I use to minimize the loss of information from the missing inputs? ...

### Generating null distributions by a residual permutation procedure (12 views)

I am trying to understand the method described in this paper which describes an hypothesis-testing framework for stable isotope ratios. The data are in a bivariate isotopic space and the metrics that ...

### nonparametric MARS regression (30 views)

Most statistical methods assume homogeneous (outlier-free) data in which all data points satisfy the same model. However, real data are (not) NEVER homogeneous; and accurate identification of outliers ...

### Spatial interpolation in program R [migrated] (8 views)

I'm having some trouble with spatial interpolation in R. I need to create an interpolated surface of a count variable across the area of a polygon. I can create this output in ArcGIS, but would like ...

### Using PC scores in PCA (30 views)

This might be a basic question, but I am analysing the diversity of floral traits in plant communities. Some of my data are reflectance spectra. I want to reduce the complexity of these data by doing ...
https://math.stackexchange.com/questions/683510/jacobson-radical-of-rings
1. What is $J(R)$ when $R$ is a principal ideal domain but not a field? E.g., I know that $\mathbb Z$ is a PID and why $J(\mathbb Z)=0$, but can we say that this is true for every principal ideal domain that is not a field? Why?
2. Let $R=C([0,1])$, the ring of real continuous functions on the interval $[0,1]$. Then what is $J(R)$?
3. Let $R$ be a commutative ring. What is $J(R[[x]])$?
4. What is $J(\mathbb Z_n)$? I know if $n$ is prime then $\mathbb Z_p$ is a field and $J(\mathbb Z_p)=0$; also $J(\mathbb Z_{p^2})=(p)$ because it is a local ring. Is it true, for example, that $J(\mathbb Z_{12})=((2,3)\mathbb Z/12\mathbb Z) = 6\mathbb Z/12\mathbb Z$? Why?

• Where are you stuck? Surely you can add a few thoughts on each question? – Namaste Feb 20 '14 at 13:35
• For sure there are a lot of PIDs with nontrivial $J(R)$; take for example all DVRs. – Ferra Feb 20 '14 at 14:36

1. Note that all ideals of the form $I_a=\{f\in C([0,1])\colon f(a)=0\}$, where $a\in[0,1]$, are maximal. In fact you can check easily that $C([0,1])/I_a\simeq \mathbb R$. Therefore $J(C([0,1]))\subseteq \bigcap_{a\in [0,1]}I_a=\{0\}$.
2. If your $R$ is unitary, you need to prove that $f=\sum_{n\geq 0}a_nx^n$ is invertible iff $a_0$ is invertible in $R$. Now use the fact that $J(A)$ for any ring $A$ is given by $\{x\in A\colon 1-xy\in A^*\,\forall y\in A\}$. From these two facts you deduce easily that $f\in J(R[[x]])$ iff $a_0\in J(R)$.
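For the $J(\mathbb Z_n)$ part: since $J(\mathbb Z_n)$ is the intersection of the maximal ideals $p\mathbb Z/n\mathbb Z$ over the distinct primes $p$ dividing $n$, it consists of the multiples of $\mathrm{rad}(n)$, the product of those primes. A small brute-force sketch (illustrative, not from the answer above) makes this concrete:

```python
def jacobson_radical_Zn(n):
    """J(Z_n) as a subset of {0, ..., n-1}.

    J(Z_n) is the intersection of the maximal ideals pZ/nZ over the
    distinct primes p dividing n, i.e. the multiples of rad(n)."""
    rad, m, d = 1, n, 2
    while d * d <= m:
        if m % d == 0:
            rad *= d                  # d is a new prime factor of n
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        rad *= m                      # leftover prime factor > sqrt(n)
    return {k for k in range(n) if k % rad == 0}
```

For $n=12$ this gives $\{0, 6\} = 6\mathbb Z/12\mathbb Z$, agreeing with the asker's guess, and for prime $n$ it gives $\{0\}$.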
https://quant.stackexchange.com/questions/45858/pd-for-ecl-modelling
# PD for ECL modelling

I am trying to understand the interrelations between the marginal, cumulative and conditional PDs (Probabilities of Default) when modelling ECLs (Expected Credit Losses). My current understanding is that for a loan with remaining maturity of $$t$$ years the following calculations apply:

Conditional: $$PD_{t} = (PD_{t}\mid PD'_{t-1})$$

Marginal: $$mPD_{t} = PD_{t}\prod_{s=1}^{t-1}(1-PD_{s})$$

Cumulative: $$cPD = \sum_{t=1}^{n} mPD_{t}$$

Alternatively: $$mPD_{t} = cPD_{t}-cPD_{t-1}$$

A numerical example would be as follows: $$t = 3$$, $$PD_1 = 5\%$$; for simplicity we assume independence, so that $$PD_{t} = PD_{t-1}$$, which gives us a tree diagram (not shown). From the diagram it follows that:

$$mPD_{1}=5\%$$
$$mPD_{2}=5\%\times 95\% = 4.75\%$$
$$mPD_{3}=5\%\times 95\%\times 95\% = 4.51\%$$
$$cPD_{1} = 5\%$$
$$cPD_{2} = 5\%+4.75\%=9.75\%$$
$$cPD_{3} = 5\%+4.75\%+4.51\%=14.26\%$$

My question is what numbers should be applied when modelling ECLs. For example, if we have a 3-year loan for 1m USD at 5% interest (for simplicity let's assume that the payments are made annually), then the annual payments will be 367,208 USD. So if we want to calculate the ECLs for this loan, do we need to apply the marginal or the conditional probabilities? And to what do we apply these probabilities: the cash flows or the remaining balance of the loan?
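The arithmetic in the numerical example can be reproduced with a short sketch. This only computes marginal and cumulative PDs from the yearly conditional PDs; it deliberately does not answer the question of which probabilities belong in the ECL formula.

```python
def marginal_and_cumulative(pds):
    """pds[t] is the conditional PD in year t+1, given survival so far.

    Marginal:   mPD_t = PD_t * prod_{s<t} (1 - PD_s)
    Cumulative: cPD_t = sum of marginal PDs up to year t."""
    survival, marginal = 1.0, []
    for pd in pds:
        marginal.append(pd * survival)
        survival *= (1.0 - pd)       # probability of surviving this year too
    cumulative, total = [], 0.0
    for m in marginal:
        total += m
        cumulative.append(total)
    return marginal, cumulative
```

With constant `pds = [0.05, 0.05, 0.05]` this reproduces the marginal PDs 5%, 4.75%, 4.51% and the cumulative PD of 14.26% from the example.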
https://www.zbmath.org/?q=an%3A0634.03067
## Multipliers in implicative algebras. (English) Zbl 0634.03067

By a multiplier in an implicative (i.e., Hilbert) algebra $A$ we mean a mapping $\phi : A\to A$ such that $\phi(a\to b)=a\to \phi(b)$ holds for all $a, b$ in $A$. We describe some elementary properties of multipliers themselves as well as of their kernels and fixed point sets. In an implicative semilattice, every isotonic multiplier turns out to be a closure endomorphism (and vice versa); this case was considered by the author in some detail in Latv. Mat. Ezheg. 30, 136-149 (1986; Zbl 0621.06002). See also W. H. Cornish, Math. Semin. Notes, Kobe Univ. 8, 157-169 (1980; Zbl 0465.03029).

Reviewer: J. Cirulis

### MSC:

03G25 Other algebras related to logic
06A15 Galois correspondences, closure operators (in relation to ordered sets)

### Citations:

Zbl 0621.06002; Zbl 0465.03029
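The defining identity $\phi(a\to b)=a\to\phi(b)$ can be checked mechanically on small examples. The sketch below uses the two-element implicative algebra $\{0,1\}$ with classical implication; the algebra and the sample maps are illustrative choices, not taken from the review.

```python
def imp(a, b):
    """Classical implication on the two-element implicative algebra {0, 1}."""
    return 0 if (a == 1 and b == 0) else 1

def is_multiplier(phi, elems=(0, 1)):
    """Check the multiplier identity phi(a -> b) == a -> phi(b) for all a, b."""
    return all(phi(imp(a, b)) == imp(a, phi(b)) for a in elems for b in elems)
```

For instance, the identity map and each map $x \mapsto c\to x$ for fixed $c$ satisfy the identity, while the constant-0 map does not (take $a=0$: $\phi(0\to b)=0$ but $0\to 0=1$).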
https://math.stackexchange.com/questions/2424905/how-to-explain-to-someone-if-this-notation-is-mathematically-correct-or-not
# How to explain to someone if this notation is mathematically correct or not?

I got a question from a friend who is tutoring high school students in math after school. The question is basically this:

Does writing $f(x) = x^2$ mean mathematically the same as $F(x) = x^2$?

I read in other S.E. sections that function notations are conventions, not fixed rules. How will my friend explain to her students when it is (mathematically) correct to use the notations $f(x)$, $g(x)$, $h(x)$ or $F(x)$, $G(x)$, $H(x)$?

• There is no universal convention about what type of letters to use... You have to note that, as per the above definition, for every value assigned to $x$ the two functions $f(x)$ and $F(x)$ have the same value; thus, basically, they are the same "object". Sep 11, 2017 at 6:26
• You can denote a function with any letter or even a non-letter symbol and it would be "mathematically correct". There are some conventions within specific branches of mathematics. E.g., when you teach about integrals, you would often use big $F$ for an antiderivative of small $f$, and in topology open sets are most often denoted by $U,V,W$ instead of $A,B,C$. But it's just for convenience; if you defined it in a different way it would also be correct, even if less common. Sep 11, 2017 at 6:33
• You can draw a squirrel if you want, and use that as the name of your function. It would be a hassle, and trying to type it on a computer would really be troublesome compared to the alternatives, but it wouldn't be incorrect. Just uncommon. Sep 11, 2017 at 7:22

No, they mean different things, at least to one of your dumb computers. Try defining $f(x) = x^2$ in a Mathematica notebook, then later in the same notebook ask for the value of $F(\phi)$. It probably won't answer $\phi + 1$, not unless you have also defined $F(x) = x^2$. Most likely it will complain that $F$ is undefined, or maybe you have defined it as something else. Humans are only slightly smarter than computers. A human sees $F(5)$ and he might think you mean either the fifth Fibonacci number or the fifth Fermat number. So, for clarity, I suggest you use $f(x)$ for the first generic function you define in a given context, and $F(x)$ for a function named after some man or monster, god or demon. Or don't follow this suggestion; I will enjoy the ensuing confusion. Mwahahahahahahahaha!
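The answer's point holds in any case-sensitive language, not just Mathematica. A small Python sketch standing in for the notebook example: defining lowercase `f` does nothing for the distinct name `F`, and looking up `F` fails.

```python
def f(x):
    """Square, bound only to the lowercase name f."""
    return x ** 2

def call_uppercase():
    """Try to call F, a distinct name that was never defined."""
    try:
        return F(3)          # NameError: capital F is a different identifier
    except NameError:
        return "F is undefined"
```

So to the machine, $f$ and $F$ are simply different symbols; any identification between them lives only in human convention.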
http://tex.stackexchange.com/tags/programming/new
# Tag Info

0

Another answer from the perspective of "you don't need variable variables". This uses \long\def and adds a \vtop box to the material under the line.

    \documentclass{article}
    \usepackage{setspace}
    \long\def\sigblockB#1#2#3% Defined 3 arguments
    % \long\def allows for paragraphs
    % within the arguments of a control sequence
    ...

5

If I understand correctly, you want to use the relative positions of the saved nodes in your new picture. That is, each node should refer to a position in the new picture relative to the new origin. Here's some code that saves all the data for a list of specified nodes which can then be restored at a later time in the document. It uses LaTeX3 stuff ...

1

Testing for the fd-file is useless: it exists anyway; also, it contains only font definitions, it is not the font itself. You need to test for a tfm and/or a pfb font which is specific to the complete version. With pipes enabled you could do something like this (see Search for files first in the texmf trees):

    \documentclass{article}
    \pdfmapfile{=mtpro2.map}
    ...

5

If you're mixing math and LaTeX you should consider looking into the sagetex package, which gives you access to a computer algebra system, called Sage, to handle the math. Documentation on basic statistics is here. You'll need Sage installed locally on your computer or, better yet, you can use the free Sagemath Cloud site. In that case, no Sage to download and ...

2

If ( is used, it will be absorbed as #2.

    \documentclass{article}
    \usepackage{tikz}
    \makeatletter
    \def\macro{\pgfutil@ifnextchar[{\macrob}{\macrob[]}}
    \def\macrob[#1]#2{%
      \begingroup
      \def\parenthesis@for@err{(}%
      \def\maybe@parenthesis{#2}%
      \ifx\maybe@parenthesis\parenthesis@for@err
        \PackageError{mynicepackage}{You donkey!}{I told you to use {}, ...

2

Perhaps not the easiest or quickest way, but the \@ifnextchar way could be exploited to test whether there's a ( after the optional argument [] (it does not check for ) however). Use \GenericInfo{...}{...} to write some information to the log file.

    \documentclass{article}
    \usepackage{tikz}
    \begin{document}
    \makeatletter
    ...

2

Though I recommend to use the answer from @egreg, here is a way to implement what you are looking for using \@ifnextchar. The idea is that \sigblock prepares everything until it comes to processing the lines beneath the signature field. Then \sigblock@ is invoked, which will set the next grouped argument as a line beneath the signature field and starts a ...

2

By trying to solve this problem I've come up with a possibly interesting macro, so I want to share it here. This macro is called \vardef and allows us to define a macro which can receive a variable number of arguments (any number, not limited to 9 arguments) enclosed in curly brackets. Inside the definition of a \vardef-defined macro one can use the macro ...

7

You're overthinking: the tool is already there, namely tabular.

    \documentclass{article}
    \newcommand{\sigblock}[3]{%
      \par\vspace{\medskipamount}\noindent
      \hspace*{#1in}\makebox[#2in]{\hrulefill}\\*[.2ex]
      \hspace*{#1in}%
      \begin{tabular}{@{}l@{}}
        #3
      \end{tabular}%
    }
    \begin{document}
    \sigblock{0}{3}{Notary Public \\ At Large} ...

1

Use a \Longstack.

    \documentclass{article}
    \usepackage{stackengine,lipsum,setspace}
    \setstackEOL{\cr}
    \def\sigblock#1#2#3% Defined 3 arguments
    {\singlespacing{\vbox{\vskip.75in\noindent\hskip#1in%
    {\hbox to #2in{\leaders\hbox to 0.00625in{\hfil.\hfil}\hfill}}}%
    \par\noindent\hskip#1in\Longstack[l]{#3}}}
    \parindent 0pt ...

1

Here's an easy way with xparse, using an optional g argument, which allows an optional argument to be given as a {...} group pair.

    \documentclass{article}
    \usepackage{setspace}
    \usepackage{xparse}
    \NewDocumentCommand{\sigblock}{mm+m+g}{%
      \IfValueTF{#4}{%
        \singlespacing{\vbox{\vskip.75in\noindent\hskip#1in%
        {\hbox to #2in{\leaders\hbox to ...

16

The commands \rm, \bf etc. are called "deprecated" because they have been removed from the LaTeX kernel. The way the commands work doesn't fit in the (much better) "new font selection scheme" (NFSS) used by LaTeX2e. A number of classes nevertheless provide the definitions for these commands, but the definitions differ. E.g. memoir: \@memoldfonterr {\rm ...

19

Is there any reason(s) not to use \let to redefine \bf to \bfseries and \it to \itshape? Yes, there are good reasons. :-) With the above \let-based setup, {\bf\it ...} produces bold-italic. In contrast, in a plain-TeX document {\bf\it ...} produces italic text. If the goal is to make \bf and \it behave the same way in LaTeX and plain-TeX, the \let-based ...

Top 50 recent answers are included
https://cstheory.stackexchange.com/questions/34633/how-to-check-if-a-the-language-represented-by-a-dfa-is-finite
# How to check if the language represented by a DFA is finite [closed]

I am studying regular languages and DFAs. I have implemented a DFA in Java. I have to write a function which tells whether the language represented by a DFA is finite or not. I need a method or algorithm to do so. What I have already figured out is that if the DFA has loops in it, then it can possibly recognize infinitely many words.

• Take the transition matrix to powers n ... 2n. The accepting entries should be zero on all of them if the DFA only accepts finitely many strings. – Chad Brewbaker May 1 '16 at 1:42

For a proof, just note that if there are no cycles on an accepting path, there can be no repetition in the state sequence traversed by a word which is accepted; hence we could only describe a finite number of words (more precisely, $|Q|^d$, where $Q$ is the set of states and $d$ is the length of a longest accepting path in the automaton, is an upper bound on the number of distinct words that could be accepted). Otherwise, if we have cycles, we could "pump" some accepted words, thus getting an infinite number of them.
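The cycle argument above translates directly into an algorithm: the language is infinite iff some cycle lies on a path from the start state to an accepting state, i.e. iff a cycle exists among "useful" states (reachable from the start and co-reachable from an accepting state). A minimal Python sketch (the DFA encoding is an assumption for the sketch, not the asker's Java representation):

```python
def language_is_finite(dfa):
    """dfa = (states, delta, start, accepting), delta: (state, symbol) -> state.
    Finite iff no cycle is both reachable from start and co-reachable
    from an accepting state."""
    states, delta, start, accepting = dfa
    # forward reachability from the start state
    reach, stack = {start}, [start]
    while stack:
        q = stack.pop()
        for (p, _), r in delta.items():
            if p == q and r not in reach:
                reach.add(r); stack.append(r)
    # backward reachability from the accepting states
    coreach, stack = set(accepting), list(accepting)
    while stack:
        q = stack.pop()
        for (p, _), r in delta.items():
            if r == q and p not in coreach:
                coreach.add(p); stack.append(p)
    useful = reach & coreach
    # cycle detection (DFS with white/grey/black coloring) on useful states
    color = {q: 0 for q in useful}            # 0 white, 1 grey, 2 black
    def has_cycle(q):
        color[q] = 1
        for (p, _), r in delta.items():
            if p == q and r in useful:
                if color[r] == 1 or (color[r] == 0 and has_cycle(r)):
                    return True
        color[q] = 2
        return False
    return not any(color[q] == 0 and has_cycle(q) for q in useful)
```

Loops through states that can never lead to acceptance (e.g. a dead state's self-loop) are correctly ignored, which is why the "has loops" observation alone is not enough.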
http://gasstationwithoutpumps.wordpress.com/tag/circuits/
# Gas station without pumps

## 2013 April 10

### Supplemental sheets, draft 3

This post updates and replaces the Supplemental sheets, draft 2. It reflects the redesign of the course based on running a prototype version of the course as a group tutorial in Winter 2013.

### Lecture Course

Information to accompany Request for Course Approval

Course #: 101
Catalog Title: Applied Circuits for Bioengineers

1. Are you proposing a revision to an existing course? If so give the name, number, and GE designations (if applicable) currently held.

This is not a revision to any existing course. A prototype version of the course was run as a BME 194 Group Tutorial in Winter 2013. Notes on the design and daily running of that prototype can be found at https://gasstationwithoutpumps.wordpress.com/circuits-course-table-of-contents

2. In concrete, substantive terms explain how the course will proceed. List the major topics to be covered, preferably by week.

The Applied Circuits course is centered around the labs in the accompanying lab course. Concepts are taught as needed for the labs, with design and analysis exercises in the lecture course cementing the understanding. The recurring theme throughout the course is voltage dividers: for change of voltage, for current-to-voltage conversion, for high-pass and low-pass RC filters, in Wheatstone bridges, and as feedback circuits in op amp circuits. The intent of this course is to provide substantial design experience for bioengineering students early in their studies, and to serve both as a bridge course to entice students into the bioelectronics concentration and as a terminal electronics course for those students focusing on other areas.

1. Basic DC circuit concept review: voltage, current, resistance, Kirchhoff’s Laws, Ohm’s Law, voltage divider, notion of a transducer. The first week should cover all the concepts needed to do the thermistor lab successfully.
2. Models of thermistor resistance as a function of temperature.
Voltage and current sources, AC vs DC, DC blocking by capacitors, RC time constant, complex numbers, sine waves, RMS voltage, phasors. The second week should cover all the concepts needed to do the electret microphone lab successfully.
3. Low-pass and high-pass RC filters as voltage dividers, Bode plots. Concepts necessary for properly understanding digitized signals: quantized time, quantized voltage, sampling frequency, Nyquist frequency, aliasing.
4. Amplifier basics: op amps, AC coupling, gain computation, DC bias for single-power-supply offsets, bias source with unity-gain amplifier. In the lab, students will design, build, and test a low-gain amplifier (around 5–10 V/V) for audio signals from an electret microphone. We’ll also include a simple current-amplifier model of a bipolar transistor, so that they can increase the current capability of their amplifier.
5. Op amps with feedback that has complex impedance (frequency-dependent feedback), RC time constants, parallel capacitors, hysteresis, square-wave oscillator using Schmitt triggers, capacitance-output sensors, capacitance-to-frequency conversion. Topics are selected to support students designing a capacitive touch sensor in the accompanying lab.
6. Phototransistors and FETs for the tinkering lab and for the class-D amplifier lab. In preparation for the lab in which students model a pair of electrodes as R+(C||R), we will need a variety of both electronics and electrochemistry concepts: variation of parameters with frequency, impedance of capacitors, magnitude of impedance, series and parallel circuits, limitations of the R+(C||R) model, and at least a vague understanding of half-cell potentials for the electrode reactions: Ag → Ag+ + e-, Ag+ + Cl- → AgCl, Fe + 2 Cl- → FeCl2 + 2 e-.
7. Differential signals, twisted-pair wiring to reduce noise, strain gauge bridges, instrumentation amplifier, DC coupling, multi-stage amplifiers.
Topics are selected to support the design of a 2-stage amplifier for a piezoresistive pressure sensor in the lab.
8. System design, comparators, more on FETs. Students will design a class-D power amplifier to implement in the lab.
9. A little electrophysiology: action potentials, electromyograms, electrocardiograms. Topics are chosen so that students can design a simple 3-wire electrocardiogram (EKG) in the lab. There will also be a bit more development of simple (single-pole) filters.
10. The last week will be review and special topics requested by the students.

3. Systemwide Senate Regulation 760 specifies that 1 academic credit corresponds to 3 hours of work per week for the student in a 10-week quarter. Please briefly explain how the course will lead to sufficient work with reference to e.g., lectures, sections, amount of homework, field trips, etc. [Please note that if significant changes are proposed to the format of the course after its initial approval, you will need to submit new course approval paperwork to answer this question in light of the new course format.]

The combination of BME101 and BME101L is 7 units (21 hours per week). The time will be spent approximately as follows:

• 3.5 hours lecture/discussion
• 3.5 hours reading background and circuits text
• 3 hours reading lab handouts and doing pre-lab design activities
• 6 hours lab
• 5 hours writing design reports for lab

4. Include a complete reading list or its equivalent in other media.

No existing book covers all the material. For the prototype run of the course, we relied heavily on Wikipedia articles, which turned out to be too dense for many of the students. Other alternatives (such as Op amps for everyone by Ron Mancini http://www.e-booksdirectory.com/details.php?ebook=1469 Chapters 1–6 and Op Amp Applications Handbook by Analog Devices http://www.analog.com/library/analogDialogue/archives/39-05/op_amp_applications_handbook.html Sections 1-1 and 1-4) were also much too advanced.
In the future we will most likely use the free on-line text All about Circuits as the primary text, with material not covered there (such as the various sensors) coming mainly from Wikipedia and the datasheets for the components.

5. State the basis on which evaluation of individual students’ achievements in this course will be made by the instructor (e.g., class participation, examinations, papers, projects).

Students will be evaluated primarily on design reports with some in-class or take-home quizzes to ensure that they do the needed reading on theoretical concepts.

6. List other UCSC courses covering similar material, if known.

EE 101 covers some of the same circuit material, but without the focus on sensors and without instrumentation amps. It covers linear circuit theory in much more depth and focuses on mathematical analysis of complicated linear circuits, rather than on design with simple circuits. The expectation for bioengineering students is that those in the bioelectronics track would take BME 101 before taking EE 101, and that those in other tracks would take BME 101 as a terminal electronics course providing substantial engineering design. The extra material in BME 101 would prepare the bioengineering students better for EE 101.

Physics 160 offers a similar level of practical electronics, but focuses on physics applications, rather than on bioengineering applications, and is only offered in alternate years.

7. List expected resource requirements including course support and specialized facilities or equipment for divisional review. (This information must also be reported to the scheduling office each quarter the course is offered.)

The lecture part of the course needs no special equipment—a standard media-equipped classroom with a whiteboard, screen, and data projector should suffice. Having a portable laptop-connected oscilloscope would make demos much easier to do, but is not essential.
The lecture course is not really separable from the associated lab course, whose equipment needs are described on the supplemental sheet for that course.

The course requires a faculty member (simultaneously teaching the co-requisite Applied Circuits lab course) and a teaching assistant or undergraduate group tutor for discussion sections and assistance in grading. The same TA/group tutor should be used for both the lecture and the lab courses.

8. If applicable, justify any pre-requisites or enrollment restrictions proposed for this course. For pre-requisites sponsored by other departments/programs, please provide evidence of consultation.

Students will be required to have single-variable calculus and a physics electricity and magnetism course. Both are standard prerequisites for any circuits course. Although DC circuits can be analyzed without calculus, differentiation and integration are fundamental to AC analysis. Students should have already been introduced to the ideas of capacitors and inductors and to series and parallel circuits. The prerequisite courses are already required courses for biology majors and bioengineering majors, so no additional impact on the courses is expected.

9. Proposals for new or revised Disciplinary Communication courses will be considered within the context of the approved DC plan for the relevant major(s). If applicable, please complete and submit the new proposal form (http://reg.ucsc.edu/forms/DC_statement_form.doc or http://reg.ucsc.edu/forms/DC_statement_form.pdf) or the revisions to approved plans form (http://reg.ucsc.edu/forms/DC_approval_revision.doc or http://reg.ucsc.edu/forms/DC_approval_revision.pdf).

This course is not expected to contribute to any major’s disciplinary communication requirement, though students will get extensive writing practice in the design reports (writing between 50 and 100 pages during the quarter).

10.
If you are requesting a GE designation for the proposed course, please justify your request making reference to the attached guidelines.

No General Education code is proposed for this course, as all relevant codes will have already been satisfied by the prerequisites.

11. If this is a new course and you are requesting a new GE, do you think an old GE designation(s) is also appropriate? (CEP would like to maintain as many old GE offerings as is possible for the time being.)

No General Education code is proposed for this course, as all relevant codes (old or new) will have already been satisfied by the prerequisites.

### Lab course

Information to accompany Request for Course Approval

Course #: 101L
Catalog Title: Applied Circuits Lab

1. Are you proposing a revision to an existing course? If so give the name, number, and GE designations (if applicable) currently held.

This is not a revision to any existing course. A prototype version of the course was run as a BME 194F Group Tutorial in Winter 2013. Notes on the design and daily running of that prototype can be found at https://gasstationwithoutpumps.wordpress.com/circuits-course-table-of-contents

2. In concrete, substantive terms explain how the course will proceed. List the major topics to be covered, preferably by week.

The course is a lab course paired with BME 101, Applied Circuits for Bioengineers. The labs have been designed to be relevant to bioengineers and to have as much design as is feasible in a first circuits course. The labs are the core of the course, with lecture/discussion classes to support them.

There will be six hours of lab a week, split into two 3-hour sessions. Lab assignments will generally take two lab sessions, with data collection in the first lab session, and data analysis and design between lab sessions. Some of the more straightforward labs will need only a single session. Except for the first intro lab, these labs have been used in the prototype run of the class as 3-hour labs.
Most did not fit in one 3-hour lab session and would benefit from being split into two separate lab sessions with data analysis and design between the sessions.

1. Intro to parts, tools, and lab equipment (single session)
2. Thermistor
3. Microphone
4. Sampling and aliasing (single session)
5. Audio amp
6. Hysteresis oscillator and soldering lab
7. FET and phototransistor
8. Electrode modeling
9. Pressure sensor and instrumentation amp (soldered)
10. Class-D power amplifier
11. EKG (instrumentation amp with filters, soldered)

1. Intro to parts, tools, and lab equipment

Students will learn about the test equipment by using the multimeters to measure other multimeters. What is the resistance of a multimeter that is measuring voltage? Of one that is measuring current? What current or voltage is used for the resistance measurement? Students will be issued their parts and tool kits, learn to use the wire strippers, and make twisted-wire cables for the power supplies to use all quarter. They will learn to set the current limits on the power supplies and measure voltages and currents for resistor loads around 500Ω. This lab will not require a written lab report.

Lab skills developed: wire strippers, multimeter for measuring voltage and current, setting bench power supply.
Equipment needed: multimeter, power supply.

2. Thermistor lab

The thermistor lab will have two lab sessions involving the use of a Vishay BC Components NTCLE413E2103F520L thermistor or equivalent.
For the first lab session, the students will use a bench multimeter to measure the resistance of the thermistor, dunking it in various water baths (with thermometers in them to measure the temperature). They should fit a simple curve to this data based on standard thermistor models. A class period will be spent on learning both the model and how to do model fitting with gnuplot, and there will be a between-lab exercise where they derive the formula for maximizing |dV/dT| in a voltage divider that converts the resistance to a voltage.

For the second lab session, they will add a series resistor to make a voltage divider. They have to choose a value to get as large and linear a voltage response as possible at some specified “most-interesting” temperature (perhaps body temperature, perhaps room temperature, perhaps DNA melting temperature). They will then measure and plot the voltage output for another set of water baths. If they do it right, they should get a much more linear response than for their resistance measurements.
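The between-lab |dV/dT| exercise can be sanity-checked numerically. The sketch below uses the common B-parameter thermistor model with illustrative values (not the actual NTCLE413E2103F520L datasheet parameters) and confirms the textbook result that sensitivity at a chosen temperature peaks when the series resistor equals the thermistor’s resistance at that temperature:

```python
import math

# B-parameter thermistor model: R(T) = R0 * exp(B * (1/T - 1/T0)).
# Values are illustrative, not from any particular datasheet.
R0, T0, B = 10e3, 298.15, 3435.0      # nominal 10 kOhm at 25 C
VDD = 3.3                             # supply voltage for the divider

def r_thermistor(T):
    """Thermistor resistance (ohms) at absolute temperature T (kelvin)."""
    return R0 * math.exp(B * (1.0 / T - 1.0 / T0))

def divider_out(Rs, T):
    """Output of a divider with series resistor Rs on top of VDD... here
    the fixed resistor is taken across the output, thermistor to VDD."""
    Rt = r_thermistor(T)
    return VDD * Rs / (Rs + Rt)

def sensitivity(Rs, T, dT=1e-3):
    """|dV/dT| estimated by a central difference."""
    return abs(divider_out(Rs, T + dT) - divider_out(Rs, T - dT)) / (2 * dT)

# Sweep Rs at the temperature of interest; the sweep's argmax should sit
# at (roughly) the thermistor's own resistance there.
Tc = 310.15                           # ~body temperature
best_Rs = max(range(1000, 30001, 50), key=lambda Rs: sensitivity(Rs, Tc))
print(best_Rs, round(r_thermistor(Tc)))   # both land near 6.4 kOhm
```

This numerical check is no substitute for the derivation (setting d/dRs of Vdd·Rs·Rt·B/(T²(Rs+Rt)²) to zero gives Rs = Rt directly), but it lets students test their formula before soldering anything.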
Finally, they will hook up the voltage divider to an Arduino analog input and record a time series of a water bath cooling off (perhaps adding an ice cube to warm water to get a fast temperature change), and plot temperature as a function of time.

Lab skills developed: use of multimeter for measuring resistance and voltage, use of Arduino with data-acquisition program to record a time series, fitting a model to data points, simple breadboarding.
Equipment needed: multimeter, power supply, thermistor, selection of various resistors, breadboard, clip leads, thermoses for water baths, secondary containment tubs to avoid water spills in the electronics lab. Arduino boards will be part of the student-purchased lab kit. All uses of the Arduino board assume connection via USB cable to a desktop or laptop computer that has the data logger software that we will provide.

3. Electret microphone

First, we will have the students measure and plot the DC current vs. voltage for the microphone. The microphone is normally operated with a 3V drop across it, but can stand up to 10V, so they should be able to set the Agilent E3631A bench power supply to various values from 0V to 10V and get the voltage and current readings directly from the bench supply, which has 4-place accuracy for both voltage and current. Ideally, they should see that the current is nearly constant as voltage is varied—nothing like a resistor. They will follow up the hand measurements with automated measurements using the Arduino to measure the voltage across the mic and current through it for voltages up to about 4V. The FET in the microphone shows a typical exponential I vs. V characteristic below threshold, and a gradually increasing current as voltage increases in the saturation region. We’ll do plotting and model fitting in the data analysis class between the two labs.
Second, we will have them do current-to-voltage conversion with a 5V power supply and a resistor to get a 1.5V DC output from the microphone and hook up the output of the microphone to the input of the oscilloscope. Input can be whistling, talking, iPod earpiece, … . They should learn the difference between AC-coupled and DC-coupled inputs to the scope, and how to set the horizontal and vertical scales of the scope.

Third, they will design and wire their own DC blocking RC filter (going down to about 1Hz), and confirm that it has a similar effect to the AC coupling on the scope.

Fourth, they will play sine waves from the function generator through a loudspeaker next to the mic, observe the voltage output with the scope, and measure the AC voltage with a multimeter, perhaps plotting output voltage as a function of frequency. Note: the specs for the electret mic show a fairly flat response from 50Hz to 3kHz, so most of what the students will see here is the poor response of a cheap speaker at low frequencies.

EE concepts: current sources, AC vs DC, DC blocking by capacitors, RC time constant, sine waves, RMS voltage, properties varying with frequency.
Lab skills: power supply, oscilloscope, function generator, RMS AC voltage measurement.
Equipment needed: multimeter, oscilloscope, function generator, power supply, electret microphone, small loudspeaker, selection of various resistors, breadboard, clip leads.

4. Sampling and Aliasing

Students will use the data logger software on the Arduino to sample sine waves from a function generator at different sampling rates. They will need to design a high-pass RC filter to shift the DC voltage from centered at 0 to centered at 2.5V in the middle of the Arduino A-to-D converter range. They will also design a low-pass filter (with corner frequency below the Nyquist frequency) to see the effect of filtering on the aliasing.

EE concepts: quantized time, quantized voltage, sampling frequency, Nyquist frequency, aliasing, RC filters.
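The alias frequencies the students should observe in this lab can be predicted with a one-line formula: a pure sine at frequency f sampled at rate fs appears folded into the 0 to fs/2 Nyquist band. A small sketch (function name is my own):

```python
def apparent_frequency(f, fs):
    """Frequency (Hz) that a pure sine at f appears to have when sampled
    at rate fs: the alias folded into the 0..fs/2 Nyquist band."""
    return abs(f - fs * round(f / fs))

# Sampling a 900 Hz tone at 1 kHz makes it look like a 100 Hz tone,
# and a 1300 Hz tone looks like 300 Hz; a 400 Hz tone is below the
# Nyquist frequency and is unchanged.
for f in (900, 1300, 400):
    print(f, "->", apparent_frequency(f, 1000))
```

Having students compute the predicted alias before the lab, then compare against what the Arduino records, makes the folding behavior much more concrete than the formula alone.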
Equipment needed: function generator, Arduino board, computer.

5. Audio amplifier

Students will use an op amp to build a simple non-inverting audio amplifier for an electret microphone, setting the gain to around 6 or 7. The amplifier will need a high-pass filter to provide DC level shifting at the input to the amplifier. Note that we are using single-power-supply op amps, so they will have to design a bias voltage supply as well. The output of the amplifier will be recorded on the Arduino (providing another example of signal aliasing). The second half of the lab will add a single bipolar transistor to increase the current and make a class A output stage for the amplifier, as the op amp does not provide enough current to drive the 8Ω loudspeaker loudly.

EE concepts: op amp, DC bias, bias source with unity-gain amplifier, AC coupling, gain computation.
Lab skills: complicated breadboarding (enough wires to have problems with messy wiring). If we add the Arduino recording, we could get into interesting problems with buffer overrun if their sampling rate is higher than the Arduino’s USB link can handle.
Equipment needed: breadboard, op amp chip, assorted resistors and capacitors, electret microphone, Arduino board, optional loudspeaker.

6. Hysteresis and capacitive touch sensor

For the first half of the lab, students will characterize a Schmitt trigger chip, determining VIL, VIH, VOL, and VOH. Using these properties, they will design an RC oscillator circuit with a specified period or pulse width (say 10μs), and measure the frequency and pulse width of the oscillator.

For the second half of the lab, the students will build a relaxation oscillator whose frequency is dependent on the parasitic capacitance of a touch plate, which the students can make from Al foil and plastic food wrap. In addition to breadboarding, students will wire this circuit by soldering wires and components on a PC board designed for the oscillator circuit.
Students will have to measure the frequency of the oscillator with and without the plate being touched. We will provide a simple Arduino program that is sensitive to changes in the pulse width of the oscillator and that turns an LED on or off, to turn the frequency change into an on/off switch. Students will treat the oscillator board as a 4-terminal component, and examine the effect of adding resistors or capacitors between different terminals.

EE concepts: frequency-dependent feedback, oscillator, RC time constants, parallel capacitors.
Lab skills: soldering, frequency measurement with digital scope.
Equipment needed: power supply, multimeter, Arduino, clip leads, amplifier prototyping board, oscilloscope.

7. Phototransistor and FET

First half: characterize phototransistor in ambient light and shaded. Characterize nFET and pFET. Second half: students will “tinker” with the components they have to produce a light-sensitive, noise-making toy.

EE concepts: phototransistors, FETs.
Equipment needed: breadboard, phototransistor, power FETs, loudspeaker, hysteresis oscillator from previous lab, oscilloscope.

8. Electrode measurements

First, we will have the students attempt to measure the resistance of a saline solution using a pair of stainless steel electrodes and a multimeter. This should fail, as the multimeter gradually charges the capacitance of the electrode/electrolyte interface.

Second, the students will use a function generator driving a voltage divider with a load resistor in the range 10–100Ω. The students will measure the RMS voltage across the resistor and across the electrodes for different frequencies from 3Hz to 300kHz (the range of the AC measurements for the Agilent 34401A Multimeter). They will plot the magnitude of the impedance of the electrodes as a function of frequency and fit an R2+(R1||C1) model to the data, most likely using gnuplot.
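The magnitude-of-impedance model the students fit in gnuplot can be written as a plain function, which also makes the limiting behavior easy to see: at low frequency the capacitor is open and |Z| → R1+R2, at high frequency C1 shorts out R1 and |Z| → R2. A sketch with illustrative (not measured) parameter values:

```python
import math

def electrode_impedance_magnitude(f, R2, R1, C1):
    """|Z| of the series R2 + (R1 parallel C1) electrode model at
    frequency f in Hz.  In the lab, R2, R1, and C1 would come from the
    students' fits; values below are illustrative only."""
    w = 2 * math.pi * f
    z = R2 + R1 / (1 + 1j * w * R1 * C1)   # complex impedance
    return abs(z)

R2, R1, C1 = 50.0, 5e3, 1e-6
print(electrode_impedance_magnitude(0.01, R2, R1, C1))   # ~ R1 + R2 = 5050
print(electrode_impedance_magnitude(1e6, R2, R1, C1))    # ~ R2 = 50
```

Hand-tweaking R2, R1, and C1 in a plot of this function against the measured points — exactly the prelab exercise described below — gives students intuition for which parameter controls which part of the curve before they turn the fit over to gnuplot.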
There will be a prelab exercise to set up plotting of the model and do a little hand tweaking of parameters to help them understand what each parameter changes about the curve.

Third, the students will repeat the measurements and fits for different concentrations of NaCl, from 0.01M to 1M. Seeing what parameters change a lot and what parameters change only slightly should help them understand the physical basis for the electrical model.

Fourth, students will make Ag/AgCl electrodes from fine silver wire. The two standard methods for this involve either soaking in chlorine bleach or electroplating. To reduce chemical hazards, we will use the electroplating method. As a prelab exercise, students will calculate the area of their electrodes and the recommended electroplating current. In the lab, they will adjust the voltage on the bench supplies until they get the desired plating current.

Fifth, the students will measure and plot the resistance of a pair of Ag/AgCl electrodes as a function of frequency (as with the stainless steel electrodes).

Sixth, if there is time, students will measure the potential between a stainless steel electrode and an Ag/AgCl electrode.

EE concepts: magnitude of impedance, series and parallel circuits, variation of parameters with frequency, limitations of the R+(C||R) model.
Electrochemistry concepts: at least a vague understanding of half-cell potentials, current density, Ag → Ag+ + e-, Ag+ + Cl- → AgCl, Fe + 2 Cl- → FeCl2 + 2 e-.
Lab skills: bench power supply, function generator, multimeter, fitting functions of complex numbers, handling liquids in proximity of electronic equipment.
Equipment needed: multimeter, function generator, power supply, stainless steel electrode pairs, silver wires, frame for mounting silver wire, resistors, breadboard, clip leads, NaCl solutions in different concentrations, beakers for salt water, secondary containment tubs to avoid salt water spills in the electronics lab.

9.
Pressure sensor and instrumentation amplifier

Students will design an instrumentation amplifier with a gain of 300 or 500 to amplify the differential strain-gauge signal from a medical-grade pressure sensor (the Freescale MPX2300DT1), to make a signal large enough to be read with the Arduino A/D converter. The circuit will be soldered on the instrumentation amp/op amp protoboard. The sensor calibration will be checked with water depth in a small reservoir.

Note: the pressure sensor comes in a package that exposes the wire bonds and is too delicate for student assembly by novice solderers. We will make a sensor module that protects the sensor and mounts the sensor side to a 3/4″ PVC male-threaded plug, so that it can be easily incorporated into a reservoir, and mounts the electronic side on a PC board with screw terminals for connecting to student circuits. This sensor is currently being prototyped, and if it turns out to be too fragile, we will use a Freescale MPX2050GP, which has a sturdier package, but is slightly less sensitive and more expensive. (It also isn’t made of medical-grade plastics, but that is not important for this lab.) Note that we are deliberately not using pressure sensors with integrated amplifiers, as the pedagogical point of this lab is to learn about instrumentation amplifiers.

EE concepts: differential signals, twisted-pair wiring, strain gauge bridges, instrumentation amplifier, DC coupling, gain.
Equipment needed: power supply, amplifier prototyping board, oscilloscope, pressure sensor mounted in PVC plug with breakout board for easy connection, water reservoir made of PVC pipe, secondary containment tub to avoid water spills in the electronics lab.

10. Class-D power amplifier

11. Electrocardiogram (EKG)

Students will design and solder an instrumentation amplifier with a gain of 2000 and bandpass of about 0.1Hz to 100Hz.
The amplifier will be used with 3 disposable EKG electrodes to display EKG signals on the oscilloscope and record them on the Arduino.

Equipment needed: instrumentation amplifier protoboard, EKG electrodes, alligator clips, Arduino, oscilloscope.

3. Systemwide Senate Regulation 760 specifies that 1 academic credit corresponds to 3 hours of work per week for the student in a 10-week quarter. Please briefly explain how the course will lead to sufficient work with reference to e.g., lectures, sections, amount of homework, field trips, etc. [Please note that if significant changes are proposed to the format of the course after its initial approval, you will need to submit new course approval paperwork to answer this question in light of the new course format.]

The combination of BME101 and BME101L is 7 units (21 hours per week). The time will be spent approximately as follows:

• 3.5 hours lecture/discussion
• 3.5 hours reading background and circuits text
• 3 hours reading lab handouts and doing pre-lab design activities
• 6 hours lab
• 5 hours writing design reports for lab

4. Include a complete reading list or its equivalent in other media.

Lab handouts: there is a 5- to 10-page handout for each week’s labs, giving background material and design goals for the lab, usually with a pre-lab design exercise. The handouts from the prototype run of the course can be found at http://users.soe.ucsc.edu/~karplus/bme194/w13/#labs

Data sheets: Students will be required to find and read data sheets for each of the components that they use in the lab. All components are current commodity components, and so have data sheets easily found on the web.

Other readings are associated with the lecture course.

5. State the basis on which evaluation of individual students’ achievements in this course will be made by the instructor (e.g., class participation, examinations, papers, projects).
Students will be evaluated on in-lab demonstrations of skills (including functional designs) and on the weekly lab write-ups.

6. List other UCSC courses covering similar material, if known.

CMPE 167/L (sensors and sensing technologies) covers some of the same sensors and design methods, but at a more advanced level. BME 101L would be excellent preparation for the CMPE 167/L course.

Physics 160 offers a similar level of practical electronics, but focuses on physics applications, rather than on bioengineering applications, and is only offered in alternate years.

7. List expected resource requirements including course support and specialized facilities or equipment for divisional review. (This information must also be reported to the scheduling office each quarter the course is offered.)

The course will need the equipment of a standard analog electronics teaching lab: power supply, multimeter, function generator, oscilloscope, computer, and soldering irons. The equipment in Baskin Engineering 150 (commonly used for EE 101L) is ideally suited for this lab. There are 12 stations in the lab, providing a capacity of 24 students since they work in pairs rather than as individuals. The only things missing from the lab stations are soldering irons and circuit board holders (such as the Panavise Jr.), a cost of about $45 per station. Given that a cohort of bioengineering students is currently about 35–40 students, two lab sections would have to be offered each year.

In addition, a few special-purpose setups will be needed for some of the labs, but all this equipment has already been constructed for the prototype run of the course.

There are a number of consumable parts used for the labs (integrated circuits, resistors, capacitors, PC boards, wire, and so forth), but these are easily covered by standard School of Engineering lab fees.
The currently approved lab fee is about $131, but may need some adjustment to change exactly what tools and parts are included, particularly if the students are required to buy their own soldering irons (a $20 increase).

The course requires a faculty member (simultaneously teaching the co-requisite Applied Circuits course) and a teaching assistant (for providing help in the labs and for evaluating student lab demonstrations). Because the lab is such a core part of the combined course, it requires faculty presence in the lab, not just coverage by TAs or group tutors.

8. If applicable, justify any pre-requisites or enrollment restrictions proposed for this course. For pre-requisites sponsored by other departments/programs, please provide evidence of consultation.

Students will be required to have single-variable calculus and a physics electricity and magnetism course. Both are standard prerequisites for any circuits course. Most of the labs can be done without calculus, but it is essential for the accompanying lecture course.

9. Proposals for new or revised Disciplinary Communication courses will be considered within the context of the approved DC plan for the relevant major(s). If applicable, please complete and submit the new proposal form (http://reg.ucsc.edu/forms/DC_statement_form.doc or http://reg.ucsc.edu/forms/DC_statement_form.pdf) or the revisions to approved plans form (http://reg.ucsc.edu/forms/DC_approval_revision.doc or http://reg.ucsc.edu/forms/DC_approval_revision.pdf).

This course is not expected to contribute to any major’s disciplinary communication requirement, though students will get extensive writing practice in the design reports (writing between 50 and 100 pages during the quarter).

10. If you are requesting a GE designation for the proposed course, please justify your request making reference to the attached guidelines.
No General Education code is proposed for this course, as all relevant codes will have already been satisfied by the prerequisites. 11. If this is a new course and you are requesting a new GE, do you think an old GE designation(s) is also appropriate? (CEP would like to maintain as many old GE offerings as is possible for the time being.) No General Education code is proposed for this course, as all relevant codes (old or new) will have already been satisfied by the prerequisites. ## 2013 March 21 ### Student writing Filed under: Circuits course — gasstationwithoutpumps @ 08:59 In How does blogging about science benefit students?, Sandra Porter recommends that students (specifically biotech students at Portland Community College) keep a blog: My hypothesis is that a science blog for a science student can serve the same purpose that a portfolio serves for an artist or a set of articles serves for a writer. Your blog can be your record of accomplishments. Not only can your blog document your work, your blog can show that you can write, that you can spell (not a skill to take for granted), and can give you a chance to describe what you’ve done. She describes her first job interview and what she is doing to avoid similar embarrassment for her students. She has students in one class keep a professional lab notebook and bring it to interviews—showing that they can keep a proper lab notebook and providing documentation to support their assertion of knowing various protocols. Student blogging is another approach she is experimenting with. She encourages the students to use blogs as an on-line notebook (much like I’ve been doing on this blog for the circuits course), and to include the URL for the blog in resumes and cover letters for jobs. 
If interviewers are interested, they can check out a few posts on the blog to see if the student can write coherently (a very important skill that cannot be automatically assumed of college graduates) and, if there are search boxes and appropriate tags on the posts, whether the students know the protocols and equipment that the job requires. In a subsequent post, The ten commandments of student science blogging, she talks about the guidelines she gives students for their blogs, to keep them from accidentally doing unprofessional things that would hurt, rather than help, their chances of getting a job. The biggest problem I see with her recommendations is that the only audience she has identified for the student is a mysterious “job interviewer” whom the students have never met. Writing for an unknown, difficult-to-imagine audience is hard. Writing for an imagined expert (an interviewer or professor) almost always brings out the worst writing, with inflated diction, misused jargon, and awkward ungrammatical sentences. When writing to show that they know something to someone who knows it better, students stumble over nearly every sentence—leaving out important concepts and tossing in irrelevant minor points in a vain attempt to impress. I think it might benefit the students to be given a more specific audience—one that they can picture writing to directly and actually informing of something new. For an online lab notebook, it could be students at other schools (“look at the cool stuff we get to do here!”) or future students in the same lab (“never use the pink labels in the freezer—the glue on them cracks in the cold and the labels fall off”), both of whom are imaginable audiences. The advice I gave in my circuits course is the standard advice I give to students: Write to students taking the course next year. Assume they know what you knew coming into the course, but explain to them anything that you didn’t already know. 
Make the report detailed enough that a student reading it could duplicate your work without having access to the original assignment—though they might have to look a few things up on the web or in textbooks. (Provide pointers to appropriate readings, when possible.) Explain not just what you did, but why, and provide warnings to help your reader avoid mistakes that you made. Most of the students in the circuits course got this idea, and the reports were mostly coherent and directed at the right audience, though they were a little light on pointers to appropriate reading. One thing that Sandra Porter doesn’t mention in her “ten commandments”, but which I had to really rant about in my course: “Get the details right!” Sandra mentions spelling and punctuation, which are markers for attention to details, but the accuracy of the content is far more important. I can forgive an occasional typo (though failure to run text through a spell checker indicates a level of sloppiness that would disturb me as a job interviewer), but the main engineering content needs to be checked and double-checked, both for consistency with the lab notebook notes and for general sanity (recompute the corner frequency from the RC values in the schematic—is that what was intended?). If you are giving a circuit schematic, every wire must be correctly connected, every component must have the correct value, and pin numbers should be correct. The students in the circuits course had incredible difficulty with checking their own and each other’s work for accuracy, and obvious errors (like power-ground shorts) occurred on most of the assignment first drafts. For a biotech student, the equivalent would be getting the wrong reagent in a protocol, putting ice in the autoclave, or replacing µg with mg. The rate of errors in schematics did not drop much over the quarter, though I felt it should have. 
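That corner-frequency sanity check is quick enough to script. A minimal sketch (the R and C values here are invented for illustration, not taken from any student schematic) of recomputing a first-order corner frequency from schematic values:

```python
import math

def corner_frequency(r_ohms, c_farads):
    """Corner (cutoff) frequency of a first-order RC filter: f_c = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# 10 kOhm with 0.1 uF gives a corner near 159 Hz -- is that what the design intended?
print(round(corner_frequency(10e3, 0.1e-6), 1))  # 159.2
```

Comparing the number this spits out against the corner frequency claimed in the report takes seconds once a helper like this is at hand.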
Other writing problems (like poor audience assessment, overuse of passive, or misuse of “would”) were generally fixed after being pointed out, but the sloppiness in the circuit diagrams continued to be a problem all quarter. By “sloppiness” I don’t mean poor drawing skills, as most of the students used CircuitLab to draw neat schematics, but semantic errors that changed the meaning of the circuits. If anyone has ideas for improving student attention to details in schematics, I’d appreciate hearing them. ## 2013 March 20 ### Bar exam for circuits class Filed under: Circuits course — gasstationwithoutpumps @ 13:45 Front of T-shirt Back of T-shirt. The silkscreen is intended as a white silkscreen over a black T-shirt. Because the applied circuits course did not have a final exam, the students asked if we could get together at a bar for a beer during the exam time instead (which my son quipped should be considered a “bar exam”). Because we had some underage students in the class, I chose Caffe Pergolesi as a site (they serve beer but also coffee, hot chocolate, and coffee-house snacks). The café was surprisingly crowded for 4 p.m. on a Tuesday (probably due to it being the first day of exam week), and I had to sit out on the deck, because there were no tables available inside. At first I was a bit worried that no one would show up (a common problem for parties I’ve tried to have in the past, so I’ve stopped attempting to have parties), especially when no one was there by 4:10. But the students started trickling in and we eventually had all the students in the class—even those who had sent e-mail saying they couldn’t make it. I showed the students the T-shirt design, modified according to their suggestions the day before, and they approved it. 
I still need to check with the screen printer that the SVG files I have will work—I think that the back is ok (it is a single black rectangle for the T-shirt with a single path for the white layer on top), but I’m worried about the front. The text, slug, and small thought bubbles should be fine, but the black images on the large thought bubble are currently objects on top of the white thought bubble, and I’ve not figured out how to get Inkscape to make them cuts through the thought bubble to the black T-shirt underneath. The Inkscape “path difference” operation, which worked for the back of the T-shirt, doesn’t do the right thing with these images. So far I’ve gotten 7 orders for T-shirts from the class (including one for me and one for my son), and I’m hoping for another 5 or 6 to amortize the setup costs. I think that we’ll have about $90 in setup plus $12/shirt, so 7 shirts would be about $25 each and 12 shirts would be about $20 each (long sleeve shirts a couple of bucks more). I used the time to get feedback from the class about how it should be modified in future, starting from a handout I’d given them the day before. Here are some of my notes from the discussion. If I’ve missed anything, I hope that students will send me e-mail. • Parts and tools to eliminate: velcro cable ties (unused), long-nose pliers (low quality and not used), thermometers (change to lab equipment), LEDs (not used). • Parts and tools to add: inductor for class D amplifier, soldering iron. A soldering station like the one I have (and similar to the ones they used in the lab this year) would add $20 to the cost of the course. • It may be worthwhile upgrading the screwdriver set, as the under $2 set was really low-quality and some of the screwdrivers failed (blade slipped in handle, so that screwdriver did not turn with handle). 
• I had been worried about the high price for the large assortment of resistors ($13.35 for 1120 resistors, 10 each of 112 sizes), but the students liked that they always had whatever resistor size they needed, and were contemptuous of the approach used in EE101 of providing students with only about 20 resistors of the precise sizes that the faculty had decided the students would use. • One student suggested having a protoboard for designing the class D amplifier, since that is something they might want to keep.  I’ll have to think about that, as it doesn’t strike me as an immediate win, though I can see wanting to keep the power amplifier.  One problem is that the class-D amplifier is not as generic a project as the instrumentation amplifiers, so it is harder to come up with a general-purpose protoboard. Also, most students ended up having to do a lot of experimenting to get the biasing to work out for the power FETs, which could be difficult on a PC board.  The class-D amplifier also needs a bit more space than the two instrumentation amp projects, so a PC board for it would have to be bigger ($2/board instead of $1/board).  Having the same protoboard for both the pressure-sensor lab and the EKG lab meant that time spent learning how to use the protoboard was amortized over two projects, which would not be the case for a special-purpose power-amp board. • One student suggested adding a voltmeter for home use, but the problem there is that voltmeters that can read AC voltage correctly for 100kHz signals are mostly in the $100-and-up range. The $5 voltmeters that could be put in a kit for everyone to buy are not useful for some of the labs. • Students suggested that the first quiz should be given as homework instead of a quiz—a good idea, since the questions were too hard for the students as a quiz, and having time to think about them and discuss them with each other would lead to more learning. 
• The students do not think that adding a textbook to the class would help, but being directed to the All about Circuits readings more often (including the worksheets) might help.  They generally found the Wikipedia articles too detailed and too broad to be helpful in learning the material.  They got fairly good at searching the web for keywords and finding lecture powerpoints from other courses that were relevant.  No one found a steady source of good material though—the searches tended to find different sources for each topic.  The students reported being able to find data sheets fairly easily and consulting them fairly often, so at least one of the goals of the course was met. • One student reported that soldering the instrumentation amp for the pressure sensor lab seemed a bit pointless to some, as they don’t buy the pressure sensors to connect to, so a permanent board is not much use. The benefits (soldering practice and less noise pickup from long wires) may not justify the extra effort of soldering. • We discussed re-ordering the labs, moving the electrode measuring and modeling lab later, and the sampling and aliasing lab earlier.  A possible new order is
1. Thermistor
2. Sampling and aliasing
3. Microphone
4. Audio amp
5. Hysteresis oscillator
6. FET and phototransistor
7. Electrode modeling
8. Pressure sensor
9. Class-D power amplifier
10. EKG
That order could cause some difficulty for the sampling lab, which needs RC filter design (hence complex impedance), so maybe swapping the mic and sampling labs would be better. • We also discussed the idea of having 2 labs a week (both Tuesday and Thursday), with a data analysis day in between (to teach gnuplot scripting and fitting models).  None of the students had done model fitting (other than straight lines) in any other course, so this is a skill worth spending a bit more time on in class.  
Having 2 105-minute labs a week (the standard TTh time slot) would probably not be enough, as that is barely more than the weekly 3-hour lab this quarter, and the setup time would probably eliminate any gains.  I’d probably have to schedule 2 time slots per lab (say 10–1:45, 2–5:45, or 6–9:45).  If the course grows to full size, I would be spending 8–12 hours in the lab on Tuesdays and Thursdays, without break. • If I do have more lab time next year, I could start a little slower, using the first week to have students learn to identify all the parts, mark the capacitor bags with the capacitor sizes, learn to use the ohmmeter and power supplies, … .  Some of the later labs would have no more time than this year, but some of them needed no extra time. • Students would like several explanations to come earlier in the course relative to the labs—FETs before the microphone lab, PN junctions and phototransistors before the tinkering lab, block diagrams earlier in the course, … .  I agree, and moving the first labs a week later could help with that.  I’ll be doing a day-by-day topic planner before resubmitting the course approval paperwork.  One problem with teaching block diagrams earlier is that—like outlining in writing—they’re really only useful once the complexity of the design gets high enough that subdividing the problem is useful. • The students were pretty pleased with the data logger software that my son wrote.  The biggest complaint was about the logger freezing when recording a long run at high sampling rate (a known problem).  I believe that he is developing a fix for that problem, which will generally result in faster live charts.  Students also like the idea of being able to produce eps, pdf, png, or svg output directly from the data logger, so that they didn’t feel the need to make screenshots.  Providing starter gnuplot scripts (which they could then add to in order to do model fitting) was also attractive to them.  
There was one request for icon-based executables (avoiding the command line), but I actually prefer for engineering students to have to learn to use command-line tools—I was shocked that they had gotten to their senior year and had not learned how to use command lines. • Students thought that the current prereqs for the course were fine—they did not see a need to add a programming prereq, unless the course was changed in a major way to include Arduino programming (which I’m not tempted to do, as there are already courses on campus covering that).  They did think that the course needed to remain an upper-division course, but that sophomores might be able to handle it by the Spring (which is when it will be scheduled in future). • Some students thought that the course could be reduced to 9 labs (from 10)—mainly to reduce the number of reports written.  I think that we could achieve that by putting the microphone lab and audio amp lab together and having 3 lab sessions with only one report.  We might be able to combine the hysteresis lab and the tinkering (FET and phototransistor) lab into one report also. • The students really liked the undergrad group tutor we had—saying that he was the best TA they’d ever had.  I believe that he is graduating this year (as are all the students in the course), so I don’t know whether we’ll be able to get as good an assistant next year. • Students liked having learned gnuplot, though they initially struggled with it and hated it.  Once they got past the initial learning, they found it useful for senior theses and courses other than the circuits course. 
• Overall, students thought that the class had met most of the learning goals I had set for it, and several of them wished the course had been available to them earlier—some of them might even have opted for the bioelectronics track (they were all biomolecular track), had they taken this course early enough (and if EE would accept it as prereq to the other upper-division courses needed for bioelectronics).  I’m certainly going to try to convince the EE faculty that this course can serve as more than adequate preparation for courses like signals and systems (better than the existing circuits course). The students in the class gave me two bottles of wine as a thank-you for the course—that is a first for me in 30 years of being a professor.  Most often students are glad to have survived my courses, but don’t generally appreciate them until several years later. The student appreciation certainly isn’t because I’ve been grading leniently—the class is mostly in the B- to B+ range, and some had to go through 2 or 3 drafts of the lab reports to get to even that level. There may be one or two A- grades (I still have the last 2 lab reports to grade, so I don’t know yet—I’m hopeful, but I’m not going to give out As unless the work justifies them). I think that they recognized that I was genuinely interested not just in the material but in getting them to do real engineering design and to think like engineers.  Several have taken to heart the “try it and see” mantra and have learned to appreciate the value of “sanity checks”.  I think that the value of a UC education lies mainly in these high-contact “artisanal” courses, not in the mega-lectures and cookbook labs that they have mostly been suffering through.  (To be fair, many of them are working on senior theses in various faculty labs, so they have had high-contact educational experiences—just not structured as a required course.) 
## 2013 March 18 ### Last day of circuits class Filed under: Circuits course — gasstationwithoutpumps @ 18:06 Today was an attempt to cover questions that students had sent me over the weekend. I talked a little about PN junctions (using the analogy of diffusion of sodium and potassium across the cell membrane from last week’s guest lecture to discuss the voltage that is produced by diffusion of holes and electrons across the PN junction).  We then covered diodes, photodiodes (and photovoltaic cells), bipolar transistors, and phototransistors.  That whole lecture needs to come before the phototransistor lab. I then talked about class A, B, and C amplifiers: all of which are the same amplifier structure, but with different bias voltages.  I described (in rough terms) the efficiency of each circuit and gave an application for a class C amplifier driving an LC resonant load.  I then gave a crude bipolar class AB output circuit, as basically two class-B circuits with opposite polarities. We also discussed (very briefly) brushed DC motors and stepper/brushless motors. We ended with some redesign of the T-shirt for the course.  I’ll be showing them a new draft of the design tomorrow and taking their orders. Once the design is finalized, I’ll put a rendition of it on this blog—though it will not be the original design, since I’m doing this one in SVG using Inkscape, and WordPress.com still doesn’t support svg images. ## 2013 March 16 ### Twenty-eighth day of circuits class Filed under: Circuits course — gasstationwithoutpumps @ 12:29 Yesterday’s lecture was a mish-mash of odds and ends that we hadn’t covered well previously, and questions that had come up. I started with talking about where the EKG signal comes from.  The guest lecturer on Wednesday had done a good job of covering action potentials, but I pointed out that we were not sticking electrodes into their heart muscle cells: we had access to only the outside of the cells.  
So where was the differential signal we were measuring coming from?  This had been one of my first puzzles in trying to understand how an EKG works, and they were mostly just as clueless about it as I had been.  The explanation that I settled on was looking at each of the cells as a little capacitor with a battery that charged it and a switch that discharged it, with one side accessible to us.  There is a resistance from each cell to each of the differential electrodes, and from the differential electrodes to the body reference electrode, so that the measurement electrodes act like they are in a voltage divider between the cell and the reference electrode.  The voltage difference between the differential electrodes depends on the difference in resistance from the cell to the electrodes, and the signal we see depends on the change in position of the discharge wave.  If everything depolarized at the same time, we’d see almost no difference voltage, but as the wave sweeps from left-to-right (or right-to-left) we see difference voltages.  It’s not a perfect explanation of where the EKG signal comes from, but it is better than leaving them thinking that they are seeing the action potential directly. After that a question came up about the 60Hz noise in the EKG signal, and where it came from.  I talked about the loop formed by the LA and RA wires and the body between them as an electromagnetic pickup, and how we could reduce the electromagnetic pickup by twisting the wires together more to reduce the area of the loop.  I also discussed capacitive coupling of 60Hz into the wires, reminding them of the capacitive touch sensor they had made earlier in the quarter.  We talked a bit about shielding cables and Faraday cages.  While on the subject of noise, I also mentioned the problem of microphonics in the nanopore equipment.  We have not discussed thermal noise or other problems of designing for very small signals. 
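The divider picture can be put into numbers with a toy calculation. In the sketch below (every voltage and resistance is invented for illustration, not a physiological measurement), one depolarizing cell is seen through resistive paths to two measurement electrodes, each forming a divider against the reference electrode; the differential signal appears only when those path resistances differ:

```python
def electrode_voltage(v_cell, r_cell_to_electrode, r_electrode_to_ref):
    """Electrode voltage for one cell seen through a resistive divider:
    cell -> measurement electrode -> body reference electrode."""
    return v_cell * r_electrode_to_ref / (r_cell_to_electrode + r_electrode_to_ref)

v_cell = 0.09  # a 90 mV membrane swing, roughly an action potential

# Equal resistance to both electrodes: no differential signal at all.
diff_sym = (electrode_voltage(v_cell, 50e3, 10e3)
            - electrode_voltage(v_cell, 50e3, 10e3))

# The discharging cell is closer to the LA electrode than the RA electrode:
diff_asym = (electrode_voltage(v_cell, 30e3, 10e3)
             - electrode_voltage(v_cell, 70e3, 10e3))

print(diff_sym)                    # 0.0
print(round(diff_asym * 1000, 2))  # millivolts of differential signal
```

As the discharge wave moves, the effective resistances to the two electrodes change, so the difference voltage traces out the wave's progress, which is the point of the explanation above.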
Students asked where the 60Hz hum in their class-D power amplifiers came from, and I talked about ground loops and noise pickup in their power lines.  The op amps they were using had excellent power-supply noise rejection, but the voltage reference they were using (a pair of resistors as a voltage divider and a unity-gain buffer) provides an excellent path for noise in the power supply to be coupled into the amplifier.  I talked about two ways to reduce the hum: using twisted cables for the DC power, to reduce the electromagnetic pickup of 60Hz noise, and using a Zener diode reference instead of a simple voltage divider for the Vref signal.  I’m wondering whether I should add Zener diodes to the parts kit next year, or even whether I should get an adjustable voltage reference like the TL431ILP.  Either one is about 20¢ each in the small quantities that we would need.  The Zener diode is easier to explain, but the adjustable voltage reference is more versatile (the voltage is set by 2 external resistors as a voltage divider with the output of the voltage divider held at 2.5V) and has less variation in voltage with current (output impedance of 0.2Ω instead of 5–30Ω).  One minor problem with the TL431LP is that the lowest reference voltage it provides is 2.5V (using a wire from the “cathode” to the reference feedback input).  Since we are doing everything with power supplies of 5V or more, this shouldn’t be a problem for us. After talking about noise, a question came up about the fat wires used for loudspeakers in stereo systems and whether they were shielded.  I managed to get the class to come up with the correct explanation: that the wires are fat to reduce resistance and avoid I²R power losses.  Currents to loudspeakers tend to be pretty big, since the resistance of the speaker itself is typically only 8Ω.  
We talked about microphone cables being shielded (nowadays, I think that they are mostly twisted pairs inside a foil or metalized plastic shield, rather than coaxial cable) but that the tiny voltages and currents that speaker wires would pick up would not matter, since the signals were not amplified.  I also mentioned that solar panels were generally wired with fat wires also, to reduce the I²R power losses, since the voltages of the solar panels were fairly low (12V or 24V). The students did not come up with any questions, so I pulled out one that had been asked weeks ago in lab: how sine-wave oscillators work.  The students had built square-wave oscillators with Schmitt triggers, and had used function generators, but had not seen any sine-wave oscillators.  I decided to do a classic oscillator: the Wien-bridge oscillator, since it uses the building blocks and concepts they are familiar with: a differential amplifier and two voltage dividers as a bridge circuit.  I started out with a generic bridge (just an impedance on each arm) and we got 3 formulas relating the nodes of the amplifier: $V_{out} = A(V_p - V_m)$, $V_m= V_{out} Z_1/(Z_1+Z_2)$, and $V_p = V_{out} Z_3/(Z_3+Z_4)$, from which we derived the stability condition $Z_1 Z_4 = Z_2 Z_3$.  This was all review for them, as they had had bridge nulling on a quiz. The Wien bridge oscillator circuit. I initially gave it with just a resistor, not a light bulb, for R1, since the analysis is easier that way. The neat thing about the light bulb is that it provides an automatic gain control to set its resistance to  R2/2. I then gave them the circuits for each arm of the bridge (just resistors on the negative feedback divider, and a series RC and a parallel RC for the arms of the positive feedback divider).  
Rather than do complex impedance calculations, we just did Bode plots of the impedance of each of the RC arms, from which we could see that the voltage divider had zero output at DC and ∞ frequency, with a maximum at $1/(2\pi RC)$.  We were running out of time, so I did not derive with them that the gain of the positive voltage divider was 1/3 at that frequency, but jumped immediately to describing the use of an incandescent bulb in the negative feedback circuit to provide automatic gain adjustment (though I just waved my hands at it, not really showing how the thermal feedback mechanism worked).  I also managed to mention the historical importance of this oscillator design as the first product of Hewlett-Packard, and the start of “Silicon Valley”. The range over which the automatic gain control with the light bulb works is determined by the range of resistance for the bulb filament. When the bulb is cold, its resistance must be less than R2/2. When the output is a sine wave with amplitude equal to the power supply, the resistance of the bulb filament must be larger than R2/2.  When the circuit is stable, the RMS voltage on the  bulb will be 1/3 the RMS voltage of the output, and the bulb filament resistance will be R2/2.  Nowadays other non-linear components are used rather than bulbs for the gain control, since bulbs suffer from microphonics and (for low frequencies) insufficient low-pass filtering (they are relying on the thermal mass of the filament to provide the low-pass filter of the automatic gain control). On Monday, I plan to answer other questions students have, if they can come up with anything that confused them over the quarter.  If they can’t come up with any questions for me and send them to me this weekend, then maybe I’ll have to come up with some questions for them as an impromptu quiz.
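The Wien-bridge numbers are easy to verify numerically. A quick sketch (the matched R and C values are arbitrary) evaluating the positive-feedback divider, series RC over series-plus-parallel RC, at f = 1/(2πRC): the gain comes out exactly 1/3 with zero phase, which is why the stable operating point has the bulb at R2/2, putting the negative divider at 1/3 as well:

```python
import cmath
import math

R, C = 10e3, 10e-9                          # matched values in both RC arms (arbitrary)
w = 1.0 / (R * C)                           # omega at f = 1/(2*pi*R*C)

z_series = R + 1.0 / (1j * w * C)           # series RC arm of the positive divider
z_parallel = 1.0 / (1.0 / R + 1j * w * C)   # parallel RC arm of the positive divider
gain = z_parallel / (z_series + z_parallel)

print(round(abs(gain), 6))                             # 0.333333
print(round(abs(math.degrees(cmath.phase(gain))), 6))  # 0.0
```

At any other frequency the divider's phase is nonzero, so the bridge condition $Z_1 Z_4 = Z_2 Z_3$ can only be satisfied, and the loop can only oscillate, at 1/(2πRC).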
## ΔGionization $\Delta G^{\circ}= \Delta H^{\circ} - T \Delta S^{\circ}$ $\Delta G^{\circ}= -RT\ln K$ $\Delta G^{\circ}= \sum \Delta G_{f}^{\circ}(products) - \sum \Delta G_{f}^{\circ}(reactants)$ Abigail Menchaca_1H Posts: 104 Joined: Sat Sep 07, 2019 12:19 am ### ΔGionization Is there a different way to calculate ΔG ionization? Rhea Shah 2F Posts: 97 Joined: Thu Jul 25, 2019 12:17 am ### Re: ΔGionization I don't think so! It should be the same as the regular way of calculating standard gibbs free energy. Ashley Nguyen 2L Posts: 103 Joined: Sat Aug 17, 2019 12:18 am Been upvoted: 1 time ### Re: ΔGionization I believe that deltaG ionization is calculated the same as normal standard gibbs free energy. Diana A 2L Posts: 106 Joined: Sat Aug 17, 2019 12:16 am Been upvoted: 1 time ### Re: ΔGionization Would delta G ionization be the same for other temperatures besides 25 degrees Celsius? Let's say for example you're taking the delta G of ionization for a reaction at 30 degrees Celsius, would you use just the standard equations for delta G? I hope that question makes sense, please someone help I want to know. Tauhid Islam- 1H Posts: 64 Joined: Fri Aug 02, 2019 12:15 am ### Re: ΔGionization I'm pretty sure all standard state quantities are dependent on temperature. Gibbs free energy change is a function of temperature so at different temperatures, you would have different energies. Diana A 2L Posts: 106 Joined: Sat Aug 17, 2019 12:16 am Been upvoted: 1 time ### Re: ΔGionization Tauhid Islam- 1H wrote:I'm pretty sure all standard state quantities are dependent on temperature. Gibbs free energy change is a function of temperature so at different temperatures, you would have different energies. In that case, how would you calculate Gibbs Free Energy at non-standard temperature? Ying Yan 1F Posts: 101 Joined: Fri Aug 02, 2019 12:16 am ### Re: ΔGionization I don't think there is another way, calculating for delta Gionization is the same as calculating for delta Go. 
Ellen Amico 2L Posts: 101 Joined: Thu Sep 19, 2019 12:16 am ### Re: ΔGionization Nope! you can use any of the equations for calculating deltaG. I think it's just a way to label it relating to the reaction. Diana A 2L Posts: 106 Joined: Sat Aug 17, 2019 12:16 am Been upvoted: 1 time ### Re: ΔGionization Ellen Amico 2L wrote:Nope! you can use any of the equations for calculating deltaG. I think it's just a way to label it relating to the reaction. Thank you! I understand now:) DominicMalilay 1F Posts: 87 Joined: Wed Sep 30, 2020 9:36 pm ### Re: ΔGionization I don't believe there is another method, and there shouldn't have to if you are given all the right information on the midterm/homeworks! Gicelle Rubin 1E Posts: 66 Joined: Fri Oct 02, 2020 12:16 am ### Re: ΔGionization As said by others, I don't believe so :(
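To make the non-standard-temperature answer concrete: if ΔH° and ΔS° are treated as approximately temperature-independent, ΔG at 30 °C is just ΔG = ΔH° − TΔS° re-evaluated at T = 303.15 K, and K then follows from ΔG = −RT ln K. A sketch with invented numbers (not from any homework problem):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def delta_g_kJ(dH_kJ, dS_J_per_K, T_K):
    """dG = dH - T*dS, in kJ/mol, assuming dH and dS roughly constant with T."""
    return dH_kJ - T_K * dS_J_per_K / 1000.0

def equilibrium_constant(dG_kJ, T_K):
    """K from dG = -RT ln K."""
    return math.exp(-dG_kJ * 1000.0 / (R * T_K))

# Hypothetical ionization: dH = +50 kJ/mol, dS = -100 J/(mol*K)
for T in (298.15, 303.15):  # 25 C and 30 C
    dG = delta_g_kJ(50.0, -100.0, T)
    print(T, round(dG, 3), f"{equilibrium_constant(dG, T):.2e}")
```

With a negative ΔS°, raising the temperature makes ΔG more positive and K smaller; the "standard equations" are unchanged, only T is.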
https://graz.pure.elsevier.com/en/publications/sample-variance-in-free-probability
Sample Variance in Free Probability

Wiktor Ejsmont, Franz Lehner

Research output: Working paper › Preprint

Abstract: Let $X_1, X_2,\dots, X_n$ denote i.i.d. centered standard normal random variables; then the law of the sample variance $Q_n=\sum_{i=1}^n(X_i-\bar{X})^2$ is the $\chi^2$-distribution with $n-1$ degrees of freedom. It is an open problem in classical probability to characterize all distributions with this property and in particular, whether it characterizes the normal law. In this paper we present a solution of the free analog of this question and show that the only distributions, whose free sample variance is distributed according to a free $\chi^2$-distribution, are the semicircle law and more generally so-called \emph{odd} laws, by which we mean laws with vanishing higher order even cumulants. In the way of proof we derive an explicit formula for the free cumulants of $Q_n$ which shows that indeed the odd cumulants do not contribute and which exhibits an interesting connection to the concept of $R$-cyclicity.

Original language: English. Published: 22 Jul 2016. Publication series: arXiv.org e-Print archive, Cornell University Library. Keywords: math.OA, math.PR, 46L54 (Primary), 62E10 (Secondary).
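The classical fact the abstract takes as its starting point — that $Q_n$ for i.i.d. standard normals follows a $\chi^2$-distribution with $n-1$ degrees of freedom — can be sanity-checked by simulation, since $\chi^2_{n-1}$ has mean $n-1$ and variance $2(n-1)$. A quick sketch (mine, not from the paper):

```python
import random

def sample_q(n, rng):
    # Q_n = sum_i (X_i - Xbar)^2 for i.i.d. standard normal X_i
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs)

def moments_of_q(n, trials=20000, seed=0):
    # Monte Carlo estimate of the mean and variance of Q_n
    rng = random.Random(seed)
    qs = [sample_q(n, rng) for _ in range(trials)]
    mean = sum(qs) / trials
    var = sum((q - mean) ** 2 for q in qs) / trials
    return mean, var

# chi^2 with n-1 degrees of freedom has mean n-1 and variance 2(n-1)
m, v = moments_of_q(5)  # should come out near 4 and 8
```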
https://payyoutodo.com/simulation-optimization
# Simulation-Optimization Assignment Help

Simulation-Optimization] is given. Now we present a simple instance of our algorithm, illustrated in Figure [fig:example-instance] in an NIMP formulation. In [Fig. [fig:example-with-MNN]](a), we simulate a single NUT training with MNN training loss equal to $0.02$. This example demonstrates the superior performance of the classic $\eta^{-3}$-regressive MNN with Algorithm [alg:1] compared to ours. In another NIMP instance, we can see that some MNN values of $\{{\dot{f}}, {\dot{g}}\}$ seem to lead to the worst performance. Let $\phi_1$ represent the objective function of MNDT, and $\phi_2$ the objective function of MNN with Algorithm [alg:1-reg], in which ${\vartheta}$ denotes the learning rate for NUT, and ${\chi}$ the log-likelihood $\log{\phi}(\theta)$ along with $\phi_1$, $\phi_2$, ${\chi}$, the objective function $\log{\bar{\psi}}(\theta)$ and ${\hat{\vartheta}_1}(\theta)$; $\hat{\phi}$ represents the log-likelihood loss functions of NUT, TUTTIL, and $MNN$ training instances, respectively. In [Fig. [fig:example-with-MNN]](b), we perform a simulation based on $\phi$, $\chi$, and $\bar{\psi}$. The training instance is obtained by the $MLP(\theta)$ algorithm, and is executed from top to bottom in the following steps:

1. **Run Algorithm [alg:1](b)** in the $\ell^2$ time domain.
2. **Add a training setting (using Algorithm [alg:1-reg])** to the standard MNN training setting,
3. **Solve Algorithm [alg:1-reg]** on a DNN instance in the context of [MPKM]{} with training setup (${\vartheta},{\chi},{\vartheta}$).
4. **Add training setting to the standard MNN training setting** (${\vartheta}'$, ${\chi}'$, ${\vartheta}$).
5. **Add layer details (${\hat{\vartheta}_1}(\theta),\hat{\phi}$) to the standard MNN training setting** (${\vartheta}'$).
6.
https://gamedev.stackexchange.com/questions/64348/problems-when-rendering-code-on-nvidia-gpu
# Problems when rendering code on Nvidia GPU

I am following the OpenGL GLSL Cookbook 4.0. I have rendered a tessellated quad, as you see in the screenshot below, and I am moving the Y coordinate of every vertex using a time-based sin function, as given in the code in the book. This program, as you see from the text in the image, runs perfectly on the built-in Intel HD graphics of my processor. But I have Nvidia GT 555M graphics in my laptop (which, by the way, has switchable graphics), and when I run the program on the graphics card, the OpenGL shader compilation fails. It fails on the following instruction:

pos.y = sin.waveAmp * sin(u);

giving the error: Error C1105 : Cannot call a non-function

I know this error is coming from the sin(u) call which you see in the instruction, but I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card. It runs fine with sin(u) on Intel HD 3000 graphics. Also, if you notice, the program is almost unusable with Intel HD 3000 graphics: I am getting only 9 FPS, which is not enough. It's too much load for the Intel HD 3000. So, is the sin(x) function not defined in the OpenGL specification as implemented by Nvidia's drivers, or is it something else?

You seem to have a variable called sin, and that interferes with the built-in function. Try changing the variable name to something else.

• I dunno, if this is really the LOC that throws the exception, given the unmistakable exception Cannot call a non-function I would say that sin in this scope is an object and not a function. @2am What GLSL compiler are you using? – Lorenz Lo Sauer Nov 2 '13 at 16:30
https://www.physicsforums.com/threads/implications-of-squeeze-theorem.335048/
# Implications of Squeeze Theorem

1. Sep 5, 2009

### honestrosewater

I was just introduced to the [Squeeze theorem](http://en.wikipedia.org/wiki/Squeeze_theorem) in the second week of Calc 1. This theorem implies that $\lim_{x \to c} f(x)$ exists under certain circumstances, and I want to find out and prove what (at least some of) those circumstances are and what the value of the limit is. Roughly, my idea so far is to use Squeeze and supremums and infimums to find the one-sided limits, giving me the two-sided limit when they agree. I would like to know if I am messing up somewhere or if anyone has some time-saving hints for me. (This is not homework, so I don't have much time to spend on it.) All functions are real-valued.

Theorem X, rough draft. I need to cover 4 cases: increasing from the left, decreasing from the left, increasing from the right, and decreasing from the right.

(a) If f is bounded and increasing (or decreasing) on [b,c), then $\lim_{x \uparrow c} f(x)$ = the supremum (or infimum) of the image of f on [b,c).

(b) If f is bounded and increasing (or decreasing) on (c,d], then $\lim_{x \downarrow c} f(x)$ = the infimum (or supremum) of the image of f on (c,d].

PROOF (a-increasing) Let s = the supremum of the image of f on [b,c). The pair of squeezing functions will be g(x) = s and h(x) = ((s - f(b))/(c - b))x, i.e., a horizontal line through s and a secant line to f passing through f(b) and s. That these two functions will squeeze f looks obvious since f is increasing on the interval. And the supremum should exist since I am dealing with subsets of R. I also need to prove that the limits of my squeezing functions g(x) and h(x) always exist and are equal approaching any c. Both limits exist since g(x) is constant and h(x) is linear and defined everywhere since c > b. Should I note that c > b or is this assumed for an interval? I'm not sure how to prove that the limits are equal. The other three cases would work similarly. My last step is applying Squeeze.
Is there a one-sided version of Squeeze? If not, I would have to combine the g(x) and h(x) that I get from the left with the ones that I get from the right. If the one-sided limits are equal, g should be the same horizontal line through s, and h should be either a line through s or an absolute value function with vertex s. So combining them should be straightforward. Will this work? Will f need to be continuous on the interval?

Last edited by a moderator: May 4, 2017

2. Sep 7, 2009

### n!kofeyn

I'm not really for sure what you're talking about. The squeeze theorem says: If $g(x)\leq f(x)\leq h(x)$ for all x in an interval containing a and $$\lim_{x\to a} g(x) = \lim_{x\to a} h(x) = L$$, then $$\lim_{x\to a} f(x) = L$$. The squeeze theorem gives you exactly which circumstances must be satisfied for the limit of f to exist, and it gives you the value of the limit of f if these circumstances are satisfied! So I'm not for sure why you are trying to find these circumstances or what the value of the limit of f is. Are you trying to prove this theorem? When you are proving something, you don't need to prove what is given (i.e. you don't have to prove that the limits of g and h exist and are equal, you assume that). You also don't need to know anything about the continuity of g, f, or h to apply the squeeze theorem.

3. Sep 7, 2009

### slider142

If you are attempting to prove the Squeeze theorem, one problem is that you assume that f is either increasing or decreasing on some interval, which is not necessarily true and is not implied by any assumption in the theorem. As an illustration, consider the function: $$f(x) = \left\lbrace\begin{array}{ll}x^2\sin\left(\frac{1}{x}\right), & x\neq 0\\ 0, & x=0\end{array}\right.$$ With respect to the limit as x approaches 0, the squeeze theorem is applicable ($g(x) = -x^2, h(x) = x^2$) and gives the correct limit, and yet in no neighborhood of 0 is f increasing or decreasing, which are the only behaviors considered in your attempt.
The simplest proof is a direct algebraic application of the fact that the limits for g and h exist and are equivalent to the inequality, with no need to go lower than limit theorems (no need for delta-epsilon or infimum/supremum considerations). Last edited: Sep 7, 2009 4. Sep 7, 2009 ### honestrosewater I am not trying to prove Squeeze. I am trying to find a pair of functions that will always squeeze certain f. But I messed up several things. First, it's not enough that f be increasing or decreasing. I am concerned with the concavity of f, so I care whether the derivative of f is increasing or decreasing. I don't know why it was so hard to express what my brain was doing here. Second, Squeeze requires an open interval that contains the point whose limit you're trying to find. That error was just sloppiness. Third, an absolute value function, which is what I would have ended up with in some cases, is continuous but not differentiable at its vertex. This error was ignorance. So my idea needs an overhaul, but this has still been a good learning experience. / 5. Sep 7, 2009 ### slider142 Hmm, are you assuming the limit exists? If so, what about "given the limit as x approaches a of f(x) to be L, consider g(x) = L + |x - a|f(x) and h(x) = L - |x - a|f(x)." The inequality follows from the absolute value, and they squeeze the function pretty well. Did you need h and g to be differentiable as well? 6. Sep 7, 2009 ### Hurkyl Staff Emeritus For the record, the following are true statements: If f is bounded and increasing on [b,c), where b<c, then $\lim_{x \uparrow c} f(x)$ is indeed the supremum of the image under f of [b,c).​ (note: you can replace [b,c) with (b,c)) If $f \leq g \leq h$ on the interval (b,c), where b<c, and $\lim_{x \uparrow c} f(x) = \lim_{x \uparrow c} h(x) = L$, then $\lim_{x \uparrow c} g(x) = L$​ (i.e. one-sided squeeze is a theorem) 7. Sep 7, 2009 ### n!kofeyn I guess I'm still confused. 
By certain f do you mean given any f you want to explicitly find functions g and h that squeeze its limit at a point? What about sin(1/x)? You can in no way squeeze it to a limit at 0.

8. Sep 7, 2009

### honestrosewater

No, by "certain", I do not mean "any". I mean f that meet certain requirements, such as being bounded and nonoscillating approaching c. Sorry, I thought it was a straightforward desire. Squeeze says that one way to find limits of f is to find appropriate pairs of squeezing functions. Right? I want to know if, in some cases, there is a general way to define this pair of squeezing functions, i.e., I want to define g and h in terms of f and the interval. But if I can define g and h this way, I don't even need them, only their common limit. Right? In my example above, their common limit would have been s, so I can skip thinking about g and h altogether and just say that the limit of f approaching c is s under these circumstances. Maybe this seems like a roundabout way of doing things. It was just meant as a learning exercise.

Well, if the limit of f approaching a doesn't exist, my pair of squeezing functions won't exist either. So, stupid question: what is the relationship between the limit of f(x) as x approaches c and whether f is differentiable at c?

9. Sep 8, 2009

### Elucidus

The squeeze theorem is applicable in very broad situations regardless of the nicety of the function f. Note, for $$f(x) =\left\{ \begin{array}{rl} x\sin(1/x), & \text{if }x \neq 0 \\ 0, & \text{if }x = 0 \end{array}\right.$$ $$\lim_{x \rightarrow 0} f(x) = 0$$ by the Squeeze Theorem. Similarly if w(x) is Weierstrass' pathological function, which is continuous everywhere but not differentiable anywhere, then $$\lim_{x \rightarrow 0} x \cdot w(x) = 0$$ also by the Squeeze Theorem. In general there is no great way to fabricate the inferior and superior bounding functions for any given function f. It'd be nice to have such a way, but alas, things are not to be.
If f is sufficiently nice to be twice differentiable, or monotonic, or whatnot, then there are probably other ways to find the limit (i.e. continuity). --Elucidus
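slider142's counterexample from earlier in the thread can also be probed numerically: $-x^2 \le x^2\sin(1/x) \le x^2$ holds at every sampled point, both bounds vanish at 0, and yet the function keeps changing sign arbitrarily close to 0, so it is not monotone on any neighborhood of 0. A small sketch of that check (an illustration of the thread's example, not a proof):

```python
import math

def f(x):
    # slider142's example: x^2 sin(1/x) away from 0, and 0 at 0
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def g(x):  # lower squeezing function
    return -x * x

def h(x):  # upper squeezing function
    return x * x

# g <= f <= h on a grid of points approaching 0 from both sides;
# both bounds tend to 0, which is what forces lim_{x->0} f(x) = 0.
points = [s * 10.0 ** (-k) for k in range(1, 9) for s in (1.0, -1.0)]
squeezed = all(g(x) <= f(x) <= h(x) for x in points)

# f alternates sign arbitrarily close to 0: at x_n = 1/((n + 1/2)*pi),
# sin(1/x_n) = (-1)^n, so f is not eventually monotone near 0.
signs = [f(1.0 / ((n + 0.5) * math.pi)) > 0 for n in range(6)]
```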
https://physics.stackexchange.com/questions/148642/how-to-rebut-denials-of-the-existence-of-photons
# How to rebut denials of the existence of photons? [duplicate] Recently I have encountered several engineers who do not “believe in” photons. They believe experiments such as the photoelectric effect can be explained with classical EM fields + quantized energy levels in atoms. There is a 1995 paper by Lamb along these lines entitled “Anti-photon”. What are some easily understood experiments that prove the existence of photons, which I can point to in discussions with anti-photon advocates? • Uh, so what do they believe in place of photons? Nov 24, 2014 at 20:59 • That said, Grangier Roger & Aspect 1986 is probably a good one. Nov 24, 2014 at 21:04 • Lamb believes that only waves propagate but the interaction is quantized. "It should be apparent from the title of this article that the author does not like the use of the word "photon", which dates from 1926. In his view, there is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the "aether" or "vacuum" to stand for empty space, even if no such thing existed." Nov 24, 2014 at 21:06 • "...There are very good substitute words for "photon", (e.g., "radiation" or "light"), and for "photonics" (e.g., "optics" or "quantum optics"). Similar objections are possible to use of the word "phonon", which dates from 1932. Objects like electrons, neutrinos of finite rest mass, or helium atoms can, under suitable conditions, be considered to be particles, since their theories then have viable non-relativistic and non-quantum limits. This paper outlines the main features of the quantum theory of radiation and indicates how they can be used to treat problems in quantum optics." 
Nov 24, 2014 at 21:06 • @KyleKanos The flip side is---of course---www-3.unipv.it/fis/tamq/Anti-photon.pdf (the paper that user31748 references) and arxiv.org/abs/1204.4616 Nov 24, 2014 at 21:15 The issue here, I believe, is not existence of photons, but the fact that people may choose terminology and concepts they find appealing. The word photon has been coined long ago for an idea that is quite far from the current views on light and the meaning of the word has been evolving many decades. Its current use in textbooks and papers is quite broad and may be regarded as inconsistent - in one situation photon is a dot on the detector screen, in another it is something that spreads the whole experimental setup, in yet another it is quantum of energy that gets absorbed in a tiny region of space comparable to an atom. Such liberal use of a word may not appeal to people who like their terms general and clear, which is why they might prefer the term EM field (even in quantum theory) instead. The possibilities of mathematical modelling of light by continuous fields have evolved to the point where they can account for many experiments that were previously thought to require the idea of particles of light. Photoelectric effect, double slit experiment, black body radiation may be approached from the mathematical standpoint where light is described by continuous fields. In the end, explanation of an experiment involving light with words and mathematics is just that, and proves nothing about what the light "really consists of". • This is really the core of (at least one of) Lamb's point. He doesn't argue against QED, but rather that the word in consistently abused in ways that are not consistent with a modern understanding of quantum optics. Nov 24, 2014 at 22:28 • Was the double slit experiment ever thought to require particles? 
Jul 21, 2016 at 9:49 • @immibis, yes, the double slit experiment with weak light produces a screen that shows small spots, smaller than the distance between interference maxima. There have been people who thought this proves light consists of small particles or localized energy packets. Aug 17, 2016 at 19:48 • +1, especially for the last paragraph, which is often overlooked when discussing physics. Sep 26, 2017 at 4:28

I would tell them to re-read and understand that paper, and know that few spectroscopists would disagree with it. The point is that far too many people use the word "photon" without knowing what a photon really is or in what context the word can be used. For the vast majority of applications a semi-classical conception of the radiation field is adequate. The author wants to discontinue the use of the word, not negate the real existence of that entity, as defined through a rigorous QED treatment of the radiation field.

Photon counting statistics cannot always be explained by classical fields. In these experiments, the state of the field is monitored continuously by a photodetector. I believe these represent one of the clearest experimental demonstrations of the quantum nature of the radiation field. For example, in observing the emission of photons from a single atom, one never detects a second emitted photon immediately after the first. This is due to the fact that after a spontaneous emission event the radiation field is in a Fock state with a well-defined number of photons. This "anti-bunching" effect was observed by Kimble et al. in 1977 and is reported here. It is not possible to explain the experimental intensity distribution as arising from an underlying classical electric field, even if we allow the field to fluctuate stochastically. The quantum theory of light was thus found to be necessary. Note that these conclusions do not depend on the use of photomultiplier tubes in the detectors or anything else to do with the photoelectric effect.
One needs only some device that is capable of measuring the intensity of light with sufficient time resolution.

• "...even if we allow the field to fluctuate stochastically" — are you talking about SED here? It might be good to mention this if so. Nov 24, 2014 at 22:38 • @WetSavannaAnimalakaRodVance I'm afraid I know nothing about SED. What I mean is the following. Many states of the radiation field can be described by a classical electric and magnetic field distributed according to a positive semi-definite quasi-probability distribution, e.g. the Wigner function. Perhaps such a description is equivalent to some formulation of stochastic electrodynamics. However, Fock states cannot be described in this way: their quasi-probability distributions will be negative or singular. Nov 24, 2014 at 22:42 • "It is not possible to explain the experimental intensity distribution as arising from an underlying classical electric field, even if we allow the field to fluctuate stochastically." This is an often-repeated mantra, but it is demonstrably true, as many no-go claims are, only for a certain very restricted view of what a classical description is and some obvious way to use it. Anti-bunching is not seen as a phenomenon preventing description by c-number fields with positive Wigner density at all by people at the frontier of the classical mode of description; see crisisinphysics.co.uk/optrev.pdf. Feb 25, 2015 at 2:09 • What is happening here is that people often use a general but outdated theory of the classical EM field to convince you that classical EM theory cannot do this and that. They largely ignore the fact that people who actually spent a lot of time developing enhancements of that theory did bring many new things to it. Zeldovich made a good point about impossible things in his book on mathematics: it is pointless to stress that something cannot be done [especially when the rules are not entirely fixed and it is not a mathematical statement].
Give it time and somebody will do just that impossible thing. Feb 25, 2015 at 2:16 • @JánLalinský You make a good point. Perhaps it is better to say simply that photon counting rules out some naive or natural interpretations of a fluctuating classical electric field, where one posits a positive probability distribution over a $c$-number field obeying Maxwell's equations. Nevertheless, I do think that this answer is historically pertinent, since if you read the papers of Mandel and the other pioneers of photon counting experiments they often pointed out that their experiments constituted direct evidence of the existence of photons and ruled out popular alternatives at the time. Feb 25, 2015 at 12:12 Photons are observed as radiation with a given spin and other properties. As such, they do exist. But according to the principles of time dilation and length contraction, from their (hypothetical) proper point of view, they would be reduced to a single momentum. Their proper time would be zero, the distance of their geodesic (and also the spacetime interval) would be reduced to zero. That means that from their (hypothetical) point of view, nothing would be moving - just a momentum transmitted directly without a wave from one electron to another, no particle. Of course, this is a hypothetical, calculated reality because photons are not observers, and they have no reference frame. But according to the principles of time dilation and length contraction, we can also be sure that our observation does not correspond to reality.
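The photon-counting argument in the answer above can be illustrated with a toy model of the statistics (a hedged sketch, not a simulation of the actual Kimble et al. experiment). An ideal coherent beam gives Poissonian counts with Fano factor Var/mean = 1, and classical intensity fluctuations can only push that factor above 1; an n-photon Fock state seen through a lossy detector of efficiency η gives binomial counts with Fano factor 1 − η < 1 — sub-Poissonian statistics that no positive classical intensity distribution reproduces:

```python
import math
import random

def fano(counts):
    # Fano factor: variance / mean of a list of photocounts
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / len(counts)
    return v / m

def poisson(lam, rng):
    # Knuth's Poisson sampler; fine for small lam
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(1)
trials = 50000

# Coherent beam, mean 4 photons per counting window: Fano factor ~ 1
coherent = [poisson(4.0, rng) for _ in range(trials)]

# 4-photon Fock state through a detector of efficiency 0.5: each photon
# detected independently -> binomial counts, Fano factor ~ 1 - 0.5 = 0.5
fock = [sum(rng.random() < 0.5 for _ in range(4)) for _ in range(trials)]
```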
https://www.physicsforums.com/threads/numerical-analysis-floating-point-arithmetic.377233/
# Numerical analysis, floating-point arithmetic

1. Feb 10, 2010

### Chasing_Time

Hi all, this (probably easy) problem from numerical analysis is giving me trouble. I can't seem to get started and need some poking in the right direction.

1. The problem statement, all variables and given/known data

Consider the following claim: if two floating point numbers x and y with the same sign differ by a factor of at most the base B (1/B <= x/y <= B), then their difference x-y is exactly representable in the floating point system. Show that this claim is true for B = 2 but give a counterexample for B > 2.

2. Relevant equations

The general form of a floating-point number: $$x = d_0.d_1 \dots d_{t-1} \times B^e$$

3. The attempt at a solution

I have tried exploring the binary case, noting that d_0 must be = 1 in base B=2: $$x = (1 + \frac {d_1}{2} + ... + \frac {d_{t-1}} {2^{t-1}}) * 2^e$$ $$y = (1 + \frac {d_1}{2} + ... + \frac {d_{t-1}} {2^{t-1}}) * 2^{e-1} = (\frac {1}{2} + \frac {d_1} {4} + ... + \frac {d_{t-1}} {2^t}) * 2^e$$ $$x - y = (1 + \frac {d_1 - 1} {2} + ... + \frac {d_{t-1} - d_{t-2}} {2^{t-1}} - \frac {d_{t-1}} {2^t})*2^e$$ Is this "exactly representable" in the floating-point system? I don't know what else to do or what to use as a counterexample. Am I even on the right track? Thanks for any help.

2. Feb 11, 2010

### Staff: Mentor

Your arithmetic is off here. Since you have set this up with x being two times y, the difference x - y had better be equal to y. Certainly x - y is exactly representable in a base-2 floating-point system, as long as x and y are. I don't have any examples in mind that would serve as counterexamples, but if you work with some specific numbers in base 3 or higher bases, you might be able to come up with one.
By "specific numbers" I mean that you should work with numbers like 2.0121 × 3^2 (base 3), rather than symbolically representing the digits with d1, d2, etc. That's where I would start.
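The base-2 half of the claim is essentially Sterbenz's lemma (if y/2 ≤ x ≤ 2y, then x − y is computed exactly in binary floating point), and since Python floats are IEEE binary64 it can be tested directly by comparing against exact rational arithmetic. A sketch, with a hand-worked counterexample shape for B > 2 noted in a comment:

```python
import random
from fractions import Fraction

def difference_is_exact(x, y):
    # True iff the float subtraction x - y incurs no rounding error,
    # checked against exact rational arithmetic.
    return Fraction(x) - Fraction(y) == Fraction(x - y)

rng = random.Random(3)
for _ in range(10000):
    y = rng.uniform(1.0, 1000.0)
    x = y * rng.uniform(0.6, 1.9)      # keep x/y safely inside [1/2, 2]
    assert difference_is_exact(x, y)   # Sterbenz: always exact in base 2

# A pair far outside the factor-of-2 window is generally NOT exact:
inexact_pair = difference_is_exact(1.0, 1e-20)  # 1.0 - 1e-20 rounds to 1.0

# Counterexample shape for B > 2 (toy 1-digit base-3 system, by hand):
# x = 2*3^1 = 6 and y = 2*3^0 = 2 give x/y = 3 <= B, yet
# x - y = 4 = (1.1)_3 * 3^1 needs two digits, so it is not representable.
```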
https://projecteuclid.org/euclid.aop/1393251297
The Annals of Probability Quenched asymptotics for Brownian motion in generalized Gaussian potential Xia Chen Abstract In this paper, we study the long-term asymptotics for the quenched moment $\mathbb{E}_{x}\exp \biggl\{\int_{0}^{t}V(B_{s})\,ds\biggr\}$ consisting of a $d$-dimensional Brownian motion $\{B_{s};s\ge0\}$ and a generalized Gaussian field $V$. The major progress made in this paper includes: Solution to an open problem posted by Carmona and Molchanov [Probab. Theory Related Fields 102 (1995) 433–453], the quenched laws for Brownian motions in Newtonian-type potentials and in the potentials driven by white noise or by fractional white noise. Article information Source Ann. Probab., Volume 42, Number 2 (2014), 576-622. Dates First available in Project Euclid: 24 February 2014 https://projecteuclid.org/euclid.aop/1393251297 Digital Object Identifier doi:10.1214/12-AOP830 Mathematical Reviews number (MathSciNet) MR3178468 Zentralblatt MATH identifier 1294.60101 Citation Chen, Xia. Quenched asymptotics for Brownian motion in generalized Gaussian potential. Ann. Probab. 42 (2014), no. 2, 576--622. doi:10.1214/12-AOP830. https://projecteuclid.org/euclid.aop/1393251297 References • [1] Amir, G., Corwin, I. and Quastel, J. (2011). Probability distribution of the free energy of the continuum directed random polymer in $1+1$ dimensions. Comm. Pure Appl. Math. 64 466–537. • [2] Balázs, M., Quastel, J. and Seppäläinen, T. (2011). Fluctuation exponent of the KPZ/stochastic Burgers equation. J. Amer. Math. Soc. 24 683–708. • [3] Bass, R., Chen, X. and Rosen, J. (2009). Large deviations for Riesz potentials of additive processes. Ann. Inst. Henri Poincaré Probab. Stat. 45 626–666. • [4] Biskup, M. and König, W. (2001). Long-time tails in the parabolic Anderson model with bounded potential. Ann. Probab. 29 636–682. • [5] Carmona, R. A. and Molchanov, S. A. (1995). Stationary parabolic Anderson model and intermittency. Probab. Theory Related Fields 102 433–453. 
• [6] Carmona, R. A. and Viens, F. G. (1998). Almost-sure exponential behavior of a stochastic Anderson model with continuous space parameter. Stochastics Stochastics Rep. 62 251–273.
• [7] Chen, X. (2010). Random Walk Intersections: Large Deviations and Related Topics. Mathematical Surveys and Monographs 157. Amer. Math. Soc., Providence, RI.
• [8] Chen, X. (2012). Quenched asymptotics for Brownian motion of renormalized Poisson potential and for the related parabolic Anderson models. Ann. Probab. 40 1436–1482.
• [9] Chen, X., Hu, Y. Z., Song, J. and Xing, F. (2014). Exponential asymptotics for time-space Hamiltonians. Ann. Inst. Henri Poincaré Probab. Stat. To appear.
• [10] Chen, X., Li, W. V. and Rosen, J. (2005). Large deviations for local times of stable processes and stable random walks in 1 dimension. Electron. J. Probab. 10 577–608.
• [11] Chen, X. and Rosen, J. (2010). Large deviations and renormalization for Riesz potentials of stable intersection measures. Stochastic Process. Appl. 120 1837–1878.
• [12] Conus, D., Joseph, M., Khoshnevisan, D. and Shiu, S.-Y. (2013). On the chaotic character of the stochastic heat equation, II. Probab. Theory Related Fields 156 483–533.
• [13] Gärtner, J. and König, W. (2000). Moment asymptotics for the continuous parabolic Anderson model. Ann. Appl. Probab. 10 192–217.
• [14] Gärtner, J., König, W. and Molchanov, S. A. (2000). Almost sure asymptotics for the continuous parabolic Anderson model. Probab. Theory Related Fields 118 547–573.
• [15] Gärtner, J. and Molchanov, S. A. (1990). Parabolic problems for the Anderson model. I. Intermittency and related topics. Comm. Math. Phys. 132 613–655.
• [16] Gärtner, J. and Molchanov, S. A. (1998). Parabolic problems for the Anderson model. II. Second-order asymptotics and structure of high peaks. Probab. Theory Related Fields 111 17–55.
• [17] Guelfand, I. M. and Vilenkin, G. (1964). Generalized Functions. Academic Press, New York.
• [18] Hairer, M. (2013). Solving the KPZ equation. Ann. Math. 178 559–664.
• [19] Hu, Y., Nualart, D. and Song, J. (2011). Feynman–Kac formula for heat equation driven by fractional white noise. Ann. Probab. 39 291–326.
• [20] Kardar, M., Parisi, G. and Zhang, Y. C. (1986). Dynamic scaling of growing interfaces. Phys. Rev. Lett. 56 889–892.
• [21] Kardar, M. and Zhang, Y. C. (1987). Scaling of directed polymers in random media. Phys. Rev. Lett. 58 2087–2090.
• [22] Marcus, M. B. and Rosen, J. (2006). Markov Processes, Gaussian Processes, and Local Times. Cambridge Studies in Advanced Mathematics 100. Cambridge Univ. Press, Cambridge.
• [23] Slepian, D. (1962). The one-sided barrier problem for Gaussian noise. Bell System Tech. J. 41 463–501.
• [24] Sznitman, A.-S. (1998). Brownian Motion, Obstacles and Random Media. Springer, Berlin.
• [25] Viens, F. G. and Zhang, T. (2008). Almost sure exponential behavior of a directed polymer in a fractional Brownian environment. J. Funct. Anal. 255 2810–2860.
http://www.cs.cornell.edu/courses/cs2800/2016fa/lectures/lec28-structural.html
# Lecture 28: Structural induction, languages

• Inductively defined sets
• Inductively defined functions
• Proof by structural induction
• Language of an automaton
• Key terms: extended transition function δ^, language, language of a machine L(M), M recognizes L

## Inductively defined sets

An inductively defined set is a set whose elements are constructed by a finite number of applications of a given set of rules. Examples:

• The set ℕ of natural numbers is defined by the following rules:
  1. 0 ∈ ℕ
  2. If n ∈ ℕ then Sn ∈ ℕ.
  Thus the elements of ℕ are {0, S0, SS0, SSS0, …}. S stands for successor. You can then define 1 as S0, 2 as SS0, and so on.

• The set Σ* of strings with characters in Σ is defined by:
  1. ϵ ∈ Σ*
  2. If a ∈ Σ and x ∈ Σ* then xa ∈ Σ*.
  Thus the elements of Σ* are {ε, ε0, ε1, ε00, ε01, …, ε1010101, …}. We usually leave off the ε at the beginning of strings of length 1 or more.

• The set T of binary trees with integers in the nodes is given by the rules:
  1. The empty tree (written nil) is a tree.
  2. If a is an integer and t1 and t2 are trees, then the tree with root a and subtrees t1 and t2 (written node(a, t1, t2)) is a tree.
  Thus the elements of T include trees such as node(3, node(0, nil, nil), node(1, node(2, nil, nil), nil)).

## BNF

BNF (Backus–Naur Form) is a compact way of writing down inductively defined sets: only the name of the set and the rules are written down; they are separated by "::=", and the rules are separated by a vertical bar (|). Examples (from above):

• n ∈ ℕ ::= 0 | Sn
• x ∈ Σ* ::= ϵ | xa    (a ∈ Σ)
• t ∈ T ::= nil | node(a, t1, t2)    (a ∈ ℤ)
• (basic mathematical expressions) e ∈ E ::= n | e1 + e2 | e1 * e2 | −e | e1/e2    (n ∈ ℤ)

Here, the variables to the left of the ∈ indicate metavariables. When the same characters appear in the rules on the right-hand side of the ::=, they indicate an arbitrary element of the set being defined.
For example, the e1 and e2 in the e1 + e2 rule could be arbitrary elements of the set E, but + is just the symbol +.

## Inductively defined functions

If X is an inductively defined set, you can define a function from X to Y by defining the function on each of the kinds of elements of X, i.e. for each of the rules. In the inductive rules (i.e. the ones containing the metavariable being defined), you can assume the function is already defined on the subterms. Examples:

• plus : ℕ × ℕ → ℕ given by plus : (0, n) ↦ n and plus : (Sn, n′) ↦ S(plus(n, n′)). Note that we don't need to use induction on both of the inputs.
• δ^ : Q × Σ* → Q

## Proofs by structural induction

If X is an inductively defined set, then you can prove statements of the form ∀x ∈ X, P(x) by giving a separate proof for each rule. For the inductive/recursive rules (i.e. the ones containing metavariables), you can assume that P holds on all subexpressions of x. Examples:

• The proof that M is correct (see homework solutions) can be simplified using structural induction.
• A proof by structural induction on the natural numbers as defined above is the same thing as a proof by weak induction: you must prove P(0), and you must prove P(Sn) assuming P(n).

## Language of a Machine

Extended transition function (note: because a hat over δ doesn't render nicely in HTML, I will write δ^ for "delta-hat"):

• δ^ : Q × Σ* → Q
• Informally: δ^(q, x) tells you where you end up after processing the string x starting in state q.
• Compare with δ : Q × Σ → Q: δ^ processes strings, while δ processes single characters.
• The domain of δ is finite, so the description of δ is finite; it is part of the machine.
• The domain of δ^ is infinite, so δ^ is not part of the description of the machine (but is built from the description of the machine).
• δ^ : (q, ε) ↦ q, and δ^ : (q, xa) ↦ δ(δ^(q, x), a)
• Informally: to process xa, first process x; starting there, take a single step with the (non-extended) transition function δ.
• Note: the δ in this definition cannot be δ^.

Language
- A language is a set of strings.

Language of a machine
- L(M) stands for the "language of M".
- L(M) contains all (and only) the strings that M accepts.
- Informally, a string x is accepted by a machine M if, after processing x starting at the start state, the machine ends in a final state.
- Formal definition: L(M) = {x ∈ Σ* | δ^(q0, x) ∈ F}, where M = (Q, Σ, δ, q0, F).
- We say x is accepted by M if x ∈ L(M); x is rejected otherwise.
- We say that M recognizes L if L = L(M).
- A language L is DFA-recognizable if there is some machine M with L = L(M).

## Proof of correctness of an automaton

Given a language L, we may wish to build a machine that recognizes L, and prove that it is correct. In other words, we wish to prove that L = L(M); that is, ∀x ∈ Σ*, x ∈ L if and only if x ∈ L(M); that is, ∀x ∈ Σ*, x ∈ L if and only if δ^(q0, x) ∈ F. A straightforward approach is induction on the structure of x; however, the induction hypothesis usually needs to be strengthened to describe all of the states (and not just the final state). Essentially, you want to prove that each state "satisfies its specification".

For example, we may want to build a machine that recognizes the strings that contain at least two 1's. We might build a machine with three states: q0 represents strings with no 1's, q1 represents strings with exactly one 1, and q2 represents strings with two or more 1's. A proof of correctness for this machine might go as follows:

Let P(x) be the statement "δ^(q0, x) = q0 if and only if x has no 1's, and δ^(q0, x) = q1 if and only if x has exactly one 1, and δ^(q0, x) = q2 if and only if x has two or more 1's." I claim that ∀x ∈ Σ*, P(x) holds. We will prove this claim by induction on the structure of x.
We must show P(ε) and, assuming P(x), P(xa).

To prove P(ε), note that δ^(q0, ε) = q0 and ε has no 1's, so the first part of P(ε) holds; the other two parts hold vacuously, since δ^(q0, ε) is neither q1 nor q2, and ε has neither exactly one nor two or more 1's.

Now, to prove P(xa), assume P(x). a can be either 0 or 1, and δ^(q0, x) could be any of q0, q1, and q2. We consider each case:

1. If a = 0, then note that δ(qi, a) = qi for each state qi. Moreover, xa has the same number of 1's as x. Thus in this case, P(xa) follows directly from P(x).
2. If a = 1 and δ^(q0, x) = q0, then by P(x), x must have no 1's. Thus xa has exactly one 1. Moreover, δ^(q0, xa) = δ(δ^(q0, x), a) = δ(q0, 1) = q1, so P(xa) holds in this case.
3. If a = 1 and δ^(q0, x) = q1, then by P(x), x must have exactly one 1. Thus xa has two 1's. Moreover, δ^(q0, xa) = δ(δ^(q0, x), a) = δ(q1, 1) = q2, so P(xa) holds in this case.
4. If a = 1 and δ^(q0, x) = q2, then by P(x), x must have two or more 1's. Thus xa has three or more 1's. Moreover, δ^(q0, xa) = δ(δ^(q0, x), a) = δ(q2, 1) = q2, so P(xa) holds in this case.

In all possible cases, we have shown P(xa). This concludes the inductive proof. Note that the framework provided by the inductive proof forces you to write down a specification for each state, and then reason about each transition.
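The machine and the invariant from this example are easy to check mechanically. Below is a hedged Python sketch (the function and state names are mine, not course-provided): δ is a finite table, δ^ is defined by structural recursion on the string exactly as above, and the invariant P(x) is brute-force checked on all short strings.

```python
# Sketch (not course code): the three-state DFA from the example.
# States q0/q1/q2 represent strings with 0, exactly 1, or >= 2 ones.
delta = {
    ("q0", "0"): "q0", ("q0", "1"): "q1",
    ("q1", "0"): "q1", ("q1", "1"): "q2",
    ("q2", "0"): "q2", ("q2", "1"): "q2",
}

def delta_hat(q, x):
    """Extended transition function, by structural recursion:
    delta_hat(q, eps) = q; delta_hat(q, xa) = delta(delta_hat(q, x), a)."""
    if x == "":                      # base case: empty string
        return q
    return delta[(delta_hat(q, x[:-1]), x[-1])]

def accepts(x):
    return delta_hat("q0", x) == "q2"    # F = {q2}

# Brute-force check of the invariant P(x) on all strings of length <= 8.
from itertools import product
for n in range(9):
    for bits in product("01", repeat=n):
        x = "".join(bits)
        ones = x.count("1")
        expected = "q0" if ones == 0 else ("q1" if ones == 1 else "q2")
        assert delta_hat("q0", x) == expected

print(accepts("0100110"))  # True (three 1's)
```

This mirrors the proof structure: the base case of `delta_hat` corresponds to P(ε), and the recursive case corresponds to the P(xa) step.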
http://horseitaly.it/lqzi/matlab-pde-toolbox-reaction-diffusion.html
The Partial Differential Equation Toolbox extends the MATLAB® technical computing environment with tools for the study and solution of partial differential equations (PDEs) in two space dimensions (2-D) and time; key features include a graphical interface for pre- and postprocessing of 2-D PDEs. This way, I'd end up with some bizarre, "single-line" meshes, which I'd have to stitch together to get the complete solution. Here ∇² is the Laplacian operator, which in Cartesian coordinates is ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z². For example, 'gene1 and gene2' indicates that the two gene products are part of an enzyme complex, whereas 'gene1 or gene2' indicates that the two gene products are isozymes that catalyze the same reaction. Related material: MATLAB PDE Solver Code; MATLAB Multicomponent Diffusion Code; MATLAB Codes for Multicomponent Diffusion and Diffusion with Reaction; Heat Regenerators: Design and Evaluation; Quick Design and Evaluation of Heat Regenerators; Sulfur Dioxide Adsorption on Metal Oxides; Specify Scalar PDE Coefficients in String Form; Numerical Solution of the Heat Equation. SpinDoctor is a software package that performs numerical simulations of diffusion magnetic resonance imaging (dMRI) for prototyping purposes. We present a collection of MATLAB routines using the discontinuous Galerkin finite element method (DGFEM) for solving steady-state diffusion-convection-reaction equations. Xmorphia shows a beautiful presentation of a simulation of the Gray-Scott reaction-diffusion mechanism using a uniform-grid finite-difference model running on an Intel Paragon supercomputer.
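The Gray-Scott mechanism mentioned above can be sketched without any toolbox. Here is a hedged, minimal explicit finite-difference step in pure Python (1-D for brevity; the function name and parameter values are illustrative, not taken from the original Xmorphia simulation):

```python
# Hedged sketch of a 1-D Gray-Scott reaction-diffusion step,
#   u_t = Du*u_xx - u*v^2 + F*(1 - u)
#   v_t = Dv*v_xx + u*v^2 - (F + k)*v
# using explicit Euler with periodic boundaries. Illustrative parameters.

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    n = len(u)
    un, vn = u[:], v[:]
    for i in range(n):
        lap_u = u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]
        lap_v = v[(i - 1) % n] - 2 * v[i] + v[(i + 1) % n]
        uvv = u[i] * v[i] * v[i]                 # the u*v^2 reaction term
        un[i] = u[i] + dt * (Du * lap_u - uvv + F * (1 - u[i]))
        vn[i] = v[i] + dt * (Dv * lap_v + uvv - (F + k) * v[i])
    return un, vn

# Start from u = 1, v = 0 with a small perturbation of v in the middle.
n = 64
u = [1.0] * n
v = [0.0] * n
for i in range(28, 36):
    v[i] = 0.25
for _ in range(100):
    u, v = gray_scott_step(u, v)
print(min(u) < 1.0, max(v) > 0.0)  # True True
```

The step sizes satisfy D·dt/dx² ≤ 1/2, so the explicit scheme stays stable; a 2-D version only changes the Laplacian stencil.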
You can picture the process of diffusion as a drop of dye spreading in a glass of water. @WolfgangBangerth: I am reading Crank's book "The Mathematics of Diffusion", but I am not fully aware of the different solvers. You can solve PDEs by using the finite element method, and postprocess results to explore and analyze them. I'm facing some issues with the PDE Toolbox in MATLAB: I'm trying to solve the heat diffusion equation in a plate of phase-change material. Solve the heat equation with a temperature-dependent thermal conductivity. The first step in the FEA workflow is to define the geometry. In mathematics, diffusion is related to Markov processes, such as random walks, and is applied in many other fields, such as materials science. Partial Differential Equation Toolbox lets you import 2D and 3D geometries from STL or mesh data. The convection–diffusion equation is a combination of the diffusion and convection equations, and describes physical phenomena where particles, energy, or other physical quantities are transferred inside a physical system due to two processes: diffusion and convection. MATLAB for Neuroscientists: An Introduction to Scientific Computing in MATLAB is the first comprehensive teaching resource and textbook for the teaching of MATLAB in the neurosciences and in psychology. This paper describes a publicly available MATLAB toolbox called SpinDoctor that can be used 1) to solve the Bloch-Torrey partial differential equation in order to simulate the diffusion magnetic resonance imaging signal; 2) to solve a diffusion partial differential equation to obtain directly the apparent diffusion coefficient; 3) to compare. MATLAB PDE Toolbox can be used for mesh generation as well.
Partial Differential Equation Toolbox™ provides functions for solving structural mechanics, heat transfer, and general partial differential equations (PDEs) using finite element analysis. You can automatically generate meshes with triangular and tetrahedral elements. You can also solve standard problems such as diffusion, electrostatics, and magnetostatics, as well as custom PDEs. In particular, it includes straightforward implementations of many of the algorithms presented in the companion book. These packages are maintained by a community of Octave Forge and Octave developers in a spirit of collaboration. The Biopsychology-Toolbox is a free, open-source MATLAB toolbox for the control of behavioral experiments. How can I refine a subdomain in the PDE Toolbox mesh generation tool? I didn't understand the details, since you mentioned "my f coefficient for subdomain 2 is a cosine". It looks like the PDE Toolbox is not able to solve the advection-diffusion problem? Also, for the diffusion problem, it is not able to define the 'Q' (volume source) as a function of 'c' (concentration)? Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer code (in a programming language such as C or MATLAB) to access the solver.
Michael Mascagni (Department of Computer Science), "Probabilistic Approaches to Reaction-Diffusion Equations": the interior configuration satisfies a PDE with boundary conditions. The PDE that describes this interaction is N_t = D·N_xx + λ·N·(1 − N), where D is the diffusion (migration) term and λ (lambda) is the nonlinear (proliferation) term. This example shows how to estimate the heat conductivity and the heat-transfer coefficient of a continuous-time grey-box model for a heated-rod system. Generated from MATLAB PDE Toolbox (Junbin Huang, Department of Mechanical Engineering, May 16, 2018). So I've got a temperature-dependent capacity, but I need to solve the equation in a sinusoidal state, I mean with a sine boundary condition. redbKIT is a MATLAB library for finite element simulation and reduced-order modeling of Partial Differential Equations. Numerical Dissipation/Diffusion (Junbin Huang, 2018). Small toolbox for simulating reaction-diffusion equations of the type given above. Project groups: the Zebrafish Leopard gene as a component of the putative reaction-diffusion system; Group 3: the effects of the size and shape of landscape features on the formation of traveling waves; Group 4: experimental observation of self-replicating spots in a reaction-diffusion system. Just run HeatAnalytical from the MATLAB command line. Unfortunately I don't have much time for taking courses at this moment. The objectives of the PDE Toolbox are to provide you with tools that define a PDE problem. Introduction to partial differential equation integration in space and time. Mathematical formulation of the problem: pdepe solves equations of the form c(x,t,u,∂u/∂x)·∂u/∂t = x⁻ᵐ·∂/∂x( xᵐ·f(x,t,u,∂u/∂x) ) + s(x,t,u,∂u/∂x).
Here we look at using MATLAB to obtain such solutions and get results of design interest. Here you can find the MATLAB implementation of the virus particle tracking algorithm described in the original ICoS Technical Report. I'm trying to solve the reaction-diffusion equation with the PDE Toolbox (MATLAB) with non-constant coefficients. PDE Toolbox: "Unsuitable initial guess U0" error from solvepde. The following MATLAB project contains the source code and MATLAB examples used for GAFFE, a toolbox for solving evolutionary nonlinear PDEs. Thanks with all my heart. In order to make use of mathematical models, it is necessary to have solutions to the model equations. The algorithms are stable and convergent provided the time step is below a (non-restrictive) critical value. We are interested in being able to make such simulations with an amorphous computer, where the precise positions of the individual processing elements are uncertain. A CFL generally can be made to produce any color of light needed. It can solve static, time-domain, and frequency-domain problems.
• For time-dependent problems, the PDE is first discretized in space to get a semi-discretized system of equations that has one or more time derivatives. There is a known solution via Fourier transforms that you can test against. Solving the two-dimensional partial differential equation (Nagumo) using the finite-difference method (FDM) and the BiCGSTAB solver; a finite element solver of diffusion-reaction systems. The present work presents a numerical analysis of a low-NOx partially premixed burner for a heavy-duty gas turbine. Fractional differential equations are becoming increasingly used as a powerful modelling approach for understanding the many aspects of nonlocality and spatial heterogeneity. For modeling structural dynamics and vibration, the toolbox provides a direct time integration solver. In addition, diffusion effects really do exist in neural networks when electrons are moving in asymmetric electromagnetic fields. A tensor field S can be used as an anisotropic metric to drive a diffusion PDE flow. This module deals with solutions to parabolic PDEs, exemplified by the diffusion (heat) equation.
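The recipe in the bullet above (discretize in space first, then integrate the resulting time derivatives) is the method of lines, and it is easy to sketch by hand. A hedged pure-Python example for the 1-D heat equation u_t = κ·u_xx with fixed-temperature ends, using explicit Euler time stepping (function name and parameters are illustrative):

```python
# Hedged sketch: method of lines for u_t = kappa * u_xx on a rod,
# Dirichlet ends held at their initial values, explicit Euler in time.
# Step sizes chosen so that kappa*dt/dx^2 <= 1/2 (stability limit).

def heat_mol(u0, kappa=1.0, dx=0.1, dt=0.004, steps=250):
    u = u0[:]
    r = kappa * dt / dx**2        # here r = 0.4 <= 0.5
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):   # interior nodes only
            new[i] = u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
        u = new                           # endpoints never updated
    return u

# Initial condition: a hot spot in the middle of a cold rod.
u0 = [0.0] * 11
u0[5] = 1.0
u = heat_mol(u0)
# Heat spreads out and leaks through the cold ends: the peak decays.
print(max(u) < 1.0)  # True
```

Replacing the explicit Euler loop with a stiff ODE integrator applied to the same semi-discretized right-hand side is exactly what toolbox-based method-of-lines solvers do.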
Static methods in the class rbfx are used to implement functionality associated with RBF methods in general, while class methods are used to implement methods in subclasses. Garvie, Marcus R. (2007), "Finite-Difference Schemes for Reaction–Diffusion Equations Modeling Predator–Prey Interactions in MATLAB". At the current moment, the QuickerSim CFD Toolbox for MATLAB® handles data only in pure-numbering format. PDE Toolbox does not have an interface to specify periodic BCs. Also, if you check out COMSOL, you will find how the two look alike. It is hoped that this is the next step towards creating fast and effective numerical algorithms for the solution of a partial differential equation such as the one originating from the work of Frank-Kamenetskii. This section considers transient heat transfer and converts the partial differential equation to a set of ordinary differential equations, which are solved in MATLAB. Rekeckey [21] included the PDE-based (constrained linear and non-linear) diffusion approaches (the Perona–Malik model, Nordstrom's model) and a non-PDE approach. Partial differential equations (PDE) are typically the building blocks in continuum mechanics and multiphysics modeling applications. Mathematica Stack Exchange is a question and answer site for users of Wolfram Mathematica.
A system of first order conservation equations is sometimes combined as a second order hyperbolic PDE. N_t = D * N_xx + lambda * N * (1 - N): I have checked the equations used for the Jacobian and the f vector a dozen times against the notes in class, so I'm 99% sure that's not the issue. The governing equations for the application areas above can often be reduced to a form of classic and prototypical PDEs such as the Poisson, Laplace, wave, and convection–diffusion equations. Here, L is called a differential operator that works on the function u. Numerical studies of nonspherical carbon combustion models (NASA Astrophysics Data System; Mueller, E.; 1982-10-01). Part One: Reaction-Diffusion: this section describes a class of patterns that are formed by reaction-diffusion systems. To approximate the corresponding spatially discretized models, an explicit scheme can be used for the reaction term and an implicit scheme for the diffusion term. In the following, the mentioned approaches are reviewed briefly. A finite element method implementation in MATLAB to solve the Gray-Scott reaction-diffusion equation. Please don't provide a numerical solution, because this problem is a toy problem in numerical methods. PDEs and their solutions are applicable to many engineering problems, including heat conduction. This code employs a finite difference scheme to solve the 2-D heat equation.
Reaction-diffusion mechanisms have been used to explain pattern formation in developmental biology and in experimental chemical systems. Community packages are coordinated between each other and with Octave regarding compatibility, naming of functions, and location of functions. However, Precise Simulation has just released FEATool, a MATLAB and GNU Octave toolbox for finite element modeling (FEM) and partial differential equation (PDE) simulations. It enables you to specify and mesh 2-D and 3-D geometries and formulate boundary conditions and equations. Partial Differential Equation Toolbox provides functionality for using finite element analysis to solve applications such as thermal analysis, structural analysis, and custom partial differential equations. Professor, Department of Chemical Engineering, Feng Chia University, Taichung, Taiwan; Email: [email protected]. SpinDoctor can be used to solve the Bloch-Torrey PDE to obtain the dMRI signal (the toolbox provides a way of robustly fitting the dMRI signal to obtain the fitted apparent diffusion coefficient). Learn how to solve the [∂C/∂x] + kC equation numerically using MATLAB (finite difference method, Crank–Nicolson method).
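The logistic reaction-diffusion equation quoted above, N_t = D·N_xx + λ·N·(1 − N) (the Fisher–KPP equation), can be stepped with a simple explicit scheme. A hedged pure-Python sketch, with illustrative parameters and my own function name:

```python
# Hedged sketch: explicit finite differences for the Fisher-KPP equation
#   N_t = D * N_xx + lam * N * (1 - N)
# with zero-flux (copied-endpoint) boundaries. Illustrative parameters,
# chosen so that D*dt/dx^2 <= 1/2 for stability.

def fisher_kpp_step(N, D=1.0, lam=1.0, dx=1.0, dt=0.2):
    n = len(N)
    out = N[:]
    for i in range(n):
        left = N[i - 1] if i > 0 else N[0]        # zero-flux at the left end
        right = N[i + 1] if i < n - 1 else N[-1]  # zero-flux at the right end
        lap = (left - 2 * N[i] + right) / dx**2
        out[i] = N[i] + dt * (D * lap + lam * N[i] * (1 - N[i]))
    return out

# Seed the left edge and watch the invasion front move right.
N = [0.0] * 50
N[0] = 1.0
for _ in range(60):
    N = fisher_kpp_step(N)
front = sum(1 for x in N if x > 0.5)   # cells invaded so far
print(0 < front < 50)  # True
```

The diffusion term spreads the population while the logistic term pushes it toward N = 1 behind the front, which is why a traveling invasion wave forms.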
Its command line functions and graphical user interface can be used for mathematical modeling of PDEs in a broad range of engineering and science applications. An overview of the features, functions, and uses of the PDE Toolbox. Structural Mechanics: solve linear static, transient, modal analysis, and frequency response problems; with structural analysis, you can predict how components behave under loading, vibration, and other physical effects. 1-D convection-diffusion equation: inlet mixing effect. The outer surface is slightly warmer than the inner axis. I need to solve the 2D advection-diffusion equation for sediment transport, where the velocity field and the depth D are prescribed fields that I've obtained by solving another PDE on the same 2D mesh I am using to solve the adv-diff equation. Because the Toolbox focuses on regular grids and time-dependent PDEs, it follows [1] more closely. We present two finite-difference algorithms for studying the dynamics of spatially extended predator–prey interactions with the Holling type II functional response and logistic growth of the prey. NOTE: These are rough lecture notes for a course on applied math (Math 350), with an emphasis on chemical kinetics, for advanced undergraduate and beginning graduate students in science and mathematics. There are no well-documented and flexible PDE solvers in MATLAB either. In the PDE written in the documentation, you only have the diffusion term but no advection term.
A course on how to solve various partial differential equations by using MATLAB, either through the provided toolbox or by writing your own solver. value = 2*x/(1+x^2); we are finally ready to solve the PDE with pdepe. Again, Kumar et al. (2010) worked on the solution of reaction-diffusion equations by using the homotopy perturbation method. The equations are discretized by the finite element method (FEM). A mathematical model for the time-dependent apparent diffusion coefficient (ADC), called the H-ADC model, was obtained recently using homogenization techniques. The only one that worked provided a double inner plane, which is not OK to solve a diffusion PDE. Reproducible Research in Computational Science: "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are." Heat Transfer in Block with Cavity: PDE Modeler App.
The system itself uses two reaction-diffusion equations which are slightly modified Cahn-Hilliard equations: modified in that they have a term to add material to the model and a term to remove it, should the two concentrations come into contact with each other. The first step in the FEA workflow is to define the geometry. Thanks with all my heart. Can someone share an hp-FEM MATLAB code for the singularly perturbed problem? MATLAB toolbox for trajectory segmentation. MATLAB CFD Simulation Toolbox. Solving PDEs in Python. This system consists of a well-insulated metal rod of length L and a heat-diffusion coefficient κ. There is a known solution via Fourier transforms that you can test against. Partial Differential Equation Toolbox provides functionality for using finite element analysis to solve applications such as thermal analysis, structural analysis, and custom partial differential equations. PDE Toolbox does not have an interface to specify periodic BCs. Specify scalar PDE coefficients in string form. Xmorphia shows a beautiful presentation of a simulation of the Gray-Scott reaction-diffusion mechanism using a uniform-grid finite-difference model running on an Intel Paragon supercomputer. FEATool Multiphysics convection and diffusion models, tutorials, and examples. This video is a tutorial for using MATLAB and the PDE Toolbox in order to compute a numerical solution to the diffusion equation on a fairly simple, two-dimensional domain. For example, a reaction-diffusion equation can be solved using MOL (the method of lines). Thermal analysis of a disc brake.
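A compact uniform-grid finite-difference sketch of the Gray-Scott mechanism mentioned above, stepped with explicit Euler on a periodic grid. The feed and kill rates `F` and `k` are illustrative values from the commonly cited pattern-forming regime, not taken from the source:

```matlab
% Gray-Scott reaction-diffusion on a periodic 2D grid, explicit Euler.
n = 128;  Du = 0.16;  Dv = 0.08;  F = 0.035;  k = 0.065;  dt = 1;
U = ones(n);  V = zeros(n);
U(n/2-5:n/2+5, n/2-5:n/2+5) = 0.5;       % seed a central perturbation
V(n/2-5:n/2+5, n/2-5:n/2+5) = 0.25;

% Five-point Laplacian with periodic wrap-around via circshift.
lap = @(A) circshift(A,[1 0]) + circshift(A,[-1 0]) + ...
           circshift(A,[0 1]) + circshift(A,[0 -1]) - 4*A;

for it = 1:5000
    UVV = U .* V.^2;                     % quadratic cross term (the reaction)
    U = U + dt*(Du*lap(U) - UVV + F*(1 - U));
    V = V + dt*(Dv*lap(V) + UVV - (F + k)*V);
end
imagesc(V); axis equal tight; colorbar;  % spots/stripes depending on F, k
```

Nudging `F` and `k` by a few thousandths moves the system between spots, stripes, and decay, which is exactly the parameter-space behavior the Xmorphia gallery catalogs.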
I need to solve the 2D advection-diffusion equation for sediment transport, where the velocity and the depth D are prescribed fields obtained by solving another PDE on the same 2D mesh. I did look at this post and it seems to be a bit helpful, though it does not answer my question. The goal of this project is designing and implementing a real-time chemical-reaction solution on a VGA screen based on the Altera DE1-SoC. In this document, we (the instructors) are trying to give you (the students) some simple instructions for getting started with the partial differential equation (PDE) toolbox in MATLAB. D is the diffusion coefficient. However, Precise Simulation has just released FEATool, a MATLAB and GNU Octave toolbox for finite element modeling (FEM) and partial differential equation (PDE) simulations. Figure 1 from "Solving Reaction Diffusion Equations 10 Times". Numerical Solution of the Diffusion Equation with Constant. FEATool is designed to be able to perform complex MATLAB multiphysics simulations; flow around a cylinder is a benchmark problem for stationary, laminar, and incompressible flow. You can then choose "Getting Started" from the table of contents for a tutorial introduction to MATLAB, or use the index to find specific information. redbKIT: a MATLAB library for reduced-order modeling of PDEs.
This video is a tutorial for using MATLAB and the PDE Toolbox in order to compute a numerical solution to the diffusion equation on a fairly simple, two-dimensional domain. You can perform linear static analysis to compute deformation, stress, and strain. While there are many specialized PDE solvers on the market, there are users who wish to use Scilab in order to solve PDEs specific to engineering domains like heat flow and transfer, fluid mechanics, stress and strain analysis, electromagnetics, chemical reactions, and diffusion. For the pure diffusion problem there is a known solution via Fourier transforms that you can test against; then set diffusion to zero and test a reaction equation on its own. ADI method for the diffusion-reaction equation in 2D. However, first we need to create the STL file itself from the binary staircase image of the porous medium. The semi-discretized system of equations is solved using one of the ODE solvers available in MATLAB. A tensor field $$S$$ can be used as an anisotropic metric to drive a diffusion PDE flow. As per my knowledge, the problem is with the extra term. At the current moment the QuickerSim CFD Toolbox for MATLAB® handles data only in pure-numbering format. MATLAB Resources; Download Course Materials; Course Meeting Times.
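The semi-discretization idea — the method of lines — can be sketched in a few lines: replace the spatial derivative by a finite-difference matrix and hand the resulting ODE system to a stiff MATLAB solver such as `ode15s`. The logistic reaction term and parameter values here are illustrative:

```matlab
% Method of lines for u_t = D*u_xx + u*(1 - u) (a reaction-diffusion model):
% discretize x with central differences, then integrate the ODEs with ode15s.
D = 0.01;  nx = 101;  x = linspace(0, 1, nx)';  dx = x(2) - x(1);

% Sparse 1D Laplacian with zero-flux (Neumann) ends via mirrored ghost points.
e = ones(nx,1);
Lap = spdiags([e -2*e e], -1:1, nx, nx);
Lap(1,2) = 2;  Lap(nx,nx-1) = 2;
Lap = Lap / dx^2;

rhs = @(t,u) D*(Lap*u) + u.*(1 - u);    % diffusion + logistic reaction
u0  = double(x < 0.2);                  % step initial condition
[t, U] = ode15s(rhs, [0 40], u0);
plot(x, U(end,:)); xlabel('x'); ylabel('u');   % a traveling front forms
```

Using a stiff solver matters here: the diffusion operator makes the ODE system stiff, and an explicit solver like `ode45` would be forced into tiny steps.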
@WolfgangBangerth I am reading Crank's book, "Mathematics of Diffusion", but I am not fully aware of the different solvers. @inproceedings{Schiesser2012PartialDE, title={Partial Differential Equation Analysis in Biomedical Engineering: Case Studies with Matlab}, author={W. Schiesser}}. These patterns are an addition to the texture synthesist's toolbox, a collection of tools that includes such procedural methods as Perlin's noise function and Gardner's sum-of-sine waves. Following is a PDE of diffusion type. Partial Differential Equation Toolbox™ provides functions for solving structural mechanics, heat transfer, and general partial differential equations (PDEs) using finite element analysis. "If it doesn't agree with experiment, it's wrong." — Richard Feynman. Introduction to Partial Differential Equations with MATLAB is a careful integration of traditional core topics with modern topics, taking full advantage of the computational power of MATLAB to enhance the learning experience. Heat Distribution in a Circular Cylindrical Rod: use Partial Differential Equation Toolbox™ and Simscape™ Driveline™. Question: I will try to explain my doubt as clearly as possible. I'm trying to solve the reaction-diffusion equation with the PDE Toolbox (MATLAB); the syntax to get the solution u is u1 = parabolic(u0,tlist,b,p,e,t,c,a,f,d), where parabolic produces the solution to the FEM formulation of the scalar parabolic PDE problem and c,a,f,d are the coefficients of the PDE. We proceed to solve this PDE using the method of separation of variables. Solve the heat equation with a temperature-dependent thermal conductivity. The Toolbox also provides data input and output tools for integration with other CFD and CAE software.
This video is a tutorial for using MATLAB and the PDE Toolbox in order to compute a numerical solution to the diffusion equation on a fairly simple, two-dimensional domain, with worked examples in Excel (kineticsInClass.xls) and in MATLAB. I would like to ask for help concerning the analysis of a reaction-diffusion system with MATLAB, with two coupled PDEs. Skip to content: MATLAB Answers. The Partial Differential Equation (PDE) Toolbox™. Returns block mono-, tri-, or penta-diagonal elements of the inverse of a symmetric square matrix. Finite-Difference Schemes for Reaction–Diffusion Equations Modeling Predator–Prey Interactions in MATLAB, by Marcus R. Garvie. As the term "reaction-diffusion" implies, the species react with each other and diffuse through the medium; diffusion causes the chemicals to spread out according to certain rules. MATLAB WORK 3: solve the following. Solve the heat equation with a temperature-dependent thermal conductivity. Also, if you check out COMSOL you will find how alike these two look. Question: convection-diffusion-reaction equation (stiffness, solver). I would like to create a domain in the PDE solver toolbox of MATLAB like the one attached in this message (picture). Stochastic Runge-Kutta algorithm. Generated from the MATLAB PDE Toolbox — Junbin Huang, Department of Mechanical Engineering, May 16, 2018. An inverse problem formulation for parameter estimation of a reaction-diffusion model for low-grade gliomas.
You can automatically generate meshes with triangular and tetrahedral elements. Finite-difference method using MATLAB. Solving elliptic PDEs with a nonlinear f coefficient: I am solving a steady-state reaction-diffusion problem containing 3 species in an irregular 2D shape using the PDE Toolbox. Introduction to partial differential equation integration in space and time. In physics, diffusion describes the macroscopic behavior of many micro-particles in Brownian motion, resulting from the random movements and collisions of the particles (see Fick's laws of diffusion). Modifying built-in functions and debugging. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. Two sufficient criteria are obtained for the exponential synchronisation of linearly coupled semi-linear diffusion partial differential equations (PDEs) with discrete, infinite distributed time-delays, by using the Halanay inequality and a Lyapunov-Krasovskii functional stability scheme. We shall also point towards some potential future perspectives of research. Reaction-diffusion model: need help with writing a central-difference formula to solve the PDE. This toolbox, called VFV, uses the Fast LIC method to produce texture and employs histogram determination methods to increase the contrast of output images.
This MATLAB toolbox is, however, no longer actively maintained and does not include the latest improvements. Orthogonal collocation on finite elements: learn more about orthogonal collocation on finite elements for a reaction-diffusion problem with the Partial Differential Equation Toolbox. So I've got a temperature-dependent capacity, but I need to solve the equation in a sinusoidal state, i.e. with a sine boundary condition. The first step in the FEA workflow is to define the geometry. Although various operating conditions may yield the same Kappa number, important fiber properties like strength are reaction-path dependent. The quadratic cross term accounts for the interactions between the species: as the term implies, they react with each other and diffuse through the medium, and diffusion causes the chemicals to spread out according to certain rules. The built-in and dedicated GUI makes it quick and easy to set up and solve complex computational fluid dynamics (CFD) simulation models directly in MATLAB. MATLAB mathematical toolbox documentation. Estimate a continuous-time grey-box model for heat diffusion. 2014/15 Numerical Methods for Partial Differential Equations; Parabolic Partial Differential Equations: Explicit Method — Example.
The governing equations for the application areas above can often be reduced to a form of classic and prototypical PDEs such as the Poisson, Laplace, wave, and convection-diffusion equations. We apply the method to the same problem solved with separation of variables. Remark on importing a mesh from GMSH. In order to make use of mathematical models, it is necessary to have solutions to the model equations. The diffusion along the chromatographic column is not important (advection dominates), but diffusion perpendicular to the gas flow is of interest. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer code (in a programming language such as C or MATLAB) to access it. Finite Element Toolbox for Solid Mechanics with MATLAB: introduction. In my problem, I have a defined function for the temperature T(t) (mixed ramps and constant values) that I don't need to solve, simply because it is imposed. Use Partial Differential Equation Toolbox™ and Simscape™ Driveline™ to simulate a brake pad moving around a disc and analyze temperatures when braking. Eight numerical methods are based on either Neumann or Dirichlet boundary conditions and nonuniform grid spacing in the x and y directions. Finally, the governing partial differential equations are solved using MATLAB.
It solves partial differential equations (PDEs) in the form shown below. CellSegm, the software presented in this work, is a MATLAB-based command-line software toolbox providing automated whole-cell segmentation of images showing surface-stained cells, acquired by fluorescence microscopy. You can picture the process of diffusion as a drop of dye spreading in a glass of water. How can I refine a subdomain in the PDE Toolbox mesh-generation tool? I didn't understand the details, since you mentioned "my f coefficient for subdomain 2 is a cosine". Analyze a 3-D axisymmetric model by using a 2-D model. Solve a heat equation that describes heat diffusion in a block with a rectangular cavity; for the programmatic workflow, see Heat Transfer in Block with Cavity. Fast and Efficient Speech Signal Classification with a Novel Nonlinear Transform — Dogaru, R.; Information Technology Convergence, 2007. A partial differential equation (PDE) is an equation involving functions and their partial derivatives; for example, the wave equation. I was trying to write MATLAB code for the entropy production rate with respect to a reference chemostat for a standard reaction-diffusion model (the Brusselator).
A reaction-diffusion system of Alan Turing and a cellular-automata model of the Belousov-Zhabotinsky reaction were chosen and implemented. The toolbox is an implementation of the algorithm described in: [see reference]. The setup of regions, boundary conditions, and equations is followed by the solution of the PDE with NDSolve. Unfortunately I don't have much time for taking courses at this moment. What Is the Partial Differential Equation Toolbox? An overview of the features, functions, and uses of the PDE Toolbox. This system consists of a well-insulated metal rod of length L and a heat-diffusion coefficient κ. Regression using optimization in Excel (regression.xls) and in MATLAB. While helpful, it does not answer my question: I'm facing some issues with the PDE Toolbox in MATLAB — indeed, I'm trying to solve the heat-diffusion equation in a plate of phase-change material. MATLAB PDE solver code; MATLAB multicomponent diffusion code; MATLAB codes for multicomponent diffusion and diffusion with reaction. Reference materials: Heat Regenerators: Design and Evaluation; Quick Design and Evaluation of Heat Regenerators; Sulfur Dioxide Adsorption on Metal Oxides.
A MATLAB tutorial for diffusion-convection-reaction equations using DGFEM (discontinuous Galerkin finite elements). A 2D unsteady convection-diffusion-reaction problem file. If I discretize a PDE in space with WENO and in time with an implicit method, do I need to solve a nonlinear algebraic system at each time step? Professor of Chemical Engineering, Feng Chia University, Taichung, Taiwan. The EqWorld website presents extensive information on solutions to various classes of ordinary differential equations, partial differential equations, integral equations, functional equations, and other mathematical equations. Papers/book publications. I'm trying to solve the reaction-diffusion equation with the PDE Toolbox (MATLAB) with non-constant coefficients; the syntax to get the solution u uses the parabolic solver. MATLAB CFD Simulation Toolbox. The present work presents a numerical analysis of a low-NOx partially premixed burner for a heavy-duty gas turbine. To model the release of acetylcholine, the MATLAB PDE Toolbox provides a graphical user interface (GUI) to construct and solve the model. In mathematics, diffusion is related to Markov processes, such as random walks, and is applied in many other fields, such as materials science. Lastly, we will study models with two independent variables described by partial differential equations, in particular reaction-diffusion equations.
I have Fick's diffusion equation that needs to be solved in the PDE Toolbox, and its result is then used in another differential equation to find the resulting parameter — can anyone help with this? Thanks for your attention. This paper describes a publicly available MATLAB toolbox called SpinDoctor that can be used 1) to solve the Bloch-Torrey partial differential equation in order to simulate the diffusion magnetic resonance imaging signal; 2) to solve a diffusion partial differential equation to obtain directly the apparent diffusion coefficient; 3) to compare the two. Adaptivity is essential for the efficient numerical solution of partial differential equations. MATLAB's numerical partial differential equation solver is pdepe. However, my system doesn't resemble the standard form used in pdepe. I'm not familiar with this PDE toolbox in MATLAB, but the software COMSOL Multiphysics was developed from this toolbox. The outer surface of the rod is exposed to the environment with a constant temperature of 100 °C. I'm trying to solve the diffusion PDE for my system, shown below: $$\frac{\partial C}{\partial t} = D \left(\frac{\partial^2 C}{\partial r^2} + \frac{1}{r} \frac{\partial C}{\partial r}\right)$$ where C is the concentration, changing with time t and radius r.
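The radial equation above is exactly the cylindrical case that `pdepe` handles when its symmetry parameter is set to m = 1 (the solver supplies the 1/r term itself). A minimal sketch, with made-up values for the diffusivity, radius, and boundary conditions:

```matlab
% Radial diffusion C_t = D*(C_rr + C_r/r) via pdepe with m = 1 (cylinder).
m = 1;                                   % 0 = slab, 1 = cylinder, 2 = sphere
D = 1e-2;  R = 1;
r = linspace(0, R, 51);
t = linspace(0, 5, 26);

pdefun = @(r,t,C,dCdr) deal(1, D*dCdr, 0);   % c = 1, flux f = D*C_r, s = 0
icfun  = @(r) 1;                             % uniform initial concentration
% Left BC at r = 0 is handled by pdepe's symmetry condition for m > 0;
% right BC imposes C = 0 at the outer radius.
bcfun  = @(rl,Cl,rr,Cr,t) deal(0, 1, Cr, 0);

sol = pdepe(m, pdefun, icfun, bcfun, r, t);
plot(r, sol(end,:,1)); xlabel('r'); ylabel('C');  % concentration draining out
```

Writing the flux as `D*dCdr` (rather than folding D into `c`) matches the conservative form pdepe expects and generalizes directly to a concentration-dependent D.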
Lastly, we will study models with two independent variables described by partial differential equations, in particular reaction-diffusion equations. A framework for scalable biophysics-based image analysis. %INITIAL1: MATLAB function M-file that specifies the initial condition for a PDE in time and one space dimension. The simplest PDE and the method of characteristics. MATLAB Applications in Chemical Engineering — a new book. Plot the temperature at the left end of the rod as a function of time. That can be useful either for simulations with moving boundaries or for cases where one uses optimization tools to arrive at the desired shape. The following MATLAB project contains the source code and MATLAB examples used for large sparse matrix inversion. The emphasis is on nonlinear PDEs. Partial Differential Equation Toolbox software is designed for both beginners and advanced users. Matlab Database > Partial Differential Equations: this program solves the problem and returns the reaction forces and deflection of each node; a GUI to solve it is provided. Here we look at using MATLAB to obtain such solutions and get results of design interest. We consider the following advection-diffusion-reaction PDE. A partial differential diffusion equation of the form $$\frac{\partial U}{\partial t} = \kappa \nabla^2 U.$$ As 2D (as well as 1D and 3D) convection-diffusion-reaction PDE equations are already pre-defined and easy to couple, you would only need to input your diffusion, convection, and source terms.
We present a software tool, the Diffusion Model Analysis Toolbox (DMAT), intended to make the Ratcliff diffusion model for reaction-time and accuracy data more accessible to experimental psychologists. You can solve PDEs by using the finite element method, and postprocess results to explore and analyze them. Burgers-equation notes (Junbin Huang, 2018): for an incompressible Newtonian fluid with constant viscosity; numerical dissipation/diffusion in 2D and 3D. Below are the commands that let you manipulate the mesh in the Toolbox. You can perform linear static analysis to compute deformation, stress, and strain. I have a system of two reaction-diffusion equations that I want to solve numerically (attached is the file). In addition, a diffusion effect genuinely exists in neural networks when electrons move in asymmetric electromagnetic fields. For modeling structural dynamics and vibration, the toolbox provides a direct time-integration solver. 1D heat equation with Dirichlet boundary conditions. Octave helps in solving linear and nonlinear problems numerically, and in performing other numerical experiments, using a language that is mostly compatible with MATLAB.
Address challenges with thermal management by analyzing the temperature distributions of components based on material properties, external heat sources, and internal heat generation, for steady-state and transient problems. The Generalised Adaptive Fast-Fourier Evolver (GAFFE) toolbox is a framework that greatly simplifies the solution of complex partial differential equations (PDEs) in an adaptive manner. Part One: Reaction-Diffusion. This section describes a class of patterns that are formed by reaction-diffusion systems. Here $$\nabla^2$$ is the Laplacian operator, which in Cartesian coordinates is $$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}.$$ Therefore the concentrations of U and V at any given location change with time and can differ from those at other locations. Periodic reaction-diffusion PDE solver in MATLAB. Advection-diffusion-reaction equations. Reaction-diffusion-simulator. The Toolbox also provides data input and output tools for integration with other CFD and CAE software.
Please don't provide a numerical solution, because this problem is a toy problem in numerical methods. The PDE that describes this interaction is given below, where D is the diffusion (migration) term and lambda the nonlinear (proliferation) term. The toolbox is called the Matlab Radial Basis Function Toolbox (MRBFT). This example shows how to estimate the heat conductivity and the heat-transfer coefficient of a continuous-time grey-box model for a heated-rod system. Solving a system of nonlinear equations using SOLVER in Excel (nonlinSys). Other versions of MATLAB have not been directly tested. Topics: reaction-diffusion, surface-modeling, gray-scott-model, finite-element-methods. In this paper we follow the discussion in Judd (1998) to construct a simple code that allows one to use the fixed-point homotopy (FPH) and the Newton homotopy (NH) to find the zeros of f. A script to run the 2D PDE simulations. Here is an example that uses superposition of error-function solutions: two step functions, properly positioned, can be summed to give a solution for a finite layer placed between two semi-infinite bodies. In the following, the mentioned approaches are reviewed briefly. Heat Transfer Problem with Temperature-Dependent Properties. The main repository for development is located at Octave Forge, and the packages share Octave's bug and patch tracker.
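The superposition of error-function steps can be written down directly: for an initial slab of unit concentration in |x| < a diffusing into an infinite medium, the classical solution (found, e.g., in Crank's treatment of the extended initial distribution) is the sum of two erf steps. The values of `D`, `a`, and `t` here are illustrative:

```matlab
% Analytic diffusion of an initial slab |x| < a into an infinite medium:
% C(x,t) = 1/2 * [ erf((a - x)/(2*sqrt(D*t))) + erf((a + x)/(2*sqrt(D*t))) ]
D = 1e-3;  a = 0.1;  t = 5;
x = linspace(-1, 1, 401);
C = 0.5*( erf((a - x)/(2*sqrt(D*t))) + erf((a + x)/(2*sqrt(D*t))) );
plot(x, C); xlabel('x'); ylabel('C');   % smoothed-out slab profile
```

Closed forms like this are exactly what you want for testing a numerical diffusion solver: run the solver from the same step initial condition and compare against `C` pointwise.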
To solve the Bloch-Torrey PDE to obtain the dMRI signal: the toolbox provides a way of robustly fitting the dMRI signal to obtain the fitted apparent diffusion coefficient. N_t = D * N_xx + lambda * N * (1 - N). I have checked the equations used for the Jacobian and the f vector a dozen times against the notes from class, so I'm 99% sure that's not the issue. Hello, I'm currently working on a project where I model pattern formation in a particular system. Finally, the governing partial differential equations are solved using MATLAB. The major aim of the project was to provide a set of basic tools. Thanks with all my heart. Reaction-Diffusion Analysis, Math 350 — Renato Feres. The reaction-diffusion system described here involves two generic chemical species U and V, whose concentrations at a given point in space are referred to by the variables u and v. Title: MATLAB Applications in Chemical Engineering; author: Chyi-Tsong Chen (陳奇中), Ph.D., Professor of Chemical Engineering, Feng Chia University, Taichung, Taiwan.
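For an equation of exactly that Fisher-KPP form, an implicit (backward-Euler) step leads to the Newton iteration whose residual vector f and Jacobian J are being debugged above. This is a generic sketch of that structure — not the poster's code — with made-up parameters and a simplified boundary treatment (the Laplacian rows at the ends are zeroed, so the end nodes evolve by reaction alone):

```matlab
% Backward-Euler + Newton for N_t = D*N_xx + lambda*N.*(1 - N).
% Jacobian = identity - dt*(D*Laplacian + diagonal reaction derivative).
D = 0.01;  lambda = 1;  nx = 101;  dx = 1/(nx-1);  dt = 0.1;
e = ones(nx,1);
Lap = spdiags([e -2*e e], -1:1, nx, nx)/dx^2;
Lap(1,:) = 0;  Lap(nx,:) = 0;           % simplified boundary rows
N = double(linspace(0,1,nx)' < 0.2);    % initial front

for step = 1:100
    Nold = N;
    for newt = 1:10                     % Newton iterations per time step
        f  = N - Nold - dt*(D*(Lap*N) + lambda*N.*(1 - N));
        J  = speye(nx) - dt*(D*Lap + lambda*spdiags(1 - 2*N, 0, nx, nx));
        dN = -J\f;                      % Newton update
        N  = N + dN;
        if norm(dN, inf) < 1e-10, break, end
    end
end
plot(N);                                % logistic front after 100 steps
```

The key consistency check when debugging: the reaction contribution to J must be the elementwise derivative of the reaction in f, here d/dN of lambda*N*(1-N) = lambda*(1-2N); a sign or factor mismatch between f and J shows up as slow or stalled Newton convergence.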
The Matlab PDE toolbox provides a user-friendly graphical interface for solving 2-D partial differential equations of functions of space and time. Such equations arise in electromagnetics (electrostatics, magnetostatics, quasi-statics), thermal diffusion, structural analysis, etc. 4 Functions of several variables. 1 with 20 elements. The code employs the sparse matrix facilities of MATLAB. Figure 1 From Solving Reaction Diffusion Equations 10 Times. We then derive the one-dimensional diffusion equation, which is a PDE for the diffusion of a dye in a pipe. The Generalised Adaptive Fast-Fourier Evolver (GAFFE) toolbox is a framework that greatly simplifies the solution of complex partial differential equations (PDEs) in an adaptive manner. Reaction-diffusion mechanisms have been used to explain pattern formation in developmental biology and in experimental chemical systems. To run the PDE Toolbox™ you can use a graphical user interface (GUI) called the PDE Modeler. 5 hours / session. Introduction: PDE Toolbox. Partial Differential Equation Toolbox™ provides functions for solving partial differential equations (PDEs) in 2-D, 3-D, and time using finite element analysis. Again, Kumar et al (2010) worked on the solution of reaction-diffusion equations by using the homotopy perturbation method. Unlike pdepe, which provides solutions to one-dimensional parabolic and elliptic type PDEs, the PDE toolbox allows for the solution of linear, two-dimensional PDEs.
Static methods in the class rbfx are used to implement functionality associated with RBF methods in general, while class methods are used to implement methods in subclasses. Michael Mascagni, Department of Computer Science: Probabilistic Approaches to Reaction-Diffusion Equations, where the interior configuration satisfies a PDE with boundary conditions. 5), which is the one-dimensional diffusion equation, in four independent variables. This paper describes a publicly available MATLAB toolbox called SpinDoctor that can be used 1) to solve the Bloch-Torrey partial differential equation in order to simulate the diffusion magnetic resonance imaging signal; 2) to solve a diffusion partial differential equation to obtain directly the apparent diffusion coefficient; 3) to compare. FEATool is designed to be able to perform complex MATLAB multiphysics … Flow Around a Cylinder: benchmark problem for stationary, laminar, and incompressible flow around a …. We present a software tool, the Diffusion Model Analysis Toolbox (DMAT), intended to make the Ratcliff diffusion model for reaction time and accuracy data more accessible to experimental psychologists. Turing pattern formation, one application of the reaction-diffusion equation, is usually a delay partial differential equation. Estimate Continuous-Time Grey-Box Model for Heat Diffusion. There are no well-documented and flexible PDE solvers in MATLAB either. Solving a nonlinear reaction-diffusion heat equation. Solving elliptic PDEs with nonlinear f coefficient: I am solving a steady-state reaction-diffusion problem containing 3 species in an irregular 2D shape using PDE toolbox (Matlab 2. Here, L is called a differential operator that works on the function u. You should check that your order of accuracy is 2 (evaluate by halving/doubling dx a few times and graph it).
2 Single PDE in Two Space Dimensions. For partial differential equations in two space dimensions, MATLAB has a GUI (graphical user interface) called PDE Toolbox, which allows four types of equations (the d in these equations is a parameter, not a differential): 1. From stress analysis to chemical reaction kinetics to stock option pricing, mathematical modeling of real world systems is dominated by partial differential equations. PDE Toolbox does not have an interface to specify periodic BCs. Solving PDEs in Python. This module deals with solutions to parabolic PDEs, exemplified by the diffusion (heat) equation. Given a spatial domain growth function, solve the following system of PDEs over the given time domain and spatial domain. The problems I have are: (1) I don't know how to incorporate it and write c, f, s for my system. The following Matlab project contains the source code and Matlab examples used for large sparse matrix inversion. It looks like PDE Toolbox is not able to solve the advection-diffusion problem? Also, for the diffusion problem, it is not able to define the 'Q' (volume source) as a function of 'c' (concentration)? 2. Garvie, Marcus R. (2007). Finite-Difference Schemes for Reaction–Diffusion Equations Modeling Predator–Prey Interactions in MATLAB. Peer-reviewed international journal. I am a new learner of MATLAB; knowing that the diffusion equation has a certain similarity with the heat equation, I don't know how to apply the method in my solution. Nonlinear partial differential equations have received attention in the past decade in the context of pattern formation and morphogenesis.
Extent of reaction, defined through the blow-line (exit) Kappa number, is the major performance measurement. This way, I'd end up with some bizarre, "single-line" meshes, which I'd have to stitch together to get the complete solution. Solve the heat equation with a temperature-dependent thermal conductivity.
https://proofwiki.org/wiki/Sum_of_Powers_of_Positive_Integers
# Sum of Powers of Positive Integers

It has been suggested that this page or section be merged into Faulhaber's Formula.

## Theorem

Let $n, p \in \Z_{>0}$ be (strictly) positive integers. Then:

$\displaystyle \sum_{k \mathop = 1}^n k^p = 1^p + 2^p + \cdots + n^p$

$\displaystyle = \frac {n^{p + 1} } {p + 1} + \sum_{k \mathop = 1}^p \frac {B_k \, p^{\underline {k - 1} } \, n^{p - k + 1} } {k!}$

$\displaystyle = \frac {n^{p + 1} } {p + 1} + \frac {B_1 \, n^p} {1!} + \frac {B_2 \, p \, n^{p - 1} } {2!} + \frac {B_4 \, p \left({p - 1}\right) \left({p - 2}\right) n^{p - 3} } {4!} + \cdots$

where:

$B_k$ are the Bernoulli numbers, here in the convention $B_1 = \dfrac 1 2$ (which is why the $B_1$ term appears with a positive sign)

$p^{\underline k}$ is the $k$th falling factorial of $p$.

Note that the expansion jumps from the $B_2$ term to the $B_4$ term because $B_k = 0$ for every odd $k \ge 3$.
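The closed form is easy to sanity-check numerically. The sketch below computes Bernoulli numbers with exact rational arithmetic, using a recurrence that yields the $B_1 = +\frac 1 2$ convention assumed in the theorem, and compares the formula against the brute-force sum:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(m):
    """Bernoulli numbers B_0..B_m in the B_1 = +1/2 convention,
    via the recurrence sum_{j=0}^{k} C(k,j) * B_j / (k - j + 1) = 1."""
    B = []
    for k in range(m + 1):
        B.append(Fraction(1) - sum(Fraction(comb(k, j), k - j + 1) * B[j]
                                   for j in range(k)))
    return B

def falling(p, k):
    """Falling factorial p^(underline k) = p*(p-1)*...*(p-k+1)."""
    out = 1
    for i in range(k):
        out *= p - i
    return out

def sum_powers(n, p):
    """1^p + 2^p + ... + n^p via the theorem's closed form."""
    B = bernoulli(p)
    total = Fraction(n ** (p + 1), p + 1)
    for k in range(1, p + 1):
        total += B[k] * falling(p, k - 1) * Fraction(n ** (p - k + 1), factorial(k))
    return total

# Agrees with the brute-force sum for a grid of small n and p.
assert all(sum_powers(n, p) == sum(k ** p for k in range(1, n + 1))
           for n in range(1, 30) for p in range(1, 8))
```

For instance, `sum_powers(10, 3)` reduces to $\frac{10^4}{4} + \frac{10^3}{2} + \frac{3 \cdot 10^2}{12} = 2500 + 500 + 25 = 3025$, the familiar value of $(1 + 2 + \cdots + 10)^2$.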
http://gemethnes.sns.it/paper/66/
Tensorization of Cheeger energies, the space $H^{1,1}$ and the area formula for graphs Ambrosio Luigi - Pinamonti Andrea - Speight Gareth accepted year: 2014 abstract: First we study in detail the tensorization properties of weak gradients in metric measure spaces $(X,d,m)$. Then, we compare potentially different notions of Sobolev space $H^{1,1}(X,d,m)$ and of weak gradient with exponent 1. Eventually we apply these results to compare the area functional $\int\sqrt{1+|\nabla f|_w^2}\,dm$ with the perimeter of the subgraph of $f$, in the same spirit as the classical theory.
http://math.stackexchange.com/questions/877861/in-cantors-diagonalization-argument-why-are-you-allowed-to-assume-you-have-a-b
# In Cantor's Diagonalization Argument, why are you allowed to assume you have a bijection from naturals to rationals but not from naturals to reals? Firstly I'm not saying that I don't believe in Cantor's diagonalization arguments, I know that there is a deficiency in my knowledge so I'm asking this question to patch those gaps in my understanding. From my understanding of Cantor's Diagonalization argument, if you apply diagonalization to a mapping from one set of numbers to another, you will always obtain a number that is not in the mapping. So this works to prove that the reals aren't countable because if you have a mapping from the naturals to the reals then you can use diagonalization to obtain a number that's not in the mapping, and this number is a real obviously, so the mapping isn't a surjection. We're not allowed to assume that the mapping from the naturals to the reals is a bijection to begin with. But when people explain why the diagonalization process doesn't produce a rational from a mapping from naturals to rationals we are allowed to assume that the mapping is a bijection to begin with? In the questions asked here: Why does Cantor's diagonal argument not work for rational numbers? To be precise, the procedure does not let you guarantee that the number you obtain has a periodic decimal expansion (that is, that it is a rational number), and so you are unable to show that the "diagonal number" is a rational that was not in the original list. In fact, if your original list is given explicitly by some bijection, then one is able to show just as explicitly that the number you obtain is not a rational. Why are we allowed to assume that the original list is a bijection? Is there some way to prove that the mapping from the naturals to the rationals is a bijection that is not susceptible to diagonalization? 
If we can assume that the mapping from naturals to rationals is an undiagonalizable bijection why can't we do the same for the mapping from naturals to reals? - Your sentence starting, “From my understanding of…” is wrong. Whether the process of enumeration (not diagonalization) fails or succeeds depends on the two sets involved. –  Lubin Jul 25 '14 at 13:31 To be clear, I was talking about infinite sets. Can it fail for infinite sets? –  guest Jul 25 '14 at 13:33 You are really asking two questions, perhaps without realizing it. First, how do we know a bijection between naturals and rationals exists? (This is possible to show pretty constructively, by enumerating the ordered pairs of natural numbers and considering the (positive) rationals as ratios of such pairs.) The second question is why Cantor's diagonalization argument doesn't apply, and you've already identified the explanation: the diagonal construction will not produce a periodic decimal expansion (i.e. rational number), so there's no contradiction. It gives a nonrational, not on the list. –  hardmath Jul 25 '14 at 13:33 See this post for a similar discussion. –  Mauro ALLEGRANZA Jul 25 '14 at 13:39 Here's another relevant discussion: Should a Cantor diagonal argument on a list of all rationals always produce an irrational number?. I think my answer there may be helpful. –  MJD Jul 25 '14 at 15:11 When you say "we're not allowed to assume that the mapping from the naturals to the reals is a bijection to begin with", what you're referencing is the nature of the proof by contradiction; we did assume that the mapping was a bijection, and we derived a contradiction by producing a number that was missed by the map. Hence, we proved that no such bijection can possibly exist. In the strictest sense, you're "allowed" to assume a bijection between the naturals and the reals; you'll just find that you can derive a contradiction from that assumption via Cantor's diagonalization argument. 
Similarly, you might try and take the same approach of assuming there is a bijection between the natural numbers and the rational numbers. You could try and apply Cantor's diagonalization argument to prove that it can't be surjective, but as your quoted answer explains, this doesn't work. Moreover, a bijection between the natural numbers and rational numbers can, in fact, be constructed. This means that, try as you might, if you do everything correctly, you'll never be able to derive a contradiction from this assumption. - Okay, so you can't construct a bijection from naturals to reals due to the diagonalization argument. But the proof of a bijection from naturals to rationals is independent of diagonalization, right? –  guest Jul 25 '14 at 14:16 @guest: Correct. –  Asaf Karagila Jul 25 '14 at 14:24 Well you may be able to prove there is no bijection, and then you could apply xkcd.com/816 . –  PyRulez Jul 26 '14 at 0:26 $${p\over q} \mapsto 2^{p+|p|}3^{|p|}5^{q}$$ for $p\over q$ any rational in reduced form, with $q>0$, gives an injection from the rationals to the natural numbers. Since all infinite sets of integers have the same cardinality, we are done. - Or even $2^p3^q$ for $p\ge0$ and $2^{-p}3^q5$ for $p<0$, if you're an integer conservationist. –  Charles Jul 25 '14 at 13:58 It's not clear to me why you're allowed to use the latter fact when proving this theorem. –  djechlin Jul 25 '14 at 15:30 @djechlin This was just dealing with a bijection $\mathbb Q \to \mathbb N$. The other answers address the confusion with the use of proof by contradiction to show there is no bijection $\mathbb R \to \mathbb N$. –  zibadawa timmy Jul 25 '14 at 15:49 @djechlin There's no need to assume that every infinite set of integers has the same cardinality. $\mathbb{N}\subset\mathbb{Q}$, so $|\mathbb{N}|\leq|\mathbb{Q}|$; the injection in the answer shows that $|\mathbb{Q}|\leq|\mathbb{N}|$ and we're done.
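The injection given in this answer can be checked mechanically; a small sketch (the sample range is arbitrary, and `fractions.Fraction` does the reduction to lowest terms with positive denominator for us):

```python
from fractions import Fraction

def encode(q):
    """The answer's injection Q -> N:
    p/q in reduced form with q > 0 maps to 2^(p+|p|) * 3^|p| * 5^q."""
    p, d = q.numerator, q.denominator
    return 2 ** (p + abs(p)) * 3 ** abs(p) * 5 ** d

# No two distinct reduced fractions in this sample share a code.
sample = {Fraction(a, b) for a in range(-20, 21) for b in range(1, 21)}
assert len({encode(q) for q in sample}) == len(sample)
```

The exponent of 2 vanishes exactly when $p \le 0$, so the code recovers the sign of $p$, and unique factorization then recovers $|p|$ and $q$, which is why no collisions occur.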
–  David Richerby Jul 25 '14 at 23:59 As I see it, the core of the problem here is to understand what exactly the diagonal argument shows and what can be concluded from it. Let $S\subseteq\mathbb R$ be any subset of real numbers.

1. Assume we have found an enumeration of all the elements of $S$ using the natural numbers
2. Apply Cantor's diagonal argument to construct a number $x$ not in the list

Now, the assumption in 1. may be true or not true. If it is false we may find a contradiction in 2. Regarding 2., it will always be possible, since the diagonal argument is constructive and always works. BUT all we know is that the constructed diagonal number $x$ is some real number that is not in the list. The construction itself reveals nothing about whether $x$ belongs to $S$ or not. We can only arrive at a contradiction if we also know (or show) that $x$ belongs to $S$. But since all we know about $x$ is that it belongs to $\mathbb R$, the contradiction will only work for $S=\mathbb R$ and be inconclusive for other sets $S\subseteq \mathbb R$ unless some additional arguments are added. The latter is the case for the rationals. Other arguments provide a bijection between the naturals and the rationals, but that is an independent and different story ... -
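The point that diagonalization always yields *some* real not on the list, with nothing forcing it into $S$, can be made concrete. The sketch below enumerates the positive rationals with the Calkin–Wilf sequence and builds the diagonal digits; the 5-or-6 digit rule is one standard way to dodge the 0.999… ambiguity, and the cutoff of 12 terms is arbitrary:

```python
from fractions import Fraction
from itertools import islice

def rationals():
    """Calkin-Wilf sequence: every positive rational appears exactly once."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

def digit(x, i):
    """i-th decimal digit (i >= 1) after the point of the positive rational x."""
    return (x.numerator * 10 ** i // x.denominator) % 10

listed = list(islice(rationals(), 12))
# Diagonal rule: make digit i differ from the i-th digit of the i-th number.
diag = [5 if digit(q, i + 1) != 5 else 6 for i, q in enumerate(listed)]
assert all(diag[i] != digit(q, i + 1) for i, q in enumerate(listed))
```

The number 0.d1d2… built from `diag` differs from every listed rational in some decimal place, but nothing in the construction makes it rational, so no contradiction arises for $S = \mathbb Q$, exactly as the answer says.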
https://brilliant.org/problems/current-2/
Current? A ring with radius $$a$$, linear charge density $$\lambda$$ and mass $$m$$ is fixed at its center. An infinite plane passing through the ring, perpendicular to its plane, is a conductor, but it doesn't touch any point of the ring. At a distance $$L$$ from the ring's axis, such that $$L>>a$$, and distance $$x$$ from the plane, such that $$x<<1$$, there is a straight line with linear charge density $$\Lambda$$, parallel to the ring's axis. The value of $$\Lambda$$ that generates a current $$i$$ in the ring can be written as: $\dfrac{\alpha mL\epsilon_{0}i}{a\lambda^2 \cdot t}$ Where $$\epsilon_{0}$$ is the vacuum permittivity and $$t$$ is time; find the value of $$\alpha$$. For more problems, look at my Own Problems
https://securityamericamortgage.com/ufaq/what-is-the-baseline-loan-limit-value/
# What is the baseline loan limit value?

The baseline loan limit is the highest loan amount for an acquisition in a particular year. This limit restricts the size of loan originations (but not the price of homes) across the nation except in a small number of high-cost or statutory areas. The value varies across two dimensions: counties and the number of property units. Local loan limits are defined on a county-by-county basis, with most counties (around 95 percent) assigned the baseline loan limit. The baseline limits, however, can increase for properties that have more than one (but fewer than five) units. HERA also defines "high-cost" area loan limits that can be as much as 150 percent of the baseline value. Loan limits are allowed to exceed the baseline value in areas with more expensive housing markets. Specifically, they are set at 115 percent of the highest county median home price in the local area, as long as that amount does not exceed the ceiling. Local areas follow the definitions of core-based statistical areas (CBSAs), which means they can be both metropolitan and micropolitan statistical areas. As an example, loan limits for 2022 Enterprise acquisitions were established in 2021Q3, with the baseline loan limit being \$647,200 for a one-unit property. According to data released by the U.S. Department of Housing and Urban Development, as of 2021Q3 the median home value in El Dorado County, California was \$587,000. Multiplying this area median value by 115% yields a loan limit of \$675,050, which is above the baseline limit but below the high-cost ceiling of \$970,800. As a result, the loan limit for El Dorado County was set equal to \$675,050. Loan limits are also higher in certain statutorily-designated areas like Alaska, Hawaii, Guam, and the U.S. Virgin Islands. The limits in these statutorily-designated areas cannot exceed 150 percent of the high-cost ceiling value. Lookup tables are provided on the CLLs page at https://www.fhfa.gov/CLLs.
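The rule described above amounts to clamping 115 percent of the local median home price between the baseline and the 150-percent ceiling. A sketch of that arithmetic, using the 2022 one-unit values from the El Dorado County example (the function name is ours, and integer dollars are assumed):

```python
def county_loan_limit(median_home_price, baseline):
    """High-cost area rule sketched from the text: 115% of the area median,
    floored at the baseline and capped at the ceiling of 150% of the baseline."""
    ceiling = baseline * 3 // 2
    return min(max(baseline, median_home_price * 115 // 100), ceiling)

# El Dorado County, 2021Q3: 115% of $587,000 is $675,050, which sits between
# the $647,200 baseline and the $970,800 ceiling, so it becomes the limit.
assert county_loan_limit(587_000, baseline=647_200) == 675_050
```

A county whose median is low enough that 115 percent of it falls below the baseline simply gets the baseline, and a very expensive county gets the ceiling.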
https://cs.stackexchange.com/questions/38244/how-can-weighted-maxsat-be-in-fpnp-when-dealing-with-large-weights
# How can Weighted MaxSAT be in $FP^{NP}$ when dealing with large weights?

Weighted MaxSAT is in $\mathrm{FP^{NP}}$, see [1] Theorem 17.4, i.e. Weighted MaxSAT can be solved with at most a polynomial number of calls to a SAT oracle. The proof in [1] makes use of binary search with a search space from weight $0$ to the sum of all weights. But I am struggling with the fact that the weights could be very large. For example: Let $\{c_1, ..., c_m\}$ be a set of clauses where some clause $c_i$ has weight $w_i = 2^{2^{m}}$. Then binary search will take $\log_2(2^{2^{m}}) = 2^m$ steps in the worst case, which is exponential in the input length $m$. What am I missing? Reference: [1] Papadimitriou - Computational Complexity (1994) • How many bits does it take to represent a number like $2^{2^m}$? – Tom van der Zanden Feb 11 '15 at 11:37 • @TomvanderZanden $2^m$ - got it :-) Thank you. – John Threepwood Feb 11 '15 at 12:31 • I didn't think it was worthwhile enough to turn into an answer, but on second thought a question should not remain unanswered. – Tom van der Zanden Feb 18 '15 at 11:52 The input length is not $m$. To represent an integer like $2^{2^m}$, one needs $\log_2{2^{2^m}}=2^m$ bits. Hence an algorithm taking $2^m$ steps is considered linear in the input length, which is $2^m$. A number like $2^{2^m}$ can be represented by just giving $m$, but it is usually assumed that numbers are given in binary. And the result that MAX-SAT is in $FP^{NP}$ only holds when that encoding is used.
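The resolution in the comments can be seen directly: the number of halving steps over $[0, W]$ equals the bit-length of $W$, so the $2^m$ steps taken on $W = 2^{2^m}$ are linear in the size of the binary encoding. A small sketch, where the oracle is faked as always answering "go lower" (the worst case for this search):

```python
def halving_steps(total_weight):
    """Steps binary search needs over the range [0, total_weight]
    when every query sends it downward (worst case)."""
    steps, lo, hi = 0, 0, total_weight
    while lo < hi:
        hi = (lo + hi) // 2   # a real solver would query the SAT oracle here
        steps += 1
    return steps

for m in range(1, 5):
    W = 2 ** (2 ** m)
    # 2^m + 1 steps, which is exactly the number of bits needed to write W.
    assert halving_steps(W) == W.bit_length() == 2 ** m + 1
```

Measured against the input length $m$ alone the step count looks exponential, which is precisely the question's confusion; measured against the binary encoding of the weights it is linear.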
https://www.biostars.org/p/205261/
collecting ultraconserved elements from RNA-seq data 2 1 Entering edit mode 4.9 years ago Farbod ★ 3.3k Dear Friends, Hi. (I'm not a native English speaker, so be ready for some possible language flaws.) As you know, there are about 481 ultra-conserved elements that are very similar among human, rat, mouse and some fishes (http://www.ncbi.nlm.nih.gov/pubmed/15131266 ). I have the RNA-seq data of a non-model vertebrate (so there is no reference genome for it) and a de novo transcriptome assembly of it, and I want to check 1) first, whether these ultra-conserved 200 bp elements exist in this species too, and 2) secondly, what the percentage of similarity of the related sequence of my species is to the sequence of each element in human. Can the RNA-seq data (and de novo assembly) be used for these purposes? How? RNA-Seq sequence genome • 1.2k views 1 Entering edit mode You would not be able to get the ones that are located in the intron (and a cursory glance at the paper seems to show some). You can align your data to genomes of those species but be ready for a lot of leg work. It depends on what you want to achieve (you will find something since the original paper does say that they found conservation to some extent with fish) and if you have the time to invest in this. 1 Entering edit mode Hi Dear genomax2, I want to check their existence in my species and the degree of their conservation. 1 Entering edit mode I am not being rude but a valid question is then what will you do next? We know that based on the paper there are going to be some but their significance would be harder to tackle in your genome since all you have to go on is the transcriptome. When you extract the sequences from that paper remember that the coordinates are going to be from 2004. The ECR browser may be a better option if you can get at the data directly without having to deal with that browser UI.
If you click on the ECR link at the top left you can get a new window where you will get the sequences of the ECRs. Base Genome allows you to select the genome you are looking at. 1 Entering edit mode Dear genomax2, maybe I did not get the point correctly, but first I want to check the existence of these genes in my species, as it is much older from an evolutionary perspective than the other fishes and vertebrates that have been used in this paper (and other similar papers). Then, if I find some exact matches, as I am using the transcript data, they are the ultra-conserved genes (I will miss the introns, and maybe they are not so important for me). If I find some similar ones, but with some SNPs or mismatches, then maybe I can investigate the cause of such mismatches. Is that what you asked in "a valid question is then what will you do?"? Correct me if my assumption is not correct, please. 1 Entering edit mode All that is great. Sounds like you have funding to do some basic research without having to justify the end first :-) This page lists the conserved elements from the paper you originally linked above. They are referring to the hg16 genome build, but you should be able to get the sequence and lift it over to the current assembly, if needed. 1 Entering edit mode Dear genomax2, 1 Entering edit mode Dear genomax2, Hi, I have used the sequences you have provided, from here, and blastn-ed them against my transcriptome assembly; it has about 81 hits (from 481 ultra-conserved elements), and interestingly there are about 19 "non-exonic" elements among them! Do you have any explanation in this regard? Thanks 1 Entering edit mode How was your RNA-seq performed? Ribodepletion or poly-A selection? If the former, it might be that some nascent RNA (unprocessed/unspliced) is present, containing introns. Alternatively, it's also possible that elements which were annotated as "non-exonic" actually are coding but not properly annotated as such.
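Counts like the "81 hits from 481 UCEs" above can be pulled out of tabular blastn output. A sketch, assuming the search was run with `-outfmt 6` into a tab-separated file (the filename and the E-value cutoff here are hypothetical, not from the thread):

```python
import csv

def distinct_query_hits(blast_tsv, evalue_cutoff=1e-10):
    """Distinct query IDs (UCEs) with at least one hit at or below the
    E-value cutoff, parsed from blastn -outfmt 6 (E-value is column 11)."""
    with open(blast_tsv) as handle:
        return {row[0] for row in csv.reader(handle, delimiter="\t")
                if float(row[10]) <= evalue_cutoff}

# Hypothetical usage:
# hits = distinct_query_hits("uce_vs_transcriptome.tsv")
# print(len(hits), "of 481 UCEs have a transcriptome match")
```

Counting distinct query IDs rather than raw rows matters because a single UCE can hit several Trinity isoforms, as in the thread's TRINITY_DN76988 example.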
You should have a look at what's there; perhaps long non-coding RNAs. 1 Entering edit mode Dear WouterDeCoster, Hi (nice picture you have for your profile!) It was from an Illumina HiSeq 2000 about 3 years ago, and I think it was "poly-A selection". Can you suggest any websites where I can check my selected transcriptome sequences in this regard? This is one of my sequences that shows a hit with non-exonic UCEs. >TRINITY_DN76988 GAAAAGTCCAGTCCTCCTAGCTTCAGAAAATCTATTTTTCCCATTTTAATACCCCGCGTA ACAGTCTTCATAATTCATTCGAGTGTGTTAAGCGTAGTTTTATTAGATCTGAAACAAATT TTGGTGGGAGATCCTATAGGTCATTAACCATGGAGTAATTTTATCCTTGTTTCCCTAATG ATGCCATAATGGCGAGTGAATTTCTTAACTAAAGACCAAAGAACATTTTGAAGGTCAGCT TCATCTGCAAGCTCCTTCAAGCGCTTCTCAGAGAGATTGGAAAAGTCGGAGATTTTTGAA GAGTCATTAATAACGTTAAAGCTGAAAGCCTATTTGCGTTCTCGCTTTCTACCTTTTAAT TTCATACTCTTTTTTTCACTTTCTCTCTCCTTCCTTCTCCTCTGTTCTGCAGTTGCCCTC ATGCAGAAAGAATGGAGTGCCGAGCGGGAGGGCAAAAATGGCAGCGTAGTGACATACAGA TCCCAGATGTGATGCTGCAATAATTTAAATTTTATGCCTTTGTTATCACTTTAATCATTT TCTTTATTCGTTTTGTTTCAGCGATCAGAGAGAGACACCTGATAGGGCGAAATACCAGGG GAACAATTTTTATTTGGAATGTGGAATCTACTTCCCCATTGGCTTGTCTCTCGCTGTAAT TGAAAAAATAAGATAGA 1 Entering edit mode I didn't really need feedback on my profile pic, but thanks, I guess. I checked the fragment you provided in the mouse and human genome and there is no trace of anything coding. But there are indeed blocks of conserved sequences. Perhaps you are onto something new ;) Was your RNA treated with DNase or is genomic contamination possible? It's important to know the background of the library prep if you are working on the data years later. 1 Entering edit mode "Perhaps you are onto something new ;)" was very valuable for me! Yes, we used DNase treatment, and the "TruSeq kit" was used for cDNA library construction. And this is the e-value of the blast hit: 1.14e-136 1 Entering edit mode What was non-exonic in the hg16 build could have changed since (did you check in the current assembly whether those sequences are still non-exonic)? 1 Entering edit mode Hi, no!
I only have the sequence of the UCE from the link you provided (the original seq) and the sequence from my fish transcriptome that shows blastn hits with the UCE sequences, and I do not know how to check whether the old UCE seq has become exonic in the new human genome. This is the original UCE seq for the non-exonic example I provided previously:

> uc.294+
CGAGATGAAATTGAGACATGGAAGAATTTATTGCCCAGAAAATTCCATTCTGCTATCTGATTCAAAAAGTCCAGTCCCCCCAGCTTCGGAAAATCTATTTTCCACATTTTAATACCCTGCAGAACAGTCCTCATAACTCATCCGAGTGTGTTAAGCACAGTTTTATTAGATCTGAAACAAATTTTGGTGGGGAGATACTATAGGTCATTAACCATGGAGTAATTTTATCCTTGTTTCCCTAATGATGCCATAATGGCGAGCGAATTTCTTAACTAAAGACCAAAGAACATTTTGAAGGTCAGTTTCATCTGTGAGCTCCTTCAAGCGCTTCTCAGAGAAGATTGGAAAACTCGCCGATTTTTTGAAGAGTCATTAATAATGTGAAAGCTGAAAGCACCCTCCATTTGCGTTCCTGCTTTTTACCTTTTAATTTTATATCGTCCC

If you check it, please kindly teach me, too. Thanks

1 Entering edit mode This UC still appears to be non-exonic (intergenic) and highly conserved in many things (including zebrafish). I am not sure if you can see this link. It may last for a few days. The reason I brought that up was that the sequence you posted above had a Trinity ID, and I thought you had pulled a sequence out of your transcriptome using a non-exonic human sequence. Your data could have some trace contamination of DNA. If you ever aligned your own data to zebrafish, you may be able to see the reads that hit this UCE.

1 Entering edit mode Hi genomax2, I have posted both my "Trinity transcriptome sequence" and the "UCE original sequence" of the non-exonic element for you. I have not aligned my data to the zebrafish genome yet, but I think it is possible to just align this "TRINITY_DN76988" sequence to the zebrafish genome and check what is what. Am I right?

1 Entering edit mode Here is the alignment of the TRINITY piece to the zebrafish genome. It is in an intergenic/non-exonic region. Zoom out to get a broader view. This link has both the UCE and the Trinity piece. The hits overlap.
1 Entering edit mode Based on that it's intronic; it's not impossible that it's an alternative exon. Only lab work can tell us what is really going on.

1 Entering edit mode I really appreciate all the time and effort you have spent for me :) So, there is a sequence that is non-exonic (it is intronic) in human and zebrafish, BUT it is present in my RNA-seq assembly (so it is an expressed mRNA = transcript). What hypothesis can we offer in this regard (without lab work, of course)? (My species is evolutionarily much older than zebrafish.)

1 Entering edit mode Could be genomic contamination, could be a gene that was lost in evolution, could be an alternative exon very rarely present or only in a specific tissue type.

1 Entering edit mode What do you mean by genomic contamination? 1- contamination by the DNA of the fish individuals themselves? 2- or genomic contamination from the humans who prepared the samples and libraries?

0 Entering edit mode There may still be a bit of DNA contamination (hopefully from your fish and not humans) left in the RNA prep that went into the library. This is where you can go back to your alignments of the original reads to the transcriptome you built and check how many reads support/align to this TRINITY transcript. If there are a lot then ...

1 Entering edit mode They are also present in Fugu and minke whale (to some extent). Unless the UCE has a known function this is an observation (without any specific hypothesis).

1 Entering edit mode Yes! I guess that these ultra-conserved elements are conserved in all vertebrates (and maybe invertebrates), so they are also present in Fugu and minke whale. And this situation, that a non-exonic element is present in the transcriptomic data, has nothing important in it?
1 Entering edit mode 4.9 years ago Your url contains the bracket and gives a problem :p (but that should be easy to solve). Based on the abstract: These ultraconserved elements of the human genome are most often located either overlapping exons in genes involved in RNA processing or in introns or nearby genes involved in the regulation of transcription and development. Your RNA-seq data will not contain data about introns, nor about elements 'nearby' genes. Obviously only coding elements can be found; furthermore, these need to be expressed in the tissue you sequenced. With regard to the 'how', the easiest would probably be to get these conserved sequences and map them to your assembly (or map your reads to the elements).

1 Entering edit mode Dear WouterDeCoster, Hi and thanks. 1- What is your idea about collecting the sequences of these elements from the "NCBI Nucleotide" section, creating a file containing those sequences, then making my de novo assembly a blastable database and performing a "blastn" of the collection file against my transcriptomes? Is this strategy OK? Or is some database other than NCBI Nucleotide preferred? 2- If I find some blast hits, and so similarity, what is the best way to check the percentage of similarity of the two sequences? (I usually use the NCBI "Align two sequences" section.)

1 Entering edit mode Your strategy sounds okay to me; you can always give it a try and check if the results are reasonable. But given the limitations of only having RNA-seq, your analysis is already incomplete from the beginning. With regard to your second question, you need to have a look at how similar 'ultraconserved' should be, perhaps how it is defined in the paper, or what you find reasonable. You can probably calculate a distance between your sequence and the consensus element, e.g. a hamming distance; I'm not sure what's appropriate.

1 Entering edit mode 4.9 years ago YaGalbi ★ 1.5k Have you tried the genome alignment tool in the ECR browser?
https://ecrbrowser.dcode.org/ Instructions here: https://ecrbrowser.dcode.org/ecrInstructions/ecrInstructions.html (CTRL F: genome allignment)

1 Entering edit mode Hi, There is no genome for my species yet.

1 Entering edit mode Not sure who designed the UI for this browser, but it is confusing. Go here and then click on Genome Alignment (at top right). You are able to paste your own sequence in to search against the selected genome. This would of course not be the way to do it with a few hundred candidates!

1 Entering edit mode I think first I must collect some of my transcripts that have hits using blastn, right?

1 Entering edit mode Or get the ECRs from the browser (I don't see any way to bulk download them). Instructions are in my post above, or in the help page here. Find "Looking closer at ECR's".
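The hamming-distance suggestion above is easy to sketch. The helper names below are mine, not from the thread, and a plain hamming distance is only meaningful once blastn has already paired up equal-length aligned regions:

```python
def hamming_distance(seq_a: str, seq_b: str) -> int:
    # Number of mismatching positions between two equal-length sequences.
    if len(seq_a) != len(seq_b):
        raise ValueError("hamming distance requires equal-length sequences")
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)


def percent_identity(seq_a: str, seq_b: str) -> float:
    # Percent identity over an already-aligned region.
    return 100.0 * (len(seq_a) - hamming_distance(seq_a, seq_b)) / len(seq_a)


print(percent_identity("ACGT", "ACGA"))  # 75.0
```

For a few hundred candidates, running a local blastn against the assembly and feeding the aligned pairs into something like this is more practical than pasting sequences into web forms one at a time.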
https://zx31415.wordpress.com/category/%E5%AD%A6%E6%95%B0/agx3/%E5%8D%95%E5%80%BC%E5%8C%96%E5%AE%9A%E7%90%86/
# From Modular Functions to the Uniformization Theorem

References

Wikipedia: in the Internet age, it gives everyone the chance to become a half-baked expert.

Ahlfors, Complex Analysis: An Introduction to the Theory of Analytic Functions of One Complex Variable

Shafarevich, Basic Algebraic Geometry, Vol. 2

Siegel, Topics in Complex Function Theory

Serre, A Course in Arithmetic

Gunning, Lectures on Modular Forms

The generalization of Montel's normality criterion to meromorphic functions is often called Marty's normality criterion.

Dubrovin, Fomenko, Novikov, Modern Geometry: Methods and Applications, Vol. 2

For a differential-geometric proof of Picard's theorem, see Li Zhong's book, or Gong Sheng, Concise Complex Analysis. Poincaré's counterexample to a higher-dimensional Riemann mapping theorem can also be found in the latter book.

Hadamard, The Psychology of Invention in the Mathematical Field

http://mathworld.wolfram.com/JacobiThetaFunctions.html

Mumford, Tata Lectures on Theta

For the proof that 2-dimensional topological manifolds admit differentiable structures, see Hirsch, Differential Topology. For the proof that 2-dimensional differentiable manifolds admit local complex structures, see Hicks, Notes on Differential Geometry.

Serre's book is short and incisive, and devoted to the modular group; since it is a textbook for second-year French undergraduates, the tools it uses are relatively elementary. Gunning's book treats congruence subgroups in general; its definition of the complex structure on the fundamental domain and its computation, via the Riemann-Roch theorem, of the dimensions of the corresponding function spaces are the most important complements to Serre.

# From Modular Functions to the Uniformization Theorem Ⅶ

Prologue: …Here was a man who could work out modular equations and theorems… to orders unheard of, whose mastery of continued fractions was… beyond that of any mathematician in the world, who had found for himself the functional equation of the zeta function and the dominant terms of many of the most famous problems in the analytic theory of numbers; and yet he had never heard of a doubly periodic function or of Cauchy’s theorem, and had indeed but the vaguest idea of what a function of a complex variable was…

——G.H.Hardy

S. Ramanujan (1887-1920)

(Jacobi) $\Delta=(2\pi)^{12}q \prod (1-q^{n})^{24}$, $q=e^{2\pi iz}$

$\Delta=\sum \tau(n)q^{n}$, where $\tau(n)$ is called the Ramanujan function.

In 1916, after extensive numerical computation, Ramanujan conjectured:

a) the Dirichlet L-function $\sum\tau (n)n^{-s}$ can be written as the Euler product $\prod (1-\tau (p)p^{-s}+p^{11-2s})^{-1}$;

b) for every prime p, $|\tau(p)|<2p^{\frac{11}{2}}$;

c) for every prime p, $\tau(p) \equiv 1+p^{11} \pmod{691}$.

The Ramanujan conjectures gave the first example in history of a Dirichlet L-function defined by a modular form. On the other hand, with the language of "places" from algebraic geometry, one can define L-functions for particular algebraic varieties. For elliptic curves over the rationals, establishing the "elliptic curve-modular form" correspondence through L-functions (the Taniyama-Shimura conjecture) was the key to Wiles's proof of Fermat's Last Theorem.

c) was proved by Ramanujan himself. A great many similar congruences appear in the theory of $l$-adic representations; without modular forms they would usually be very hard to discover and prove.
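Conjecture c) is easy to probe numerically. The sketch below is my own illustration, not part of the original text: it expands $q\prod(1-q^{n})^{24}$ as a power series to read off $\tau(n)$ and checks the congruence modulo 691 for small primes.

```python
def tau_coeffs(N):
    # Coefficients of q * prod_{n>=1} (1 - q^n)^24 up to q^N, i.e. tau(1), ..., tau(N).
    poly = [0] * N
    poly[0] = 1
    for n in range(1, N):              # truncating the product at n < N suffices
        for _ in range(24):            # multiply by (1 - q^n) twenty-four times
            for i in range(N - 1, n - 1, -1):
                poly[i] -= poly[i - n]
    return {k + 1: c for k, c in enumerate(poly)}  # shift by the leading factor q


tau = tau_coeffs(12)
print(tau[2], tau[3])  # -24 252
for p in (2, 3, 5, 7, 11):
    assert (tau[p] - 1 - p**11) % 691 == 0   # Ramanujan's congruence c)
```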
a) is in fact a compact expression of the functional equation of the Ramanujan function; it was proved by Mordell in 1917. To this end Mordell defined what are now called the Mordell operators. In 1925 Hecke generalized this idea to the Hecke operators, a basic tool of the modern theory of modular forms.

# From Modular Functions to the Uniformization Theorem Ⅵ

Prologue: …For fifteen days I strove to prove that there could not be any functions like those I have since called Fuchsian functions. I was then very ignorant; every day I seated myself at my work table, stayed an hour or two, tried a great number of combinations and reached no results. One evening, contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination. By the next morning I had established the existence of a class of Fuchsian functions, those which come from the hypergeometric series; I had only to write out the results, which took but a few hours….

——H. Poincaré

Non-identity elements $w$ of $PSL(2,\mathbb{R})$ fall into three classes according to the square of the trace:

a) elliptic: if $tr^{2}(w)<4$, then $w$ is conjugate to a rotation, with one fixed point in each of the upper and lower half-planes;

b) parabolic: if $tr^{2}(w)=4$, then $w$ is conjugate to a translation, with one fixed point on the real axis;

c) hyperbolic: if $tr^{2}(w)>4$, then $w$ is conjugate to a scaling transformation, with two fixed points on the real axis.

$G$ is a discrete subgroup of $PSL(2,\mathbb{R})$, hence has only countably many elements $g_{j}$. Consider the orbit $z_{0},z_{1},...,z_{n}=g_{n}(z_{0}),...$ of a point $z_{0} \in \mathbb{H}$. The Poincaré line "perpendicularly bisecting" the segment $z_{0}z_{n}$ divides the upper half-plane into two parts; taking the intersection of all the parts containing $z_{0}$ yields a "convex polygon" in the sense of hyperbolic geometry. Following Poincaré, we call it the fundamental polygon $P_{0}$ of $G$ with respect to the point $z_{0}$.

a) Fixing $g_{0}=id$, $P_{0}$ does not depend on the ordering of the remaining elements of $G$;

b) any bounded set meets at most finitely many sides of the fundamental polygon: if the orbit $z_{0},z_{1},...,z_{n},...$ is closed, the fundamental polygon itself has only finitely many sides; an open orbit cannot be contained in any compact set, that is, $\{d(z_{0},z_{n})\}$ is unbounded, and the conclusion follows from this as well;

c) by b), the fundamental polygon is an open subset of the upper half-plane;

d) we know that $PSL(2,\mathbb{R})$ is the group of rigid motions of hyperbolic geometry, so $g_{j}(P_{0})=P_{j}$. The $P_{j}$ are pairwise disjoint, and $\bigcup \bar{P_{j}}=\mathbb{H}$. In particular, no two points of the fundamental polygon are equivalent under $G$;

e) the orbit must be closed, so $\{d(z_{0},z_{n})\}$ is bounded, from which it follows that $\bar{P_{0}}$ is compact; by b), $P_{0}$ has only finitely many sides;

f) the sides of $P_{0}$ correspond bijectively to the non-trivial elements of $G$, and it is easy to show that two sides are equivalent under $G$ if and only if the corresponding group elements are inverse to one another. Since $g_{j}$ cannot fix a side, $P_{0}$ has an even number of sides, matched into $G$-equivalent pairs. a) shows the sides of $P_{0}$ carry a natural order; we traverse each side counterclockwise and return to the starting point. A somewhat finer analysis shows that the two sides adjacent to any given side are $G$-equivalent. Gluing equivalent sides together, we obtain: a compact Riemann surface with universal cover $\mathbb{H}$ is homeomorphic to an orientable closed surface of genus $g>1$ (a sphere with $g$ handles).

As in the case $g=1$, we would like to parametrize all compact Riemann surfaces of genus $g>1$. This parameter space is called the moduli space. For $g>1$, Riemann conjectured that the moduli space depends on $3g-3$ complex parameters. The decisive idea for proving this conjecture came from Teichmüller and later grew into an entire theory of quasiconformal mappings. To this day the moduli space remains an object of interest to algebraic geometers.
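The elliptic/parabolic/hyperbolic trichotomy above can be illustrated concretely. The sketch below is mine, not part of the original text: it classifies a real Möbius transformation with $ad-bc=1$ by $tr^{2}$ and computes its fixed points from $cz^{2}+(d-a)z-b=0$, using $(a-d)^{2}+4bc=tr^{2}-4$.

```python
import cmath
import math


def classify(a, b, c, d):
    # Classify z -> (az + b)/(cz + d) with ad - bc = 1 by the square of its trace.
    t2 = (a + d) ** 2
    if abs(t2 - 4) < 1e-12:
        kind = "parabolic"
    elif t2 < 4:
        kind = "elliptic"
    else:
        kind = "hyperbolic"
    if c == 0:
        return kind, None          # infinity is fixed; skip the finite-root formula
    disc = cmath.sqrt(t2 - 4)      # equals sqrt((a-d)^2 + 4bc) since det = 1
    z1 = ((a - d) + disc) / (2 * c)
    z2 = ((a - d) - disc) / (2 * c)
    return kind, (z1, z2)


# A rotation by pi/4 is elliptic, fixing i and -i (one in each half-plane):
r = math.pi / 4
print(classify(math.cos(r), -math.sin(r), math.sin(r), math.cos(r))[0])  # elliptic
```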
The Jacobian of $g$ is $J_{g}$; the chain rule shows that $J_{g}^{-k}$ satisfies the requirement on $\gamma_{g}(z)$, i.e. we may take $\gamma_{g}(z)=(cz+d)^{2k}$. The corresponding $f$ is called an automorphic form, because the degree-$k$ differential form $fdz^{k}$ on $\mathbb{H}$ satisfies the automorphy relation $f(gz)d(gz)^{k}=f(z)dz^{k}$. In particular, $k=0$ defines the holomorphic automorphic functions.

$\theta_{k,n}(z)=\sum^{*}J_{g}^{k}h(gz)$, $h(z)=e^{2n\pi iz}$

G. Perelman (1966-    )

# From Modular Functions to the Uniformization Theorem Ⅴ

Prologue: …It is true that Mr. Fourier had the opinion that the principal purpose of mathematics was the benefit of the society and the explanation of phenomena of nature; but a philosopher like he should know that the sole purpose of science is the honor of the human mind, and under this title, a question about numbers is as valuable as a question about the system of the world…

——C. G. Jacobi, Letter to Legendre

C. G. Jacobi (1804-1851)

A compact Riemann surface $S$ with universal cover $\mathbb{\bar{C}}$ is analytically isomorphic to $\mathbb{C}P^{1}$, and the rational functions are the only meromorphic functions on it. Topologically we obtain a closed surface of genus 0 (the sphere). This case is simple.

A compact Riemann surface $S$ with universal cover $\mathbb{C}$ is analytically isomorphic to $\mathbb{C}/\Lambda$, where $\Lambda$ is some lattice. Topologically, this is a closed surface of genus 1 (the torus).

$f_{i}(z+1)=f_{i}(z)$, $f_{i}(z+\tau)=e^{-2k\pi iz}f_{i}(z)$

Of course $S$ cannot be embedded in $\mathbb{C}P^{1}$. Taking k=3 and using the $f_{i}$ above, one can complete an embedding of $S$ into $\mathbb{C}P^{2}$. We will not discuss the technical details, but point out that similar ideas generalize to higher dimensions: a higher-dimensional complex torus can be embedded into projective space if and only if its period matrix satisfies the Frobenius relations.

Weierstrass proposed another way to construct elliptic functions, using the Weierstrass $\mathfrak{P}$ function. The method is simple and clear, and most modern textbooks adopt it. It is worth pointing out, however, that theta functions stand at the crossroads of number theory, automorphic forms, function theory, and mathematical physics, so studying their properties carries very high added value. Chapter 7 contains an important example: the cusp form $\Delta$.

Jacobi noted, as mathematics’ most fascinating property, that in it one and the same function controls both the presentations of a whole number as a sum of four squares and the real movement of a pendulum. These discoveries of connections between heterogeneous mathematical objects can be compared with the discovery of the connection between electricity and magnetism in physics or with the discovery of the similarity between the east coast of America and the west coast of Africa in geology. The emotional significance of such discoveries for teaching is difficult to overestimate. It is they who teach us to search and find such wonderful phenomena of harmony of the Universe.
# From Modular Functions to the Uniformization Theorem Ⅳ

Prologue: …At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty…

——H. Poincaré

H. Poincaré (1854-1912)

The analytic automorphism group of $\mathbb{\bar{C}}$ is the group of Möbius transformations; equivalently, the analytic automorphism group of $\mathbb{C}P^{1}$ is $PSL(2,\mathbb{C})$. The elements of $PSL(2,\mathbb{C})$ fixing the point at infinity have the form $\left({\begin{array}{cc} a&b\\ 0&d\\ \end{array}}\right)$, that is, they are the linear functions on $\mathbb{C}$, which under composition form the analytic automorphism group of $\mathbb{C}$. Finally, a classical application of the Schwarz lemma shows that the analytic automorphisms of $\triangle$ have the form $e^{i\theta}(z-z_{0})/(1-\bar{z_{0}}z)$, $\theta \in \mathbb{R}$, $z_{0} \in \triangle$.

a) Riemann surfaces with universal cover $\mathbb{\bar{C}}$. Since every element of $PSL(2,\mathbb{C})$ acting on $\mathbb{C}P^{1}$ has a fixed point, $G$ must be trivial. Hence the only Riemann surface with universal cover $\mathbb{\bar{C}}$ is $\mathbb{\bar{C}}$ itself.

b) Riemann surfaces with universal cover $\mathbb{C}$. The linear transformations acting freely on $\mathbb{C}$ have the form $z \mapsto z+b$. Take translations as the generators of $G$, and require them to be linearly independent over $\mathbb{Q}$. A discrete group $G$ has at most 2 such generators, and a case analysis shows that $S$ is analytically isomorphic to $\mathbb{C}$, to $\mathbb{C}\backslash\{0\}$ (by way of an exponential map), or to $\mathbb{C}/\Lambda$ for some lattice $\Lambda$.

c) Apart from the 4 exceptions discussed above, every other Riemann surface has $\triangle$ (equivalently $\mathbb{H}$) as its universal cover. This is the most interesting case, because we lack a complete description of the discrete subgroups of $PSL(2,\mathbb{R})$ acting freely on $\mathbb{H}$. Poincaré was the first to study the discrete subgroups of $PSL(2,\mathbb{R})$, and he named them Fuchsian groups. A Fuchsian group all of whose elements act freely on $\mathbb{H}$ is called torsion-free. Clearly, a Riemann surface with universal cover $\mathbb{H}$ is isomorphic to $\mathbb{H}/G$ for some torsion-free Fuchsian group $G$.

Many other proofs have been given which are more elementary in that they need less preparation, but none is as penetrating as the original proof.

# From Modular Functions to the Uniformization Theorem Ⅲ

Prologue: …Finally the last Sections (§19-21) are devoted to the uniformization theory, which was sketched by Klein and Poincaré in an audacious breakthrough and was recently put on a firmer basis by Koebe.
Thus we get into the temple where the divinity (if I am allowed to use this image) is restored to itself, from the earthy jail of its particular realization: through the two dimensional non-Euclidean crystal, the archetype of the Riemann surface may be contemplated, pure and liberated from any obscurity or contingency (as far as it is possible)…

——H.Weyl  The Concept of a Riemann Surface

This book of Weyl's, published in 1913, is widely regarded as the first to treat Riemann surfaces by "modern" methods. The now-standard definition of a "manifold" first appeared in it. Incidentally, the modern definitions of such notions as "linear space", "symplectic group", and "connection" were also put forward by Weyl.

(Weierstrass) A sequence of analytic functions converging uniformly on compact subsets of $\Omega$ converges to an analytic function.

(Bolzano-Weierstrass) A bounded sequence in a complete space has a convergent subsequence.

(Montel's normality criterion Ⅰ) A family $\mathscr{F}$ of analytic functions that is uniformly bounded on every compact set is normal.

Montel's normality criterion Ⅰ ties the "boundedness" ("compactness") of a function space to the uniform boundedness of its functions. This suggests applying the "transfer principle" to Montel's normality criterion Ⅰ, obtaining the stronger:

(Montel's normality criterion Ⅱ) If a family $\mathscr{F}$ of meromorphic functions omits 3 values on $\mathbb{\bar{C}}$, then $\mathscr{F}$ is normal.

(Klein-Poincaré-Koebe) A simply connected Riemann surface is analytically isomorphic to $\mathbb{\bar{C}}$, $\mathbb{C}$, or $\triangle$.

Take on $S$ a sequence of simply connected domains $\{S_{n}\}$ satisfying $S_{n} \subset \bar{S_{n}} \subset S_{n+1}$ and $S=\bigcup^{\infty}_{n=1} S_{n}$. For each $S_{n}$ we introduce a "copy" $S_{n}^{\prime}$. Gluing the two along their boundary yields a compact surface $S_{n}^{c}$. Since $S_{n}$ has the topology of a disk, $S_{n}^{c}$ has the topology of a sphere, and is therefore analytically isomorphic to $\mathbb{\bar{C}}$.

Choose $p_{0} \neq p_{1}$ in $S_{1}$; these two points lie in every $S_{n}$ and $S_{n}^{c}$. Denote the "copy" of $p_{0}$ by $p_{0}^{\prime}$. There is an analytic isomorphism $f_{n}: S_{n}^{c} \to \mathbb{\bar{C}}$ with $f_{n}(p_{0})=0$, $f_{n}(p_{1})=1$, $f_{n}(p_{0}^{\prime})=\infty$. Restricting it to $S_{n}$ gives a family of univalent functions $\{ f_{n}\}:S_{n} \to \mathbb{C}$. By Montel's normality criterion Ⅱ, $\{f_{n}\}$ is normal on $S_{n}\backslash\{p_{0},p_{1}\}$, hence normal on $S_{n}$.

F. Klein (1849-1925)

# From Modular Functions to the Uniformization Theorem Ⅱ

Prologue: ——Hua Luogeng

The "transfer principle" can be stated roughly as follows: if the weaker proposition, stated for "bounded" functions, holds, then the stronger proposition, stated for functions "omitting 2 values on $\mathbb{C}$", also holds.

The "transfer principle" strengthens it to the famous Little Picard Theorem:

C.E. Picard (1856-1941)
https://tex.stackexchange.com/questions/654979/box-design-with-tikz
# Box design with tikz

I found this box, but I don't know how it was made. Can you tell me which library or packages are used to design it? Thanks!

• Welcome. // Have a look at tcolorbox, which is a bit more streamlined: ctan.org/pkg/tcolorbox . Also check out the categories mentioned there. If you want a more scribbled look, you can do that with tikz. Aug 23 at 20:42
• You should consider accepting answers that were provided, since they are very accurate. This values helpers' work and states that your needs have been fulfilled, preventing other members from uselessly working on your questions. Sep 10 at 9:18

## 1 Answer

You can use the tcolorbox package. Here is a short example to get you started on how to draw these "handdrawn" lines:

\documentclass{article}
\usepackage[most]{tcolorbox}
\usepackage{emerald}
\usetikzlibrary{decorations.pathmorphing}
\usetikzlibrary{shadows}
\tikzset{decoration={random steps,segment length=2mm,amplitude=0.6pt}}
\newtcbtheorem{mytheo}{Theorem}{
  coltitle=green!80!black,
  colback=lightgray!20,
  colbacktitle=lightgray!20,
  fonttitle=\bfseries\ECFAugie,
  enhanced,
  attach boxed title to top left={yshift=-0.18cm,xshift=-0.5mm},
  boxed title style={
    tikz={rotate=4,transform shape},
    frame code={
      \draw[decorate,fill=lightgray!20] (frame.south west) rectangle (frame.north east);
    }
  },
  frame code={
    \draw[decorate,fill=lightgray!20,drop shadow] (frame.north east) rectangle (frame.south west);
  },
}{th}
\begin{document}
\begin{mytheo}{}{theoexample}
content...
\end{mytheo}
\end{document}
https://zbmath.org/?q=an:0786.47002&format=complete
## A space by W. Gowers and new results on norm and numerical radius attaining operators.(English)Zbl 0786.47002 Summary: We use a Banach space recently considered by W. Gowers to improve some results on norm attaining operators. In fact we show that the norm attaining operators from this space to a strictly convex Banach space are finite-rank. The same Banach space is also used to get a new example of a space which does not satisfy the denseness of the numerical radius attaining operators. This new counterexample improves and simplifies the one previously obtained by R. Payá, who answered an open question raised by B. Sims in 1972.
https://mathoverflow.net/questions/319087/on-the-convergence-of-an-integral-of-hardys-maximal-function
# On the convergence of an integral of Hardy's maximal function

Let $$f:\mathbb{R}\times \mathbb{R}^N \to \mathbb{R}$$ be an $$L^1$$ function. Assume that $$\mathcal M f(x,y) = \sup_{r< \bar r}\frac{1}{|B_r(y)|} \int_{B_r(y)} f(x,z)dz \to 0$$ as $$\bar r \to 0$$ for a.e. $$y \in \mathbb{R}^N$$. Is it true that for a.e. $$y \in \mathbb{R}^N$$ $$\int_I \mathcal M f(x,y)dx \to 0,$$ where $$I \subset \mathbb{R}$$ is an interval? If not, what counterexample shows that? At least, if we fix $$R\gg 1$$, does it hold for a.e. $$y \in B_R(0) \setminus E$$, where $$E$$ is a set of arbitrarily small (non-zero) measure?
https://newbedev.com/define-global-commands-in-aux-file
# Define global commands in aux file

I'd use a different strategy: use a wrapper command, rather than directly defining macros.

\documentclass{article}
\makeatletter
\newcommand{\usevalue}[1]{%
  \ifcsname val@#1\endcsname
    \csname val@#1\endcsname
  \else
    ??%
  \fi
}
\newcommand{\definevalue}[2]{%
  \write\@auxout{%
    \unexpanded{\global\@namedef{val@#1}{#2}}%
  }%
}
\makeatother
\begin{document}
Something with \usevalue{tester}. Something else.

Now we can define \texttt{tester} and use it again: \usevalue{tester}.
\definevalue{tester}{42}
\end{document}

## Output of second run

You could write

\makeatletter
\newcommand{\doccommand}[2]{%
  \immediate\write\@auxout{\gdef\noexpand#1{\unexpanded{#2}}}%
  \gdef#1{#2}%
}
\makeatother

This will do a global define without expanding the replacement text. I added an additional direct \gdef so that the command can be used without rerunning TeX. But this is not a good idea:

- If you never use the command before the point where it is defined, defining it in the aux file is useless.
- If you use the command before defining it, LaTeX never reaches the point where you write the aux-file entry. So you can only use the command after running TeX once with the command defined but not used.
- If you ever delete the aux file, your document is broken.
- If you ever include only a different chapter, the aux-file entry will not be written, so your document is broken.

Instead you could create a separate file with the definitions which you include in the preamble. It's more work, but it results in a much more stable document.
https://stats.meta.stackexchange.com/questions/4945/any-way-to-make-this-question-on-processing-tracking-data-on-topic
# Any way to make this question on processing tracking data on-topic?

I used the chat to ask a question which is IMO clearly off-topic here, but looking at the chat history I realized that there is very little traffic, so I'm going to ask for suggestions to make it on-topic here (of course, it may just not be possible: if so, I'll just not post the question).

I like hiking quite a lot, and I've recorded a fair number of track records with an app on Android (I can provide the app name if needed). I would like to compute summary statistics for my tracks (e.g., average length of a track, min, max, $q_{10}$, $q_{90}$, etc.). Is there a simple, preferably free way to do that? I can export the tracks in various formats (CSV, GPX, KML, KMZ, etc.), so I could use R to postprocess them, but I don't know the format (for example, I don't know what units the app uses for speed when using the CSV format), so I was wondering if there were pre-made solutions, or if I should just bite the bullet, study the format, and write my own R script.

• @Tim I guess you're right. I was hoping there was a ready-made solution, but even if there was one, CV wouldn't be the right place to find it. – DeltaIV Sep 11 '17 at 13:09
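For what it's worth, once per-track lengths have been extracted from the exports, the statistics step itself is tiny in any scripting language. Here is a sketch in Python (the sample numbers, and the assumption that the lengths are already sitting in a list, are mine, not the app's):

```python
import statistics


def track_summary(lengths_km):
    # Summary statistics for a list of per-track lengths.
    qs = statistics.quantiles(lengths_km, n=10, method="inclusive")
    return {
        "mean": statistics.mean(lengths_km),
        "min": min(lengths_km),
        "max": max(lengths_km),
        "q10": qs[0],    # first decile
        "q90": qs[-1],   # ninth decile
    }


# Hypothetical per-track lengths in kilometres:
print(track_summary([5.2, 8.1, 12.4, 6.3, 9.9, 15.0, 7.7]))
```

The hard part remains working out the export format; the aggregation afterwards is a few lines whether in Python or R.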
https://hackage-origin.haskell.org/package/HaRe-0.8.3.0/candidate/docs/Language-Haskell-Refact-Utils-ExactPrint.html
Synopsis

# Documentation

Replaces an old expression with a new expression

replaceAnnKey :: (Data old, Data new) => Located old -> Located new -> Anns -> Anns

The annotations are keyed to the constructor, so if we replace a qualified with an unqualified RdrName or vice versa we have to rebuild the key for the appropriate annotation.

copyAnn :: (Data old, Data new) => Located old -> Located new -> Anns -> Anns

setAnnKeywordDP :: Data a => Located a -> KeywordId -> DeltaPos -> Transform ()

Change the DeltaPos for a given KeywordId if it appears in the annotation for the given item.

clearPriorComments :: Data a => Located a -> Transform ()

Remove any preceding comments from the given item
https://oshrw.wegfinderei.de/en/3-square-root.html
# 3 square root

Simplify √45. First, factor it as √(9 × 5). Then pull a "3" out of the perfect square 9 and make it the coefficient of the radical, so √45 = 3√5. Now just add the coefficients of the two terms with matching radicands to get your answer: 3√5 + 4√5 = 7√5.

The cube root of 8 is written as \(\sqrt[3]{8} = 2\). The cube root of 10 is written as \(\sqrt[3]{10} \approx 2.154435\). The cube root of x is the same as x raised to the 1/3 power, written \(x^{1/3}\).

The 3 goes with the square root; you don't need to multiply or anything like that. If you need to simplify, for example, 3√20 (where 3 is outside the square root sign), the 20 is split as 4 × 5 inside the radical; taking the square root of the 4 leaves the 5 inside and brings a 2 out, giving 3√20 = 6√5.

Step 1: Determination of the root mean square velocity for N₂. The molar mass of N₂ = 28 g mol⁻¹ = 0.028 kg mol⁻¹. Temperature = 227 °C = 500 K. The root mean square velocity equation is v_rms = √(3RT/M), where R = gas constant, T = temperature, M = molar mass. Hence, the root mean square velocity of N₂ is about 667.4 m s⁻¹.

Keep this information locked in your brain:

square root of 1: √1 = 1 (1 × 1 = 1)
square root of 4: √4 = 2 (2 × 2 = 4)
square root of 9: √9 = 3 (3 × 3 = 9)
square root of 16: √16 = 4 (4 × 4 = 16)
square root of 25: √25 = 5 (5 × 5 = 25)
square root of 36: √36 = 6 (6 × 6 = 36)
square root of 49: √49 = 7 (7 × 7 = 49)
square root of 64: √64 = 8 (8 × 8 = 64)

The square root of 3 is written as √3. Its exponent form is 3^0.5, i.e. 3^(1/2). It is a known fact that the square root of a number, multiplied by itself, gives back that number. For instance, the square root of 36 is 6, as 6 times 6 is 36. However, there are a few square roots that are not whole.
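Steps 4 and 5 of the estimation method described further on (divide, then average the divisor and the quotient) are one round of the classical Babylonian iteration; repeating them converges quickly. A short sketch (the function name is mine):

```python
def babylonian_sqrt(n, tol=1e-12):
    # Divide-and-average iteration for the square root of a positive number n.
    x = n / 2.0 if n > 1 else 1.0      # any positive starting guess works
    while abs(x * x - n) > tol:
        x = (x + n / x) / 2.0          # average of divisor and quotient
    return x


print(round(babylonian_sqrt(3), 6))  # 1.732051
```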
Real estate news with posts on buying homes, celebrity real estate, unique houses, selling homes, and real estate advice from realtor.com. Cube root calculator. The cube root of x is given by the formula: cube root = 3 √ x. 3√. = Calculate. × Reset.. Lets see how we can get the Python square root without using the math library: # Use exponents to calculate a square rootnumber = 25square_root = number** (1/2)print. Square root 3 is any number between 1 and 2. Step 4: Divide the number whose square root is determined by any of the numbers obtained in Step 2. ‘3’ can be divided either by ‘1’ or ‘2’. Let us divide ‘3’ by ‘2’. Step 5: Find the average of the quotient and divisor in Step 4. The average of 2 and 1.5 is.. Take one factor from each group Find the product of the factors obtained in step 3. This product is the required square root Using the steps above, here is the math showing you how to. Python Program to Find the Square Root of a Complex Number # Python program to find square root of complex number # Import cmath Library import cmath # cmath.sqrt() Return the square root of a complex number print (cmath.sqrt(2 + 3j)) print (cmath.sqrt(15)) Output : (1.6741492280355401+0.8959774761298381j) (3.872983346207417+0j). When you multiply 3 by 3, the answer is 9. When written in a formulaic structure, the square root is represented by the radical (√) symbol √9 = 3 As you can see, the number which we would like to get the square root of is written under a radical symbol. This causes the number whose square root is being computed to be called as “radicand”. For example – Square root of 3 is √3 = 1.73205080757 Other names for root symbols √ In mathematics, the square root and other root symbols have the following names.. Since 3 is a non-perfect square number, we will find the value of root 3 using long division method as follows Therefore, the value of root 3 is √3 = 1.73205080757 While solving mathematical problems, we can use the round off value of root 3. 
spanish tutor near me meaning of isabella in the bible nfc in oneplus 8t vmware vcenter server chori chori chupke chupke film odcrcom Solve a quadratic equation of the form ax2 + bx + c = 0 by completing the square. Step 1. Divide by a to make the coefficient of x2 term 1. Step 2. Isolate the variable terms on one side and the constant terms on the other. Step 3. Find (1. The square root of 3 is the positive real number that, when multiplied by itself, gives the number 3. It is denoted mathematically as √3. It is more precisely called the principal square root of 3, to distinguish it from the negative number with the same property. The square root of 3 is an irrational number. Cube root calculator. The cube root of x is given by the formula: cube root = 3 √ x. 3√. = Calculate. × Reset. 3 min read. There are several ways of solving quadratic equations.One such method is the square root property which we will discuss in detail in this tutorial. We will practice the square root property through various examples.. Quadratic Equations. We know that a quadratic equation is an equation that can be expressed in the form ax^2+bx+ c= 0 where a \neq 0 ; this is also called the standard. To find the square root of a complex number, use the following equation: Square root (a + bi) = c + di Where: c = (1/square root of 2) x square root of [ (square root of (a 2 +b 2 )) + a ] d = (sign of b/square root of 2) x square root of [ (square root of (a 2 +b 2 )) - a ] Example: the square root of 3-5i = c + di. It is a method for finding antiderivatives of functions which contain square roots of quadratic expressions or rational powers of the form n 2 (where n is an integer) of quadratic expressions. For one thing trigonometry works with all angles and not just triangles. a2 = b2 +c2 2 b c cos b2 = a2 +c2 2 a c cos c2 = a2 +b2 2 a b cos. Learn to use. 
houses for rent in loves park il nashville hourly weather The square root of 3 is the positive real number that, when multiplied by itself, gives the number 3. It is denoted mathematically as √3. It is more precisely called the principal square root of 3, to distinguish it from the negative number with the same property. The square root of 3 is an irrational number. Algebra Simplify 3 square root of 40 3√40 3 40 Rewrite 40 40 as 22 ⋅10 2 2 ⋅ 10. Tap for more steps... 3√22 ⋅10 3 2 2 ⋅ 10 Pull terms out from under the radical. 3(2√10) 3 ( 2 10) Multiply 2 2 by 3 3. 6√10 6 10 The result can be shown in multiple forms. Exact Form: 6√10 6 10 Decimal Form: 18.97366596 18.97366596. Square Root of 81 by Prime factorization method 1 Prime factorization is expressing the number as a product of its prime factors. 2 The prime factor of 81 is 3. 81 = 3 × 3 × 3 × 3 3 The square root of 81 is √81 = √ ( 3 × 3 × 3 × 3) 4 √81 = √ ( 9 × 9) 5 (81 ½ ) 2 = ( 9 ½ ) 2 6 Squaring on both the sides, we get 81 = 9. From here, you need to add the multiplication to the square root. √45 = 45^ (1/2) = (9 x 5) ^ (1/2) = 9^ (1/2) x 5^ (1/2) = √9 x √5 = 3√5 It might seem like a redundant process to simplify a square. What is square root 3 + square root 3? Algebra Radicals and Geometry Connections Addition and Subtraction of Radicals. 1 Answer F. Javier B. Apr 25, 2018 #sqrt3+sqrt3=2sqrt3#. What is the square of I Square Root 11? The square root of 11 rounded up to 7 decimal places is 3.3166248. It is the positive solution of the equation x2 = 11.Square Root of 11 in radical form: √11. 1. The square root of any natural number is the value of power 1/2 of the same number. Suppose the square root of the number “x” is a number y, which means x = y2. It is denoted by the symbol '√ '. This symbol is also known as “Radical”, whereas the value under the square root symbol is known as “Radicand.”. rumbling machine Negative Square Roots. Loaded 0%. 
0 Comments - Log in or Sign Up for free to join the conversation! View Comments. Sep 19, 2019 · It is denoted mathematically as √3 or 31/2. It is more precisely called the principal square root of 3, to distinguish it from the negative number with the same property. The square root of 3 is an irrational number.Square root of 3. 1.10111011011001111010.. How To Simplify Square Roots In 3 Easy Steps - Mathcation www.mathcation.com. simplifying kuta simplify oguchionyewu. Square Root Worksheet Solutions 5 7 - YouTube www.youtube.com. Square Root Interactive Worksheet www.liveworksheets.com. worksheet link root square. Step 1: Determination of root mean square velocity for N2. The molar mass of N2 = 28 gmol -1 = 0.0028 kgmol -1. Temperature = 227 o C = 500 K. The root means square velocity equation is: Where, R = gas constant, T = temperature, M = molar mass. The root mean square velocity for N 2 is: Hence, the root mean square velocity of N2 is 2110.43 ms-1. 2. Take the square roots of your perfect square factors. The product property of square roots states that for any given numbers a and b, Sqrt (a × b) = Sqrt (a) × Sqrt (b).. Sep 19, 2019 · It is denoted mathematically as √3 or 31/2. It is more precisely called the principal square root of 3, to distinguish it from the negative number with the same property. The square root of 3 is an irrational number.Square root of 3. 1.10111011011001111010.. There are 3 companies that go by the name of Square Root Press LLC. These companies are located in Fairfield NJ, Mountain View CA, and Portland OR. SQUARE ROOT PRESS LLC: CALIFORNIA LIMITED-LIABILITY COMPANY - OUT OF STATE: WRITE REVIEW: Address: 1650 Ne 32nd Ave Unit 208 Portland, OR 97232: Registered Agent:. Finding the square root is easy for any perfect square under 100! You'll be able to calculate squares faster than ever and amaze everyone with your utter ge. Sep 19, 2019 · It is denoted mathematically as √3 or 31/2. 
It is more precisely called the principal square root of 3, to distinguish it from the negative number with the same property. The square root of 3 is an irrational number.Square root of 3. 1.10111011011001111010.. You’ll find low prices on Roots and Boots: Sammy Kershaw, Collin Raye & Aaron Tippin tickets Joliet for just $66.00. These affordable tickets are often for seats located in the upper sections of Rialto Square Theatre or in the back row. A premium spot on the main floor or upgrading to a meet-and-greet package is always the most expensive option. square-root.net. In mathematics, a square root of a number x is a number y such that y 2 = x; in other words, a number y whose square (the result of multiplying the number by itself, or y ⋅ y) is x. For example, 4 and −4 are square roots of 16, because 4 2 =. The square root of 3 is represented using the square root or the radical symbol “ √”, and it is written as √3. The value of √3 is approximately equal to 1.732. This value is widely used in mathematics. Since root 3 is an irrational number, which cannot be represented in the form of a fraction. It means that it has an infinite number of decimals.. Take one factor from each group Find the product of the factors obtained in step 3. This product is the required square root Using the steps above, here is the math showing you how to. The 3rd root of -64, or -64 radical 3, or the cube root of -64 is written as − 64 3 = − 4. The cube root of 8 is written as 8 3 = 2. The cube root of 10 is written as 10 3 = 2.154435. The cube root of x is the same as x raised to the 1/3 power. Written as x 3 = x 1 3. The common definition of the cube root of a negative number is that. The square root of 700 is expressed as √700 in the radical form and as (700)½ or (700)0.5 in the exponent form. The square root of 700 rounded up to 8 decimal places is 26.45751311.Square Root of 700. 1. What is the Square Root of 700?. 
Solve a quadratic equation of the form ax2 + bx + c = 0 by completing the square. Step 1. Divide by a to make the coefficient of x2 term 1. Step 2. Isolate the variable terms on one side and the constant terms on the other. Step 3. Find (1. The square root of 3 will be an irrational number if the value after the decimal point is non-terminating and non-repeating. 3 is not a perfect square. Hence, the square root of 3 is. The square root formula is used to find the square root of a number. We know the exponent formula: n√x x n = x1/n. When n= 2, we call it square root. We can use any of the above methods for finding the square root, such as prime factorization, long division, and so on. What does a square root function look like?. l13a engine From here, you need to add the multiplication to the square root. √45 = 45^ (1/2) = (9 x 5) ^ (1/2) = 9^ (1/2) x 5^ (1/2) = √9 x √5 = 3√5 It might seem like a redundant process to simplify a square root, but now you understand it! Now, all you need to know is that a square root is the same as the power of one half.. Finding the square root is easy for any perfect square under 100! You'll be able to calculate squares faster than ever and amaze everyone with your utter ge.... Cube root calculator. The cube root of x is given by the formula: cube root = 3 √ x. 3√. = Calculate. × Reset. Square root 3 is any number between 1 and 2. Step 4: Divide the number whose square root is determined by any of the numbers obtained in Step 2. ‘3’ can be divided either by ‘1’ or ‘2’. Let us divide ‘3’ by ‘2’. Step 5: Find the average of the quotient and divisor in Step 4. The average of 2 and 1.5 is.. Skip to main content. Skip to navigation. Mrs. Stoica's classroom. HOW TO USE SQUARE ROOT MULTIPLICATION CALCULATOR? You can use the square root multiplication calculator in two ways. USER INPUTS You can enter the coefficients (optional) and radicands to the input boxes and click on the " MULTIPLY " button. 
The result and explanations appaer below the calculator RANDOM INPUTS. The square root of any natural number is the value of power 1/2 of the same number. Suppose the square root of the number “x” is a number y, which means x = y2. It is denoted by the symbol '√ '. This symbol is also known as “Radical”, whereas the value under the square root symbol is known as “Radicand.” The Formula of the Square Root X = Y2 Or,. A square root is a mathematical operation that calculates the square root of a number. It is denoted by the symbol √ and is written as sqrt (x). It is performed by taking the square root of the number x using ordinary arithmetic, and then re-writing the result as a fraction with numerator (top number) and denominator (bottom number). What is square root of 3 - ebrain-ph.com Sign in Sign up Published 12.11.2022 13:15 on the subject Math by elishakim80. Step 2: Calculation of root mean square velocity of methane at 273K. R = 8.3145 Jmol -1 K -1. Molar mass of methane = 16g/mol. Now, you can write, Hence, root mean square velocity of methane at 273K = 652.379m/s. The easiest way to do this calculation is to do the first multiplication (3×3) and then to multiply your answer by the same number you started with; 3 x 3 x 3 = 9 x 3 = 27. What is root 25 simplified? The square roots of 25 are √25=5 and −√25=−5 since 52=25 and (−5)2=25 . The principal square root of 25 is √25=5. advantages of tampons diy electric heater You’ll find low prices on Roots and Boots: Sammy Kershaw, Collin Raye & Aaron Tippin tickets Joliet for just$66.00. These affordable tickets are often for seats located in the upper sections of Rialto Square Theatre or in the back row. A premium spot on the main floor or upgrading to a meet-and-greet package is always the most expensive option. Algebra Simplify 3 square root of 40 3√40 3 40 Rewrite 40 40 as 22 ⋅10 2 2 ⋅ 10. Tap for more steps... 3√22 ⋅10 3 2 2 ⋅ 10 Pull terms out from under the radical. 3(2√10) 3 ( 2 10) Multiply 2 2 by 3 3. 
6√10 6 10 The result can be shown in multiple forms. Exact Form: 6√10 6 10 Decimal Form: 18.97366596 18.97366596. Explanation: I have heard many students read 3√n as "the third square root of n". This is a mistake. The square root is 2√n (usually denoted √x ), the third (or cube) root is 3√n, the fourth root is 4√n and so on. Whichever was meant the first step for simplifying is the same. Factor the radicand (the thing under the root symbol). A square root goes the other direction: 3 squared is 9, so a square root of 9 is 3. It is like asking: What can I multiply by itself to get this? Definition. Here is the definition: A square root of x is a. This is the inverse function of a n. Hence a n means, you look for a number b, which when multiplied n times with itself results in a. For instance: We know that 2 3 = 8, so 8 3 = 2, − 1 5 = − 1 because ( − 1) 5 = − 1 . 3 4 ≈ 1.31607 because 1.31607 4 ≈ 3. If there is no number at the top of the root symbol, it means n = 2, so a 2 = a. Share Cite. Finding the square root is easy for any perfect square under 100! You'll be able to calculate squares faster than ever and amaze everyone with your utter ge.... Cube root calculator. The cube root of x is given by the formula: cube root = 3 √ x. 3√. = Calculate. × Reset. On most calculators you can do this by typing in 3 and then pressing the √x key. You should get the following result: √3 ≈ 1.7321 How to Calculate the Square Root of 3 with a Computer On a computer you can also calculate the square root of 3 using Excel, Numbers, or Google Sheets and the SQRT function, like so: SQRT (3) ≈ 1.732050807569. Video created by 伊利诺伊大学香槟分校 for the course "Data Modeling and Regression Analysis in Business". This session introduces the student to use of a holdout data set for evaluating model performance. Methods of improving the model are discussed with. 
johnny and jim are joined by caleb to see just how many times villagers can force us to starve to death, find out that our wallet is just too damn heavy, get stuck behind some friendly slimes at the self-checkout, starve to death some more, be too stupid to read the engravings on a tombstone, and ride on some of the most jarring difficulty curves. This product is the required square root Using the steps above, here is the math showing you how to simplify square root of 3. 3 = 33 = 31 Therefore, √3 = √ ( 31) = √3 Click to visit How to Multiply Square Roots: 8 Steps (with Pictures) Feb 23, 2021 · For example, 2 * (square root of 3) = 2 (square root of 3). Then 3 can be represented as a b, where a and b have no common factors. So 3 = a 2 b 2 and 3 b 2 = a 2. Now a 2 must be divisible by 3, but then so must a (fundamental theorem of arithmetic). So we have 3 b 2 = ( 3 k) 2 and 3 b 2 = 9 k 2 or even b 2 = 3 k 2 and now we have a contradiction. What is the contradiction? Share Cite Follow. who am i to god young little girls nude sex In that case we could think "82,163" has 5 digits, so the square root might have 3 digits (100x100=10,000), and the square root of 8 (the first digit) is about 3 (3x3=9), so 300 is a good start. Square Root Day. The 4th of April 2016 is a. fun hoodies for men . pokemmo item locations pomeranian puppies near me for sale anal gaping maceys womens dresses appartments for sale young horny naked girls nn young girl panty Hence the answer to the root of 8 lies between the numbers 2 and 3. However, since the square of 3 equals 9 which is larger than 8, the root 8 value lies in between the numbers 2.8 and 2.9. The precise answer of the square root of 8 is 2.82842712475. This is much closer to the answer that we estimated. The Square Root of Numbers from 1 to 100. May 22, 2019 · Part Two: 3 Simple methods for multiplying square roots Multiplying square roots without coefficients 1. 
Multiply each radicand the same way you would without the radical, or square root symbol. 2. Simplify the radicand by factoring out all perfect squares. If there are no perfect squares in the radicand, it’s already simplified.. Square root 3 is any number between 1 and 2. Step 4: Divide the number whose square root is determined by any of the numbers obtained in Step 2. ‘3’ can be divided either by ‘1’ or ‘2’. Let us divide ‘3’ by ‘2’. Step 5: Find the average of the quotient and divisor in Step 4. The average of 2 and 1.5 is.. Solve a quadratic equation of the form ax2 + bx + c = 0 by completing the square. Step 1. Divide by a to make the coefficient of x2 term 1. Step 2. Isolate the variable terms on one side and the constant terms on the other. Step 3. Find (1. Square Root of 81 by Prime factorization method 1 Prime factorization is expressing the number as a product of its prime factors. 2 The prime factor of 81 is 3. 81 = 3 × 3 × 3 × 3 3 The square root of 81 is √81 = √ ( 3 × 3 × 3 × 3) 4 √81 = √ ( 9 × 9) 5 (81 ½ ) 2 = ( 9 ½ ) 2 6 Squaring on both the sides, we get 81 = 9. Algebra. Simplify 3/ ( square root of 3) 33 3 3. Multiply 33 3 3 by √33 3 3. 33 ⋅ √33 3 33 3. Combine and simplify the denominator. Tap for more steps... 33 3 3 3 3. Cancel the. my wife is afraid of sex windows 11 nvidia issues johnny and jim are joined by caleb to see just how many times villagers can force us to starve to death, find out that our wallet is just too damn heavy, get stuck behind some friendly slimes at the self-checkout, starve to death some more, be too stupid to read the engravings on a tombstone, and ride on some of the most jarring difficulty curves. The square root of negative numbers is undefined. Hence the perfect square cannot be negative. Some of the numbers end with 2, 3, 7, or 8 (in the unit digit), then the perfect. The square root of 3 is the positive real number that, when multiplied by itself, gives the number 3. It is denoted mathematically as √3. 
It is more precisely called the principal.
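The divide-and-average procedure described above is the classical Babylonian method, and it is easy to run by machine. A minimal Python sketch (the function name is ours, not from the text):

```python
# Babylonian (divide-and-average) square-root estimate:
# divide n by the current guess, then average divisor and quotient.
def babylonian_sqrt(n, guess=2.0, iterations=10):
    x = guess
    for _ in range(iterations):
        x = (x + n / x) / 2  # average of divisor and quotient
    return x

print(babylonian_sqrt(3))  # converges to 1.7320508...
```

Starting from the guess 2, the first two iterates are 1.75 and 1.73214…, matching the hand computation above.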
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/05%3A_Thermochemistry/5.06%3A_Hess's_Law
# 5.6: Hess's Law

##### Learning Objectives

• To use Hess’s law and thermochemical cycles to calculate enthalpy changes of chemical reactions.

Because enthalpy is a state function, the enthalpy change for a reaction depends on only two things: (1) the masses of the reacting substances and (2) the physical states of the reactants and products. It does not depend on the path by which reactants are converted to products. If you climbed a mountain, for example, the altitude change would not depend on whether you climbed the entire way without stopping or you stopped many times to take a break. If you stopped often, the overall change in altitude would be the sum of the changes in altitude for each short stretch climbed. Similarly, when we add two or more balanced chemical equations to obtain a net chemical equation, ΔH for the net reaction is the sum of the ΔH values for the individual reactions. This principle is called Hess’s law, after the Swiss-born Russian chemist Germain Hess (1802–1850), a pioneer in the study of thermochemistry. Hess’s law allows us to calculate ΔH values for reactions that are difficult to carry out directly by adding together the known ΔH values for individual steps that give the overall reaction, even though the overall reaction may not actually occur via those steps.

Hess's law states that ΔH for the net reaction is the sum of the ΔH values for the individual reactions. This is nothing more than a restatement of the fact that ΔH is a state function.

We can illustrate Hess’s law using the thermite reaction. The overall reaction shown in Equation $$\ref{5.6.1}$$ can be viewed as occurring in three distinct steps with known ΔH values. As shown in Figure 5.6.1, the first reaction produces 1 mol of solid aluminum oxide (Al2O3) and 2 mol of liquid iron at its melting point of 1758°C (part (a) in Equation $$\ref{5.6.1}$$); the enthalpy change for this reaction is −732.5 kJ/mol of Fe2O3.
The second reaction is the conversion of 2 mol of liquid iron at 1758°C to 2 mol of solid iron at 1758°C (part (b) in Equation 5.6.1); the enthalpy change for this reaction is −13.8 kJ/mol of Fe (−27.6 kJ per 2 mol Fe). In the third reaction, 2 mol of solid iron at 1758°C is converted to 2 mol of solid iron at 25°C (part (c) in Equation $$\ref{5.6.1}$$); the enthalpy change for this reaction is −45.5 kJ/mol of Fe (−91.0 kJ per 2 mol Fe). As you can see in Figure 5.6.1, the overall reaction is given by the longest arrow (shown on the left), which is the sum of the three shorter arrows (shown on the right). Adding parts (a), (b), and (c) in Equation 5.6.1 gives the overall reaction, shown in part (d): \small \newcommand{\Celsius}{^{\circ}\text{C}} \begin{align*} \ce{2 Al (s, 25 \Celsius) + Fe2O3 (s, 25 \Celsius) &-> 2 Fe (l, 1758 \Celsius) + Al2O3 (s, 1758 \Celsius)} & \Delta H = - 732.5\,\text{kJ} && \text{(a)} \\ \ce{2 Fe (l, 1758 \Celsius) &-> 2 Fe (s, 1758 \Celsius)} & \Delta H = -\phantom{0}27.6\,\text{kJ} && \text{(b)} \\ \ce{2 Fe (s, 1758 \Celsius) + Al2O3 (s, 1758 \Celsius) &-> 2 Fe (s, 25 \Celsius) + Al2O3 (s, 25 \Celsius) } & \Delta H = -\phantom{0}91.0\,\text{kJ} && \text{(c)} \\[2ex] \hline \ce{2 Al (s, 25 \Celsius) + Fe2O3 (s, 25 \Celsius) &-> Al2O3 (s, 25 \Celsius) + 2 Fe (s, 25 \Celsius) } & \Delta H = -851.1\,\text{kJ} && \text{(d)} \\ \end{align*} \label{5.6.1} \tag{5.6.1} The net reaction in part (d) in Equation $$\ref{5.6.1}$$ is identical to the equation for the thermite reaction that we saw in a previous section. By Hess’s law, the enthalpy change for part (d) is the sum of the enthalpy changes for parts (a), (b), and (c). In essence, Hess’s law enables us to calculate the enthalpy change for the sum of a series of reactions without having to draw a diagram like that in Figure $$\PageIndex{1}$$. 
Comparing parts (a) and (d) in Equation $$\ref{5.6.1}$$ also illustrates an important point: The magnitude of ΔH for a reaction depends on the physical states of the reactants and the products (gas, liquid, solid, or solution). When the product is liquid iron at its melting point (part (a) in Equation $$\ref{5.6.1}$$), only 732.5 kJ of heat are released to the surroundings compared with 851.1 kJ when the product is solid iron at 25°C (part (d) in Equation $$\ref{5.6.1}$$). The difference, 118.6 kJ, is the amount of energy that is released when 2 mol of liquid iron solidifies and cools to 25°C. It is important to specify the physical state of all reactants and products when writing a thermochemical equation.

When using Hess’s law to calculate the value of ΔH for a reaction, follow this procedure:

1. Identify the equation whose ΔH value is unknown and write individual reactions with known ΔH values that, when added together, will give the desired equation. We illustrate how to use this procedure in Example $$\PageIndex{1}$$.
2. Arrange the chemical equations so that the reaction of interest is the sum of the individual reactions.
3. If a reaction must be reversed, change the sign of ΔH for that reaction. Additionally, if a reaction must be multiplied by a factor to obtain the correct number of moles of a substance, multiply its ΔH value by that same factor.
4. Add together the individual reactions and their corresponding ΔH values to obtain the reaction of interest and the unknown ΔH.
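The sign-flipping and scaling rules in this procedure are simple bookkeeping that can be sketched in Python. Representing a reaction as a dict from species to signed coefficients is our illustrative choice, not notation from the text:

```python
# Hess's-law bookkeeping sketch: a reaction step is a dict mapping
# species -> coefficient (negative = reactant, positive = product),
# paired with its delta-H in kJ.

def scale(step, factor):
    """Multiply a reaction by a factor; factor = -1 reverses it (rule 3)."""
    reaction, dH = step
    return {s: c * factor for s, c in reaction.items()}, dH * factor

def combine(steps):
    """Add reactions and their delta-H values (rule 4, Hess's law)."""
    net, total = {}, 0.0
    for reaction, dH in steps:
        for s, c in reaction.items():
            net[s] = net.get(s, 0) + c
        total += dH
    return {s: c for s, c in net.items() if c}, total  # drop species that cancel

# Thermite steps (a), (b), (c) of Equation 5.6.1:
a = ({"Al(s,25C)": -2, "Fe2O3(s,25C)": -1, "Fe(l,1758C)": 2, "Al2O3(s,1758C)": 1}, -732.5)
b = ({"Fe(l,1758C)": -2, "Fe(s,1758C)": 2}, -27.6)
c = ({"Fe(s,1758C)": -2, "Al2O3(s,1758C)": -1, "Fe(s,25C)": 2, "Al2O3(s,25C)": 1}, -91.0)

print(scale(b, -1))  # rule 3: reversing step (b) flips the sign of its delta-H

net, dH = combine([a, b, c])
print(net)           # the net thermite equation, part (d)
print(round(dH, 1))  # -851.1 kJ
```

The intermediate species (liquid iron and hot solid iron and alumina) cancel out of the sum, leaving exactly part (d) with ΔH = −851.1 kJ.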
##### Example $$\PageIndex{1}$$ When carbon is burned with limited amounts of oxygen gas (O2), carbon monoxide (CO) is the main product: $\left ( 1 \right ) \; \ce{2C (s) + O2 (g) -> 2 CO (g)} \quad \Delta H=-221.0 \; \text{kJ} \nonumber$ When carbon is burned in excess O2, carbon dioxide (CO2) is produced: $\left ( 2 \right ) \; \ce{C (s) + O2 (g) -> CO2 (g)} \quad \Delta H=-393.5 \; \text{kJ} \nonumber$ Use this information to calculate the enthalpy change per mole of CO for the reaction of CO with O2 to give CO2. Given: two balanced chemical equations and their ΔH values Asked for: enthalpy change for a third reaction Strategy: 1. After balancing the chemical equation for the overall reaction, write two equations whose ΔH values are known and that, when added together, give the equation for the overall reaction. (Reverse the direction of one or more of the equations as necessary, making sure to also reverse the sign of ΔH.) 2. Multiply the equations by appropriate factors to ensure that they give the desired overall chemical equation when added together. To obtain the enthalpy change per mole of CO, write the resulting equations as a sum, along with the enthalpy change for each. Solution: A We begin by writing the balanced chemical equation for the reaction of interest: $\left ( 3 \right ) \; \ce{CO (g) + 1/2 O2 (g) -> CO2 (g)} \quad \Delta H_{rxn}=? \nonumber$ There are at least two ways to solve this problem using Hess’s law and the data provided. The simplest is to write two equations that can be added together to give the desired equation and for which the enthalpy changes are known. Observing that CO, a reactant in Equation 3, is a product in Equation 1, we can reverse Equation (1) to give $\ce{2 CO (g) -> 2 C (s) + O2 (g)} \quad \Delta H=+221.0 \; \text{kJ} \nonumber$ Because we have reversed the direction of the reaction, the sign of ΔH is changed. 
We can use Equation 2 as written because its product, CO2, is the product we want in Equation 3: $\ce{C (s) + O2 (g) -> CO2 (g)} \quad \Delta H=-393.5 \; \text{kJ} \nonumber$ B Adding these two equations together does not give the desired reaction, however, because the numbers of C(s) on the left and right sides do not cancel. According to our strategy, we can multiply the second equation by 2 to obtain 2 mol of C(s) as the reactant: $\ce{2 C (s) + 2 O2 (g) -> 2 CO2 (g)} \quad \Delta H=-787.0 \; \text{kJ} \nonumber$ Writing the resulting equations as a sum, along with the enthalpy change for each, gives \begin{align*} \ce{2 CO (g) &-> \cancel{2 C(s)} + \cancel{O_2 (g)} } & \Delta H & = -\Delta H_1 = +221.0 \; \text{kJ} \\ \ce{\cancel{2 C (s)} + \cancel{2} O2 (g) &-> 2 CO2 (g)} & \Delta H & = 2\Delta H_2 =-787.0 \; \text{kJ} \\[2ex] \hline \ce{2 CO (g) + O2 (g) &-> 2 CO2 (g)} & \Delta H &=-566.0 \; \text{kJ} \end{align*} \nonumber Note that the overall chemical equation and the enthalpy change for the reaction are both for the reaction of 2 mol of CO with O2, and the problem asks for the amount per mole of CO. Consequently, we must divide both sides of the final equation and the magnitude of ΔH by 2: $\ce{ CO (g) + 1/2 O2 (g) -> CO2 (g)} \quad \Delta H = -283.0 \; \text{kJ} \nonumber$ An alternative and equally valid way to solve this problem is to write the two given equations as occurring in steps. Note that we have multiplied the equations by the appropriate factors to allow us to cancel terms: \begin{alignat*}{3} \text{(A)} \quad && \ce{ 2 C (s) + O2 (g) &-> \cancel{2 CO (g)}} \qquad & \Delta H_A &= \Delta H_1 &&= - 221.0 \; \text{kJ} \\ \text{(B)} \quad && \ce{ \cancel{2 CO (g)} + O2 (g) &-> 2 CO2 (g)} \qquad & \Delta H_B && &= ?
\\ \text{(C)} \quad && \ce{2 C (s) + 2 O2 (g) &-> 2 CO2 (g)} \qquad & \Delta H &= 2 \Delta H_2 &= 2 \times \left ( -393.5 \; \text{kJ} \right ) &= -787.0 \; \text{kJ} \end{alignat*} The sum of reactions A and B is reaction C, which corresponds to the combustion of 2 mol of carbon to give CO2. From Hess’s law, ΔHA + ΔHB = ΔHC, and we are given ΔH for reactions A and C. Substituting the appropriate values gives $\begin{matrix} -221.0 \; kJ + \Delta H_{B} = -787.0 \; kJ \\ \Delta H_{B} = -566.0 \; kJ \end{matrix} \nonumber$ This is again the enthalpy change for the conversion of 2 mol of CO to CO2. The enthalpy change for the conversion of 1 mol of CO to CO2 is therefore −566.0 ÷ 2 = −283.0 kJ/mol of CO, which is the same result we obtained earlier. As you can see, there may be more than one correct way to solve a problem.

##### Exercise $$\PageIndex{1}$$

The reaction of acetylene (C2H2) with hydrogen (H2) can produce either ethylene (C2H4) or ethane (C2H6): $\begin{matrix} C_{2}H_{2}\left ( g \right ) + H_{2}\left ( g \right ) \rightarrow C_{2}H_{4}\left ( g \right ) & \Delta H = -175.7 \; kJ/mol \; C_{2}H_{2} \\ C_{2}H_{2}\left ( g \right ) + 2H_{2}\left ( g \right ) \rightarrow C_{2}H_{6}\left ( g \right ) & \Delta H = -312.0 \; kJ/mol \; C_{2}H_{2} \end{matrix} \nonumber$ What is ΔH for the reaction of C2H4 with H2 to form C2H6?

Answer: −136.3 kJ/mol of C2H4

Hess’s Law: https://youtu.be/hisUr1fikFU

## Summary

Hess's law states that the overall enthalpy change for a series of reactions is the sum of the enthalpy changes for the individual reactions. For a chemical reaction, the enthalpy of reaction (ΔHrxn) is the difference in enthalpy between products and reactants; the units of ΔHrxn are kilojoules per mole. Reversing a chemical reaction reverses the sign of ΔHrxn.
The magnitude of ΔHrxn also depends on the physical state of the reactants and the products because processes such as melting solids or vaporizing liquids are also accompanied by enthalpy changes: the enthalpy of fusion (ΔHfus) and the enthalpy of vaporization (ΔHvap), respectively. The overall enthalpy change for a series of reactions is the sum of the enthalpy changes for the individual reactions, which is Hess’s law. The enthalpy of combustion (ΔHcomb) is the enthalpy change that occurs when a substance is burned in excess oxygen.
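As a quick arithmetic check on the exercise above: reversing the first given reaction (which flips the sign of its ΔH) and adding the second cancels the C2H2 and one H2, leaving C2H4 + H2 → C2H6. A short Python check:

```python
# Exercise check: reverse reaction 1, then add reaction 2 (Hess's law).
dH1 = -175.7  # C2H2 + H2   -> C2H4
dH2 = -312.0  # C2H2 + 2 H2 -> C2H6
dH_target = -dH1 + dH2  # C2H4 + H2 -> C2H6
print(round(dH_target, 1))  # -136.3 kJ/mol of C2H4
```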
https://stats.stackexchange.com/questions/104651/best-feature-selection-method-for-naive-bayes-classification
# Best feature selection method for naive Bayes classification

I want to perform classification with naive Bayes. I have about 100 features, both numerical and categorical. Since I want only the most relevant ones to be included in the classification task, I want to find them with some kind of feature elimination. My question is the following: what is the method to use for that (paper/reference?!), and is this method implemented in some sort of software package? Since I use R, I would especially prefer an R package.

• @ssdecontrol thank you! Indeed there was a hint that WEKA supports exhaustive search to select the subset of features that performs best. Does anybody know if R also supports such an exhaustive search? – user3008056 Jun 25 '14 at 5:54
• There's an R package called FSelector that seems to have a method for it, but I've never used it. It also hasn't been updated since February 2013. I'm surprised that R doesn't have a better-developed base of ML tools. Truth is, R supports anything you can program into it, it's just a matter of doing it. The only other thing I can offer is that you need to figure out a way to compute gain ratios efficiently; ML isn't something I know a whole lot about. Maybe some info here: stackoverflow.com/questions/17844520/… – shadowtalker Jun 25 '14 at 7:29
• As of 2013 at least, it seems at least a few people are writing code from scratch in C++: www.iaeng.org/publication/WCE2013/WCE2013_pp1549-1554.pdf – shadowtalker Jun 25 '14 at 7:35
• @ssdecontrol Thank you very much... I found exhaustive.search() in FSelector. This may do what I want to do. I will try to make an example with toy data and eventually post it as one kind of solution. – user3008056 Jun 25 '14 at 8:07

There are two different routes you can take. The key word is 'relevance', and how you interpret it.

1) You can use a Chi-Squared test or Mutual Information for feature relevance extraction, as explained in detail on this link.
In a nutshell, mutual information measures how much information the presence or absence of a particular term contributes to making the correct classification decision. On the other hand, you can use the Chi-Squared test to check whether the occurrence of a specific variable and the occurrence of a specific class are independent. Implementing these in R should be straightforward.

2) Alternatively, you can adopt a wrapper feature selection strategy, where the primary goal is constructing and selecting subsets of features that are useful to build an accurate classifier. This contrasts with 1), where the goal is finding or ranking all potentially relevant variables. Note that selecting the most relevant variables is usually suboptimal for boosting the accuracy of your classifier, particularly if the variables are redundant. Conversely, a subset of useful variables may exclude many redundant, but relevant, variables.

The R package caret (**C**lassification **A**nd **R**Egression **T**raining) has built-in feature selection tools and supports naive Bayes. I figured I'd post this as an answer instead of a comment because I'm more confident about this one, having used it myself in the past.

• Yes, you are right. I have already found the function rfe() in caret. But is this the best way to handle it? Are there other functions more recommended? Does anybody have further insights? – user3008056 Jun 25 '14 at 8:06
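To make the filter route in 1) concrete: mutual information is just a sum over the joint distribution of one feature and the class label, so it can be computed in a few lines in any language (the R packages above wrap the same idea). Here is a minimal, dependency-free Python sketch on toy binary data, in nats:

```python
import math
from collections import Counter

def mutual_information(x, y):
    """I(X;Y) in nats between two equal-length discrete sequences."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    mi = 0.0
    for (xi, yi), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), written with raw counts
        mi += (c / n) * math.log(c * n / (px[xi] * py[yi]))
    return mi

# A feature that determines the class perfectly carries H(Y) = ln 2 nats;
# a feature independent of the class carries zero information.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # -> 0.6931... (= ln 2)
print(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))  # -> 0.0
```

Ranking all features by this score and keeping the top-k is exactly the filter strategy described above; scikit-learn's `feature_selection` module offers the same scores pre-packaged if you ever leave R.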
https://programmer.help/blogs/l1-linear-regression.html
# linear regression

1. Basic elements of linear regression
2. Zero-based implementation of linear regression models
3. Simple implementation of linear regression model using pytorch

## Basic elements of linear regression

### Model

For simplicity, here we assume that the price depends only on two factors of the housing condition: area (square meters) and age (years). Next, we want to explore the specific relationship between prices and these two factors. Linear regression assumes that the output has a linear relationship with each input:

$$\mathrm{price} = w_{\mathrm{area}} \cdot \mathrm{area} + w_{\mathrm{age}} \cdot \mathrm{age} + b$$

### Dataset

We usually collect a series of real data, such as the true selling prices of multiple houses and their corresponding area and age. We want to look for model parameters on this data that minimize the error between the predicted price and the true price. In machine learning terminology, this data set is called a training data set or training set, a house is called a sample, its real selling price is called a label, and the two factors used to predict the label are called features. Features are used to characterize the characteristics of a sample.

### Loss function

In model training, we need to measure the error between the price forecast and the true value. Usually we choose a non-negative number as the error, and smaller values mean smaller errors. A common choice is the square function. It evaluates the error of the sample indexed by $i$ as

$$l^{(i)}(\mathbf{w}, b) = \frac{1}{2} \left(\hat{y}^{(i)} - y^{(i)}\right)^2,$$

and the loss averaged over the whole training set as

$$L(\mathbf{w}, b) = \frac{1}{n}\sum_{i=1}^n l^{(i)}(\mathbf{w}, b) = \frac{1}{n} \sum_{i=1}^n \frac{1}{2}\left(\mathbf{w}^\top \mathbf{x}^{(i)} + b - y^{(i)}\right)^2.$$
When the model and loss function are relatively simple, the above error-minimization problem can be solved directly with a formula. This kind of solution is called an analytical solution. The linear regression and squared error used in this section fall precisely into this category. However, most deep learning models do not have an analytical solution; the value of the loss function can only be reduced by optimizing the model parameters over a finite number of iterations. This type of solution is called a numerical solution.

Mini-batch stochastic gradient descent is widely used as the optimization algorithm for numerical solutions. Its algorithm is simple: first select initial values for the model parameters, e.g. at random; then iterate the parameters several times so that each iteration may reduce the value of the loss function. In each iteration, a mini-batch $\mathcal{B}$ consisting of a fixed number of training samples is sampled uniformly at random, and the derivative (gradient) of the average loss over the mini-batch with respect to the model parameters is computed. Finally, the product of this gradient and a preset positive number gives the amount by which the model parameters are reduced in this iteration:

$$(\mathbf{w},b) \leftarrow (\mathbf{w},b) - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \partial_{(\mathbf{w},b)} l^{(i)}(\mathbf{w},b)$$

- Learning rate: $\eta$ is the size of the step taken in each optimization batch.
- Batch size: $|\mathcal{B}|$ is the number of samples in each mini-batch.

To summarize, there are two steps to the optimization:

- (i) Initialize the model parameters, generally with random initialization;
- (ii) Iterate over the data several times, updating each parameter by moving it in the direction of the negative gradient.
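For linear regression with squared loss, the analytical solution mentioned above can be written down explicitly: stacking the inputs (with a constant-1 column for the bias) into a design matrix $X$, the minimizer is $\mathbf{w}^* = (X^\top X)^{-1} X^\top \mathbf{y}$. A minimal numpy sketch, using the same illustrative house-price weights as the rest of this tutorial:

```python
import numpy as np

# Illustrative ground truth: price = 2*area - 3.4*age + 4.2
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                 # features: area, age
y = X @ np.array([2.0, -3.4]) + 4.2            # noise-free labels

# Append a column of ones so the bias b becomes one more weight.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])

# Normal equation, solved via least squares for numerical stability.
w_hat, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(w_hat)  # -> approximately [ 2.  -3.4  4.2]
```

No iteration is needed here; the numerical (gradient-descent) route implemented below becomes necessary only once the model or loss no longer admits such a closed form.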
## Vector calculation

In model training or prediction, we often process multiple data samples at the same time and use vector calculations on them. Before introducing vector calculation expressions for linear regression, let's consider two ways of adding two vectors together. One way is to add them element by element, as scalars; the other is to add the two vectors directly. First, some setup and a small timer class:

```python
import torch
import time

# init variables a, b as 1000-dimensional vectors
n = 1000
a = torch.ones(n)
b = torch.ones(n)

# define a timer class to record time
class Timer(object):
    """Record multiple running times."""
    def __init__(self):
        self.times = []
        self.start()

    def start(self):
        # start the timer
        self.start_time = time.time()

    def stop(self):
        # stop the timer and record the time into a list
        self.times.append(time.time() - self.start_time)
        return self.times[-1]

    def avg(self):
        # calculate the average and return it
        return sum(self.times) / len(self.times)

    def sum(self):
        # return the sum of recorded times
        return sum(self.times)
```

Now we can test it. First, the two vectors are added one element at a time using a for loop:

```python
timer = Timer()
c = torch.zeros(n)
for i in range(n):
    c[i] = a[i] + b[i]
'%.5f sec' % timer.stop()
```

    '0.01432 sec'

The other way is to use torch to add the two vectors directly:

```python
timer.start()
d = a + b
'%.5f sec' % timer.stop()
```

    '0.00024 sec'

The result is obvious: the latter is much faster than the former. Therefore, we should use vector computation as much as possible to improve computational efficiency.

# 1. Realization of Linear Regression Model from Zero

### 1.1 Import required packages

```python
# import packages and modules
%matplotlib inline
import torch
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import random

print(torch.__version__)
```

    1.3.0

### 1.2 Simulating the generation of experimental datasets

Using a linear model, a dataset of 1,000 samples is generated.
The following linear relationship is used to generate the data:

$$\mathrm{price} = w_{\mathrm{area}} \cdot \mathrm{area} + w_{\mathrm{age}} \cdot \mathrm{age} + b$$

```python
# set input feature number
num_inputs = 2
# set example number
num_examples = 1000
# set true weight and bias in order to generate the corresponding labels
true_w = [2, -3.4]
true_b = 4.2
features = torch.randn(num_examples, num_inputs, dtype=torch.float32)
labels = true_w[0] * features[:, 0] + true_w[1] * features[:, 1] + true_b
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float32)
```

### 1.3 Visualization of simulated data

```python
plt.scatter(features[:, 1].numpy(), labels.numpy(), 1);
```

### 1.4 Reading the dataset

When training a model, we need to traverse the dataset and repeatedly read small batches of data samples.

```python
# Define a function, data_iter, that returns the features and labels of
# batch_size random samples at a time.
def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    # the order in which samples are read is random
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        # the last batch may not be a whole batch
        j = torch.LongTensor(indices[i: min(i + batch_size, num_examples)])
        # index_select returns the elements at the given indices
        yield features.index_select(0, j), labels.index_select(0, j)
```

A small program that illustrates the torch.index_select() function: it selects along the specified dimension dim (for example, some rows or some columns) and returns a new Tensor that does not share memory with the original Tensor.

```python
j = torch.LongTensor([1, 2, 3, 4, 5, 6, 7, 8, 10])
features.index_select(0, j)
```
(In this indexing, dim=0 means take by row and dim=1 means take by column.)

    tensor([[-0.8797,  1.0840],
            [ 1.4686,  0.5604],
            [ 0.6072, -1.0188],
            [-0.3210,  1.1137],
            [ 0.4691,  1.2000],
            [-0.8294, -0.8613],
            [ 0.9604, -0.2414],
            [ 0.3751, -0.8777],
            [-0.2483,  0.1386]])

Read the first small batch of data samples and print it. The feature shape of each batch is (10, 2), corresponding to the batch size and the number of inputs; the label shape is the batch size. (Note that all batches would be printed if the break were removed.)

```python
batch_size = 10

for X, y in data_iter(batch_size, features, labels):
    print(X, '\n', y)
    break
```

    tensor([[ 0.6798,  0.2157],
            [-0.3323, -0.7287],
            [ 0.7562, -0.0557],
            [-0.2248, -0.6173],
            [-3.0879, -0.7436],
            [-0.9020, -0.1528],
            [ 0.4947, -0.2986],
            [ 1.4328, -0.7418],
            [ 0.3510, -0.3221],
            [ 0.5044, -1.0165]])
     tensor([4.8287, 5.9974, 5.8995, 5.8659, 0.5635, 2.9160, 6.1984, 9.6012, 6.0001, 8.6722])

### 1.5 Initialize model parameters

We initialize the weight to normal random numbers with mean 0 and standard deviation 0.01, and the bias to 0. In model training, gradients are required for these parameters in order to iterate their values, so we create gradients for them:

```python
w = torch.tensor(np.random.normal(0, 0.01, (num_inputs, 1)), dtype=torch.float32)  # notice that num_inputs is 2
b = torch.zeros(1, dtype=torch.float32)
w.requires_grad_(requires_grad=True)
b.requires_grad_(requires_grad=True)
```

    tensor([0.], requires_grad=True)

### 1.6 Define the model

Define the training model for the training parameters:

$$\mathrm{price} = w_{\mathrm{area}} \cdot \mathrm{area} + w_{\mathrm{age}} \cdot \mathrm{age} + b$$

To implement the vector calculation expression of linear regression, we use torch.mm to multiply the matrices (torch.mm(a, b) is the matrix multiplication of a and b, while torch.mul(a, b) is element-wise multiplication):

```python
def linreg(X, w, b):
    return torch.mm(X, w) + b
```

### 1.7 Define the loss function

We use the mean square error loss function:
$$l^{(i)}(\mathbf{w}, b) = \frac{1}{2} \left(\hat{y}^{(i)} - y^{(i)}\right)^2$$

To implement the operation, y.view(y_hat.size()) is used to transform y into the shape of y_hat, to prevent y_hat and y from having different dimensions.

A torch Tensor can use view() to change its shape, e.g. y = x.view(12) or z = x.view(-1, 6), where -1 indicates that that dimension should be inferred from the values of the other dimensions.

1. The new tensor returned by view() shares memory with the source tensor — they are one tensor, so changing one of them also changes the other. view simply changes the angle from which the tensor is observed.
2. In addition, the reshape() function can change the shape, but it does not guarantee that a copy is returned and is not recommended.
3. It is recommended to create a copy with clone() and then use view(). Another advantage of using clone is that it is recorded in the computational graph, so gradients flowing to the copy are also passed back to the source Tensor.

```python
def squared_loss(y_hat, y):
    return (y_hat - y.view(y_hat.size())) ** 2 / 2
```

### 1.8 Define the optimization function

Here the optimization function uses mini-batch stochastic gradient descent:

$$(\mathbf{w},b) \leftarrow (\mathbf{w},b) - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \partial_{(\mathbf{w},b)} l^{(i)}(\mathbf{w},b)$$

The following sgd function implements the mini-batch stochastic gradient descent algorithm. It optimizes the loss function by iterating over the model parameters. The gradient computed by the automatic differentiation module is the sum of the gradients over a batch of samples; we divide it by the batch size to get the average.
```python
def sgd(params, lr, batch_size):
    for param in params:
        # use .data to update param without gradient tracking
        param.data -= lr * param.grad / batch_size
```

### train

Once the dataset, model, loss function, and optimization function are defined, you are ready to train the model.

```python
# hyperparameter initialization
lr = 0.03
num_epochs = 5
net = linreg
loss = squared_loss

# training
for epoch in range(num_epochs):  # training repeats num_epochs times
    # in each epoch, all the samples in the dataset are used once;
    # X is the feature and y is the label of a batch of samples
    for X, y in data_iter(batch_size, features, labels):
        l = loss(net(X, w, b), y).sum()
        # calculate the gradient of the batch-sample loss
        l.backward()
        # use mini-batch stochastic gradient descent to iterate the model parameters
        sgd([w, b], lr, batch_size)
        # reset the gradients, which otherwise accumulate across batches
        w.grad.data.zero_()
        b.grad.data.zero_()
    train_l = loss(net(features, w, b), labels)
    print('epoch %d, loss %f' % (epoch + 1, train_l.mean().item()))
```

    epoch 1, loss 0.035207
    epoch 2, loss 0.000123
    epoch 3, loss 0.000054
    epoch 4, loss 0.000054
    epoch 5, loss 0.000054

```python
w, true_w, b, true_b
```

    (tensor([[ 1.9999],
     [2, -3.4],
     4.2)

# 2. Simple implementation of linear regression model using pytorch

```python
import torch
from torch import nn
import numpy as np

torch.manual_seed(1)
print(torch.__version__)
torch.set_default_tensor_type('torch.FloatTensor')
```

    1.3.0

### 2.1 Generating datasets

Generating the dataset here is exactly the same as in the zero-based implementation.
```python
num_inputs = 2
num_examples = 1000
true_w = [2, -3.4]
true_b = 4.2
features = torch.tensor(np.random.normal(0, 1, (num_examples, num_inputs)), dtype=torch.float)
labels = true_w[0] * features[:, 0] + true_w[1] * features[:, 1] + true_b
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float)
```

### 2.2 Reading the dataset

```python
import torch.utils.data as Data

batch_size = 10

# combine features and labels of the dataset
dataset = Data.TensorDataset(features, labels)

# load the dataset in mini-batches
data_iter = Data.DataLoader(
    dataset=dataset,        # torch TensorDataset format
    batch_size=batch_size,  # mini batch size
    shuffle=True,           # whether to shuffle the data or not
)

for X, y in data_iter:
    print(X, '\n', y)
    break
```

    tensor([[ 0.0949, -2.0367],
            [ 0.0957, -2.4354],
            [ 0.1520, -1.5686],
            [ 1.3453,  0.1253],
            [ 0.3076, -1.0100],
            [-0.6013,  1.6175],
            [ 0.2898,  0.2359],
            [ 0.4352, -0.4930],
            [ 0.9694, -0.8326],
            [-1.0968, -0.2515]])
     tensor([11.3024, 12.6900,  9.8462,  6.4771,  8.2533, -2.4928,  3.9811,  6.7626, 8.9806,  2.8489])

### 2.3 Define the model

nn.Module is a class provided in pytorch and is the base class of all neural network modules; our custom modules inherit from this base class.

```python
class LinearNet(nn.Module):  # inherits from nn.Module
    def __init__(self, n_feature):
        super(LinearNet, self).__init__()  # call the parent __init__
        # define the form of each layer; note that linear here is of type
        # nn.Linear, which can be interpreted as a linear layer that supports
        # being "fed" data as input, i.e. linear(x)
        # function prototype: torch.nn.Linear(in_features, out_features, bias=True)
        self.linear = nn.Linear(n_feature, 1)

    def forward(self, x):  # the forward function of nn.Module
        y = self.linear(x)
        return y

net = LinearNet(num_inputs)
print(net)
```

    LinearNet(
      (linear): Linear(in_features=2, out_features=1, bias=True)
    )

About super(LinearNet, self).__init__(): it initializes the attributes inherited from the parent class. It first finds the parent class of LinearNet (for example, class A), converts the object self of class LinearNet into an object of class A, and that "converted" class A object then calls its own __init__ function.

## Understanding examples of nn.Linear

```python
import torch
nn1 = torch.nn.Linear(100, 50)
input1 = torch.randn(140, 100)
output1 = nn1(input1)
output1.size()
```

    torch.Size([140, 50])

```python
# ways to init a multilayer network
# method one
net = nn.Sequential(
    nn.Linear(num_inputs, 1)
    # other layers can be added here
)

# method two
net = nn.Sequential()
net.add_module('linear', nn.Linear(num_inputs, 1))
# net.add_module(......)

# method three
from collections import OrderedDict
net = nn.Sequential(OrderedDict([
    ('linear', nn.Linear(num_inputs, 1))
    # ......
]))

print(net)
print(net[0])
```

    Sequential(
      (linear): Linear(in_features=2, out_features=1, bias=True)
    )
    Linear(in_features=2, out_features=1, bias=True)

### Initialize model parameters

```python
from torch.nn import init

init.normal_(net[0].weight, mean=0.0, std=0.01)
init.constant_(net[0].bias, val=0.0)
# or you can use net[0].bias.data.fill_(0) to modify it directly

for param in net.parameters():
    print(param)
```

    Parameter containing:
    Parameter containing:

### Define loss function

```python
loss = nn.MSELoss()  # nn built-in squared loss function
# function prototype: torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')
```

### Define the optimization function

```python
import torch.optim as optim

optimizer = optim.SGD(net.parameters(), lr=0.03)  # built-in stochastic gradient descent
print(optimizer)
# function prototype: torch.optim.SGD(params, lr=, momentum=0, dampening=0, weight_decay=0, nesterov=False)
```

    SGD (
    Parameter Group 0
        dampening: 0
        lr: 0.03
        momentum: 0
        nesterov: False
        weight_decay: 0
    )

### train

```python
num_epochs = 3
for epoch in range(1, num_epochs + 1):
    for X, y in data_iter:
        output = net(X)
        l = loss(output, y.view(-1, 1))
        optimizer.zero_grad()  # reset gradients before the backward pass
        l.backward()
        optimizer.step()
    print('epoch %d, loss: %f' % (epoch, l.item()))
```

    epoch 1, loss: 0.000290
    epoch 2, loss: 0.000128
    epoch 3, loss: 0.000107

```python
# result comparison
dense = net[0]
print(true_w, dense.weight.data)
print(true_b, dense.bias.data)
```

    [2, -3.4] tensor([[ 1.9997, -3.3999]])
    4.2 tensor([4.2002])

## Comparison of two implementations

1. Zero-based implementation (recommended for learning): better understanding of the model and the underlying principles of neural networks.
2. Simple implementation using pytorch: can complete model design and implementation more quickly.

Posted on Thu, 13 Feb 2020 23:40:06 -0500 by oneski
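The from-scratch recipe in part 1 does not actually depend on pytorch: with the analytic gradient of the averaged squared loss written out by hand, the same mini-batch SGD loop can be sketched in plain numpy. The data and hyperparameters below mirror the tutorial's (seed and exact numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic data, mirroring the tutorial: price = 2*area - 3.4*age + 4.2 + noise
true_w, true_b = np.array([2.0, -3.4]), 4.2
X = rng.normal(size=(1000, 2))
y = X @ true_w + true_b + rng.normal(scale=0.01, size=1000)

w, b = np.zeros(2), 0.0
lr, batch_size, num_epochs = 0.03, 10, 5

for epoch in range(num_epochs):
    idx = rng.permutation(len(X))          # shuffle, like data_iter above
    for i in range(0, len(X), batch_size):
        j = idx[i:i + batch_size]
        Xb, yb = X[j], y[j]
        err = Xb @ w + b - yb              # shape (batch,)
        # analytic gradients of the averaged squared loss
        w -= lr * Xb.T @ err / len(j)
        b -= lr * err.mean()

print(w, b)  # close to [2, -3.4] and 4.2
```

What autograd and sgd() do in part 1, and what MSELoss/SGD do in part 2, is exactly these two gradient lines; the frameworks simply compute them for you.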